Abstract
By a News Reporter-Staff News Editor at Robotics & Machine Learning Daily News - Research findings on artificial intelligence are discussed in a new report. According to news originating from the Institute of Mathematics and Computer Science by NewsRx correspondents, research stated, "Intelligent applications supported by Machine Learning have achieved remarkable performance rates for a wide range of tasks in many domains. However, understanding why a trained algorithm makes a particular decision remains problematic."

Financial supporters for this research include Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior - Brasil (CAPES) - Finance Code 001; Sao Paulo Research Foundation (FAPESP); and National Council for Scientific and Technological Development (CNPq).

The news journalists obtained a quote from the research from the Institute of Mathematics and Computer Science: "Given the growing interest in the application of learning-based models, some concerns arise when dealing with sensitive environments, which may impact users' lives. The complex nature of those models' decision mechanisms makes them the so-called 'black boxes,' in which the understanding of the logic behind automated decision-making processes by humans is not trivial. Furthermore, the reasoning that leads a model to provide a specific prediction can be more important than performance metrics, which introduces a trade-off between interpretability and model accuracy. Explaining intelligent computer decisions can be regarded as a way to justify their reliability and establish trust. In this sense, explanations are critical tools that verify predictions to discover errors and biases previously hidden within the models' complex structures, opening up vast possibilities for more responsible applications.
In this review, we provide theoretical foundations of Explainable Artificial Intelligence (XAI), clarifying diffuse definitions and identifying research objectives, challenges, and future research lines related to turning opaque machine learning outputs into more transparent decisions."
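The "black box" problem the researchers describe can be made concrete with a minimal, model-agnostic sketch (illustrative only, not taken from the paper): permutation importance probes an opaque model purely through its inputs and outputs, by shuffling one feature at a time and measuring how much the predictions move. The names `black_box` and `permutation_importance` below are hypothetical stand-ins.

```python
import random

def black_box(x):
    # Toy stand-in for an opaque model: callers see only inputs and outputs.
    # (Hidden logic: strong dependence on x[0], weak on x[1], none on x[2].)
    return 3.0 * x[0] + 0.5 * x[1]

def permutation_importance(model, rows, n_features, seed=0):
    """Shuffle one feature column at a time and measure the average change
    in the model's predictions; a larger change marks a more important feature."""
    rng = random.Random(seed)
    baseline = [model(r) for r in rows]
    scores = []
    for j in range(n_features):
        col = [r[j] for r in rows]
        rng.shuffle(col)  # break the link between feature j and the output
        perturbed = [r[:j] + [v] + r[j + 1:] for r, v in zip(rows, col)]
        diffs = [abs(model(r) - b) for r, b in zip(perturbed, baseline)]
        scores.append(sum(diffs) / len(diffs))
    return scores

data_rng = random.Random(42)
rows = [[data_rng.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(200)]
scores = permutation_importance(black_box, rows, 3)
print(scores)  # feature 0 should dominate; feature 2 should score ~0
```

Because the procedure never inspects the model's internals, the same code applies to any predictor, which is the sense in which post-hoc explanations can surface dependencies previously hidden inside a complex model.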