Abstract
By a News Reporter-Staff News Editor at Robotics & Machine Learning Daily News – New research on Artificial Intelligence is the subject of a report. According to news reporting from Jinan, People’s Republic of China, by NewsRx journalists, research stated, “Explainable artificial intelligence (XAI) has been increasingly investigated to enhance the transparency of black-box artificial intelligence models, promoting better user understanding and trust. Developing an XAI that is faithful to models and plausible to users is both a necessity and a challenge.”

The news correspondents obtained a quote from the research at Shandong University: “This work examines whether embedding human attention knowledge into saliency-based XAI methods for computer vision models could enhance their plausibility and faithfulness. Two novel XAI methods for object detection models, namely FullGrad-CAM and FullGrad-CAM++, were first developed to generate object-specific explanations by extending the current gradient-based XAI methods for image classification models. Using human attention as the objective plausibility measure, these methods achieve higher explanation plausibility. Interestingly, all current XAI methods, when applied to object detection models, generally produce saliency maps that are less faithful to the model than human attention maps from the same object detection task. Accordingly, human attention-guided XAI (HAG-XAI) was proposed to learn from human attention how to best combine explanatory information from the models to enhance explanation plausibility, using trainable activation functions and smoothing kernels to maximize the similarity between the XAI saliency map and the human attention map. The proposed XAI methods were evaluated on the widely used BDD-100K, MS-COCO, and ImageNet datasets and compared with typical gradient-based and perturbation-based XAI methods.”
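The abstract does not give implementation details, but the core move behind an object-specific explanation can be sketched: back-propagate the confidence score of a single detected box, rather than a whole-image class logit, through a hooked feature layer, Grad-CAM style. The PyTorch sketch below is an illustration under that assumption; the function name `object_saliency`, the single hooked layer, and the Grad-CAM channel weighting are stand-ins, not the paper's exact FullGrad-CAM or FullGrad-CAM++ formulations.

```python
import torch.nn.functional as F

def object_saliency(model, image, object_score_fn, feature_layer):
    """Grad-CAM-style saliency for one detected object (illustrative sketch).

    model           -- a detector whose forward pass runs through feature_layer
    image           -- input tensor of shape (1, 3, H, W)
    object_score_fn -- maps the detector's output to the scalar confidence of
                       the single detection being explained; it must tap a
                       differentiable path (e.g. a pre-NMS class score)
    feature_layer   -- the convolutional module hooked for activations/grads
    """
    feats, grads = [], []
    fh = feature_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    bh = feature_layer.register_full_backward_hook(
        lambda m, gin, gout: grads.append(gout[0]))
    try:
        score = object_score_fn(model(image))  # scalar score of one box
        model.zero_grad()
        score.backward()
    finally:
        fh.remove()
        bh.remove()

    A, G = feats[0], grads[0]                 # both of shape (1, C, h, w)
    w = G.mean(dim=(2, 3), keepdim=True)      # Grad-CAM channel weights
    cam = F.relu((w * A).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                        align_corners=False)
    return cam / (cam.amax() + 1e-8)          # normalized map, (1, 1, H, W)
```

This only covers the forward/backward plumbing; how the per-object gradient evidence is aggregated into a faithful map is precisely where the paper's variants differ from plain Grad-CAM.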
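HAG-XAI's stated mechanism, trainable activation functions and smoothing kernels fitted to maximize similarity between the XAI saliency map and the human attention map, is concrete enough for a minimal sketch. The parameterization below (a shifted softplus activation, one learned convolution kernel, and Pearson correlation as the similarity objective) is an assumption for illustration; the abstract does not specify these choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HAGHead(nn.Module):
    """Hypothetical HAG-XAI-style post-processor (names are illustrative)."""

    def __init__(self, kernel_size: int = 31):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(1.0))   # activation slope
        self.beta = nn.Parameter(torch.tensor(0.0))    # activation threshold
        # Trainable smoothing kernel, initialized as a uniform box blur.
        self.smooth = nn.Conv2d(1, 1, kernel_size,
                                padding=kernel_size // 2, bias=False)
        nn.init.constant_(self.smooth.weight, 1.0 / kernel_size ** 2)

    def forward(self, saliency: torch.Tensor) -> torch.Tensor:
        # saliency: raw XAI map from any base method, shape (N, 1, H, W)
        x = F.softplus(self.alpha * (saliency - self.beta))  # trainable activation
        return self.smooth(x)                                # trainable smoothing


def correlation_loss(pred: torch.Tensor, human: torch.Tensor) -> torch.Tensor:
    """Negative Pearson correlation between predicted and human attention maps."""
    p = pred.flatten(1) - pred.flatten(1).mean(dim=1, keepdim=True)
    h = human.flatten(1) - human.flatten(1).mean(dim=1, keepdim=True)
    cc = (p * h).sum(1) / (p.norm(dim=1) * h.norm(dim=1) + 1e-8)
    return (1.0 - cc).mean()


# Fitting the head: stand-in tensors here; in practice the raw maps come from
# a base XAI method and the human maps from an eye-tracking dataset collected
# on the same object detection task.
raw = torch.rand(8, 1, 128, 128)
human = torch.rand(8, 1, 128, 128)
head = HAGHead()
opt = torch.optim.Adam(head.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = correlation_loss(head(raw), human)
    loss.backward()
    opt.step()
```

Once fitted, the head is a fixed post-processing step: it reshapes any base method's saliency toward what human observers attend to, which is the plausibility gain the abstract describes.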