Exploration and Practice of XAI Architecture
夏正勋¹, 唐剑飞¹, 杨一帆¹, 罗圣美², 张燕¹, 谭锋镭¹, 谭圣儒¹
Author information
- 1. Transwarp Information Technology (Shanghai) Co., Ltd., Shanghai 200233, China
- 2. Zhongfu Information Co., Ltd., Nanjing, Jiangsu 211899, China
Abstract
Explainable AI (XAI) is an important component of trustworthy AI. The industry has carried out in-depth research on individual XAI techniques, but systematic research on engineering implementation is still lacking. This paper proposes a general XAI technical architecture that addresses four aspects: atomic explanation generation, core capability enhancement, business component embedding, and trusted explanation application. Accordingly, four layers are designed: the XAI foundation layer, the XAI core capability layer, the XAI business component layer, and the XAI application layer. Through the division of labor and cooperation among these layers, the engineering implementation of XAI is guaranteed across the whole process. Based on this architecture, new technical modules can be introduced flexibly to support the industrial application of XAI, providing a reference for the promotion of XAI in industry.
Keywords
explainable AI / trusted AI / XAI architecture
Year of publication
2024