
Can Post-Hoc Explanations Eliminate Epistemic Opacity?

Artificial intelligence systems based on deep learning models are widely applied across various domains, yet their opacity has led to problems of trust. Computational scientists are attempting to develop tools that explain black-box models in order to ease this tension. An understanding of interpretability techniques helps to distinguish causal explanation from post-hoc explanation: causal explanation requires complete knowledge of a model's mechanism, whereas the explanations that interpretability techniques give of black-box models are not always accounts of the model's internal details. They serve instead as a remedy when causal explanation is unattainable, and they retain heuristic epistemological value. The approximation methods used in post-hoc explanation are an important part of the philosophy of scientific models, and constructive empiricism likewise lends support to the epistemic significance, or value, of post-hoc explanations with respect to model mechanisms.

opacity; explainable artificial intelligence (XAI); post-hoc explanation; dispositional causal explanation

Jia Weihan, Dong Chunyu


School of Philosophy, Beijing Normal University

School of Philosophy and Center for Value and Culture, Beijing Normal University (Beijing 100875)


Funding: National Social Science Fund of China Key Project (18AZX008); National Social Science Fund of China General Projects (22ZXB00884, 23BZX103)

2024

Exploration and Free Views (探索与争鸣)
Shanghai Federation of Social Science Associations


Indexed in: CSSCI; CHSSCD; PKU Core Journals (北大核心)
Impact factor: 0.72
ISSN: 1004-2229
Year, Volume (Issue): 2024, (6)