Diffusion Models Based Unconditional Counterfactual Explanations Generation

Counterfactual explanations alter a model's output by applying minimal, interpretable modifications to the input data, revealing the key factors that influence the model's decisions. Existing diffusion-based counterfactual explanation methods rely on conditional generation and require additional classification-related semantic information; the quality of that information is hard to guarantee, and acquiring it increases computational cost. To address these issues, an unconditional counterfactual explanation generation method based on denoising diffusion implicit models (DDIMs) is proposed. First, exploiting the consistency that DDIMs exhibit during the reverse denoising process, noisy images are treated as latent variables that control the generated outputs, making the diffusion model suitable for an unconditional counterfactual explanation pipeline. Then, the strengths of DDIMs in filtering out high-frequency noise and out-of-distribution perturbations are fully exploited to reshape the unconditional generation pipeline so that it produces semantically interpretable modifications. Experiments on multiple datasets demonstrate that the proposed method achieves superior results across several metrics.
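The "consistency" the abstract attributes to DDIMs is their deterministic (eta = 0) reverse process: the starting noise acts as a latent code that fully determines the generated image, which is what allows outputs to be steered by editing that latent rather than by adding a conditioning signal. The sketch below illustrates this property in NumPy; `toy_eps_model`, the 4-dimensional "image", and the 11-step schedule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ddim_step(x_t, eps_pred, ab_t, ab_prev):
    """One deterministic DDIM reverse step (eta = 0): no fresh noise is injected."""
    # Predicted clean image from the current noisy state and the predicted noise.
    x0_pred = (x_t - np.sqrt(1.0 - ab_t) * eps_pred) / np.sqrt(ab_t)
    # Move to the previous (less noisy) timestep along the deterministic trajectory.
    return np.sqrt(ab_prev) * x0_pred + np.sqrt(1.0 - ab_prev) * eps_pred

def ddim_sample(x_T, eps_model, alpha_bars):
    """Deterministic reverse chain: the initial noise x_T fully determines the output."""
    x = x_T
    for t in range(len(alpha_bars) - 1, 0, -1):
        x = ddim_step(x, eps_model(x, t), alpha_bars[t], alpha_bars[t - 1])
    return x

def toy_eps_model(x, t):
    # Toy stand-in for a trained noise-prediction network (assumption, not the paper's model).
    return 0.01 * t * np.tanh(x)

alpha_bars = np.linspace(0.9999, 0.05, 11)  # cumulative-alpha schedule, t = 0 .. 10

rng = np.random.default_rng(0)
x_T = rng.standard_normal(4)                # "image" latent, 4 pixels for brevity

out1 = ddim_sample(x_T, toy_eps_model, alpha_bars)
out2 = ddim_sample(x_T.copy(), toy_eps_model, alpha_bars)   # same latent -> same image
out3 = ddim_sample(x_T + 0.1, toy_eps_model, alpha_bars)    # perturbed latent -> different image
```

Because no stochastic term is added at each step (unlike DDPM sampling), `out1` and `out2` coincide exactly, while editing the latent (`out3`) changes the result, which is the mechanism the unconditional pipeline relies on.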

Keywords: Deep Learning, Interpretability, Counterfactual Explanation, Diffusion Model, Adversarial Attack

ZHONG Zhi, WANG Yu, ZHU Ziye, LI Yun


School of Computer Science, Nanjing University of Posts and Telecommunications, Nanjing 210023, China

School of Science, China Pharmaceutical University, Nanjing 211198, China


2024

Pattern Recognition and Artificial Intelligence
Sponsors: Chinese Association of Automation; National Research Center for Intelligent Computing Systems; Institute of Intelligent Machines, Chinese Academy of Sciences (Hefei)


Indexed by: CSTPCD; Peking University Core Journals list
Impact factor: 0.954
ISSN:1003-6059
Year, Volume (Issue): 2024, 37(11)