Counterfactual Explanation of Anomalous Objects Considering Causal Constraints
Most existing anomaly detection methods focus on algorithmic efficiency and accuracy while overlooking the interpretability of the detected anomalous objects. Counterfactual explanation, a research hotspot in interpretable machine learning, aims to explain model decisions by perturbing the features of the instance under study to generate counterfactual examples. In practical applications, however, there may be causal relationships among features, and most existing counterfactual-based interpretability methods concentrate on generating more diverse counterfactual examples while ignoring these causal relationships, which can lead to unreasonable counterfactual explanations. To address this issue, this study proposes an algorithm for interpreting anomalies via reasonable counterfactuals (IARC) that takes causal constraints into account. When generating counterfactual explanations, the proposed method incorporates the causality between features into the objective function to evaluate the feasibility of each perturbation and employs an improved genetic algorithm for optimization, thereby producing reasonable counterfactual explanations. In addition, a novel metric is introduced to quantify the degree of contradiction in the generated counterfactual explanations. Comparative experiments and detailed case studies on multiple real-world datasets benchmark the proposed method against several state-of-the-art methods. The results demonstrate that the proposed method generates highly reasonable counterfactual explanations for anomalous objects.
model interpretability; anomaly detection; counterfactual explanation; genetic algorithm; causality
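To make the idea of a causality-constrained counterfactual search concrete, the following is a minimal illustrative sketch, not the authors' IARC implementation. It assumes a toy two-feature dataset with one known causal law, an IsolationForest detector, hand-picked penalty weights, and a plain mutation-only genetic algorithm; the function names (causal_penalty, fitness) and all hyperparameters are hypothetical choices made for the example.

```python
# Illustrative sketch of counterfactual generation with a causal-consistency
# penalty in the fitness function, optimized by a simple genetic algorithm.
# The detector, causal law, weights, and GA settings are assumptions for the
# example, not details taken from the paper.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Toy data: feature 1 is causally driven by feature 0 (x1 ~ 2*x0 + noise).
X = rng.normal(size=(500, 2))
X[:, 1] = 2.0 * X[:, 0] + 0.1 * rng.normal(size=500)
detector = IsolationForest(random_state=0).fit(X)

x_anom = np.array([3.0, -4.0])  # anomalous instance to explain

def causal_penalty(x):
    """Penalize counterfactuals that contradict the assumed causal law x1 = 2*x0."""
    return abs(x[1] - 2.0 * x[0])

def fitness(x):
    """Lower is better: stay close to x_anom, become normal, respect causality."""
    proximity = np.linalg.norm(x - x_anom, ord=1)
    # decision_function is negative for anomalies, so this term is > 0 only
    # while the candidate is still judged anomalous.
    normality = max(0.0, -detector.decision_function(x.reshape(1, -1))[0])
    return proximity + 10.0 * normality + 5.0 * causal_penalty(x)

# Plain genetic algorithm: Gaussian mutation plus elitist selection.
pop = x_anom + 0.5 * rng.normal(size=(60, 2))
for _ in range(200):
    children = pop + 0.2 * rng.normal(size=pop.shape)   # mutate each parent
    pool = np.vstack([pop, children])
    scores = np.array([fitness(ind) for ind in pool])
    pop = pool[np.argsort(scores)[:60]]                  # keep the fittest

best = pop[0]
print("counterfactual:", best, "fitness:", fitness(best))
```

In this sketch the causal term plays the role the abstract ascribes to the causality-aware objective: a candidate that flips the detector's decision by moving feature 1 alone is penalized unless feature 0 moves consistently with it, so the surviving counterfactuals remain causally plausible.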