Towards human-compatible XAI: Explaining data differentials with concept induction over background knowledge

Concept induction, which is based on formal logical reasoning over description logics, has been used in ontology engineering to create ontology (TBox) axioms from the base data (ABox) graph. In this paper, we show that it can also be used to explain data differentials, for example in the context of Explainable AI (XAI), and that this can in fact be done in a way that is meaningful to a human observer. Our approach uses a large class hierarchy, curated from the Wikipedia category hierarchy, as background knowledge. To make the explanations easily understandable for non-specialists, the complex description logic explanations generated by our concept induction system (ECII) are presented as a word list consisting of the concept names occurring in the highest-rated system responses.
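The final step described above, flattening ranked description logic explanations into a word list of concept names, can be sketched as follows. This is a hedged illustration, not the authors' ECII implementation: the scored expressions, role names, and scores are hypothetical, and concept names are assumed to be capitalized identifiers in the DL syntax.

```python
import re

def concept_word_list(scored_expressions, top_k=2):
    """Return unique concept names from the top_k highest-scored DL expressions.

    scored_expressions: list of (description-logic expression string, score) pairs,
    e.g. as produced by a concept induction system. Purely illustrative.
    """
    # Rank system responses by score, highest first.
    ranked = sorted(scored_expressions, key=lambda pair: pair[1], reverse=True)
    words = []
    for expression, _score in ranked[:top_k]:
        # Concept names are assumed to be capitalized identifiers; logical
        # symbols (⊓, ∃, .) and camelCase role names (e.g. hasGenre) are
        # dropped for the non-specialist word-list view.
        for name in re.findall(r"\b[A-Z][A-Za-z_]*", expression):
            if name not in words:
                words.append(name)
    return words

# Hypothetical ranked responses over a Wikipedia-derived class hierarchy:
responses = [
    ("Musician ⊓ ∃hasGenre.Jazz", 0.91),
    ("Person ⊓ ∃bornIn.City", 0.74),
    ("Artist", 0.55),
]
print(concept_word_list(responses))  # → ['Musician', 'Jazz', 'Person', 'City']
```

The word-boundary anchor `\b` keeps role-name fragments such as `Genre` (inside `hasGenre`) out of the list, so only standalone concept names survive into the human-readable summary.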

Keywords: Concept induction, Explainable AI, Class hierarchy

Cara Leigh Widmer, Md Kamruzzaman Sarker, Srikanth Nadella, Joshua Fiechter, Ion Juvina, Brandon Minnery, Pascal Hitzler, Joshua Schwartz, Michael Raymer


Kairos Research, LLC, USA

Bowie State University, USA

Kairos Research, LLC, USA; Wright State University, USA

Kansas State University, USA


2023

Journal of Web Semantics: Science, Services and Agents on the World Wide Web, 56