Explainability and Perceived Fairness in AI Algorithmic Decision-Making Systems
Recently, the academic community, especially in the field of human-computer interaction (HCI), has focused increasingly on the perceived fairness of artificial intelligence, that is, how designers, end users, decision makers, and various other stakeholders perceive fairness. Although computational definitions of fairness and transparency are a very popular research topic today, there is a growing understanding of the need to examine algorithmic systems from a broader perspective, one that also encompasses their social impact (such as adherence to social norms, moral judgments, and user perception). This paper aims to understand the impact of the explanations provided by algorithmic systems on non-expert comprehension and fairness perception. Our goal is to investigate whether and how explanations affect users' perception of fairness in system results. Explanations can be used to improve the transparency, credibility, and perceived fairness of algorithmic systems, and the most effective explanations appear to be personalized ones, tailored to system characteristics as well as to users' demographic and personality characteristics.