Human-Centered Explainability for Intelligent Vehicles – A User Study

Advances in artificial intelligence (AI) are leading to an increased use of algorithm-generated user-adaptivity in everyday systems. Explainable AI aims to make algorithmic decision-making more transparent to humans. As future vehicles become more intelligent and user-adaptive, explainability will play an important role in ensuring that drivers understand the AI system's functionalities and outputs. However, when integrating explainability into in-vehicle features, there is a lack of knowledge about user needs and requirements and how to address them. We conducted a study with 59 participants focusing on how end-users evaluate explainability in the context of user-adaptive comfort and infotainment features. Results show that explanations foster perceived understandability and transparency of the system, but that the need for explanation may vary between features. Additionally, we found that insufficiently designed explanations can decrease acceptance of the system. Our findings underline the requirement for a user-centered approach in explainable AI and indicate approaches for future research.

Human-AI interaction; explainable AI; user-adaptive; intelligent vehicles; user studies

Julia Graefe, Selma Paden, Doreen Engelhardt, Klaus Bengler


Department of Mechanical Engineering, TUM School of Engineering and Design, Technical University of Munich, Munich, Germany

University of Applied Sciences Merseburg, Merseburg, Germany

AUDI AG, Ingolstadt, Germany

2023

International Journal of Human-Computer Interaction

Indexed in: EI
ISSN: 1044-7318
Year, Volume (Issue): 2023, 39(16/20)