Human-Centered Explainability for Intelligent Vehicles-A User Study
Advances in artificial intelligence (AI) are leading to an increased use of algorithm-generated user-adaptivity in everyday systems. Explainable AI aims to make algorithmic decision-making more transparent to humans. As future vehicles become more intelligent and user-adaptive, explainability will play an important role in ensuring that drivers understand the AI system's functionalities and outputs. However, when integrating explainability into in-vehicle features, there is a lack of knowledge about user needs and requirements and how to address them. We conducted a study with 59 participants focusing on how end-users evaluate explainability in the context of user-adaptive comfort and infotainment features. Results show that explanations foster perceived understandability and transparency of the system, but that the need for explanation may vary between features. Additionally, we found that insufficiently designed explanations can decrease acceptance of the system. Our findings underline the requirement for a user-centered approach in explainable AI and indicate approaches for future research.