The Impact of AI Transparency and Reliability on Human-AI Collaborative Decision-Making Performance and Trust
Objective: To explore the impact of AI transparency and reliability on human decision-making behavior and trust in human-AI collaboration.
Methods: An experiment on human-AI collaborative income judgment was conducted, adopting a two-factor mixed design with AI transparency (low, medium, and high) as the between-group variable and AI reliability (60%, 75%, and 90%) as the within-group variable. Fifty-four participants (27 males, 27 females) were enrolled, and the following data were collected: decision type (four outcomes: correct acceptance, incorrect acceptance, correct rejection, and incorrect rejection), decision time, trust, human accuracy, and compliance with AI advice.
Results: When AI reliability was high (75% and 90%), transparency had no significant effect on human decision-making; when AI reliability was low (60%), higher transparency increased the rate of human compliance with AI. Further analysis showed that the compliance rate increased only when the AI gave correct advice, not when it made mistakes. In addition, transparency had no significant effect on correct rejection.
Conclusion: Transparency increased appropriate reliance on AI. However, raising the level of transparency did not significantly improve humans' ability to identify AI errors.