Abstract

Sustainable trust is a crucial element in ensuring high-frequency interaction and efficient collaboration between humans and machines. Grounded in computational grounded theory, this paper conducts a coding analysis of 7,235 items of ChatGPT user feedback to distill the factors that influence sustainable trust. The findings reveal that machine factors (such as usability, ease of use, affordance, and security) account for the largest proportion, followed by user factors (technophobia, need fit, media literacy, and psychological expectations), while task factors (critical errors, task complexity, and violation costs) account for a relatively low proportion. Failures on critical tasks, machine security, and the fit between the machine and user needs have the most significant impact on users' trust levels. Establishing clear and complementary boundaries of responsibility between humans and machines, hedging output uncertainty with algorithmic explainability, and guiding users to adjust their psychological expectations by surfacing reward-and-penalty mechanisms in the interface all contribute to the dynamic calibration of sustainable trust.