
"相信一段程序":ChatGPT生成内容的认知途径与算法信任建构

With the development of intelligent technology, chatbots are gradually becoming synthesizers and disseminators of information, shaping trust relationships in Human-Computer Interaction. Users take different cognitive pathways to machine-generated content, and their degree of trust in that content varies accordingly. Because chatbots risk generating false information and causing information disorder, the principles of transparency, authenticity, and objectivity of information come under strain. This manifests concretely as uncertain information sources, information pollution, and the computability of information, which aggravate users' trust vulnerability and give rise to ethical risks in information cognition and identification, including information avoidance, technology drift, technology sleepwalking, and the manipulation of empathy. These findings underscore the importance of addressing the moral problems of algorithm-driven chatbots. Reflecting on these problems, this article draws on the ethics of responsibility to analyze patterns of moral choice, moral behavior, and moral communication, explores the causes of trust vulnerability, and seeks to rebuild trust in Human-Computer Interaction by reducing false information.
Believing in a Procedure: Ethical Risks and Responsibility Remodeling in ChatGPT-Generated Content
With the advancement of intelligent technology, chatbots have evolved into synthesizers and distributors of information, influencing trust dynamics in Human-Computer Interaction. Users approach content generated by chatbots with varying degrees of cognitive and emotional trust. However, the risk of misinformation and information disorder introduced by chatbots can compromise the principles of transparency, authenticity, and objectivity in information dissemination. This impact is evident in uncertainties regarding information sources, information pollution, and the computability of information, exacerbating users' trust vulnerabilities. Consequently, ethical risks in information cognition and identification arise, including information avoidance, technology drift, technology sleepwalking, and empathy manipulation. These findings emphasize the importance of addressing ethical issues in algorithm-driven chatbots. Reflecting on these issues, this article employs the ethics of responsibility to analyze moral choices, behaviors, and communication patterns. By addressing the fragility of trust, particularly through mitigating false information, it seeks to reconstruct trust in Human-Computer Interaction.

chatbot; ethical risks; algorithm trust; ethics of responsibility; trust vulnerability

刘海明、李佳怿


School of Journalism, Chongqing University, Chongqing 401331

chatbot; cognitive pathways; algorithm trust; ethics of responsibility; trust vulnerability

2024

传媒观察 (Media Observer)
Xinhua Daily Press Group


CSSCI
Impact factor: 0.241
ISSN:1672-3406
Year, Volume (Issue): 2024, 485(5)