
The Trust Construction of Large Language Models

The rapid rise of AI large language model (LLM) technologies, represented by ChatGPT, is revolutionizing content production and intelligent technology paradigms. However, issues such as hallucinations and false content have led to a crisis of trust, with the technology even facing resistance and bans. Despite the industry's active technical practices in the field of trustworthy AI, public trust in AI has not significantly improved. Therefore, to address the trust issue, it is necessary not only to clarify the relationship between trust and trustworthiness but also to start from the technological essence of LLMs. Because LLMs are an epistemic technology, trust in them should be epistemic trust, which involves the dynamic interaction of technical trust and interpersonal trust and is reasonable trust grounded in effective supervision. Accordingly, a trust construction route for LLMs is proposed, comprising three modules: a trust element system with explainability at its core; trust subjects and a trust environment based on a government-led AI governance system with the collaboration of multiple stakeholders; and trust cognition that cultivates a correct view of trust among the public.

Keywords: artificial intelligence; large language models; trust; trustworthy

胡晓萌、陈力源、刘正源


School of Social Sciences, Tsinghua University (Beijing 100084)

Shanghai Collaborative Innovation Center for AI Social Governance, Tongji University (Shanghai 200082)


Funding: China Postdoctoral Science Foundation General Program (73rd batch), Grant No. 2023M732381


Journal: 中州学刊 (Academic Journal of Zhongzhou)
Publisher: Henan Academy of Social Sciences (河南省社会科学院)

Indexed in: CSTPCD, CSSCI, CHSSCD, Peking University Core Journals (北大核心)
Impact factor: 0.854
ISSN: 1003-0751
Year, volume (issue): 2024, (5)