
The Bias and Regulation in the Dissemination of Generative Artificial Intelligence: A Case Study of ChatGPT

The emergence of ChatGPT represents a significant breakthrough in human information dissemination technology. With its powerful "generalization capabilities" achieved through pre-training and the use of large-scale models, ChatGPT is able to provide contextually appropriate responses based on the input it receives. However, as a generative artificial intelligence that relies on probabilistic inference, ChatGPT is not immune to inherent flaws. From an internal reasoning perspective, models pre-trained on human textual knowledge tend to replicate societal biases and shortcomings, which can be further exacerbated during dissemination and potentially marginalize underrepresented groups. In addition, external environmental factors, such as capital forces and political stances, can also influence ChatGPT. To ensure the healthy development of generative AI, it is essential to prevent bias from occurring at both the training data and model design stages. Furthermore, concerted efforts from industry stakeholders, leveraging institutional advantages and enhancing users' technical literacy, are crucial for guiding its progress in a positive direction.
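To make concrete what "probabilistic inference" means in the abstract, the minimal sketch below shows how a language model turns raw scores over candidate next tokens into a probability distribution and then samples from it. The vocabulary and score values are hypothetical toy data, not taken from the paper; the point is that the sampled output simply reflects whatever probabilities the pre-trained model assigns, which is where skews inherited from the training corpus surface in generated text.

# Illustrative sketch only (not the paper's method): probabilistic next-token
# inference via a softmax over hypothetical candidate scores.
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Turn raw scores into a probability distribution and sample one index."""
    rng = rng or np.random.default_rng(0)
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                      # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs), probs

# Toy candidates a model might score after the context "The weather today is".
vocab = ["sunny", "rainy", "purple", "cold"]    # hypothetical vocabulary
logits = [3.2, 2.5, -1.0, 1.8]                  # hypothetical scores

idx, probs = sample_next_token(logits, temperature=0.8)
for word, p in zip(vocab, probs):
    print(f"{word:>6}: {p:.3f}")
print("sampled:", vocab[idx])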

Keywords: ChatGPT; Artificial Intelligence; Propagation Bias; Regulation

Zhou Maojun, Guo Bin


Center for Studies of Media Development, Wuhan University

School of Journalism and Communication, Wuhan University, Wuhan, Hubei 430072, China


Funding: National Social Science Fund of China (17BXW094)

2024

学习与实践 (Study and Practice)

Indexed in: CSSCI; CHSSCD; Peking University Core Journals (北大核心)
ISSN:
Year, Volume (Issue): 2024, (1)