Data Ruling Large Language Models: The Values, Myths, and Challenges of Training Data in Future Digital Communications

With the advent and popularity of ChatGPT, the implications and impacts of generative artificial intelligence have rapidly become focal points of attention in both academic and industrial circles. Within this wave of unsupervised deep learning led by large language models, a central issue is training data. The pursuit of scale and quality in training data epitomizes the dictum of "data as king" amid the landscape of the "model war". Behind the values, functions, and misconceptions of training data lie a rewriting of the concept of data, a superstition regarding data affordances, and a struggle over data ownership. The specific architecture and internal mechanisms of training data have triggered a rebuilding of the intelligent communication ecosystem and a restructuring of the information production order. This transformation also harbors the digital crises of the era of large language models, manifested in the reproduction of biases under distilled communications, the conservatization of information under filtered communications, and the dissipation of meaning under stochastic communications. Both large language models and their training data urgently need to dispel the myth of scale and focus on how to make data a genuine part of socio-technical systems.

Keywords: large language model; training data; generative AI; ChatGPT; intelligent communications

Hu Yong, Liu Chunyi

School of Journalism and Communication, Peking University, Beijing 100871

2024

Journal of Northwest Normal University (Social Sciences)
Northwest Normal University

Indexed in: CSSCI, CHSSCD, Peking University Core Journals (北大核心)
Impact factor: 0.607
ISSN: 1001-9162
Year, Volume (Issue): 2024, 61(3)