
The Logic of Large Language Models' Language Security Risks from the Perspective of a Holistic Approach to National Security

Language is the thread that binds a nation's political and cultural community and the foundation of its cultural institutions. In the theoretical framework of a holistic approach to national security, cultural security functions as a safeguarding element of overall national security, while language security is subordinate to the security of representational cultural symbols within cultural security and serves as an important indicator for assessing the state of cultural security. In the age of artificial intelligence, the architects of Large Language Models (LLMs) can, through the supervision of training data and the use of fine-tuning data, imbue models with specific collective values and ideological attributes, which then permeate society through the models' massive interactions with it. This poses two kinds of risk to language security. The direct risk is that the dominance of natural language in LLMs triggers disputes over cultural sovereignty: major economies around the world are racing to develop localized LLMs, intensifying "language securitization". The indirect risk is that LLMs with ideological attributes subtly erode mainstream social values and core value ideology, thereby weakening national identification with the standard common language and causing "language insecurity". Language security risks accumulate covertly over the long term and are difficult to remedy once they manifest. Therefore, we need to adopt a bottom-line mindset and guard against risks at their earliest stage, strengthen the top-level design of AI, actively promote the development and application of a national common-language model, improve laws and regulations in the field of AI, remain vigilant against the risk of value bias, and enhance value-alignment capabilities, so as to effectively safeguard national language security and cultural sovereignty.

Keywords: a holistic approach to national security; language security; cultural security; artificial intelligence security; Large Language Models (LLMs)

Liu Jiani (刘佳妮), Wang Yuehe (王月禾)

School of International Studies, Renmin University of China, Beijing 100872

Foreign Languages and Their Teaching (外语与外语教学)
Dalian University of Foreign Languages

Indexed in: CSTPCD; CSSCI; CHSSCD; PKU Core Journals (北大核心)
Impact factor: 2.036
ISSN: 1004-6038
Year (Issue): 2024 (5)