
Privacy risks induced by generative large language models and governance paths

Large language models (LLMs) are driving a new revolution in artificial intelligence. Because LLMs are platform-based, depend on massive amounts of data, involve frequent user interaction, and are vulnerable to attack, their privacy-leakage risks are especially concerning. This paper focuses on the sources and internal mechanisms of privacy leakage caused by LLMs. After reviewing domestic and international experience in governing LLM privacy risks, it proposes a five-dimensional governance framework covering policies, standards, data, technology, and ecology. Finally, it looks ahead to the new privacy-leakage risks that LLMs may face under development trends such as multimodality, agents, embodied intelligence, and edge intelligence.

Keywords: large language model; privacy; data leakage; governance

李亚玲、蔡京京、柏洁明


Development Strategy and Cooperation Center, Zhejiang Lab, Hangzhou 311121, Zhejiang, China

Zhejiang Provincial Pilot Laboratory of Philosophy and Social Sciences, Hangzhou 311121, Zhejiang, China

Intelligent Social Governance Laboratory, Zhejiang Lab, Hangzhou 311121, Zhejiang, China


Funding: National Key Research and Development Program of China (2022YFC3303103); Soft Science Research Program of the Zhejiang Provincial Department of Science and Technology (2024C35035)

2024

Chinese Journal of Intelligent Science and Technology (智能科学与技术学报)


CSTPCD
ISSN:
Year, Volume (Issue): 2024, 6(3)