
Legal Risks and Countermeasures of Personal Information Protection in the Application Scenario of Generative AI

Data has become a new element of property rights and a new industrial foundation. ChatGPT, a generative AI with a certain capacity for reasoning, has created a new paradigm of infringement on personal information rights and interests at three levels: big-data feeding, human data labeling, and information re-output. Although the Personal Information Protection Law, the Cybersecurity Law, and other legal norms have established a series of personal information protection measures, difficulties of interpretation and application remain on the service and application side of generative AI. China therefore needs to encourage the development of AI technology on the premise of safeguarding personal information rights and interests: at the macro level, it should establish a risk-based principle of personal information protection and take "risk prediction, deconstruction, and resolution" as the basic path; at the micro level, it should optimize the "informed consent" rule, strengthen corporate self-governance through the principle of "protection by design", and flexibly adjust the mechanism for pursuing infringement liability.

Personal Information Protection Law; ChatGPT; generative AI; risk control

张淇 (Zhang Qi)


School of Law, Shanxi University of Finance and Economics, Taiyuan 030006, Shanxi, China


2025

Journal of Taiyuan University (Social Science Edition) (太原学院学报(社会科学版))
Taiyuan University (太原大学)

Impact factor: 0.196
ISSN: 1671-5977
Year, Volume (Issue): 2025, 26(1)