
The Human Rights Risks of Generative Artificial Intelligence and Their Legal Governance

The self-empowering nature of artificial intelligence technology, combined with its weak susceptibility to constraint, has given rise to a potent digital power that risks sliding toward an "intelligent Leviathan." If the development and application of AI technology are not brought under the rule of law, digital technology will become alienated, digital power will expand, and fundamental human rights such as equality and freedom will be undermined. The emergence of generative AI has upended the developmental direction and underlying logic of traditional AI, marking a leap from task-specific systems toward artificial general intelligence. This shift, however, has intensified the uncertainty of AI risks, posing unprecedented challenges to the ex-ante and decentralized governance models relied upon in the past. From a human rights perspective, and in light of the current state of generative AI technology, a holistic institutional design for AI governance is especially urgent and critical. Measures such as strengthening ethical norms, regulating development through legislation, and implementing agile governance should be adopted to refine the details of AI governance. Further, from a long-term perspective, constructing a theoretical framework and institutional system for digital human rights suited to China's national conditions is the necessary path to preventing the undue expansion of digital power.
The Human Rights Risks of Generative Artificial Intelligence and Legal Governance
The inherent tension between the self-empowering nature of artificial intelligence (AI) and its need for regulation has given rise to a potent force: digital power. Shielded by digital capital and the "black box" nature of algorithms, this power permeates all facets of society, subtly shifting the balance of authority from human to technological control and raising the specter of an "intelligent Leviathan." The advent of generative AI signifies a paradigm shift from task-specific, decision-making AI to a more versatile and general-purpose form. While this evolution unlocks unprecedented creative and autonomous potential, it also disrupts the established trajectory and fundamental logic of AI development. This shift presents significant challenges in aligning AI with ethical principles, legal frameworks, and societal values. Reforming and innovating AI governance strategies is thus paramount. Failure to bring AI development and deployment under the rule of law risks technological alienation, unchecked expansion of digital power, and the erosion of fundamental human rights such as equality and freedom. Two primary models currently guide AI governance: ex-ante governance based on risk prevention and decentralized governance based on specific elements and scenarios. The former prioritizes comprehensive planning and risk mitigation, while the latter focuses on precise control within defined application contexts. However, the rapid evolution of AI, characterized by increasing uncertainty and the emergence of generative AI with its generalized capabilities, cross-modal functionality, and emergent intelligence, challenges these traditional governance approaches. Element-based, scenario-focused governance and risk prevention-centric strategies may prove insufficient in addressing the unique challenges presented by this new generation of AI. To effectively counter the potential "Leviathanization" of intelligent technologies, critically reassessing and recalibrating current AI governance frameworks are essential. A human rights-centered approach, carefully considering the current state of generative AI, should guide the development of flexible, adaptive, and holistic institutional designs. Strengthening ethical guidelines, enacting comprehensive legislation, and implementing agile governance mechanisms are crucial steps towards a more nuanced and effective approach to AI governance. These measures will help ensure technological advancement goes hand in hand with respecting and protecting fundamental human rights. Looking ahead, establishing a robust theoretical framework and institutional system for digital human rights, tailored to the specific context of China, is crucial to prevent the undue expansion of digital power and ensure a future where AI serves humanity.

generative artificial intelligence; ChatGPT; technological power; digital human rights; agile governance

Wang Bin, Qiang Yunhui


School of Law, Nankai University

Human Rights Research Center, Nankai University (National Human Rights Education and Research Base)

University of Amsterdam

Generative artificial intelligence; ChatGPT; technological power; digital human rights; agile governance

2024

人权法学 (Journal of Human Rights Law)
Southwest University of Political Science and Law


ISSN:2097-0749
Year, Vol. (Issue): 2024, 3(6)