
The Paradigm of Ethical Governance of Artificial Intelligence: From Value Alignment to Value Symbiosis

The current dominant paradigm in the ethical governance of artificial intelligence is Value Alignment, which aims to ensure that machine values are consistent with human values. The Value Alignment paradigm has mainly adopted representationalist and behaviourist AI approaches, but these approaches struggle to accurately capture and encode complex human values because of the challenge posed by the commonsense knowledge problem. To solve this problem, technological solutions based on embodied-enactive AI need to be introduced, enabling machines to grasp relevance in the world and to generate values autonomously from the bottom up. However, if such autonomously generated machine values are hostile to humans, they may pose an existential risk to humanity. Given this, this paper proposes an alternative paradigm of Value Symbiosis, which aims to achieve a harmonious symbiosis between machine and human values. It consists of two AI design principles: mutual benefit in survival interests and mutual recognition in values.

Keywords: Human-machine alignment; Existential risk; Embodied-enactive AI; Mutual recognition; The commonsense knowledge problem

Xia Yonghong (夏永红)


Faculty of Arts and Sciences, Beijing Normal University, Zhuhai, Guangdong 519087, China


2025

Journal of Dialectics of Nature (自然辩证法通讯)
Graduate University of the Chinese Academy of Sciences


Indexed in: Peking University Core Journals (北大核心)
Impact factor: 0.374
ISSN: 1000-0763
Year, Volume (Issue): 2025, 47(1)