
The Risks and Governance of Generative Artificial Intelligence: Addressing the "Collingridge Dilemma"

Generative artificial intelligence is advancing rapidly, and its governance inevitably faces the "Collingridge Dilemma". Breaking out of this dilemma requires thinking from the perspective of building a community with a shared future for mankind and advancing human civilization: adhering to ethical, safety, and value principles for AI development, preventing and defusing various risks, and improving the existing governance framework, so as to achieve innovative development of AI that remains safe and controllable. China's AI governance balances security and development, but problems remain, such as incomplete legislation and imperfect mechanisms. On the one hand, ethical values should be embedded throughout the life cycle of generative AI technology through constructive technology assessment and the moralizing design of technology; on the other hand, legislative safeguards should be further strengthened, governance bodies established and improved, and a multi-actor collaborative governance framework centered on administrative regulation built around data, algorithms, and generated content. This is China's governance path for breaking the "Collingridge Dilemma".
With the rapid development of global artificial intelligence (AI) technologies, their profound and tangible impacts on socioeconomic progress and human civilization are undeniable. AI has become a battleground for technological competition among nations and a key indicator of comprehensive national power and competitiveness. The future of humanity hinges on the development and governance of AI, as the risks and challenges it presents are a shared concern of the international community. The "Collingridge Dilemma" highlights the regulatory balancing act between innovation and control, a critical issue for the high-quality and ethical advancement of AI. To break free from this dilemma, there is an urgent need to reach consensus on human ethical values, improve the supervisory governance system, and strike a balance between standardization and development. Firstly, identifying the risks posed by generative AI and determining the areas and methods of governance are prerequisites to overcoming the challenge. Risks extend beyond algorithmic and data issues to include national security, public safety, the social trust system, and employment, necessitating a more comprehensive and forward-looking risk assessment. Secondly, the "Collingridge Dilemma" has both epistemological and axiological dimensions, encompassing profound questions of value orientation. It is imperative to ensure that AI remains conducive to the advancement of human civilization. China must incorporate value considerations into the governance of generative AI, adhere to a people-centered approach, and promote the values of socialism to advocate "intelligence for good". This involves drawing on constructive technology assessment to incorporate values such as fairness, justice, and harmony into algorithm design and the moralization of generated content, thereby upholding the social trust system. Finally, the existing governance framework must be refined to mitigate the potential risks of generative AI. On a macro level: further improve governing bodies, clarifying the principal role of the State Scientific and Technological Commission of the People's Republic of China in fulfilling its management functions by establishing an AI Security Review Committee (or Bureau) responsible for reviewing and supervising AI safety. Specialized institutions under unified leadership are essential for enhancing governance efficiency, reducing costs, and facilitating policy implementation. Strengthen legislative support, expedite the development of AI safety laws and regulations, and build a comprehensive governance mechanism encompassing preventive review, in-process intervention, and post-event punishment. On a micro level: introduce access systems for preventive review, establish new safe-harbor regulations to clarify responsible parties and methods of accountability, and shift the focus of data governance to building data-sharing mechanisms, exploring the development of public data, and compelling private entities to open their data for the public good, setting the stage for a future data-sharing market. Through these macro- and micro-level regulatory measures, AI development can be made safe, trustworthy, and controllable, in line with the common values of peace, development, justice, and the aspiration for goodness, so as to promote the progress of human civilization.

generative artificial intelligence; risk assessment; value orientation; governance framework

Shen Fangjun (沈芳君)


School of "the Belt and Road", Zhejiang International Studies University, Hangzhou 310023, Zhejiang, China


2023 Key Project of the Political and Legal Affairs Commission of the CPC Zhejiang Provincial Committee and the Zhejiang Law Society

2023NA13

2024

Journal of Zhejiang University (Humanities and Social Sciences)
Zhejiang University


Indexed in: CSTPCD; CSSCI; CHSSCD; Peking University Core Journals (北大核心)
Impact factor: 1.431
ISSN:1008-942X
Year, volume (issue): 2024, 54(6)