医疗人工智能治理中的人权保障

Human Rights Protection in Medical AI Governance

Integrating human rights protections is essential for medical AI governance in China. Using human rights as a cornerstone of medical AI legislation, the EU categorizes AI systems according to their potential to infringe fundamental rights and pursues cross-sectoral, unified oversight with strict product liability, an approach that risks double regulation and inefficiency. The US emphasizes industry self-regulation and relies on specific government agencies for governance, which potentially increases the risk of human rights violations. The UK prioritizes ethical boundaries for medical innovation, focusing on real-world risks rather than the technology itself. China advocates responsible AI development and acknowledges the need for risk management in medical AI, but still grapples with a misaligned positioning of human rights principles, unclear implementation mechanisms, and insufficient attention to the particularities of the healthcare context. Human rights, distinct from purely technological, ethical, or legal approaches, offer a universally recognized source of rights and a basis for shared understanding. China should adopt a human rights framework grounded in an objective order of values to clarify fundamental rights and obligations, prevent excessive regulation, promote healthcare infrastructure development, foster shared values, and encourage collaborative governance. At the individual level, procedural rights should be guaranteed: transparency regarding the risks, interpretability, operation, and regulation of AI is crucial, as is bringing the use of medical AI within the scope of informed consent; a fair process for joint decision-making between doctors and AI is necessary, with the principle of equality embedded in algorithm design; and mechanisms for reporting unequal treatment should be robust. At the national level, a process-centric, dynamic regulatory mechanism is key, with precise, vertical regulation as the norm. Exceptionally, companies should be permitted to use real-world data instead of clinical trial evidence for medical device approval, subject to mandatory continuous monitoring. Context-specific liability based on presumed fault, with a reversed burden of proof, should be established for medical institutions and medical AI system manufacturers. Legal frameworks should enhance the availability of data, computing power, and models, and incorporate certain medical AI technologies into healthcare coverage. At the societal level, public engagement and self-regulation are paramount: a long-term, whole-chain, multi-stakeholder dialogue mechanism encompassing patients, the public, businesses, scientists, and the government is essential, and leveraging the influence of key medical institutions and AI developers, together with soft-law mechanisms such as consistency commitments, can promote self-regulation within the medical field.

medical artificial intelligence; human rights; fundamental rights; dynamic regulation; public participation

石佳友、李晶晶 (Shi Jiayou, Li Jingjing)


中国人民大学法学院 (Renmin University of China Law School)


2024

人权法学 (Journal of Human Rights Law)
西南政法大学 (Southwest University of Political Science and Law)


ISSN: 2097-0749
Year, Volume (Issue): 2024, 3(6)