Integrating human rights protections is essential for medical AI governance in China. Using human rights as a cornerstone of medical AI legislation, the EU categorizes AI systems by their potential to infringe on fundamental rights and pursues cross-sectoral, unified oversight with strict product liability, an approach that risks double regulation and inefficiency. The US emphasizes industry self-regulation and relies on specific government agencies for governance, potentially increasing the risk of human rights violations. The UK prioritizes ethical boundaries for medical innovation, focusing on real-world risks rather than the technology itself. China, while advocating responsible AI development and acknowledging the need for risk management in medical AI, grapples with misaligned human rights principles, unclear implementation mechanisms, and insufficient consideration of the particular healthcare context. Human rights, distinct from technological, ethical, or legal approaches, offer a universally recognized source of rights and a shared understanding. China should adopt a human rights framework grounded in objective values to clarify fundamental rights and obligations, prevent excessive regulation, promote the development of healthcare infrastructure, foster shared values, and encourage collaborative governance.

At the individual level, procedural rights should be guaranteed. Transparency regarding AI's risks, interpretability, operation, and regulation is crucial, as is incorporating medical AI usage into informed consent. A fair process for joint decision-making between doctors and AI is necessary, embedding the principle of equality into algorithm design, and mechanisms for reporting unequal treatment should be robust.

At the national level, a process-centric, dynamic regulatory mechanism is key. Precision-based, vertical regulation should be the norm; exceptionally, companies should be permitted to use real-world data instead of clinical trial evidence for medical device approval, subject to mandatory continuous monitoring. Context-specific liability frameworks should be established for medical institutions and AI developers, supplemented by negligence rules and a reversed burden of proof. Legal frameworks should enhance access to data, computing power, and models, and should incorporate certain medical AI technologies into healthcare coverage.

At the societal level, public engagement and self-regulation are paramount. A long-term, multi-stakeholder dialogue mechanism encompassing patients, the public, businesses, scientists, and the government is essential. Leveraging the influence of key medical institutions and AI developers, along with soft law mechanisms such as consistency commitments, can promote self-regulation within the medical field.
Keywords: medical artificial intelligence; human rights; fundamental rights; dynamic regulation; public participation