The Challenges Generative Artificial Intelligence Poses to Data Security and the Criminal Law Response
As generative artificial intelligence is applied across various fields, data security protection faces new challenges. In selecting a path for data security protection, the traditional model anchors rights in the data subject, establishing a data security system spanning criminal law, civil law, and other areas, with informed consent marking the boundary of the data subject's rights. It must be acknowledged, however, that as artificial intelligence develops, this protection model premised on the rights of the data subject increasingly exhibits cognitive and structural difficulties, necessitating a shift in approach. Compared with a model centered on the rights of the data subject, criminal law can proceed from a risk perspective, taking Beck's risk society theory as a theoretical reference. This entails moving from the assumption that the data subject possesses full rationality to an acknowledgment of bounded rationality, so that data security protection focuses more on the prevention and control of data security risks than solely on safeguarding the rights of the data subject. Moreover, as to the substance of data security protection, acts by which generative artificial intelligence infringes data-related legal interests fall into two categories: acts carried out by individuals using or targeting generative AI, and acts "autonomously" taken by generative AI in the course of its evolution and operation. Criminal law should give primary consideration to risk factors and evaluate these two categories of conduct in conjunction with existing regulations.
Keywords: Generative Artificial Intelligence; Risk Prevention; Data Security; Criminal Law Regulation