Which path should be adopted to regulate Artificial Intelligence (AI) activities is the central question of AI legislation. The risk-management approach suffers from the difficulty of assessing and categorizing risks and from allowing damage to occur before intervening, so it should not be regarded as an ideal choice for AI legislation. Unlike earlier scientific and technological activities, AI activities have a dual character: they are both professional technology activities and enabling technology activities. An AI law that takes AI activities as its object of regulation should therefore not be guided by a single theory, but should adopt the dual positioning of a law of science and a law of application. In its dimension as a law of science, the AI law should respect the autonomy of science and help internalize scientific ethics into AI research activities, while breaking down institutional barriers and designing facilitative rules to support the development of AI technology. In its positioning as a law of application, the AI law should attend to the functional alienation caused by technology-enabled scenarios. On the one hand, it should rely on the abstract rights-and-obligations tools of the legal system, in particular the creation of new types of rights, to build an elastic normative framework that responds to the different orderings of values across application scenarios; on the other hand, it should follow the path of experimentalist governance, using regulatory sandboxes, enabling legislation, and similar instruments to dynamically adjust regulatory programs and meet the flexible governance needs of AI-enabled application activities.