Regulation, theory, and practical challenges for AI ethics: a historiographical review and examination of problem domains
AI entities face ethical risks in areas such as decision-making, ownership, and privacy rights, as well as in the ethical-capacity dimension of "pseudo-subject behaviors" such as autonomous content learning and autonomous action. To regulate development and prevent ethical risks, various institutions, organizations, and enterprises have established guidelines and standards. Scholars abroad are exploring the feasibility of AI ethics, including the ethical complexities of AI as an agent, the behavioral forms of AI, and the moral rights and responsibilities of AI. Domestic research follows three main paths: metaphysically analyzing the status of AI, proposing frameworks to address AI ethical issues, and discussing ethical issues in specific AI application areas. A historiographical review and examination of the problem domain of AI ethics from the perspectives of risk regulation, theoretical expansion, and practical challenges calls for a focus on three major frontiers: (1) advancing a systematic framework of AI ethical theory and principles; (2) establishing an open problem-solving approach that facilitates communication between "theory and practice" and "global and local" viewpoints; and (3) forming new approaches to research on the ethical forms of AI.