The need for value alignment in artificial intelligence stems from the autonomy, uncertainty, and risk attributes of AI systems. Value alignment requires analyzing the moral attributes and regulatory role of artificial intelligence agents, as well as the necessity and feasibility of such alignment. To achieve the goal of value alignment, it is necessary to properly handle the relationships between ethical consensus and diverse values, between abstract value rules and the specific application scenarios of artificial intelligence technology, and between humanity's ultimate ethical goals and its short-term value pursuits. On this basis, basic moral principles and ethical bottom lines should be established for the development of artificial intelligence agents: clarifying the design boundaries of such agents, avoiding interference with and harm to human values and rights, preventing AI systems from deviating from human values, and guiding and regulating the direction in which artificial intelligence technology develops.