Humanistic Considerations in Value Alignment for Large-Scale Artificial Intelligence Models
Artificial intelligence (AI) technologies continue to iterate, with models such as ChatGPT and Sora advancing toward general AI. To better calibrate large AI models, countries have begun exploring technical pathways for value alignment. Value alignment is no easy task, however, given the significant differences in value standards across human societies. How can alignment be achieved? What should the standard be? Current approaches to value alignment rely mostly on technocratic "cybernetics," but this may introduce new ethical risks and hasten the arrival of the "singularity." It is therefore necessary for large-model alignment algorithms to break away from the traditional "subject-object dichotomy" and transcend the technocratic mindset in order to address the unique challenges posed by the alignment of moral values. This pursuit seeks a dynamic balance between technology and the humanities, and aims to construct a "symbiotic relationship" between humans and machines from a techno-humanist perspective, achieving harmonious development between humanity and AI technology.