Value Alignment, Human Value Consensus, and Its Survival Rationality
The reason Value Alignment has become a central issue in the current development of AI stems directly from human anxiety about the risks of AI development. In fact, it comprises a minimum program (preventing worse conditions) and a maximum program (attaining better conditions), and its overall purpose is to enhance human well-being and make human life better. In other words, the question to be addressed is "What is good?" Value Alignment highlights, as its prerequisite, the question of how human value consensus is possible, and the move towards it must increasingly emphasize the status of AI as a proposed subject. In the AI era, from the perspective of the explanatory power, inclusiveness, and plasticity of the human value consensus narrative, the paradigm of the common values of humanity is clearly superior to the paradigm of universal values. Value Alignment and the issue of human value consensus that it highlights require us to comprehend and adhere to the essence of the common values of all human beings, i.e., human existential rationality: to take coexistence as the premise, to take common sense as the basis, to move towards inter-subjectivity, to apply horizontal rationality, and to construct public reason. The rationality of survival to which human practice appeals under conditions of uncertainty requires not only efficient intelligence but also the wisdom to achieve the good.
Keywords: value alignment; the common values of humanity; survival rationality; Artificial Intelligence; wisdom