Innate approaches to bias and conflict management in AI development and their knowledge representation
The development of AI technology is characterized by oligopolistic dominance and hierarchical differentiation. Differences in the geographical, cultural, and educational backgrounds of developers, as well as the social, cultural, and political attributes of training data, are exacerbating knowledge and ethical monopolies, thereby amplifying cultural biases and value conflicts. Countering this requires the synchronized linkage of ethics and technology and their collaborative development. The innate approach to ethical governance rests on both philosophical and technological foundations. First, it takes ethical principles as the logical starting point of AI rather than as mere evaluation criteria; creating AI with moral agency, for example, is an effective strategy for preventing and resolving biases and conflicts. The pretraining-finetuning technical paradigm and the finetuning dataset form the technological basis of the innate approach. Second, using ontology as the basis for the structured representation of ethical knowledge and for semantic reasoning, and designing technical pathways and ethical datasets for further finetuning large models, provides a methodology and roadmap for ethically optimizing AI to construct moral agency. An example of the ontological knowledge representation of Chinese ethics demonstrates how innate approaches to managing biases and conflicts in AI are possible.
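The ontology-based pipeline sketched above can be made concrete. The following is a minimal illustrative sketch, not the paper's actual implementation: ethical concepts are stored as (subject, relation, object) triples, ancestry is inferred by transitive closure over an `is_a` relation, and one inferred concept is flattened into an instruction-style finetuning record. All class names (`Ren`, `ConfucianVirtue`, etc.) and the record schema are assumptions chosen for illustration.

```python
# Illustrative sketch: ontology-style triples for a fragment of Chinese ethics,
# simple subsumption reasoning, and conversion to a finetuning-style record.
# Concept names and schema are hypothetical, not from the paper.

TRIPLES = [
    ("Ren", "is_a", "ConfucianVirtue"),   # benevolence
    ("Yi",  "is_a", "ConfucianVirtue"),   # righteousness
    ("Li",  "is_a", "ConfucianVirtue"),   # propriety
    ("ConfucianVirtue", "is_a", "EthicalConcept"),
    ("EthicalConcept",  "is_a", "Concept"),
]

def superclasses(concept, triples=TRIPLES):
    """All ancestors of `concept` reachable via is_a (transitive closure)."""
    parents = {o for s, r, o in triples if s == concept and r == "is_a"}
    closure = set(parents)
    for p in parents:
        closure |= superclasses(p, triples)
    return closure

def to_finetuning_record(concept):
    """Flatten a concept and its inferred ancestry into an instruction pair."""
    ancestry = " -> ".join(sorted(superclasses(concept)))
    return {
        "instruction": f"Classify the ethical concept '{concept}'.",
        "output": f"{concept} belongs to: {ancestry}",
    }

print(superclasses("Ren"))
# -> {'ConfucianVirtue', 'EthicalConcept', 'Concept'}
print(to_finetuning_record("Ren")["instruction"])
```

In practice such a representation would use a standard ontology language (e.g. OWL) and a reasoner rather than hand-rolled triples; the sketch only shows how structured ethical knowledge can be both reasoned over and serialized into a finetuning dataset.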