Bias and Regulation in the Dissemination of Generative Artificial Intelligence: A Case Study of ChatGPT
The emergence of ChatGPT represents a significant breakthrough in the technology of human information dissemination. With the strong generalization capabilities achieved through pre-training on large-scale models, ChatGPT can provide contextually appropriate responses to the input it receives. However, as a generative artificial intelligence that relies on probabilistic inference, ChatGPT is not immune to inherent flaws. From the perspective of its internal reasoning, a model pre-trained on human textual knowledge tends to replicate societal biases and shortcomings, which can be further amplified during dissemination and may marginalize underrepresented groups. In addition, external environmental factors, such as capital forces and political stances, can also influence ChatGPT. To ensure the healthy development of generative AI, bias must be prevented at both the training-data and model-design stages. Furthermore, concerted efforts by industry stakeholders, leveraging institutional advantages and enhancing users' technical literacy, are crucial for guiding its progress in a positive direction.