With the rapid development of artificial intelligence (AI) technology, general large models (GLMs) have become a significant research focus in the AI field. GLMs typically possess an extensive number of parameters, are trained on massive datasets, and exhibit robust learning and reasoning capabilities. These models demonstrate outstanding performance across a variety of tasks, including natural language processing, image recognition, and code generation. This paper reviews the evolution of GLMs and their key technological milestones, from early rule-based systems and traditional machine learning models to the rise of deep learning, the introduction of the Transformer architecture, and the advances of the GPT series and other GLMs worldwide. Despite this significant progress, GLMs face numerous challenges, such as high computational resource demands, data bias, ethical issues, and limited model interpretability and transparency. This paper analyzes these challenges and explores five key directions for the future development of GLMs: model optimization, multimodal learning, emotionally intelligent models, data- and knowledge-dual-driven models, and ethical and societal impacts. By pursuing these strategies, GLMs are expected to achieve broader and deeper applications, driving continuous progress in AI technology.
Keywords: general large models; artificial intelligence; deep learning; Transformer architecture; GPT series