Design and implementation of an online programming experiment platform integrated with AI large language models
[Objective] In the era of rapid technological advancement, computer science has become a cornerstone of higher education, and programming skills are crucial for students to solve problems and innovate in this field. However, traditional online programming platforms are limited by inadequate feedback, disregard for code style and standards, and insufficient attention to code readability and efficiency, which can hinder students' learning outcomes and development. This paper therefore designs and implements an online programming experiment platform integrated with artificial intelligence (AI) large language models, specifically Zhipu Qingyan GLM4 and OpenAI's GPT4. [Methods] Using WebSocket technology, the platform enables real-time communication between clients and servers, providing real-time code suggestions, program security scanning, and optimization of judging results. Integrating large language models into a collaborative human-machine programming workflow significantly improves the efficiency and quality of student programming. In the real-time code suggestion phase, the large language models provide problem analysis, syntax checking, error detection, intelligent coding suggestions, test-case generation, and code explanations. At this stage, the system not only checks for compilation errors but also gives feedback on code style, execution efficiency, and readability. In the program security scanning phase, the models accurately identify security vulnerabilities and potential risks in the code, enhancing server security, improving the efficiency of security checks, and reducing the server's security burden, while also teaching students the importance of software safety. In the results analysis and optimization phase, the large language models perform in-depth error analysis or offer optimization suggestions based on the judge servers' results. [Results] This study sampled 334 problems covering the entire range of a 
first-year programming curriculum. A quantitative accuracy analysis of the code generated by GLM4 and GPT4 showed that both models performed excellently. Notably, when prompts included the problem information, the judging-language standard, test data, and the judge server's results, GPT4 achieved an accuracy rate of 97.9%, slightly higher than that of GLM4. The paper also discusses the potential risks of large language model misuse and the system-design measures taken to mitigate them, ensuring that students retain the ability to think independently and innovate with AI assistance. In response to the profound impact of AI on higher education, this research explores establishing a student-centered, capability-oriented, multi-faceted evaluation system for programming courses supported by large language models. [Conclusions] This system shifts the focus from traditional outcome-based evaluation to evaluation that fosters comprehensive skill and process development. Through the real-time analysis, diagnostics, and optimization suggestions provided by AI large language models, teaching effectiveness is significantly enhanced. Moreover, students' interaction with the system is strengthened, facilitating a deeper understanding of programming concepts and improving problem-solving skills. The integrated AI-model platform developed in this project effectively boosts students' programming skills and innovative thinking, promotes autonomous learning, and confirms the value of AI models in programming education, offering new perspectives and strategies for applying AI in higher education technology.
Keywords: large language models; programming experiment platform; program design; artificial intelligence