Academic writing is one of the main applications of ChatGPT. This paper focuses on core journal articles in the field of intelligence. Starting from three dimensions (word, sentence, and paragraph), text-processing methods such as part-of-speech tagging and n-gram analysis are used to compare article introductions produced by ChatGPT with those written by humans. Furthermore, determining whether academic content was generated by ChatGPT is treated as a binary classification task: Naive Bayes, Support Vector Machine, and Random Forest algorithms are employed in text classification experiments, and the SHAP method is used to analyze the importance of textual structural features. The study shows that ChatGPT is weak at describing factual information involving specific dates and at referencing policy documents or research reports. The introductions generated by ChatGPT are relatively uniform in length, and its academic writing tends to be more "rule-following" than that of human authors. Plagiarism detection tools typically struggle to identify the originality of content produced by ChatGPT accurately; however, the classification models distinguish ChatGPT-generated introductions more reliably. Average sentence length, lexical diversity, and total text length are the textual structural features that most strongly influence classification results.
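The binary classification setup described above can be sketched with a from-scratch multinomial Naive Bayes over word unigrams. This is a minimal illustration only: the toy snippets and the `gpt`/`human` labels below are invented assumptions, not the paper's actual corpus of journal-article introductions, and the paper's full pipeline also uses SVM, Random Forest, and structural features analyzed with SHAP.

```python
# Minimal sketch of the paper's binary classification task (hypothetical data).
# Multinomial Naive Bayes with Laplace smoothing over unigram counts.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train_nb(docs, labels):
    """Estimate log priors and Laplace-smoothed per-class word log-likelihoods."""
    classes = set(labels)
    vocab = {w for d in docs for w in tokenize(d)}
    prior = {c: math.log(labels.count(c) / len(labels)) for c in classes}
    counts = {c: Counter() for c in classes}
    for d, y in zip(docs, labels):
        counts[y].update(tokenize(d))
    loglik = {}
    for c in classes:
        total = sum(counts[c].values())
        loglik[c] = {w: math.log((counts[c][w] + 1) / (total + len(vocab)))
                     for w in vocab}
    return prior, loglik, vocab

def predict(text, prior, loglik, vocab):
    """Return the class with the highest posterior log-score for the text."""
    scores = {c: prior[c] + sum(loglik[c][w] for w in tokenize(text) if w in vocab)
              for c in prior}
    return max(scores, key=scores.get)

# Hypothetical training snippets: "gpt" = machine-written, "human" = human-written.
# The human examples deliberately contain dates and report references, echoing
# the paper's finding that ChatGPT handles such content poorly.
docs = [
    "in recent years the field has attracted increasing attention",
    "this study aims to provide a comprehensive overview of the topic",
    "on 12 march 2019 the ministry issued a policy report on data governance",
    "the 2021 national survey report documented a sharp rise in submissions",
]
labels = ["gpt", "gpt", "human", "human"]

prior, loglik, vocab = train_nb(docs, labels)
print(predict("the ministry issued a 2019 policy report", prior, loglik, vocab))
```

In the real experiments, a feature vector would also include the structural features the abstract highlights (average sentence length, lexical diversity, total text length), whose contributions can then be inspected per prediction with SHAP.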