Generative AI trains continuously on massive unlabeled and synthetic data, relies on machine-learning technologies such as deep neural networks to gradually develop autonomous behavioral capabilities, and produces novel outputs. As its applications spread, it is profoundly changing the way people interact, and the resource-intensive nature of model development is driving the formation of complex value chains. The technological leap of generative AI at the operational stage raises regulatory challenges such as copyright infringement, data bias, excessive energy consumption, unpredictable risks, the spread of disinformation, and difficulty in determining damages. The EU AIA responds by bringing generative AI within the category of 'AI systems' through the transitional notion of 'general-purpose AI systems', with 'general-purpose AI models' as the conceptual center. On the input side, it sets compliance obligations based on both data quantity and data quality; on the processing side, it introduces the 'high-impact capabilities' criterion for judging the degree of autonomy and embeds 'AI with systemic risk' into the risk classification and grading system; on the output side, it designs obligations of detection, disclosure, and transparency to curb the spread of disinformation; and on the deployment side, it devotes a dedicated article to the allocation of responsibilities along the value chain. Although the EU legislation makes efforts to address the risks of generative AI, there remains room for improvement in areas such as the certainty of abstract definitions, methods for measuring the effectiveness of data training, the distinction between advanced and small models, the determination of systemic damage, and the impact of API interfaces and open-source models on value distribution.