With the continuous development of deep learning, pre-trained language models have achieved remarkable results in natural language processing. Automatic text summarization, an important research direction in natural language processing, also benefits from large-scale pre-trained language models; in particular, such models can be used to generate abstractive summaries that accurately reflect the main ideas of the original text. However, current research still suffers from several problems: insufficient understanding of the semantic information of the source document, inability to effectively represent polysemous words, repeated content in the generated summaries, and weak logical coherence. To alleviate these problems, this paper proposes a new abstractive text summarization model, TextRank-BERT-PGN-Coverage (TB-PC), based on the BERT pre-trained language model. The model adopts the classical encoder-decoder framework, using pre-trained weights to encode the document and generate the summary. Experiments are conducted on the CNN/Daily Mail dataset, and the results show that the proposed model achieves better performance than existing approaches in this field.
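The following is a minimal sketch of how the components named above (TextRank-style sentence filtering, a BERT encoder, and a pointer-generator decoder with a coverage mechanism) might be composed. The function and class names, the simplified overlap-based stand-in for TextRank, the single decoding step, and all dimensions are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of a TB-PC-style pipeline (all names and sizes are assumptions).
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer


def textrank_filter(sentences, k=3):
    """Simplified stand-in for TextRank: rank sentences by lexical overlap
    with the rest of the document and keep the top-k (assumption)."""
    def overlap(a, b):
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / (len(wa | wb) or 1)
    scores = [sum(overlap(s, t) for t in sentences if t is not s) for s in sentences]
    top = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)[:k]
    return [sentences[i] for i in sorted(top)]  # keep original sentence order


class PGNCoverageDecoder(nn.Module):
    """One decoding step of a pointer-generator decoder with coverage (sketch)."""
    def __init__(self, hidden=768, vocab=30522):  # 30522 = bert-base-uncased vocab
        super().__init__()
        self.cell = nn.LSTMCell(hidden, hidden)
        self.attn = nn.Linear(hidden * 2 + 1, 1)   # scores encoder states + coverage
        self.p_gen = nn.Linear(hidden * 3, 1)      # generate-vs-copy switch
        self.out = nn.Linear(hidden * 2, vocab)

    def forward(self, y_prev, state, enc_states, coverage):
        h, c = self.cell(y_prev, state)
        # Attention over encoder states, conditioned on the coverage vector
        T = enc_states.size(1)
        query = h.unsqueeze(1).expand(-1, T, -1)
        e = self.attn(torch.cat([enc_states, query, coverage.unsqueeze(-1)], dim=-1))
        a = torch.softmax(e.squeeze(-1), dim=-1)                     # attention weights
        context = torch.bmm(a.unsqueeze(1), enc_states).squeeze(1)   # context vector
        coverage = coverage + a            # accumulated attention discourages repetition
        p_gen = torch.sigmoid(self.p_gen(torch.cat([h, context, y_prev], dim=-1)))
        vocab_dist = torch.softmax(self.out(torch.cat([h, context], dim=-1)), dim=-1)
        # Copy distribution (1 - p_gen) * a over source tokens is omitted for brevity.
        return p_gen * vocab_dist, a, (h, c), coverage


if __name__ == "__main__":
    tok = BertTokenizer.from_pretrained("bert-base-uncased")
    bert = BertModel.from_pretrained("bert-base-uncased")
    doc = ["Sentence one of the article.", "A second, less relevant sentence.",
           "Sentence three repeats sentence one of the article."]
    inputs = tok(" ".join(textrank_filter(doc, k=2)), return_tensors="pt", truncation=True)
    enc_states = bert(**inputs).last_hidden_state        # BERT encoder output
    B, T, H = enc_states.shape
    dec = PGNCoverageDecoder(hidden=H)
    state = (torch.zeros(B, H), torch.zeros(B, H))
    coverage = torch.zeros(B, T)
    probs, attn, state, coverage = dec(torch.zeros(B, H), state, enc_states, coverage)
    print(probs.shape, coverage.shape)
```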