Since the emergence of recommendation systems, the development of recommendation algorithms has been constrained by limited data. To reduce the impact of data sparsity and make better use of unrated data, text-based recommendation models built on neural networks have been proposed in succession. However, mainstream convolutional and recurrent neural networks have clear disadvantages in understanding text semantics and capturing long-distance dependencies. To better explore the deep latent features between users and items and further improve recommendation quality, a sequential recommendation method based on RoBERTa and a Graph-enhanced Transformer (RGT) is proposed. The model incorporates textual review data: a pre-trained RoBERTa model first captures the semantic features of words in the review text, thereby modeling the user's personalized interests. Next, based on the historical interaction information between users and items, a graph attention network reflecting the temporal characteristics of item associations is constructed. Using the graph-enhanced Transformer approach, the item feature representations learned by the graph model are fed sequentially into the Transformer encoding layer. Finally, the resulting output vectors, together with the previously captured semantic features and the computed global representation of the item association graph, are passed to a fully connected layer to capture the user's global interest preferences and predict item ratings. Experimental results on three real Amazon public datasets demonstrate that the proposed model significantly improves the Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) compared with several classical text-based recommendation models such as DeepFM and ConvMF, with maximum improvements of 4.7% and 5.3%, respectively, over the best-performing baseline.
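The scoring pipeline summarized above (review semantics, graph-attention refinement of item representations, and a fully connected prediction head) can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the embedding size, the dot-product attention form, the mean-pooled global graph representation, and the random stand-ins for RoBERTa outputs and learned item embeddings are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # embedding size (illustrative choice, not from the paper)

def attention_aggregate(node, neighbors):
    """One graph-attention step: weight neighbor features by
    softmax-normalized dot-product scores against the target node."""
    scores = neighbors @ node
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ neighbors

# Stand-ins for learned representations (random values for illustration):
review_semantics = rng.normal(size=D)   # plays the role of RoBERTa review features
item_seq = rng.normal(size=(5, D))      # item embeddings from the association graph

# Refine each item's representation against the other items in the sequence,
# a simplified proxy for the graph attention + Transformer encoding layers.
refined = np.stack([
    attention_aggregate(item_seq[i], np.delete(item_seq, i, axis=0))
    for i in range(len(item_seq))
])

# Global representation of the item association graph (mean pooling assumed).
graph_global = refined.mean(axis=0)

# Prediction head: concatenate semantic and graph features, project to a rating.
w = rng.normal(size=2 * D)  # hypothetical learned weights
rating = float(np.concatenate([review_semantics, graph_global]) @ w)
print(rating)
```

In the actual RGT model these components are trained end to end; the sketch only shows how the three feature sources flow into a single scalar rating prediction.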