Text Retrieval Sorting Method Based on RoBERTa Model
In response to the growing demand for rapid retrieval and sharing of information, the pre-trained Robustly Optimized BERT Approach (RoBERTa) model is used to train on the existing data. Based on the Transformer self-attention language model, text embedding vectors are generated and used as context representations of the full text. The key search terms are vectorized, and the Euclidean distance between the query vector and the other vectors is computed. Quick sort is then used to find and display the most similar vectors, thereby meeting the application requirements of content-based and contextual semantic search.
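The following is a minimal sketch of the retrieval pipeline described above, assuming the Hugging Face transformers library and the public "roberta-base" checkpoint; the pooling strategy, function names, and top-k parameter are illustrative assumptions rather than the authors' exact implementation.

```python
# Hypothetical sketch: RoBERTa embeddings + Euclidean-distance ranking via quick sort.
# Model name, pooling strategy, and helper names are assumptions for illustration.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base")
model.eval()

def embed(texts):
    """Encode texts into fixed-size vectors by mean-pooling RoBERTa's last hidden states."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state          # (batch, seq_len, dim)
    mask = batch["attention_mask"].unsqueeze(-1)           # mask out padding tokens
    pooled = (hidden * mask).sum(1) / mask.sum(1)          # mean pooling
    return pooled.numpy()

def quicksort_by_distance(pairs):
    """Quick sort (index, distance) pairs in ascending order of distance."""
    if len(pairs) <= 1:
        return pairs
    pivot = pairs[len(pairs) // 2][1]
    smaller = [p for p in pairs if p[1] < pivot]
    equal   = [p for p in pairs if p[1] == pivot]
    larger  = [p for p in pairs if p[1] > pivot]
    return quicksort_by_distance(smaller) + equal + quicksort_by_distance(larger)

def search(query, corpus, top_k=5):
    """Rank corpus texts by Euclidean distance between their embeddings and the query embedding."""
    corpus_vecs = embed(corpus)
    query_vec = embed([query])[0]
    dists = np.linalg.norm(corpus_vecs - query_vec, axis=1)   # Euclidean distance
    ranked = quicksort_by_distance(list(enumerate(dists)))
    return [(corpus[i], float(d)) for i, d in ranked[:top_k]]
```

In this sketch the query and the candidate texts share one embedding function, so the smallest Euclidean distance corresponds to the most semantically similar document, which is then returned first by the quick-sort ranking.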