Long text semantic similarity calculation combining hybrid feature extraction and deep learning
Text semantic similarity calculation is a crucial task in natural language processing, but current similarity research mostly focuses on short texts rather than long texts. Compared with short texts, long texts are semantically rich, but their semantic information tends to be scattered. To address the scattered semantic information in long texts, a feature extraction method is proposed to extract the main semantic information from a long text. The extracted semantic information is then fed into a BERT pre-trained model using a sliding-window overlap approach to obtain text vector representations. A bidirectional long short-term memory network is then used to model the contextual semantic relationships of the long texts, mapping them into a semantic space, and a linear layer further enhances the model's representation ability. Finally, fine-tuning is performed by maximizing the inner product of similar semantic vectors while minimizing a cross-entropy loss. Experimental results show that this method achieves F1 scores of 0.84 and 0.91 on the CNSE and CNSS datasets, respectively, outperforming the baseline models.
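The sliding-window overlap step can be sketched as follows. This is a minimal illustration of splitting a long token sequence into overlapping windows before encoding each window with BERT; the window size and overlap values shown are illustrative assumptions, not the settings used in the paper.

```python
def sliding_windows(tokens, window_size=512, overlap=128):
    """Split a long token sequence into overlapping windows.

    Consecutive windows share `overlap` tokens so that semantic
    information near window boundaries is not lost. The defaults
    (512 tokens per window, 128-token overlap) are assumptions for
    illustration only.
    """
    if overlap >= window_size:
        raise ValueError("overlap must be smaller than window_size")
    step = window_size - overlap
    windows = []
    for start in range(0, len(tokens), step):
        windows.append(tokens[start:start + window_size])
        if start + window_size >= len(tokens):
            break  # the last window already covers the tail of the text
    return windows
```

Each window would then be encoded separately (e.g., by a BERT encoder), and the per-window vectors passed on to the bidirectional LSTM for contextual modeling.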
long text semantic similarity; feature extraction; BERT pre-training model; semantic space