To address the abductive natural language inference task (aNLI), in which correct hypotheses exhibit a certain degree of independence and contribute inconsistently to reasoning, a balanced positive-sample softmax focal loss is designed. This loss function adjusts the influence of the correct-hypothesis probability and balances the loss weights across samples. In addition, in aNLI the correlation between positive and negative samples is often reflected in specific phrases, which are essential for judging the plausibility of a sample. Therefore, a multi-level attention model is designed to achieve deep attention to phrase-level features through multi-level refinement of the attention mechanism. The resulting model is named the Multi-level Attention with Balanced Loss (MAT-Ball) model. Experimental results show that MAT-Ball achieves the highest performance on the RoBERTa-large pre-trained model, with ACC and AUC improved by about 1% and 0.5%, respectively, compared to publicly available code. The efficiency and robustness of the proposed method are further demonstrated by comparing performance under low-resource conditions and in terms of loss convergence.
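The abstract does not give the exact formulation of the balanced positive-sample softmax focal loss; the following is a minimal sketch of a standard softmax focal loss with a class-balancing weight, where the function name `balanced_focal_loss` and the parameters `alpha` (positive-class weight) and `gamma` (focusing exponent) are illustrative assumptions, not the paper's definitions:

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def balanced_focal_loss(logits, target, alpha=0.5, gamma=2.0):
    """Softmax focal loss with a class-balancing weight (illustrative sketch).

    `alpha` re-weights the positive class (index 1) against the negative
    class, and the factor (1 - p_t)**gamma down-weights easy, confidently
    classified samples so training focuses on hard ones.
    """
    probs = softmax(logits)
    p_t = probs[target]                      # probability of the true class
    weight = alpha if target == 1 else (1.0 - alpha)
    return -weight * (1.0 - p_t) ** gamma * math.log(p_t)
```

With `gamma=0` and `alpha=0.5` this reduces to half the standard cross-entropy, so the focusing term is the only change relative to the usual softmax loss.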
Keywords: natural language reasoning; abductive reasoning; pre-training model; attention mechanism