Math Word Problems Solving Model Based on Analogical Learning
Current research on Math Word Problem (MWP) solving based on analogical learning mostly selects samples according to semantic similarity or shallow logic. These studies suffer from insufficient sample matching and from restricting sample selection to the dataset itself. To address these issues, this study proposes a novel MWP with Analogical Learning (MWP-AL) model. The model performs analogical learning of MWPs from two perspectives. From the perspective of text encoding, samples are selected by filtering along three dimensions: cosine similarity, the top node of the equation tree, and tree depth. This method selects samples from both semantic and deep-logic perspectives, so the selected samples match the original question better. From the perspective of the solution equation, samples are constructed by logically modifying different types of equations. This method is not limited to selecting samples from a dataset and has strong generalization ability. Analogical learning over the two kinds of samples is performed by computing a cross-entropy loss function. Experimental results show that adding MWP-AL to two baseline models improves accuracy on the English dataset MathQA and the Chinese dataset Math23K by 1.8, 2.5, 2.8, and 1.3 percentage points, respectively. The model also achieves higher accuracy than the other baseline models.
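The three-dimension filtering described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dictionary fields (`embedding`, `tree`), the nested-tuple equation-tree representation, the helper names, and the similarity threshold are all assumptions introduced for clarity.

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def tree_depth(tree):
    # Equation tree as nested tuples: (operator, child, child) or a leaf token.
    if not isinstance(tree, tuple):
        return 1
    return 1 + max(tree_depth(child) for child in tree[1:])

def select_samples(query, candidates, sim_threshold=0.8):
    """Keep only candidates that match the query on all three dimensions:
    cosine similarity of text encodings, the equation tree's top node,
    and the equation tree's depth."""
    selected = []
    for cand in candidates:
        same_top = cand["tree"][0] == query["tree"][0]
        same_depth = tree_depth(cand["tree"]) == tree_depth(query["tree"])
        similar = cosine_similarity(query["embedding"],
                                    cand["embedding"]) >= sim_threshold
        if similar and same_top and same_depth:
            selected.append(cand)
    return selected
```

Requiring all three conditions jointly is what combines the semantic view (cosine similarity of encodings) with the deep-logic view (structure of the solution equation) when screening candidate samples.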
Keywords: analogical learning; Math Word Problems (MWP) solving; semantic similarity; sample screening; sample construction