Research Progress of Large Language Models in Mathematical Reasoning
This review comprehensively outlines the current state, underlying mechanisms, and trends in the application of Large Language Models (LLMs) to mathematical reasoning, and provides references for future research in this area. Drawing on 122 publications related to mathematical reasoning with LLMs, it systematically describes the different types of mathematical reasoning problems and their associated datasets. It examines the principles, application value, and limitations of various techniques from two perspectives: strategies for enhancing model reasoning capabilities and methods of Chain-of-Thought (CoT) prompting. Through qualitative analysis, the review gives a thorough overview of research progress in mathematical reasoning with LLMs and suggests potential directions for future work, although the rapid pace of development in large-model research means it may not cover all pertinent studies. Methods such as CoT prompting, fine-tuning, the use of programming languages and other external tools, and verification mechanisms can effectively enhance the mathematical reasoning capabilities of LLMs; in particular, CoT prompting has become a major focus of current LLM research. Future studies could further strengthen the reasoning capabilities of LLMs and develop new methods for solving mathematical problems.
Large Language Model (LLM); mathematical reasoning; Chain-of-Thought (CoT); GPT-4; fine-tuning