
Research on the evaluation of basic logic ability of large-scale pre-trained language models
For four types of basic logical reasoning problems, namely quantity problems, set relationships, quantifier problems, and common-sense reasoning, we construct few-shot learning sample templates and use them to automatically generate and extend the datasets, covering 11 logical reasoning subtasks. Two few-shot learning methods, in-context learning and prompt tuning, are used to test the logical reasoning ability of GPT-Neo-1.3B, GPT-J-6B, GPT-3-Curie, and GPT-3-Davinci along three dimensions: model, test method, and task. The experimental results show that the GPT-3 models are relatively strong on quantity, quantifier, and common-sense reasoning problems, while GPT-Neo and GPT-J have an advantage on set-relationship problems. Compared with in-context learning, prompt tuning of the pre-trained models significantly improves prediction ability.
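The in-context learning setting described above can be illustrated with a minimal sketch. The template below is hypothetical (the paper's actual templates are not shown here): k labeled demonstrations are concatenated ahead of an unanswered query, and the resulting prompt is sent to the language model, which is expected to complete the final answer slot.

```python
# Minimal sketch of few-shot in-context learning prompt construction.
# The Q/A template and the demonstration examples are illustrative
# assumptions, not the paper's actual quantity-problem templates.

def build_icl_prompt(examples, query):
    """Concatenate k demonstration (question, answer) pairs and one
    unanswered query into a single few-shot prompt string."""
    lines = []
    for question, answer in examples:
        lines.append(f"Q: {question}\nA: {answer}")
    # The query is left without an answer; the model completes it.
    lines.append(f"Q: {query}\nA:")
    return "\n\n".join(lines)

# Two demonstrations for a quantity-reasoning subtask (hypothetical data).
demos = [
    ("Tom has 3 apples and buys 2 more. How many apples does he have?", "5"),
    ("A box holds 4 pens. How many pens do two boxes hold?", "8"),
]
prompt = build_icl_prompt(
    demos, "Anna has 7 books and gives away 3. How many remain?"
)
print(prompt)
```

Prompt tuning differs in that, instead of hand-written demonstrations, a small number of trainable prompt embeddings are prepended to the input and optimized while the pre-trained model's weights stay frozen.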

natural language processing; pre-trained language models; in-context learning; prompt tuning; few-shot learning

Ni Ruikang, Xiao Da, Gao Peng


School of Cyberspace Security, Qufu Normal University, Qufu 273165, Shandong, China

School of Cyberspace Security, Beijing University of Posts and Telecommunications, Beijing 100876, China


China Postdoctoral Science Foundation; Natural Science Foundation of Shandong Province; Qufu Normal University Research Fund

2023M732022; ZR2021QF061; 167/602801

2024

Journal of Qufu Normal University (Natural Science)
Qufu Normal University, Shandong, China

Impact factor: 0.299
ISSN:1001-5337
Year, volume (issue): 2024, 50(3)