Prompt learning aims to use prompt templates to narrow the gap between a language model's pre-training tasks and its downstream tasks. The difficulty lies in the design of the prompt templates. To address this, a new method is proposed that optimizes continuous prompts through automatically searched discrete prompts during the construction of prompt templates. The automatic prompt search is trained on the masked language model, the pre-training task of Bidirectional Encoder Representations from Transformers (BERT), while continuous prompt optimization trains the mapping tensor, in continuous space, of the discrete prompts output by the automatic search, with the prompt templates trained according to the loss function. Experiments show that on the public SuperGLUE benchmark, prompt-learning-based BERT achieves significant improvements in both accuracy and F1 value over the original BERT model.
Prompt learning enhances the understanding ability of BERT
Prompt learning aims to reduce the gap between the pre-training tasks and the downstream tasks of a language model by means of prompt templates. The difficulty lies in the design of the prompt templates. This paper proposes a new method for optimizing continuous prompts by automatically searching discrete prompts in the process of constructing prompt templates. The automatic prompt search is trained on the masked language model, the pre-training task of Bidirectional Encoder Representations from Transformers (BERT). The continuous prompt optimization trains the mapping tensor, in continuous space, of the discrete prompts output by the automatic search, and the prompt templates are trained according to the loss function. Experiments on the public SuperGLUE benchmark show that the prompt-learning-based BERT achieves significant improvements in accuracy and F1 value compared with the original BERT model.
prompt learning; Bidirectional Encoder Representations from Transformers; natural language processing; continuous prompt optimization; masked language model
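The continuous prompt optimization described in the abstract can be pictured with a short sketch: the continuous prompt is initialized from the embeddings of an automatically searched discrete prompt and then tuned against BERT's masked-language-model loss while the backbone stays frozen. This is only a minimal illustration under assumed tooling (PyTorch and Hugging Face `transformers` with `bert-base-uncased`); the prompt text, verbalizer token, and the helper `train_step` are hypothetical placeholders, not the paper's released code or hyperparameters.

```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
for p in model.parameters():
    p.requires_grad = False  # the pre-trained backbone stays frozen

# A discrete prompt assumed to come from the automatic search (placeholder text).
discrete_prompt = "it was [MASK] ."
prompt_ids = tokenizer(discrete_prompt, add_special_tokens=False,
                       return_tensors="pt").input_ids            # (1, P)

embeddings = model.get_input_embeddings()
# Continuous prompt: a trainable tensor initialized from the discrete prompt's embeddings.
continuous_prompt = torch.nn.Parameter(embeddings(prompt_ids).detach().clone())  # (1, P, H)

optimizer = torch.optim.Adam([continuous_prompt], lr=1e-3)

def train_step(sentence: str, verbalizer_token: str) -> float:
    """Append the continuous prompt to the sentence and optimize it so that
    BERT predicts the verbalizer token at the [MASK] position (MLM loss)."""
    enc = tokenizer(sentence, return_tensors="pt")
    sent_embeds = embeddings(enc.input_ids)                       # (1, S, H)
    inputs = torch.cat([sent_embeds, continuous_prompt], dim=1)   # (1, S+P, H)

    # Locate [MASK] inside the prompt part of the concatenated sequence.
    mask_offset = (prompt_ids[0] == tokenizer.mask_token_id).nonzero().item()
    labels = torch.full((1, inputs.size(1)), -100)                # -100 = ignored positions
    labels[0, sent_embeds.size(1) + mask_offset] = tokenizer.convert_tokens_to_ids(verbalizer_token)

    loss = model(inputs_embeds=inputs, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

print(train_step("the movie was wonderful", "great"))
```

Only the continuous prompt tensor receives gradients here, which matches the abstract's description of training the mapping of the searched discrete prompt in continuous space rather than fine-tuning BERT itself.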