
PCP-tuning: Personalized Continuous Prompt Tuning for Few-Shot Learning

Pre-trained language models have achieved remarkable performance in few-shot learning with the rise of "prompt learning", where the key problem is how to construct a suitable prompt for each example; the sample and the prompt are combined into a new input to the language model (LM). A series of prompt construction methods have been proposed in recent years: some construct discrete prompts and others construct continuous prompts, but both typically apply a single unified prompt to all examples in a dataset. However, experimental results show that it is hard to find one unified prompt that works for every example in a task; a given prompt helps the LM assign the correct class to some samples of a downstream classification task while leading it to wrong predictions on others. To this end, we propose a personalized continuous prompt tuning (PCP-tuning) method for few-shot learning, which generates a personalized continuous prompt tailored to the semantics of each sample. We further propose two calibration techniques that control the distribution of the generated continuous prompts to obtain better downstream performance. Extensive experiments on ten benchmark tasks demonstrate the superior performance of the proposed method.
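
The abstract describes the approach only at a high level. As a rough illustration of the core idea, the following minimal PyTorch sketch shows how a small generator network could map each sample's pooled encoding to a few personalized continuous prompt vectors, which are then prepended to the sample's token embeddings before they enter the (frozen) language model. All names here (PromptGenerator, prompt_length, build_prompted_inputs) are illustrative assumptions rather than the authors' implementation, and the paper's two calibration techniques are not shown.

import torch
import torch.nn as nn

class PromptGenerator(nn.Module):
    """Maps a pooled sample encoding to `prompt_length` continuous prompt vectors."""
    def __init__(self, hidden_size: int, prompt_length: int = 4):
        super().__init__()
        self.prompt_length = prompt_length
        self.mlp = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.Tanh(),
            nn.Linear(hidden_size, prompt_length * hidden_size),
        )

    def forward(self, sample_encoding: torch.Tensor) -> torch.Tensor:
        # sample_encoding: (batch, hidden) -> prompts: (batch, prompt_length, hidden)
        batch_size = sample_encoding.size(0)
        return self.mlp(sample_encoding).view(batch_size, self.prompt_length, -1)

def build_prompted_inputs(token_embeddings: torch.Tensor,
                          sample_encoding: torch.Tensor,
                          generator: PromptGenerator) -> torch.Tensor:
    """Prepend the per-sample prompts to the token embeddings fed to the LM."""
    prompts = generator(sample_encoding)                   # (B, P, H)
    return torch.cat([prompts, token_embeddings], dim=1)   # (B, P + T, H)

if __name__ == "__main__":
    batch, seq_len, hidden = 2, 16, 768
    token_embeddings = torch.randn(batch, seq_len, hidden)  # embeddings of the input tokens
    sample_encoding = token_embeddings.mean(dim=1)           # stand-in for a pooled sentence encoding
    generator = PromptGenerator(hidden_size=hidden, prompt_length=4)
    inputs_embeds = build_prompted_inputs(token_embeddings, sample_encoding, generator)
    print(inputs_embeds.shape)  # torch.Size([2, 20, 768])

In a real setup, the pooled encoding would presumably come from the pre-trained model itself, the combined embeddings would be passed back into it (e.g. through an inputs_embeds-style argument), and only the small generator would be trained while the language model stays frozen.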

natural language processing; large-scale pre-trained models; prompt learning; text classification

刘汀、蔡少填、陈小军、章秦


Department of Computer Science and Technology, Shenzhen University, Shenzhen, Guangdong 518071, China


National Natural Science Foundation of China (92270122); Natural Science Foundation of Guangdong Province, General Program (2023A1515012584); Shenzhen Basic Research Program, General Project (JCYJ20210324093000002)

2024

Journal of Xinjiang University (Natural Science Edition in Chinese and English)
Xinjiang University


CSTPCD
Impact factor: 0.13
ISSN: 2096-7675
Year, Volume (Issue): 2024, 41(1)