
PromptWE: A prompt-learning fact-checking model with fused explanations

To address the problem of how generated explanations can better assist veracity judgment in fact-checking, this paper proposes PromptWE, a prompt-learning fact-checking model that fuses explanations. In the generation stage, the model produces easier-to-understand explanations through evidence filtering and summarization; in the classification stage, it fuses these explanations into the templates of a prompt-learning model, combining them with the knowledge stored in the pre-trained model to improve the accuracy of veracity classification. The model's F1 score exceeds that of the SOTA model by 5% on two datasets, showing that the generated explanations improve the model's ability to distinguish true from false information. Furthermore, to demonstrate the importance of high-quality explanations for the classification task, this paper fuses the expert evidence from the datasets directly into the prompt templates as explanations for prompt-learning training; this raises the F1 score by 16% over fusing the model-generated explanations, proving that high-quality explanations can effectively elicit the capability of general language models on fact-checking tasks.
PromptWE: A fact-checking method based on prompt learning with explanations
[Objective] In the contemporary "We Media" era, the simplification of news production and dissemination has turned every individual into a news producer and disseminator, and a large amount of false information follows. Despite the increasingly abundant information on the Internet, the regulation of false information remains relatively weak. Fact-checking is therefore becoming increasingly important, yet traditional work tends to simply output label predictions without explaining the reason for each label, and the explanations generated in the few studies that provide them are relatively primitive and hard to comprehend. Fact-checking demands a substantial amount of common sense, reasoning, and background knowledge about claims. Prompt learning can further exploit the common sense and reasoning ability of pre-trained language models, and it can also incorporate the relevant information and additional details contained in the explanation of a claim. In all, it is essential both to generate high-quality, fluent explanations and to further leverage the generated explanations to improve classification performance through prompt learning.

[Methods] To address this multifaceted challenge, we propose the PromptWE (Prompt With Evidence) model, which uses the prompt-learning paradigm to integrate automatically generated explanations with claims. We not only provide natural-language explanations that enhance the explainability of the classification result but also further improve model performance by fusing the explanation into prompt learning. For every claim, the model performs hierarchical evidence distillation on many related news reports to obtain relevant evidence, and then uses the BART-CNN model to summarize these incoherent pieces of evidence into one fluent explanation. It then integrates the claim and the explanation into six self-designed templates for prompt learning. Finally, we ensemble the results from the different templates to predict the veracity of the claim. Moreover, we replace the generated explanation with the professional explanation from the dataset to investigate the impact of expert evidence on prompt-learning models.

[Results] Our method achieves good results on two fact-checking datasets, Liar-RAW and RAWFC: its F1 score is at least 5% higher than that of the state-of-the-art model on both datasets. We also find that ensemble learning over multiple templates effectively improves the model's F1 score. For explanation generation, the model attains a higher ROUGE-2 score than the previous model. After integrating professional evidence into the prompt templates, the model achieves significant improvement in classification on both datasets, with a maximum improvement of 15% over the results of the PromptWE model. We also find that, in the multi-class classification task, the model with integrated professional evidence exhibits significant gains on the more challenging categories, such as half-true and mostly-true.

[Conclusions] The experiments indicate that incorporating extracted explanations, as supplementary background knowledge about claims, together with the common sense and reasoning abilities learned by pre-trained models, into prompt-learning templates can further enhance classification performance for claim veracity. Moreover, sequentially applying hierarchical evidence extraction and text summarization makes explanations more concise, coherent, and comprehensible. The explanation extracted from unrelated evidence is also better suited for integration into prompt-learning methods. The further improvement in classification performance after incorporating professional evidence underscores that this approach can swiftly identify accurate and informative prompt templates, facilitating more efficient subsequent use of general large models such as ChatGPT.
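The classification stage described in the abstract, fusing a claim and its generated explanation into several prompt templates and ensembling the per-template predictions, can be sketched in pure Python. The template wordings, the label set, and the soft-vote averaging below are illustrative assumptions for exposition; they are not the six templates or the verbalizer from the paper, and the masked-LM scoring step is stubbed out as plain probability lists:

```python
# Sketch of PromptWE-style explanation fusion and template ensembling.
# Template texts and labels are illustrative placeholders; the paper
# uses six self-designed templates with a pre-trained masked LM.

TEMPLATES = [
    "Claim: {claim} Explanation: {expl} The claim is [MASK].",
    "{expl} Given this explanation, the statement '{claim}' is [MASK].",
    "Based on the evidence summary '{expl}', '{claim}' sounds [MASK].",
]
LABELS = ["true", "half-true", "false"]  # illustrative label set


def fill_template(template: str, claim: str, expl: str) -> str:
    """Fuse the claim and its generated explanation into one prompt."""
    return template.format(claim=claim, expl=expl)


def ensemble(per_template_probs: list[list[float]]) -> str:
    """Soft-vote: average label probabilities across all templates
    and return the label with the highest mean probability."""
    n = len(per_template_probs)
    avg = [sum(p[i] for p in per_template_probs) / n
           for i in range(len(LABELS))]
    return LABELS[avg.index(max(avg))]


if __name__ == "__main__":
    prompt = fill_template(
        TEMPLATES[0],
        "The moon is made of cheese",
        "Lunar rock samples show the moon is made of rock.",
    )
    print(prompt)
    # Probabilities a masked-LM verbalizer might assign per template:
    probs = [[0.1, 0.2, 0.7], [0.2, 0.2, 0.6], [0.3, 0.3, 0.4]]
    print(ensemble(probs))  # -> "false"
```

In a full pipeline, each filled prompt would be scored by the masked language model, and the verbalizer would map the [MASK] token's vocabulary distribution onto the label set before averaging.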

fact-checking; prompt learning; explanation generation

张翔然、李璐旸


Polytechnic Institute, Zhejiang University, Hangzhou 310015, China

School of Information Science and Technology, Beijing Foreign Studies University, Beijing 100089, China


"Double First-Class" Construction Research Project of Beijing Foreign Studies University

SYL2020ZX006

2024

Journal of Tsinghua University (Science and Technology)
Tsinghua University


Indexed in: CSTPCD; Peking University Core Journals
Impact factor: 0.586
ISSN: 1000-0054
Year, Volume (Issue): 2024, 64(5)