
Overview of Controlled Text Generation Based on Pre-trained Models

Natural language generation (NLG), a branch of artificial intelligence, has seen significant progress in recent years, particularly with the development of pre-trained language models (PLMs). NLG aims to generate coherent and meaningful text from various input sources such as text, images, tables, and knowledge bases. Researchers have enhanced the performance of PLMs through methods such as architectural expansion, fine-tuning, and prompt learning. However, NLG still faces challenges in handling unstructured inputs and in generating text for low-resource languages, especially in settings that lack sufficient training data. This study surveys the latest developments in NLG, its application prospects, and the challenges it faces. Through a literature analysis, we propose strategies for improving the performance of PLMs and outline future research directions. Our findings indicate that, despite these limitations, NLG has already shown potential in areas such as content creation, automated news reporting, and conversational systems. We conclude that, with continued technological advances, NLG will play an increasingly significant role in natural language processing and other related fields of artificial intelligence.

artificial intelligence; natural language generation; controlled text generation; pre-trained language models; prompt learning

周强伟、施水才、王洪俊


School of Computer Science, Beijing Information Science and Technology University, Beijing 100101, China

TRS Information Technology Co., Ltd., Beijing 100096, China



软件导刊 (Software Guide)
Hubei Information Society


Impact factor: 0.524
ISSN:1672-7800
Year, volume (issue): 2024, 23(4)