
Pre-trained models for natural language processing: A survey

Recently, the emergence of pre-trained models (PTMs) has brought natural language processing (NLP) to a new era. In this survey, we provide a comprehensive review of PTMs for NLP. We first briefly introduce language representation learning and its research progress. Then we systematically categorize existing PTMs based on a taxonomy from four different perspectives. Next, we describe how to adapt the knowledge of PTMs to downstream tasks. Finally, we outline some potential directions of PTMs for future research. This survey is purposed to be a hands-on guide for understanding, using, and developing PTMs for various NLP tasks.

deep learning, neural network, natural language processing, pre-trained model, distributed representation, word embedding, self-supervised learning, language modelling

QIU XiPeng, SUN TianXiang, XU YiGe, SHAO YunFan, DAI Ning, HUANG XuanJing


School of Computer Science, Fudan University, Shanghai 200433, China

Shanghai Key Laboratory of Intelligent Information Processing, Shanghai 200433, China

This work was supported by the National Natural Science Foundation of China (Grant Nos. 61751201 and 61672162) and the Shanghai Municipal Science and Technology Major Project (Grant No. 2018SHZDZX01).

2020

SCIENCE CHINA Technological Sciences
Chinese Academy of Sciences

Indexed in: CSTPCD, CSCD, SCI, EI
Impact factor: 1.056
ISSN:1674-7321
Year, Volume (Issue): 2020, 63(10)