
Textual Data Augmentation Blending TF-IDF and a Pre-Trained Model

To address data augmentation in natural language processing, a novel textual data augmentation (TDA) method is proposed by blending the TF-IDF algorithm with the pre-trained language model BERT. First, the traditional random token-selection strategy is improved: the TF-IDF algorithm extracts the non-core words of a sample as the target tokens for replacement, avoiding the rewriting of tokens that play a key role in the semantics. Second, because existing methods depend on the input samples when generating new data, which limits the diversity of the augmented samples, the BERT model is blended in to predict replacements for the target tokens, and the predicted results replace those tokens. Experimental results demonstrate that the proposed TDA algorithm improves the performance of deep learning models by 5.8% and outperforms existing TDA algorithms.
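The two-stage procedure described in the abstract (TF-IDF-based non-core token selection, then model-predicted replacement) can be sketched as follows. The TF-IDF scoring is standard; `predict_replacement` is a hypothetical stand-in for BERT's masked-language-model prediction (the paper uses BERT here), and the toy corpus and synonym table are illustrative assumptions, not the paper's data.

```python
# Sketch of TF-IDF-guided token replacement for text data augmentation.
# Assumption: `predict_replacement` stands in for a BERT masked-LM query.
import math
from collections import Counter

def tfidf_scores(doc_tokens, corpus):
    """Score each token of one document against a tokenized corpus."""
    n_docs = len(corpus)
    tf = Counter(doc_tokens)
    scores = {}
    for tok, count in tf.items():
        df = sum(1 for d in corpus if tok in d)          # document frequency
        idf = math.log((1 + n_docs) / (1 + df)) + 1      # smoothed idf
        scores[tok] = (count / len(doc_tokens)) * idf
    return scores

def select_non_core(doc_tokens, corpus, k=2):
    """Pick the k lowest-TF-IDF ('non-core') tokens as replacement targets."""
    scores = tfidf_scores(doc_tokens, corpus)
    return sorted(doc_tokens, key=lambda t: scores[t])[:k]

def augment(doc_tokens, corpus, predict_replacement, k=2):
    """Replace non-core tokens with model-predicted tokens, keep core ones."""
    targets = set(select_non_core(doc_tokens, corpus, k))
    return [predict_replacement(t) if t in targets else t for t in doc_tokens]

corpus = [
    "the movie was very good".split(),
    "the plot was very weak".split(),
    "good acting and a strong plot".split(),
]
doc = corpus[0]
# Stand-in predictor; the paper queries BERT's masked-LM at this point.
synonyms = {"the": "this", "was": "seemed", "very": "quite"}
print(" ".join(augment(doc, corpus, lambda t: synonyms.get(t, t), k=2)))
```

Because only low-TF-IDF tokens are rewritten, a document-specific word such as "movie" (which scores highest in the toy corpus) is never touched, which is exactly the semantic-preservation property the abstract claims for the selection step.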

Natural language processing; Deep learning; Textual data augmentation; Pre-trained language model

HU Rongsheng, CHE Wengang, ZHANG Long, DAI Pangda (胡荣笙、车文刚、张龙、戴庞达)


Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, Yunnan, China


Funding: National Natural Science Foundation of China (62102395); Science and Technology Major Project of Anhui Province (202003a05020020)

2024

Computer Simulation (计算机仿真)
The 17th Research Institute of China Aerospace Science and Industry Corporation

CSTPCD
Impact factor: 0.518
ISSN:1006-9348
Year, volume (issue): 2024, 41(5)