Class-Incremental Learning Method Based on Feature Space Augmented Replay and Bias Correction
The problem of catastrophic forgetting arises when a network continually learns new knowledge. Various incremental learning methods have been proposed to solve this problem, and one mainstream approach balances the plasticity and stability of incremental learning by storing a small amount of old data and replaying it. However, storing data from old tasks raises memory limitations and privacy concerns. To address this issue, a class-incremental learning method based on feature space augmented replay and bias correction is proposed to alleviate catastrophic forgetting. First, the mean intermediate-layer feature of each class is stored as its representative prototype, and the low-level feature extraction network is frozen to prevent prototype drift. In the incremental learning stage, the stored prototypes are augmented via geometric translation transformations and replayed to maintain the decision boundaries of previous tasks. Second, bias correction learns classification weights for each task, further correcting the model's classification bias towards new tasks. Experiments on four benchmark datasets show that the proposed method outperforms state-of-the-art algorithms.
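The abstract describes two mechanisms: replaying stored class prototypes after a random geometric translation in feature space, and learning per-task weights to correct the classifier's bias towards new classes. Below is a minimal, self-contained PyTorch sketch of both ideas, assuming illustrative network shapes, a unit-direction noise model for the translation, and one scalar weight per task; these details are assumptions for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

feat_dim, n_old, n_new = 64, 10, 10

# Frozen low-level extractor: keeping it fixed means the stored
# prototypes stay valid in its output space (no prototype "drift").
f_low = nn.Sequential(nn.Linear(32, feat_dim), nn.ReLU())
for p in f_low.parameters():
    p.requires_grad = False

g_high = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU())  # trainable upper layers
classifier = nn.Linear(feat_dim, n_old + n_new)

# One stored prototype per old class: the class-mean intermediate feature
# (random placeholders here; in practice these are computed on old data).
prototypes = torch.randn(n_old, feat_dim)
radius = 0.1  # assumed scale of the random translation

def augmented_replay_batch(k=32):
    """Replay old classes: translate each prototype in a random direction."""
    idx = torch.randint(0, n_old, (k,))
    direction = F.normalize(torch.randn(k, feat_dim), dim=1)
    return prototypes[idx] + radius * direction, idx

# Bias correction: one learnable scalar per task, applied to that task's
# logits to counter the bias towards newly learned classes.
task_weight = nn.Parameter(torch.ones(2))  # [old-task weight, new-task weight]

def corrected_logits(feats):
    logits = classifier(g_high(feats))
    scale = torch.cat([task_weight[0].repeat(n_old),
                       task_weight[1].repeat(n_new)])
    return logits * scale

# One training step: new-task data mixed with augmented prototype replay.
x_new = torch.randn(32, 32)
y_new = torch.randint(n_old, n_old + n_new, (32,))
feat_old, y_old = augmented_replay_batch()
feats = torch.cat([f_low(x_new), feat_old])
labels = torch.cat([y_new, y_old])
loss = F.cross_entropy(corrected_logits(feats), labels)
loss.backward()  # gradients reach g_high, classifier and task_weight only
```

Because the replayed prototypes bypass the frozen extractor and enter at the intermediate layer, old-class supervision costs one feature vector per class rather than stored images, which is the memory and privacy advantage the abstract claims.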

Class-Incremental Learning; Continual Learning; Catastrophic Forgetting; Feature Representation; Feature Enhancement

孙晓鹏 (Sun Xiaopeng), 余璐 (Yu Lu), 徐常胜 (Xu Changsheng)


School of Computer Science and Engineering, Tianjin University of Technology, Tianjin 300382, China

State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China


Pattern Recognition and Artificial Intelligence (模式识别与人工智能)
Sponsors: Chinese Association of Automation; National Research Center for Intelligent Computing Systems; Institute of Intelligent Machines, Chinese Academy of Sciences

Indexed in: CSTPCD; Peking University Core Journals (北大核心)
Impact factor: 0.954
ISSN: 1003-6059
Year, Volume (Issue): 2024, 37(8)