Class-Incremental Learning Method Based on Feature Space Augmented Replay and Bias Correction
Catastrophic forgetting arises when a network continuously learns new knowledge. Various incremental learning methods have been proposed to solve this problem; one mainstream approach balances the plasticity and stability of incremental learning by storing a small amount of old data and replaying it. However, storing data from old tasks can lead to memory limitations and privacy breaches. To address this issue, a class-incremental learning method based on feature space augmented replay and bias correction is proposed to alleviate catastrophic forgetting. First, the mean intermediate-layer feature of each class is stored as its representative prototype, and the low-level feature extraction network is frozen to prevent prototype drift. In the incremental learning stage, the stored prototypes are augmented and replayed through a geometric translation transformation to maintain the decision boundaries of previous tasks. Second, a bias correction module learns separate classification weights for each task, further correcting the model's classification bias towards new tasks. Experiments on four benchmark datasets show that the proposed method outperforms state-of-the-art algorithms.
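The abstract describes storing a mean-feature prototype per class and replaying it with a geometric translation transformation. The sketch below illustrates this idea in a minimal form, assuming the translation is realized as adding small random offset vectors to each stored prototype (the function names, the `radius` parameter, and the Gaussian form of the offsets are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def compute_prototypes(features, labels):
    """Store the mean intermediate-layer feature of each class as its prototype."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def augment_prototypes(prototypes, num_samples=5, radius=0.1, rng=None):
    """Replay prototypes with a translation-style augmentation: each stored
    prototype is shifted by small random vectors so the replayed features
    cover a neighbourhood of the class mean rather than a single point,
    helping preserve the old tasks' decision boundaries."""
    rng = np.random.default_rng() if rng is None else rng
    feats, labels = [], []
    for c, proto in prototypes.items():
        offsets = radius * rng.standard_normal((num_samples, proto.shape[0]))
        feats.append(proto + offsets)          # translated copies of the prototype
        labels.extend([c] * num_samples)
    return np.concatenate(feats), np.array(labels)
```

The replayed `(features, labels)` pairs would then be mixed with the new task's data when training the unified classifier, without retaining any raw old-task samples.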
Class-Incremental Learning; Continual Learning; Catastrophic Forgetting; Feature Representation; Feature Enhancement