EvolveNet: Adaptive Self-Supervised Continual Learning without Prior Knowledge
Unsupervised Continual Learning (UCL) refers to the ability to learn over time while retaining previously learned patterns without supervision. Although significant progress has been made in this direction, existing works often assume strong prior knowledge about forthcoming data (e.g., known class boundaries), which may not be obtainable in complex and unpredictable open environments. Motivated by real-world scenarios, this paper proposes a more practical problem setting: unsupervised online continual learning without prior knowledge. This setting is challenging because the data are non-i.i.d. and lack both external supervision and prior knowledge. To address these challenges, we introduce EvolveNet, an adaptive unsupervised continual learning method that extracts and memorizes representations purely from data streams. EvolveNet is built around three main components: an adversarial pseudo-supervised learning loss, a self-supervised forgetting loss, and an online memory update with uniform subset selection. These components are designed to work in synergy and maximize learning performance. We conduct comprehensive experiments with EvolveNet on five public datasets. The results show that EvolveNet outperforms existing algorithms in all settings, achieving significantly improved accuracy on CIFAR-10, CIFAR-100, and TinyImageNet, and performing best on the multimodal datasets Core-50 and iLab-20M for incremental learning. Cross-dataset generalization experiments further demonstrate EvolveNet's robustness. Finally, we open-source the EvolveNet model and core code on GitHub, facilitating progress in unsupervised continual learning and providing a useful tool and platform for the research community.
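The abstract does not detail how the online memory update maintains a uniform subset of the stream. As a point of reference only, reservoir sampling is one standard technique for keeping a fixed-size, uniformly random subset of an unbounded stream, which matches the stated goal; the sketch below is a minimal illustration under that assumption, and the names `MemoryBuffer`, `capacity`, and `update` are hypothetical, not part of EvolveNet's released code.

```python
import random


class MemoryBuffer:
    """Fixed-size replay buffer kept as a uniform random subset of a
    data stream via reservoir sampling (illustrative sketch only; this
    is not EvolveNet's actual implementation)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.samples = []   # stored examples (e.g., images or features)
        self.num_seen = 0   # total stream items observed so far

    def update(self, x) -> None:
        """Offer one stream item; after n items, each item remains
        stored with probability capacity / n."""
        self.num_seen += 1
        if len(self.samples) < self.capacity:
            self.samples.append(x)
        else:
            # Keep the new item with probability capacity / num_seen,
            # evicting a uniformly chosen stored item.
            j = random.randrange(self.num_seen)
            if j < self.capacity:
                self.samples[j] = x
```

Feeding a stream through `update` keeps `samples` uniform over everything seen so far regardless of stream length or ordering, which is why reservoir-style updates are a common choice for replay buffers in online continual learning.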