Long-tailed Visual Recognition Method Based on Multi-classifier Graded Distillation
To enhance model performance in long-tailed visual recognition, this paper proposes a multi-classifier graded distillation framework. The framework comprises two components: rotation self-supervised pre-training and multi-classifier distillation. Rotation self-supervised pre-training treats each image equally by predicting image rotations, mitigating the impact of long-tailed labels on the model. Multi-classifier distillation then systematically transfers knowledge from the teacher model to the student model through three specifically optimized classifiers. Extensive experiments are conducted on open-source long-tailed image recognition datasets, with comparisons against existing methods. The results demonstrate that the proposed method achieves notable improvements in long-tailed visual recognition.
Keywords: knowledge distillation; long-tailed distribution; image recognition; deep learning model
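The rotation pretext task described in the abstract can be sketched as follows. This is a minimal illustration of how each image yields four equally weighted training samples (one per rotation), independent of its original class label; the function name and toy data are hypothetical, not the paper's implementation:

```python
import numpy as np

def rotation_pretext_batch(images):
    """For each input image, produce all four 90-degree rotations
    together with rotation-class labels (0, 1, 2, 3 for
    0/90/180/270 degrees). Every image contributes the same four
    samples, so head and tail classes are treated equally."""
    rotated, labels = [], []
    for img in images:
        for k in range(4):
            rotated.append(np.rot90(img, k))  # rotate k * 90 degrees
            labels.append(k)                  # pretext label = rotation index
    return np.stack(rotated), np.array(labels)

# Toy usage: two 4x4 "images" stand in for real inputs.
imgs = [np.arange(16).reshape(4, 4), np.ones((4, 4))]
x, y = rotation_pretext_batch(imgs)
# x contains 8 samples (2 images x 4 rotations); y is in {0, 1, 2, 3}.
```

A network pre-trained to predict these rotation labels learns label-agnostic features before the distillation stage, which is the mechanism the abstract attributes to this component.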