SPARE: Self-supervised part erasing for ultra-fine-grained visual categorization
This paper presents SPARE, a self-supervised part erasing framework for ultra-fine-grained visual categorization. The key insight of our model is to learn discriminative representations by encoding a self-supervised module that performs random part erasing and predicts the contextual positions of the erased parts. This drives the network to exploit the intrinsic structure of the data, i.e., to understand and recognize the contextual information of objects, thus facilitating more discriminative part-level representations. It also enhances the learning capability of the model by introducing more diversified training part segments with semantic meaning. We demonstrate that our approach achieves strong performance on seven publicly available datasets covering ultra-fine-grained and fine-grained visual categorization tasks. (c) 2022 Elsevier Ltd. All rights reserved.
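To illustrate the core idea described above, the following is a minimal sketch of random part erasing with a contextual-position label: an image is split into a grid of parts, one part is erased at random, and its grid index serves as the self-supervised target the network would be trained to predict. The grid size, zero-filling, and function name are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def erase_random_part(image, grid=4, rng=None):
    """Erase one randomly chosen part from an (H, W, C) image.

    The image is split into a grid x grid layout of parts; the erased
    part's flat grid index is returned as the self-supervised position
    label. Grid size and zero-filling are hypothetical choices for
    illustration only.
    """
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    ph, pw = h // grid, w // grid
    idx = int(rng.integers(grid * grid))  # position label to predict
    row, col = divmod(idx, grid)
    erased = image.copy()
    # zero out the selected part
    erased[row * ph:(row + 1) * ph, col * pw:(col + 1) * pw] = 0
    return erased, idx

# Example: a 224x224 RGB image with a 4x4 part grid
img = np.ones((224, 224, 3), dtype=np.float32)
erased, label = erase_random_part(img, grid=4)
# exactly one 56x56 part is zeroed; label lies in [0, 16)
```

In a training loop, a position-prediction head would receive features of the erased image and be supervised with `label` (e.g., via cross-entropy), encouraging the backbone to encode where each part belongs in the object's layout.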
Keywords: Self-supervised part erasing; Ultra-fine-grained visual categorization; Fine-grained visual categorization; Random part erasing; Weakly-supervised part segmentation