Multi-complementary and unlabeled learning for arbitrary losses and models
A weakly-supervised learning framework called complementary-label learning has been proposed recently, where each sample is equipped with a single complementary label that denotes one of the classes the sample does not belong to. However, existing complementary-label learning methods cannot learn from easily accessible unlabeled samples or from samples with multiple complementary labels, both of which are more informative. In this paper, to remove these limitations, we propose a novel multi-complementary and unlabeled learning framework that allows unbiased estimation of the classification risk from samples with any number of complementary labels and from unlabeled samples, for arbitrary loss functions and models. We first give an unbiased estimator of the classification risk from samples with multiple complementary labels, and then further improve the estimator by incorporating unlabeled samples into the risk formulation. The estimation error bounds show that the proposed methods achieve the optimal parametric convergence rate. We also propose a risk correction scheme for alleviating over-fitting caused by negative empirical risk. Finally, experiments on both linear and deep models show the effectiveness of our proposed methods. (c) 2021 Elsevier Ltd. All rights reserved.
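To make the two core ideas of the abstract concrete, the sketch below illustrates (a) the classical unbiased risk rewrite for the single-complementary-label special case that this paper generalizes (using the standard identity R(f) = E[ Σ_k ℓ(f(X), k) − (K−1) ℓ(f(X), Ȳ) ] under uniformly drawn complementary labels), and (b) a simple non-negative correction that clips negative partial risks at zero, in the spirit of the paper's risk correction scheme. All function names are hypothetical and the decomposition is simplified for illustration; it is not the paper's exact estimator for multiple complementary labels.

```python
import numpy as np

def softmax_ce(logits, k):
    """Cross-entropy loss of a logit vector with respect to class k
    (numerically stabilized by subtracting the max logit)."""
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[k]

def unbiased_cl_risk(logits_batch, comp_labels, num_classes):
    """Unbiased empirical risk from single complementary labels
    (the special case this paper generalizes):
    R_hat = mean_i [ sum_k loss(f(x_i), k) - (K-1) * loss(f(x_i), ybar_i) ].
    Note the second term can drive the estimate negative."""
    risks = []
    for logits, ybar in zip(logits_batch, comp_labels):
        loss_all = sum(softmax_ce(logits, k) for k in range(num_classes))
        risks.append(loss_all - (num_classes - 1) * softmax_ce(logits, ybar))
    return float(np.mean(risks))

def corrected_risk(partial_risks):
    """Risk-correction sketch: clip each (possibly negative) partial
    empirical risk at zero before summing, to alleviate the over-fitting
    the abstract attributes to negative empirical risk. Illustrative only;
    the paper's exact per-class decomposition differs."""
    return float(sum(max(0.0, r) for r in partial_risks))
```

As a sanity check, for a uniform predictor (all-zero logits) every class incurs loss log K, so the unbiased estimate reduces to K·log K − (K−1)·log K = log K, matching the ordinary risk of a uniform guess.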