Sequential Cooperative Distillation for Imbalanced Multi-Task Learning

Multi-task learning (MTL) can boost the performance of individual tasks through mutual learning among multiple related tasks. However, when these tasks have diverse complexities, their corresponding losses in the MTL objective inevitably compete with each other, ultimately biasing the learning towards simple tasks rather than complex ones. To address this imbalanced learning problem, we propose a novel MTL method that can equip multiple existing deep MTL model architectures with a sequential cooperative distillation (SCD) module. Specifically, we first introduce an efficient mechanism to measure the similarity between tasks, and group similar tasks into the same block so that they can learn cooperatively from each other. Based on this, the grouped task blocks are sorted in a queue that determines the learning sequence of the tasks according to their complexities, estimated with a defined performance indicator. Finally, distillation between the individual task-specific models and the MTL model is performed block by block, from complex to simple, achieving a balance between competition and cooperation among the tasks. Extensive experiments demonstrate that our method is significantly more competitive than state-of-the-art methods, ranking first in average performance across multiple datasets with improvements of 12.95% and 3.72% over OMTL and MTLKD, respectively.
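To make the described pipeline concrete, the following is a minimal, hypothetical PyTorch sketch of the SCD workflow: measure pairwise task similarity, group similar tasks into blocks, queue the blocks by estimated complexity, and distill block by block. The gradient-cosine similarity, the mean-loss complexity indicator, the grouping threshold, and the KL-based distillation loss are all illustrative assumptions; the paper's actual similarity mechanism and performance indicator are not reproduced here.

```python
import torch
import torch.nn.functional as F

def group_tasks_by_similarity(task_grads, threshold=0.5):
    """Greedily group tasks whose gradient cosine similarity exceeds a
    threshold; each group becomes one distillation block. (Assumption:
    the paper's similarity mechanism may differ.)"""
    unassigned = set(range(len(task_grads)))
    blocks = []
    while unassigned:
        seed = unassigned.pop()
        block = [seed]
        for t in sorted(unassigned):  # sorted() copies, so removal is safe
            sim = F.cosine_similarity(task_grads[seed], task_grads[t], dim=0)
            if sim.item() > threshold:
                block.append(t)
                unassigned.remove(t)
        blocks.append(block)
    return blocks

def sort_blocks_by_complexity(blocks, task_losses):
    """Order blocks complex-to-simple, using mean single-task loss as a
    stand-in for the paper's performance indicator."""
    return sorted(blocks,
                  key=lambda b: sum(task_losses[t] for t in b) / len(b),
                  reverse=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Standard temperature-scaled KL distillation between a task-specific
    teacher and the shared MTL student."""
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)

if __name__ == "__main__":
    torch.manual_seed(0)
    # Toy setup: 4 tasks represented by random "gradient" vectors and losses.
    grads = [torch.randn(16) for _ in range(4)]
    losses = [1.2, 0.4, 1.1, 0.3]
    queue = sort_blocks_by_complexity(
        group_tasks_by_similarity(grads, threshold=0.2), losses)
    print("distillation queue (complex -> simple):", queue)
    # Training would then iterate over the queue, applying distillation_loss
    # between each block's task-specific teachers and the shared MTL model.
```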

multi-task learning (MTL); imbalanced learning; similarity estimation; knowledge distillation; distillation queue

Quan Feng, Jia-Yu Yao, Ming-Kun Xie, Sheng-Jun Huang, Song-Can Chen

College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China

MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China

2024

Journal of Computer Science and Technology
China Computer Federation

CSTPCD
Impact Factor: 0.432
ISSN:1000-9000
Year, Volume (Issue): 2024, 39(5)