
DLP: towards active defense against backdoor attacks with decoupled learning process

Deep learning models are well known to be susceptible to backdoor attacks, where the attacker only needs to provide a tampered dataset into which triggers have been injected. Models trained on this dataset passively implant the backdoor, and triggers on the input can mislead them during testing. Our study shows that a model exhibits different learning behaviors on the clean and poisoned subsets during training. Based on this observation, we propose a general training pipeline to actively defend against backdoor attacks. Benign models can be trained from the unreliable dataset by decoupling the learning process into three stages, i.e., supervised learning, active unlearning, and active semi-supervised fine-tuning. The effectiveness of our approach has been demonstrated in numerous experiments across various backdoor attacks and datasets.
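A minimal sketch of how such a three-stage decoupled pipeline could be organized is given below, assuming a PyTorch setup. The loss-based heuristic for splitting clean from suspicious samples, the gradient-ascent unlearning step, the pseudo-labeling scheme, and all hyperparameters are illustrative assumptions, not the exact procedure from the paper.

```python
# Hypothetical sketch: supervised learning -> active unlearning -> semi-supervised
# fine-tuning on an untrusted (possibly poisoned) dataset.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)

# Synthetic stand-in for an untrusted image dataset (CIFAR-like shapes).
xs = torch.randn(512, 3, 32, 32)
ys = torch.randint(0, 10, (512,))
loader = DataLoader(TensorDataset(xs, ys), batch_size=64, shuffle=True)

model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# Stage 1: standard supervised learning on the whole untrusted dataset.
for _ in range(2):
    for x, y in loader:
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()

# Split samples by per-example loss: poisoned samples are often fitted faster,
# so unusually low-loss examples are flagged as suspicious (assumed heuristic).
with torch.no_grad():
    losses = F.cross_entropy(model(xs), ys, reduction="none")
threshold = losses.quantile(0.1)          # flag the lowest-loss 10% (illustrative)
suspicious = losses <= threshold
clean_x, clean_y = xs[~suspicious], ys[~suspicious]
susp_x, susp_y = xs[suspicious], ys[suspicious]

# Stage 2: active unlearning -- gradient ascent on the suspicious subset to
# erase whatever shortcut (potential trigger) the model has memorized.
susp_loader = DataLoader(TensorDataset(susp_x, susp_y), batch_size=64, shuffle=True)
for x, y in susp_loader:
    opt.zero_grad()
    (-F.cross_entropy(model(x), y)).backward()   # maximize loss on suspects
    opt.step()

# Stage 3: active semi-supervised fine-tuning -- keep labels for the clean subset
# and treat suspicious samples as unlabeled, learning from them via pseudo-labels.
clean_loader = DataLoader(TensorDataset(clean_x, clean_y), batch_size=64, shuffle=True)
for _ in range(2):
    for x, y in clean_loader:
        opt.zero_grad()
        loss = F.cross_entropy(model(x), y)
        # Pseudo-label a batch of suspicious inputs, discarding their given labels.
        idx = torch.randint(0, len(susp_x), (64,))
        logits_u = model(susp_x[idx])
        pseudo = logits_u.argmax(dim=1).detach()
        loss = loss + 0.5 * F.cross_entropy(logits_u, pseudo)
        loss.backward()
        opt.step()
```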

Deep learning; Backdoor attack; Active defense

Zonghao Ying, Bin Wu


State Key Laboratory of Information Security, Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China

School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China

National Natural Science Foundation of China (62272007, U1936119); Major Technology Program of Hainan, China (ZDKJ2019003)

2024

Cybersecurity

EI
Year, Volume (Issue): 2024, 7(1): 59