Abstract
Diffusion models are effective purification methods, where noise or adversarial attacks are removed using generative approaches before pre-existing classifiers conduct classification tasks. However, the efficiency of diffusion models is still a concern, and existing solutions are based on knowledge distillation, which can jeopardize the generation quality because of the small number of generation steps. Hence, we propose TendiffPure, a tensorized and compressed diffusion model for purification. Unlike knowledge distillation methods, we directly compress the U-Net backbones of diffusion models using tensor-train decomposition, which reduces the number of parameters and captures more spatial information in multi-dimensional data such as images. The space complexity is reduced from O(N^2) to O(NR^2), with R ≤ 4 as the tensor-train rank and N as the number of channels. Experimental results show that TendiffPure obtains high-quality purification results more efficiently and outperforms the baseline purification methods on the CIFAR-10, Fashion-MNIST, and MNIST datasets under two types of noise and one adversarial attack.
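To make the complexity claim concrete, the sketch below shows one way a tensor-train (TT) factorization can replace the dense weight of a 1x1 convolution such as those found in U-Net blocks. This is a minimal illustration, not the authors' implementation: the module name TTConv1x1, the channel mode factorization (8, 8), and the rank value 4 are assumptions chosen for demonstration; with only two cores the storage is O(NR), while interior cores in a longer TT chain cost O(R^2) each, which is where the O(NR^2) bound quoted above comes from.

```python
# Minimal sketch (assumed, not the paper's code): a 1x1 convolution whose
# (C_out x C_in) weight matrix is stored as two tensor-train cores.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TTConv1x1(nn.Module):
    """Channels are factorized as C_in = i1*i2 and C_out = o1*o2, and the
    reshaped weight tensor W[o1, i1, o2, i2] is represented by cores
    G1 of shape (1, o1*i1, R) and G2 of shape (R, o2*i2, 1).

    Dense storage: C_out*C_in = O(N^2) parameters.
    TT storage:    (o1*i1 + o2*i2)*R parameters here; with more modes each
                   interior core costs O(R^2), giving O(N R^2) overall.
    """
    def __init__(self, i_modes=(8, 8), o_modes=(8, 8), rank=4):
        super().__init__()
        self.i_modes, self.o_modes = i_modes, o_modes
        self.core1 = nn.Parameter(0.1 * torch.randn(1, o_modes[0] * i_modes[0], rank))
        self.core2 = nn.Parameter(0.1 * torch.randn(rank, o_modes[1] * i_modes[1], 1))

    def full_weight(self):
        # Contract the TT cores back into a dense (C_out, C_in) matrix.
        o1, o2 = self.o_modes
        i1, i2 = self.i_modes
        w = torch.einsum('aur,rvb->uv', self.core1, self.core2)  # (o1*i1, o2*i2)
        w = w.reshape(o1, i1, o2, i2).permute(0, 2, 1, 3)        # (o1, o2, i1, i2)
        return w.reshape(o1 * o2, i1 * i2)                        # (C_out, C_in)

    def forward(self, x):  # x: (B, C_in, H, W)
        w = self.full_weight().unsqueeze(-1).unsqueeze(-1)        # (C_out, C_in, 1, 1)
        return F.conv2d(x, w)

layer = TTConv1x1()
x = torch.randn(2, 64, 32, 32)
print(layer(x).shape)                           # torch.Size([2, 64, 32, 32])
dense_params = 64 * 64                          # 4096 for a dense 1x1 conv
tt_params = sum(p.numel() for p in layer.parameters())
print(dense_params, tt_params)                  # 4096 vs. 512 at rank R = 4
```

At rank R = 4 the factorized layer stores 512 parameters instead of 4096, matching the kind of reduction the abstract describes; in the full model such factorized layers replace the U-Net's convolutional weights rather than a single toy layer.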