
DoH traffic classification based on knowledge distillation

To address the challenges that traditional deep learning methods face in DoH traffic classification, such as dependence on large amounts of labeled data, high risk of overfitting, and poor model interpretability, a DoH traffic classification method based on knowledge distillation was proposed. First, a convolutional neural network (CNN) with two convolutional layers and two fully connected layers was designed for training the student and teacher models. Second, the student and teacher models were initialized so that the teacher model was a deep copy of the student model with fixed parameters. Finally, training was driven by a weighted sum of the classification loss and the consistency loss, and the teacher model parameters were updated with an exponential moving average (EMA) to provide more stable guidance. Experimental results on the CIRA-CIC-DoHBrw-2020 dataset show that, compared with a traditional 1D-CNN model, the proposed method improves precision, recall, and F1 score by 0.13, 0.63, and 0.40 percentage points respectively, demonstrating the effectiveness of knowledge distillation in improving model performance.
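The abstract specifies the backbone only at a high level: two convolutional layers followed by two fully connected layers, operating on flow features. The following is a minimal PyTorch sketch of such a 1D-CNN; the class name DoHCNN, the input length, channel counts, kernel sizes, and hidden width are all illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

class DoHCNN(nn.Module):
    """1D-CNN with two convolutional layers and two fully connected
    layers, matching the architecture outlined in the abstract.
    All layer sizes are illustrative placeholders."""
    def __init__(self, in_features: int = 28, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        flat = 32 * (in_features // 4)  # each pooling step halves the length
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(flat, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, in_features) -- one channel of flow statistics
        return self.classifier(self.features(x))
```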
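The training procedure described in the abstract follows the mean-teacher pattern: the teacher starts as a deep copy of the student with frozen parameters, the student is optimized on a weighted sum of classification loss and consistency loss, and the teacher is then updated by EMA. Below is a hedged sketch of one such training step, reusing the DoHCNN class from the sketch above; the loss weight, EMA decay, learning rate, and the choice of MSE over softened logits as the consistency loss are assumptions, since the abstract does not fix them.

```python
import copy
import torch
import torch.nn.functional as F

# Hypothetical hyperparameters -- illustrative, not taken from the paper.
CONSISTENCY_WEIGHT = 1.0
EMA_DECAY = 0.99

student = DoHCNN()
teacher = copy.deepcopy(student)          # teacher starts as a deep copy
for p in teacher.parameters():
    p.requires_grad_(False)               # teacher is never trained by backprop

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

def train_step(x: torch.Tensor, y: torch.Tensor) -> float:
    student.train()
    logits_s = student(x)
    with torch.no_grad():
        logits_t = teacher(x)             # stable targets from the teacher

    # Weighted sum of classification loss and consistency loss.
    cls_loss = F.cross_entropy(logits_s, y)
    cons_loss = F.mse_loss(logits_s.softmax(dim=1), logits_t.softmax(dim=1))
    loss = cls_loss + CONSISTENCY_WEIGHT * cons_loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Exponential moving average update of the teacher parameters,
    # providing the "more stable guidance" the abstract refers to.
    with torch.no_grad():
        for pt, ps in zip(teacher.parameters(), student.parameters()):
            pt.mul_(EMA_DECAY).add_(ps, alpha=1.0 - EMA_DECAY)
    return loss.item()
```

Because the teacher averages the student's parameter trajectory rather than chasing each gradient step, its predictions fluctuate less from batch to batch, which is what makes it a useful consistency target.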

knowledge distillation; DoH traffic classification; convolutional neural network (CNN); mean teacher model

Xie Yanli, Sun Xuan


School of Computer Science, Beijing Information Science and Technology University, Beijing 100192, China


2024

Journal of Beijing Information Science and Technology University (Natural Science Edition)
Beijing Information Science and Technology University

Impact factor: 0.363
ISSN: 1674-6864
Year, volume (issue): 2024, 39(5)