A single image dehazing algorithm based on multi-scale review distillation
Deep-learning-based image dehazing models often design and stack feature extraction modules, resulting in complex models and slow inference. Knowledge distillation, which transfers knowledge from a teacher network to an efficient student network, can improve a model's efficiency without sacrificing its effectiveness, and has received widespread attention. However, most existing distillation-based dehazing models focus on knowledge transfer at the same level between the teacher and student networks, without considering whether the feature transfer is sufficient, resulting in incomplete feature distillation and a poor dehazing effect. To alleviate these issues, this article proposes the multi-scale review distillation network (MRDN), which fully transfers the teacher network's knowledge to different levels of the student network. Specifically, to ensure that both the student and teacher networks can mine hidden image features and reconstruct information, a hybrid attention block (HAB) and hybrid attention block group (HABs) are designed; then, an attention fusion block (AFB) is used to review the knowledge, integrating the current- and deep-level features of the student network to generate intermediate features for distillation; finally, to transfer knowledge accurately, a hierarchical content loss block (HCLB) extracts multi-scale pyramid features from the intermediate features and the corresponding hierarchical features of the teacher network, and computes the loss at each level. The experimental results indicate that our model outperforms state-of-the-art methods. Specifically, MRDN removes haze from real hazy images more effectively, and surpasses the best competing model (EPDN) in PSNR and SSIM on the SOTS dataset by 9.2% and 7.8%, respectively.
Keywords: image dehazing; knowledge distillation; multi-scale review; attention fusion; hierarchical content loss
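To make the review-distillation pipeline concrete, the following is a minimal PyTorch sketch of the two components the abstract names: an attention fusion block that merges a student feature with an upsampled deeper-level feature, and a hierarchical content loss computed over a multi-scale pyramid against the teacher's feature. This is not the authors' implementation; the gating design, pooling-based pyramid, L1 distance, and all names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionFusionBlock(nn.Module):
    """One plausible AFB: fuse the current-level student feature with an
    upsampled deeper-level feature via a learned channel-attention gate."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, current: torch.Tensor, deeper: torch.Tensor) -> torch.Tensor:
        # Upsample the deeper (lower-resolution) feature to the current size.
        deeper = F.interpolate(deeper, size=current.shape[-2:],
                               mode="bilinear", align_corners=False)
        merged = torch.cat([current, deeper], dim=1)
        # Gated fusion: channel-attention weights rescale the projected feature.
        return self.proj(merged) * self.gate(merged)


def hierarchical_content_loss(student_feat: torch.Tensor,
                              teacher_feat: torch.Tensor,
                              levels: int = 3) -> torch.Tensor:
    """One way to realise an HCLB: accumulate L1 distances between student
    and teacher features over a pyramid built by repeated 2x average pooling."""
    loss = torch.tensor(0.0, device=student_feat.device)
    s, t = student_feat, teacher_feat.detach()  # no gradient into the teacher
    for _ in range(levels):
        loss = loss + F.l1_loss(s, t)
        s = F.avg_pool2d(s, kernel_size=2)
        t = F.avg_pool2d(t, kernel_size=2)
    return loss


if __name__ == "__main__":
    afb = AttentionFusionBlock(channels=64)
    cur = torch.randn(1, 64, 32, 32)   # current-level student feature (hypothetical shape)
    deep = torch.randn(1, 64, 16, 16)  # deeper, lower-resolution student feature
    fused = afb(cur, deep)             # intermediate feature used for distillation
    teacher = torch.randn(1, 64, 32, 32)
    print(hierarchical_content_loss(fused, teacher).item())
```

The sketch mirrors the abstract's flow: the AFB "reviews" knowledge by combining current- and deep-level student features into an intermediate feature, which the pyramid loss then aligns with the teacher's corresponding feature at every scale.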