
Dehazeformer: Nonhomogeneous Image Dehazing With Collaborative Global-local Network

In recent years, image dehazing methods based on convolutional neural networks (CNNs) have made remarkable progress on synthetic datasets. However, because haze is unevenly distributed in real scenes, the local receptive field of the convolution operation struggles to effectively capture contextual guidance information, resulting in the loss of global structure information. Image dehazing in real scenes therefore remains highly challenging. The Transformer has the advantage of capturing long-range semantic dependencies, which facilitates the reconstruction of global structure information; however, the high computational complexity of the standard Transformer hinders its application to image restoration. To address these problems, this paper proposes Dehazeformer, a dual-branch collaborative nonhomogeneous image dehazing network composed of a Transformer and a convolutional neural network. The Transformer branch extracts global structure information, and a sparse self-attention module (SSM) is designed to reduce the computational complexity. The convolutional neural network branch captures local information to recover texture details. Extensive experiments on real nonhomogeneous hazy scenes show that the proposed method achieves excellent performance in both objective evaluation and subjective visual quality.
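The abstract outlines a dual-branch design: a Transformer branch whose self-attention is sparsified to tame its quadratic cost, and a CNN branch for local texture, with the two outputs fused. Below is a minimal PyTorch sketch of that idea, assuming a top-k sparsification rule for the sparse self-attention and simple convolutional fusion; every module name, dimension, and design detail here is an illustrative assumption, not the authors' published Dehazeformer implementation.

```python
# Minimal dual-branch sketch: sparse (top-k) self-attention for global
# structure + a small CNN for local texture, fused by a 3x3 convolution.
# All names, sizes, and the top-k rule are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseSelfAttention(nn.Module):
    """Self-attention that keeps only the top-k scores per query
    (a hypothetical realization of the paper's SSM)."""

    def __init__(self, dim: int, k: int = 16):
        super().__init__()
        self.k = k
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, n, d = x.shape                                    # (batch, tokens, dim)
        q, key, v = self.qkv(x).chunk(3, dim=-1)
        scores = q @ key.transpose(-2, -1) / d ** 0.5        # dense (b, n, n) scores
        vals, idx = scores.topk(min(self.k, n), dim=-1)      # keep k largest per query
        sparse = torch.full_like(scores, float("-inf"))
        sparse.scatter_(-1, idx, vals)                       # -inf elsewhere -> weight 0
        return self.proj(sparse.softmax(dim=-1) @ v)


class DualBranchDehazer(nn.Module):
    """Toy global-local collaboration on a hazy RGB image."""

    def __init__(self, dim: int = 32, patch: int = 8):
        super().__init__()
        self.embed = nn.Conv2d(3, dim, patch, stride=patch)  # patch tokenizer
        self.attn = SparseSelfAttention(dim)
        self.cnn = nn.Sequential(                            # local-texture branch
            nn.Conv2d(3, dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(),
        )
        self.fuse = nn.Conv2d(dim * 2, 3, 3, padding=1)      # merge the two branches

    def forward(self, hazy: torch.Tensor) -> torch.Tensor:
        b, _, h, w = hazy.shape
        tokens = self.embed(hazy)                            # (b, dim, h/p, w/p)
        gh, gw = tokens.shape[-2:]
        glob = self.attn(tokens.flatten(2).transpose(1, 2))  # global branch
        glob = glob.transpose(1, 2).reshape(b, -1, gh, gw)
        glob = F.interpolate(glob, size=(h, w), mode="bilinear", align_corners=False)
        local = self.cnn(hazy)                               # local branch
        return self.fuse(torch.cat([glob, local], dim=1))


if __name__ == "__main__":
    net = DualBranchDehazer()
    print(net(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 3, 64, 64])
```

The top-k step keeps only the k strongest attention scores per query before the softmax, so each token attends to a small fixed set of tokens rather than all of them; this is one common way to make self-attention sparse, though the paper's actual SSM may differ in its sparsification criterion.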

Image dehazing; convolutional neural network (CNN); Transformer; feature fusion; sparse self-attention

Luo Xiaotong, Yang Wenjin, Qu Yanyun, Xie Yuan


Department of Computer Science and Technology, School of Informatics, Xiamen University, Xiamen 361005, China

School of Computer Science and Technology, East China Normal University, Shanghai 200063, China


National Natural Science Foundation of China (62176224, 62222602)

2024

Acta Automatica Sinica (自动化学报)
Chinese Association of Automation; Institute of Automation, Chinese Academy of Sciences


Indexed in: CSTPCD; Peking University Core Journals (北大核心)
Impact factor: 1.762
ISSN:0254-4156
Year, Volume (Issue): 2024, 50(7)