UC-former: A multi-scale image deraining network using enhanced transformer
Full text: NETL | NSTL | Elsevier
While convolutional neural networks (CNNs) have achieved remarkable performance in single-image deraining, the task remains challenging due to CNNs' limited receptive field and the lack of realism in their output images. In this paper, we present UC-former, an effective and efficient U-shaped Transformer-based architecture for image deraining. UC-former has two core designs that avoid heavy self-attention computation and inefficient communication between the encoder and decoder. First, we propose a novel cross-channel Transformer block that computes self-attention between channels, which significantly reduces the computational complexity on high-resolution rain maps while still capturing global context. Second, we propose a multi-scale feature fusion module between the encoder and decoder that combines low-level local features with high-level non-local features. In addition, we employ depth-wise convolution and the H-Swish non-linear activation function in the Transformer blocks to enhance the realism of rain removal. Extensive experiments show that our method outperforms state-of-the-art deraining approaches on both synthetic and real-world rainy datasets.
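The key complexity saving comes from computing self-attention between channels instead of between spatial positions: the attention map is C×C rather than HW×HW, so the cost scales with the (small) channel count rather than the image resolution. The abstract does not give the exact block design, so the sketch below is a minimal NumPy illustration under assumed simplifications (identity Q/K/V projections in place of the learned depth-wise convolutions the paper likely uses), together with the H-Swish activation it mentions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_self_attention(x):
    """Self-attention computed across channels rather than spatial positions.

    x: feature map of shape (C, H, W). Q, K, V are identity projections here
    for illustration only; the actual block presumably uses learned
    (depth-wise convolutional) projections.
    """
    c, h, w = x.shape
    q = k = v = x.reshape(c, h * w)            # flatten spatial dims: (C, HW)
    attn = softmax(q @ k.T / np.sqrt(h * w))   # (C, C) map: size independent of HW
    return (attn @ v).reshape(c, h, w)         # aggregate values, restore shape

def h_swish(x):
    """H-Swish activation: x * ReLU6(x + 3) / 6."""
    return x * np.clip(x + 3.0, 0.0, 6.0) / 6.0
```

For a 4-channel 8×8 feature map the attention matrix is only 4×4, whereas spatial self-attention would build a 64×64 map; this gap widens rapidly at the high resolutions typical of rain maps.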
Keywords: Single image deraining; Multi-scale feature fusion; Transformer; Self-attention
Weina Zhou, Linhui Ye
Shanghai Maritime University, Shanghai 201306, China