Multi-spectral image compression by fusing multi-scale feature convolutional neural networks
Unlike ordinary image compression, multispectral image compression must remove inter-spectral redundancy in addition to spatial redundancy. Recent studies show that end-to-end convolutional neural network models perform very well on image compression, but for multispectral image compression such codecs cannot efficiently extract the spatial and inter-spectral features of multispectral images at the same time, and they neglect the localized feature information of the image. To address these problems, this paper proposes a multispectral image compression method based on a convolutional neural network that fuses multiscale features. The proposed network embeds a multiscale feature extraction module that can extract spatial and inter-spectral feature information at different scales, and an inter-spectral spatial asymmetric convolution module that captures local spatial and spectral information. Experiments show that the Peak Signal-to-Noise Ratio (PSNR) of the proposed model is 1-2 dB higher than that of traditional algorithms such as JPEG2000 and 3D-SPIHT, as well as deep learning methods, on the 7-band Landsat-8 and 8-band Sentinel-2 datasets. Regarding the Mean Spectral Angle (MSA) metric, the proposed model is more effective on the Landsat-8 dataset, outperforming the traditional algorithms by about 8×10⁻³ rad; on the Sentinel-2 dataset it outperforms them by about 2×10⁻³ rad. The method thus satisfies the requirements of multispectral image compression for spatial, inter-spectral, and localized feature extraction.
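The abstract reports results in terms of PSNR and Mean Spectral Angle. As a point of reference, a minimal sketch of these two standard metrics is given below; the function names and the (H, W, bands) array layout are assumptions for illustration, not part of the paper's pipeline.

```python
import numpy as np

def psnr(ref, rec, max_val=255.0):
    # Peak Signal-to-Noise Ratio in dB between a reference and a
    # reconstructed image (any shape, element-wise comparison).
    mse = np.mean((ref.astype(np.float64) - rec.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def mean_spectral_angle(ref, rec, eps=1e-12):
    # Mean Spectral Angle in radians: the angle between the reference
    # and reconstructed spectral vectors at each pixel, averaged over
    # all pixels. Inputs are assumed to have shape (H, W, bands).
    r = ref.reshape(-1, ref.shape[-1]).astype(np.float64)
    s = rec.reshape(-1, rec.shape[-1]).astype(np.float64)
    dots = np.sum(r * s, axis=1)
    norms = np.linalg.norm(r, axis=1) * np.linalg.norm(s, axis=1)
    # Clip to [-1, 1] so floating-point round-off cannot break arccos.
    cos = np.clip(dots / (norms + eps), -1.0, 1.0)
    return float(np.mean(np.arccos(cos)))
```

A smaller MSA means the reconstructed spectra point in nearly the same direction as the originals, which is why the 8×10⁻³ rad and 2×10⁻³ rad gains quoted above indicate better preservation of spectral shape.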