Multi-Cover Image Steganography Model Based on Invertible Neural Network
Existing multi-cover image steganography methods often decompose the embedding of a secret image into encoding and overlay steps: the secret image is encoded as a secret perturbation and overlaid onto multiple cover images using spatial operations, thereby embedding the secret image within multiple cover images. These methods employ two separate networks for the mutually inverse processes of embedding and extraction, without sharing parameters, which results in high computational resource consumption and a large number of training parameters. To solve this problem, a multi-cover image steganography model is proposed that associates the embedding and extraction processes with the forward and inverse mappings of an invertible neural network, enabling parameter sharing and effectively reducing the number of network parameters. Existing models also lack a way to measure the importance of content-level regions in secret images. To address this, the proposed method introduces a spatial attention module at the input of the invertible neural network, which focuses on key regions of the secret image, enhancing encoding quality and improving steganographic performance. In addition, an identity verification mechanism is established by allocating a key-based identity information matrix to multiple users, preventing attackers from illegally obtaining secret images. Experimental results demonstrate that the proposed method achieves superior steganographic performance compared with baseline models. The Peak Signal-to-Noise Ratios (PSNRs) of the container and extracted secret images surpassed those of the baseline model by 8.5 dB to 9.4 dB, the Structural Similarity Index (SSIM) outperformed the baseline model by 0.012 to 0.019, and the Learned Perceptual Image Patch Similarity (LPIPS) improved on the baseline model by 0.0029 to 0.0047. Moreover, the proposed model required only 17.6% of the parameters of the baseline model.
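To make the two central ideas of the abstract concrete, the sketch below shows (a) a spatial attention module that weights content-level regions of the secret image before it enters the network, and (b) an invertible coupling block whose forward pass performs embedding and whose inverse pass performs extraction using exactly the same sub-networks, so the two directions share parameters. This is a minimal PyTorch illustration under assumed design choices; all module names, layer sizes, and the particular coupling form are hypothetical and are not taken from the paper's implementation.

```python
# Minimal sketch (not the authors' implementation) of parameter-shared
# embedding/extraction via an invertible coupling block, plus a spatial
# attention module applied to the secret image. Names are illustrative.
import torch
import torch.nn as nn


class SpatialAttention(nn.Module):
    """Weights each spatial location of the secret image by its estimated importance."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel-wise average and max maps, fused by a conv into a sigmoid attention map.
        avg_map = x.mean(dim=1, keepdim=True)
        max_map = x.max(dim=1, keepdim=True).values
        attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn


class InvertibleHidingBlock(nn.Module):
    """Affine coupling block: forward() embeds, inverse() extracts,
    and both directions reuse the same sub-networks (shared parameters)."""

    def __init__(self, channels: int, hidden: int = 64):
        super().__init__()

        def subnet() -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(channels, hidden, 3, padding=1), nn.ReLU(),
                nn.Conv2d(hidden, channels, 3, padding=1))

        self.phi, self.rho, self.eta = subnet(), subnet(), subnet()

    def forward(self, cover: torch.Tensor, secret: torch.Tensor):
        # Embedding direction: cover branch absorbs the secret, secret branch becomes a residual.
        container = cover + self.phi(secret)
        residual = secret * torch.exp(torch.tanh(self.rho(container))) + self.eta(container)
        return container, residual

    def inverse(self, container: torch.Tensor, residual: torch.Tensor):
        # Extraction direction: the exact algebraic inverse, reusing the same sub-networks.
        secret = (residual - self.eta(container)) * torch.exp(-torch.tanh(self.rho(container)))
        cover = container - self.phi(secret)
        return cover, secret


if __name__ == "__main__":
    attn, block = SpatialAttention(), InvertibleHidingBlock(channels=3)
    cover = torch.randn(1, 3, 64, 64)
    secret = attn(torch.randn(1, 3, 64, 64))            # attention-weighted secret image
    container, residual = block(cover, secret)           # embedding (forward mapping)
    _, recovered = block.inverse(container, residual)    # extraction (inverse mapping)
    print(torch.allclose(recovered, secret, atol=1e-5))  # exact recovery up to float error
```

Because the inverse pass is the closed-form algebraic inverse of the forward pass, extraction introduces no additional parameters, which is the mechanism by which an invertible network reduces the parameter count relative to two separate embedding and extraction networks.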