
Multi-Cover Image Steganography Model Based on Invertible Neural Network
Existing multi-cover image steganography methods typically split the embedding of a secret image into two steps, encoding and superposition: the secret image is first encoded as a secret perturbation, which is then superimposed on multiple cover images through spatial-domain operations, thereby embedding the secret image across the cover images. In these methods, the two mutually inverse processes of embedding and extraction are implemented by two independent networks that cannot share parameters, resulting in high computational resource consumption and a large number of training parameters. To solve this problem, a multi-cover image steganography model based on an invertible neural network is proposed, which associates the embedding and extraction processes with the forward and inverse mappings of the invertible neural network, enabling parameter sharing and effectively reducing the number of network parameters. In addition, existing models lack a way to measure the importance of content-level regions of the secret image. To address this, the proposed method introduces a spatial attention module at the input of the invertible neural network, which improves encoding quality by focusing on the key regions of the secret image and thereby enhances steganographic performance. Furthermore, a key-based identity information matrix is allocated to each user to establish an identity verification mechanism that prevents attackers from illegally obtaining the secret image. Experimental results show that the proposed method achieves favorable steganographic performance: the Peak Signal-to-Noise Ratio (PSNR) of the container images and of the extracted secret images exceeds that of the baseline model by 8.5 dB to 9.4 dB, the Structural Similarity (SSIM) exceeds that of the baseline model by 0.012 to 0.019, the Learned Perceptual Image Patch Similarity (LPIPS) outperforms the baseline model by 0.0029 to 0.0047, and the model requires only 17.6% of the baseline model's parameters.
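The abstract does not give the model's internal equations, so the following is only a minimal PyTorch sketch of the two ideas it names: an invertible (affine coupling) block whose forward pass plays the role of embedding and whose inverse pass plays the role of extraction with the same weights, and a simple spatial attention map applied to the secret branch at the input. The block structure, the subnetwork names (phi, rho, eta), and the attention design are generic assumptions, not the paper's actual architecture, and the sketch omits the multi-cover superposition and the key-based identity verification mechanism.

# Minimal sketch (assumed, generic INN building block; not the paper's exact architecture).
import torch
import torch.nn as nn

def conv_block(ch):
    # Small convolutional subnetwork; any conv/dense block could stand in here.
    return nn.Sequential(
        nn.Conv2d(ch, 32, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(32, ch, 3, padding=1),
    )

class SpatialAttention(nn.Module):
    # Hypothetical spatial attention: reweights each pixel of the secret branch
    # using channel-pooled statistics, emphasizing content-important regions.
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn

class AffineCoupling(nn.Module):
    # x_c: cover branch, x_s: secret branch. The same subnetworks phi, rho, eta
    # parameterize both directions, which is where parameter sharing comes from.
    def __init__(self, ch):
        super().__init__()
        self.phi, self.rho, self.eta = conv_block(ch), conv_block(ch), conv_block(ch)

    def forward(self, x_c, x_s):           # embedding direction
        y_c = x_c + self.phi(x_s)
        y_s = x_s * torch.exp(torch.tanh(self.rho(y_c))) + self.eta(y_c)
        return y_c, y_s

    def inverse(self, y_c, y_s):            # extraction direction, same parameters
        x_s = (y_s - self.eta(y_c)) * torch.exp(-torch.tanh(self.rho(y_c)))
        x_c = y_c - self.phi(x_s)
        return x_c, x_s

if __name__ == "__main__":
    block, attn = AffineCoupling(3), SpatialAttention()
    cover, secret = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
    secret_w = attn(secret)                  # attention-weighted secret at the input
    y_c, y_s = block(cover, secret_w)        # "embed" (forward mapping)
    x_c, x_s = block.inverse(y_c, y_s)       # "extract" with the shared weights
    print(torch.allclose(x_s, secret_w, atol=1e-4), torch.allclose(x_c, cover, atol=1e-4))

The round-trip check at the end is the property the abstract relies on: because the inverse mapping reuses phi, rho and eta, no second extraction network, and hence no second set of parameters, is needed.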

invertible neural network; multi-cover image steganography; identity verification mechanism; spatial attention module; parameter sharing

卞玉星, 黄荣, 周树波, 刘浩


College of Information Science and Technology, Donghua University, Shanghai 201620, China

Engineering Research Center of Digitized Textile and Apparel Technology, Ministry of Education, Donghua University, Shanghai 201620, China


2024

Computer Engineering (计算机工程)
East China Institute of Computing Technology; Shanghai Computer Society

Indexed in: CSTPCD; Peking University Core Journals (北大核心)
Impact factor: 0.581
ISSN: 1000-3428
Year, Volume (Issue): 2024, 50(12)