Abstract
Objective Most existing steganographic algorithms based on adversarial images can only craft adversarial images against a single steganalyzer and cannot withstand detection by the latest convolutional neural network-based steganalyzers such as the steganalysis residual network (SRNet) and Zhu-Net. To address this situation, a high-security image steganography method that combines multiple adversarial training with channel attention is proposed. Method A generative adversarial network with a U-Net-based generator is used to produce adversarial example images. The self-learning property of adversarial networks is exploited to iteratively optimize the parameters of the multiple-adversarial steganographic network, and adversarial training against several steganalysis algorithms yields cover images that are better suited to content hiding. Meanwhile, several lightweight channel-attention modules are added to the generator to adaptively adjust the distribution of the adversarial noise within the original image, improving the anti-steganalysis ability of the generated adversarial images. In addition, a dynamic weighting scheme that combines multiple discriminant losses with the mean squared error loss is designed to further improve the quality of the adversarial images and to guarantee fast and stable network convergence. Result Experiments on the BOSS Base 1.01 dataset compare the proposed method with four current mainstream methods, including the U-Net-based generative multiple-adversarial steganographic algorithm. After the steganalyzers are trained on the original stego images, the proposed method reduces the average detection accuracy of five high-performance steganalyzers by 1.6% compared with the other four methods; after the steganalyzers are retrained with adversarial images and enhanced stego images, it still reduces their average detection accuracy by 6.8%. The quality of the adversarial images is also analyzed: the average peak signal-to-noise ratio (PSNR) of the 2 000 adversarial images generated from the test set reaches 39.925 1 dB. The experimental results show that the proposed steganographic network greatly improves the security of the steganographic algorithm. Conclusion The proposed method achieves excellent performance in terms of steganographic security, and the generated adversarial images have high visual quality.
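The abstract above mentions lightweight channel-attention modules inserted into the U-Net generator. Below is a minimal sketch of one such module in the squeeze-and-excitation style, written in PyTorch; the exact layer layout, the reduction ratio, and the class name ChannelAttention are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Lightweight SE-style channel attention (illustrative sketch)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)              # squeeze: global spatial average per channel
        self.fc = nn.Sequential(                         # excitation: two small fully connected layers
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.pool(x).view(b, c)                      # (B, C) channel descriptors
        w = self.fc(w).view(b, c, 1, 1)                  # per-channel weights in (0, 1)
        return x * w                                     # re-weight feature maps channel-wise
```

Inserted at several positions of the U-Net generator, such a block lets the network re-weight feature channels and thus adjust where adversarial noise is concentrated in the output image.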
Abstract
Objective The advancement of current steganographic techniques faces many challenges. Methods that modify the original image to hide secret information leave traces, rendering them susceptible to detection by steganalyzers. The coverless steganographic method improves the security of steganography; however, it has limitations such as a small embedding capacity, the need for a large image database, and difficulty in extracting the secret information. The cover-image generative steganography method also produces small and unnatural generated images. Introducing adversarial examples provides a new approach to address these limitations: subtle perturbations are added to the original image to form an adversarial image that is visually indistinguishable from the original yet causes a classifier to output wrong results with high confidence. Thus, the security of image steganography is enhanced. However, most existing steganographic algorithms based on adversarial examples can only design adversarial samples against a single steganalyzer, making them vulnerable to the latest convolutional neural network-based steganalyzers, such as SRNet and Zhu-Net. In response to this problem, a high-security image steganography method combining multiple adversarial training and channel attention is proposed in this study.

Method In the proposed method, we generate the adversarial noise V using the generator G, which employs the U-Net architecture with added channel-attention modules. The adversarial noise V is then added to the original image X to obtain the adversarial image. The pixel-space mean squared error loss MSE_loss is adopted to train the generator network G, so that high-quality and semantically meaningful adversarial images are generated. Then, we generate the stego image from the original image X using the steganography network (SN) and input the original image X and its corresponding stego image into the steganalysis optimization network to optimize its parameters. Moreover, we build multiple steganalysis adversarial networks (SANs) that discriminate between the original image X and its adversarial image and assign different scores to the adversarial and original images, providing the multiple discriminant losses SDO_loss1. Furthermore, we embed secret messages into the adversarial image through the SN to generate the enhanced stego image. The adversarial image and the enhanced stego image are fed back into the optimized multiple steganalyzers to improve the antisteganalysis performance of the adversarial image. The SANs evaluate the data-hiding capability of the adversarial image and provide the multiple discriminant losses SDO_loss2. Additionally, the weighted superposition of the MSE_loss and the multiple steganalysis discrimination losses SDO_loss1 and SDO_loss2 is employed as the cumulative loss function of generator G to improve both the image quality and the antisteganalysis ability of the adversarial image. Finally, the proposed method enables fast and stable network convergence, high stego image visual quality, and strong antisteganalysis ability.
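The Method paragraph states that the generator objective is a weighted superposition of the pixel-space MSE loss and the multiple steganalysis discrimination losses SDO_loss1 and SDO_loss2. The PyTorch sketch below shows one way such a combination could be written; the weight values, the use of binary cross-entropy for the discriminant terms, and the function and argument names are assumptions for illustration, since the abstract does not give the details of the paper's dynamic weighting scheme.

```python
import torch
import torch.nn.functional as F

def generator_loss(cover, adv, adv_scores, enh_scores, cover_label,
                   alpha=1.0, beta=0.1, gamma=0.1):
    """Illustrative combined loss (assumed form, not the paper's exact scheme).

    cover, adv   : batches of cover and adversarial images
    adv_scores   : list of logits from each steganalyzer for the adversarial images
    enh_scores   : list of logits from each steganalyzer for the enhanced stego images
    cover_label  : float tensor of 'cover' labels, broadcastable to the logits
    """
    mse_loss = F.mse_loss(adv, cover)                            # keep adversarial image close to the cover
    sdo_loss1 = sum(F.binary_cross_entropy_with_logits(s, cover_label)
                    for s in adv_scores) / len(adv_scores)       # fool analyzers on adversarial images
    sdo_loss2 = sum(F.binary_cross_entropy_with_logits(s, cover_label)
                    for s in enh_scores) / len(enh_scores)       # fool analyzers on enhanced stego images
    return alpha * mse_loss + beta * sdo_loss1 + gamma * sdo_loss2
```

Averaging the discriminant terms over the participating steganalyzers is one simple way to balance gradient feedback from multiple adversaries against the image-fidelity term.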
Result First, we select four high-performance deep-learning steganalyzers, namely, Xu-Net, Ye-Net, SRNet, and Zhu-Net, for simultaneous adversarial training to improve the antisteganalysis ability of the adversarial images. However, training against four steganalysis networks simultaneously sharply increases the number of model parameters, resulting in slow training and a long training period. Furthermore, during the adversarial image generation process, each iteration of adversarial noise is generated according to the gradient feedback of all four steganalysis networks, so the original image is subjected to excessive, unnecessary adversarial noise, leading to low-quality adversarial images. In response to this issue, we run ablation experiments on the steganalysis networks used in training. These experiments aim to decrease the number of model parameters, reduce the training time, and ultimately enhance the quality of the adversarial images while improving their antisteganalysis capability. The role of the generator is to produce adversarial noise, which is then incorporated into the original image to form the adversarial image. Different positions of the adversarial noise in the original image perturb the steganalysis networks differently and affect the quality of the generated adversarial images differently. This study therefore conducts ablation experiments that add the channel attention module at various positions of the generator to examine its effectiveness, and the parameters of the generator loss function are fine-tuned through further ablation. Subsequently, we generate 2 000 adversarial images using the proposed model and evaluate their quality. The results reveal that the average peak signal-to-noise ratio (PSNR) of the 2 000 generated adversarial images is 39.925 1 dB. Furthermore, more than 99.55% of these images have a PSNR value greater than 39 dB, and more than 75% have a PSNR value greater than 40 dB. Additionally, the average structural similarity index measure (SSIM) of the generated adversarial images is 0.962 5; more than 69.85% of them have an SSIM value greater than 0.955, and more than 55.6% have an SSIM value greater than 0.960. These results indicate that the generated adversarial images exhibit high visual similarity to the original images. Finally, we compare the proposed method with current state-of-the-art methods on the BOSS Base 1.01 dataset. Compared with the other four methods, the proposed method reduces the average detection accuracy of five high-performance steganalyzers by 1.6% when the steganalyzers are trained on the original stego images; after the steganalyzers are further trained with adversarial images and enhanced stego images, it still reduces their average detection accuracy by 6.8%. The experimental results indicate that the proposed steganographic method significantly improves the security of the steganographic algorithm.

Conclusion In this study, we propose a steganographic architecture based on the U-Net framework with lightweight channel attention modules to generate adversarial images that can resist multiple steganalysis networks. The experimental results demonstrate that the security and generalization of the proposed algorithm exceed those of the compared steganographic methods.
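The reported quality figures (average PSNR of 39.925 1 dB and average SSIM of 0.962 5 over 2 000 images) can in principle be checked with standard metric implementations. The sketch below assumes 8-bit grayscale cover/adversarial image pairs held as NumPy arrays and uses scikit-image; the data handling and the helper name average_quality are hypothetical.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def average_quality(cover_images, adversarial_images):
    """Average PSNR (dB) and SSIM over paired uint8 grayscale images (illustrative helper)."""
    psnr_vals, ssim_vals = [], []
    for cover, adv in zip(cover_images, adversarial_images):
        psnr_vals.append(peak_signal_noise_ratio(cover, adv, data_range=255))
        ssim_vals.append(structural_similarity(cover, adv, data_range=255))
    return float(np.mean(psnr_vals)), float(np.mean(ssim_vals))
```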
Funding
National Natural Science Foundation of China (62272255)
National Key Research and Development Program of China (2021YFC3340602)
Shandong Provincial Natural Science Foundation Innovation and Development Joint Fund (ZR2022LZH011)
Shandong Province Science and Technology SME Capability Enhancement Project (2022TSGC2485)
Jinan City Leading Researcher Studio Project (2020GXRC056)
Jinan City Introduced Innovation Team Project (202228016)
Youth Innovation Team Project of Shandong Provincial Higher Education Institutions (2022KJ124)
Ministry of Education "Chunhui Plan" Research Cooperation Project (HZKY20220482)
Natural Science Foundation of Shandong Province (ZR2020MF054)