
Tampering localization of recaptured image based on text edge distortion features

To address the problem of tampering localization in recaptured document images, a tampering localization method based on text edge distortion features was proposed. Text distortion features were constructed from three aspects: the text edge distribution, the edge gradient, and the difference in edge gradient between the text under examination and a reference text. A deep neural network-based classifier was then trained to make the decision. To evaluate the performance of the detection method, a dataset containing 120 genuine captured images and 1,200 recaptured and tampered document images was constructed. Experimental results show that the proposed method achieves a lexical-level area under the ROC curve (AUC) of 0.84 and an equal error rate (EER) of 0.23 in the cross-database scenario. Compared with Forensic Similarity (128×128) and DenseFCN, the proposed features combined with LightDenseNet improve the lexical-level AUC by 0.06 and 0.17, respectively, under the cross-database protocol of the recaptured tampered document dataset.
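The abstract does not give implementation details, but the kinds of cues it names can be illustrated with a short sketch. The snippet below is a minimal illustration, not the authors' pipeline: it assumes uint8 grayscale word patches, uses Canny edges and Sobel gradient magnitudes as stand-ins for the edge-distribution and edge-gradient cues, forms a suspect/reference/difference feature vector, and adds a standard equal-error-rate helper of the kind used to report the EER figure. All function and parameter names are illustrative.

```python
# Minimal sketch (assumed, not the paper's implementation) of text edge
# distortion cues for a word patch: edge distribution (edge density),
# edge gradient (Sobel magnitudes on Canny edge pixels), and the
# difference between a suspect patch and a reference patch.
import cv2
import numpy as np
from sklearn.metrics import roc_curve


def edge_distortion_descriptor(patch_gray, bins=32):
    """Histogram of gradient magnitudes on text edge pixels, plus edge density.

    `patch_gray` is assumed to be a uint8 grayscale word patch.
    """
    edges = cv2.Canny(patch_gray, 50, 150)                   # text edge map
    gx = cv2.Sobel(patch_gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(patch_gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = np.sqrt(gx ** 2 + gy ** 2)
    edge_mag = mag[edges > 0]                                 # gradients on edge pixels only
    if edge_mag.size == 0:                                    # guard for blank patches
        return np.zeros(bins + 1, dtype=np.float32)
    # A fixed range keeps histograms comparable across patches
    # (a 3x3 Sobel response on uint8 input stays below ~1443).
    hist, _ = np.histogram(edge_mag, bins=bins, range=(0, 1500), density=True)
    edge_density = float(np.count_nonzero(edges)) / edges.size
    return np.concatenate([hist.astype(np.float32), [edge_density]])


def pairwise_feature(suspect_gray, reference_gray):
    """Suspect descriptor, reference descriptor, and their difference."""
    f_s = edge_distortion_descriptor(suspect_gray)
    f_r = edge_distortion_descriptor(reference_gray)
    return np.concatenate([f_s, f_r, f_s - f_r])


def equal_error_rate(labels, scores):
    """EER: operating point where the false positive and false negative rates meet."""
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1.0 - tpr
    idx = int(np.nanargmin(np.abs(fpr - fnr)))
    return (fpr[idx] + fnr[idx]) / 2.0
```

In the paper itself, the distortion features are fed to a trained deep classifier (LightDenseNet), and lexical-level decisions are summarized with AUC and EER as reported above; the hand-crafted descriptor here only indicates where such cues come from.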

Keywords: document image; recapturing attacks; image tampering localization; text edge distortion; recaptured tampered document dataset

Authors: 陈昌盛, 陈自炜, 李锡劲


Affiliation: College of Electronics and Information Engineering, Shenzhen University, Shenzhen 518060, Guangdong, China


Funding: National Natural Science Foundation of China (No. 62072313)

Journal: 中国科技论文 (China Sciencepaper)
Publisher: Center for Science and Technology Development, Ministry of Education
Impact factor: 0.466
ISSN: 2095-2783
Year, Volume (Issue): 2024, 19(2)