
Reports Outline Artificial Intelligence Findings from China University of Petroleum (East China) (Call White Black: Enhanced Image-Scaling Attack in Industrial Artificial Intelligence Systems)

New research on Artificial Intelligence is the subject of a report. According to news reporting from Qingdao, People’s Republic of China, by NewsRx journalists, the research stated: “The increasing prevalence of deep neural networks (DNNs) in industrial artificial intelligence systems (IAISs) promotes the development of industrial automation. However, the growing employment of DNNs also exposes them to various attacks.”

Financial support for this research came from the Natural Science Foundation of Shandong Province.

The news correspondents obtained a quote from the researchers at the China University of Petroleum (East China): “Recent studies have shown that the data preprocessing stage of DNNs is vulnerable to image-scaling attacks. Such an attack crafts an attack image that looks like a given source image but becomes a different target image after being scaled to the target size. The attack images generated by existing image-scaling attacks are easily perceptible to the human visual system, which significantly degrades the attack’s stealthiness. In this paper, we investigate the image-scaling attack from the perspective of signal processing. We find that the root cause of the weak deceiving effect of existing image-scaling attack images lies in the additional high-frequency signals introduced during their construction. Thus, we propose an enhanced image-scaling attack (EIS), which employs adversarial images crafted from the source (‘clean’) images as the target images. These adversarial images preserve the ‘clean’ pixel information of the source images, thereby significantly mitigating the emergence of additional high-frequency signals in the attack images. Specifically, we consider three realistic threat models covering the training and inference phases of deep models. Correspondingly, we design three strategies tailored to generate adversarial images with vicious patterns. These patterns are subsequently integrated into the attack images, which can mislead a model with the target input size after the necessary scaling operation.”
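For readers unfamiliar with the mechanism being quoted, the sketch below illustrates a generic image-scaling attack of the kind the paper builds on; it is not the authors’ EIS method. All names and parameters here (`scaling_matrix`, `craft_attack_image`, `coef`) are illustrative assumptions, and a simple box-filter scaling model is used only to keep the example self-contained.

```python
# Minimal sketch of a generic image-scaling attack (NOT the paper's EIS):
# find an attack image A that stays close to a source image S but
# downscales to (approximately) a different target image T.
# All function/parameter names are illustrative assumptions.
import numpy as np

def scaling_matrix(in_size: int, out_size: int) -> np.ndarray:
    """Row-normalized box-filter matrix L so that L @ x downscales a
    1-D signal of length in_size to length out_size (linear scaling)."""
    L = np.zeros((out_size, in_size))
    ratio = in_size / out_size
    for i in range(out_size):
        lo, hi = int(i * ratio), int((i + 1) * ratio)
        L[i, lo:hi] = 1.0 / (hi - lo)
    return L

def craft_attack_image(S, T, coef=0.1, steps=500, lr=0.5):
    """Gradient descent on A minimizing
        ||scale(A) - T||^2 + coef * ||A - S||^2,
    i.e. the scaled result matches the target while A still resembles
    the source. Grayscale images in [0, 1] for simplicity."""
    Lh = scaling_matrix(S.shape[0], T.shape[0])  # vertical scaling
    Lw = scaling_matrix(S.shape[1], T.shape[1])  # horizontal scaling
    A = S.copy()
    for _ in range(steps):
        scaled = Lh @ A @ Lw.T
        # Gradients of the two quadratic terms with respect to A.
        grad = 2 * Lh.T @ (scaled - T) @ Lw + 2 * coef * (A - S)
        A = np.clip(A - lr * grad, 0.0, 1.0)  # keep valid pixel range
    return A

# Toy usage: a 64x64 "source" that should downscale to an 8x8 "target".
rng = np.random.default_rng(0)
S = rng.random((64, 64))
T = rng.random((8, 8))
A = craft_attack_image(S, T)
print(np.abs(A - S).mean())  # small: A still looks like S
print(np.abs(scaling_matrix(64, 8) @ A @ scaling_matrix(64, 8).T - T).mean())

# Signal-processing view highlighted by the researchers: the perturbation
# A - S carries extra high-frequency energy. A quick FFT check of the
# fraction of spectral energy outside a central low-frequency band:
spec = np.abs(np.fft.fftshift(np.fft.fft2(A - S))) ** 2
c = spec.shape[0] // 2
low = spec[c - 8:c + 8, c - 8:c + 8].sum()
print(1 - low / spec.sum())  # fraction of energy at high frequencies
```

Practical attacks typically exploit interpolation kernels such as nearest-neighbor or bilinear, which sample only a handful of source pixels, letting the perturbation hide in pixels the scaler ignores; that hidden perturbation is precisely the additional high-frequency signal the researchers identify as the root cause of weak stealthiness.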

Keywords: Qingdao, People’s Republic of China, Asia, Artificial Intelligence, Emerging Technologies, Machine Learning, China University of Petroleum (East China)

Robotics & Machine Learning Daily News

Year, Volume (Issue): 2024 (Feb. 13)