Study on Adversarial Sample Attacks on Deep Learning Based Fingerprinting Indoor Localization
This study investigates adversarial attacks on deep learning (DL) based Wi-Fi fingerprint indoor positioning systems. Such systems have significantly improved indoor localization performance by effectively extracting deep features from Received Signal Strength (RSS) fingerprint data, but they require a large and diverse RSS fingerprint dataset for model training. Moreover, their security vulnerabilities, which stem from the openness of the wireless Wi-Fi medium and from inherent flaws of classifiers such as susceptibility to adversarial examples, remain insufficiently studied. To address this gap, we investigated adversarial attacks on DL-based RSS fingerprint indoor positioning systems and proposed an adversarial sample attack framework for Wi-Fi fingerprint indoor positioning, which we used to assess the impact of adversarial attacks on the performance of DL-based RSS fingerprint indoor positioning models. The framework consists of two phases: offline training and online positioning. In the offline training phase, we designed a Conditional Generative Adversarial Network (CGAN) tailored to augmenting Wi-Fi RSS fingerprint data, generating a large and diverse dataset for training robust indoor positioning DL models. In the online positioning phase, we constructed the most potent first-order attack strategy to generate effective RSS fingerprint adversarial samples and studied the impact of adversarial attacks on different indoor positioning DL models. Experimental results on the publicly available UJIIndoorLoc dataset show that the adversarial samples generated by the proposed framework achieved average attack success rates of 94.1%, 63.75%, 43.45%, and 72.5% against existing fingerprint indoor positioning models based on a Convolutional Neural Network (CNN), a Deep Neural Network (DNN), a Multilayer Perceptron (MLP), and pixeldp_CNN, respectively. Against the same models trained with data augmented by the CGAN, the average attack success rates dropped to 84.95%, 44.8%, 15.7%, and 11.5%, respectively. Thus, existing DL-based fingerprint indoor positioning models are susceptible to adversarial sample attacks, and models trained on a mixture of real and CGAN-augmented data exhibit better robustness against such attacks.
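The "most potent first-order attack" mentioned above is commonly realized as Projected Gradient Descent (PGD): iterated gradient-sign steps on the input, each followed by projection back into an L-infinity ball around the clean sample. As a minimal sketch of the idea (not the paper's implementation), the following attacks a toy linear-softmax "localization" model over RSS vectors; the model weights, dimensions, and step sizes are all illustrative assumptions.

```python
import numpy as np

# Hypothetical toy setup: a linear softmax model mapping an RSS vector
# (one reading per access point) to a location cell. A real system would
# use a trained CNN/DNN/MLP; random weights suffice to illustrate PGD.
rng = np.random.default_rng(0)
n_aps, n_cells = 20, 5
W = rng.normal(size=(n_aps, n_cells))
b = rng.normal(size=n_cells)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def loss_grad(x, y):
    """Gradient of the cross-entropy loss w.r.t. the input RSS vector x."""
    p = softmax(x @ W + b)
    p[y] -= 1.0          # dL/dlogits for softmax + cross-entropy
    return W @ p         # chain rule back to the input

def pgd_attack(x, y, eps=0.5, alpha=0.1, steps=20):
    """PGD: iterated FGSM steps projected into an L-inf ball of radius eps."""
    x_adv = x.copy()
    for _ in range(steps):
        g = loss_grad(x_adv, y)
        x_adv = x_adv + alpha * np.sign(g)        # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into the ball
    return x_adv

x = rng.normal(size=n_aps)                  # a clean RSS fingerprint
y = int(np.argmax(softmax(x @ W + b)))      # model's clean prediction
x_adv = pgd_attack(x, y)
print("clean class:", y,
      "adversarial class:", int(np.argmax(softmax(x_adv @ W + b))))
```

The projection step is what bounds the perturbation: every adversarial RSS reading stays within `eps` of the genuine measurement, which keeps the attack plausible for over-the-air signal manipulation.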