Abstract
Pneumonia is the leading infectious cause of death among children worldwide. Timely diagnosis can save a child's life and protect their long-term health. Chest X-ray (CXR) examination is an effective and economical means of diagnosing pneumonia. However, there is a shortage of expert radiologists in many resource-constrained areas. Deep learning-based pneumonia diagnosis offers a potential solution, but deep learning models are susceptible to adversarial attacks. This study investigates the vulnerability of a paediatric pneumonia detection model to the projected gradient descent (PGD) attack. Experimental results show that the diagnostic performance of the model degrades sharply as the perturbation magnitude ε increases from 0.0001 to 0.009, beyond which performance plateaus and does not degrade significantly further. The lowest accuracy attained under attack is 33.33%. The attack is shown to be far more detrimental to the model's specificity than to its sensitivity. Moreover, it is demonstrated that the model's performance can be degraded to unacceptable levels while the perturbations remain imperceptible.
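For reference, the standard ℓ∞-bounded PGD attack (Madry et al.) iteratively perturbs an input image x with true label y according to

x^{(t+1)} = \Pi_{B_\infty(x,\, \varepsilon)}\Big( x^{(t)} + \alpha \cdot \mathrm{sign}\big( \nabla_x \, \mathcal{L}\big(f_\theta(x^{(t)}),\, y\big) \big) \Big),

where f_θ is the trained model, 𝓛 is its loss function, α is the step size, and Π projects the iterate back onto the ε-ball around the clean image so that the perturbation never exceeds the budget ε. This is the generic form of the attack; the step size α and iteration count are implementation choices not stated in this abstract.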