Disease classification and visualization on chest X-ray images based on the ConvNeXt model
Objective Chest X-ray imaging is a prevalent method for screening and diagnosing chest diseases in clinical practice. However, the likelihood of misdiagnosis and missed diagnosis is exacerbated by radiologists' fatigue from prolonged image reading and by the uneven distribution of healthcare resources. To address this issue, this study employs deep learning techniques to develop a chest disease detection method based on the ConvNeXt model, aiming to enhance diagnostic accuracy, mitigate the risk of misdiagnosis, and improve physician efficiency.

Methods This research uses the large-scale public dataset ChestX-ray14 to train a ConvNeXt model, which builds on the ResNet architecture and integrates the advantages of the vision Transformer design, enabling more effective feature extraction and recognition. The model's performance is evaluated using the AUC (area under the receiver operating characteristic curve) and compared with existing classification models such as CheXNet, ResNet, and Swin Transformer. Furthermore, the study incorporates Grad-CAM visualization, which generates heatmap-style class activation maps from the gradients of convolutional feature maps; these heatmaps help localize diseased regions in chest X-rays, thereby enhancing diagnostic efficiency.

Results The proposed ConvNeXt-based diagnostic approach achieves a mean AUC of 0.842 when identifying 14 types of chest disease, with particularly promising performance on pleural effusion (AUC 0.883), edema (AUC 0.902), and hernia (AUC 0.942).

Conclusions The proposed method exhibits strong performance in chest disease detection from chest X-rays and represents a useful attempt to help physicians improve their diagnostic efficiency through AI-assisted chest X-ray analysis.
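The evaluation protocol described in Methods (per-disease AUC on a 14-label task, averaged into a single score) can be sketched as follows. This is an illustrative example only, not the paper's code: the random arrays stand in for ground-truth labels and model outputs, and the macro average over the 14 labels corresponds to the "mean AUC" reported in the Results.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical multi-label setup: 14 chest-disease labels, as in ChestX-ray14.
# Random data stands in for real annotations and model predictions.
rng = np.random.default_rng(0)
n_samples, n_labels = 200, 14
y_true = rng.integers(0, 2, size=(n_samples, n_labels))  # ground-truth labels
y_score = rng.random((n_samples, n_labels))              # predicted probabilities

# AUC for each disease, then the macro average over all 14 diseases.
per_label_auc = [roc_auc_score(y_true[:, i], y_score[:, i]) for i in range(n_labels)]
mean_auc = float(np.mean(per_label_auc))
print(f"mean AUC over {n_labels} labels: {mean_auc:.3f}")
```

With real labels and predictions in place of the random arrays, `per_label_auc` would yield the per-disease values (e.g. pleural effusion, edema, hernia) and `mean_auc` the averaged score.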
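The Grad-CAM mechanism mentioned in Methods — gradients of a class score with respect to convolutional feature maps, pooled into channel weights that produce a localization heatmap — can be sketched on a toy CNN. This is a minimal illustration under assumed shapes, not the paper's implementation; the same hooks would be registered on the last convolutional stage of a ConvNeXt model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy convolutional backbone and a 14-class head (as in ChestX-ray14).
conv = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
)
head = nn.Linear(16, 14)

# Hooks capture the last activated feature maps and their gradients.
feats, grads = {}, {}
conv[-1].register_forward_hook(lambda m, i, o: feats.update(a=o))
conv[-1].register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 32, 32)          # stand-in for a preprocessed X-ray
fmap = conv(x)                         # feature maps: (1, 16, 32, 32)
logits = head(fmap.mean(dim=(2, 3)))   # global average pool + classifier
score = logits[0, logits.argmax()]     # score of the top-scoring class
score.backward()

# Grad-CAM: channel weights from globally pooled gradients,
# then a ReLU-ed weighted sum of the feature maps, upsampled to input size.
w = grads["a"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((w * feats["a"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
print(cam.shape)  # torch.Size([1, 1, 32, 32])
```

Overlaying the normalized `cam` on the input image yields the heatmap used to highlight candidate diseased regions.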