Enhancing chest X-ray analysis: a comparative study of deep learning models with explainable AI
Analyzing medical images is a vital aspect of contemporary healthcare. While conventional deep learning models have demonstrated potential in this area, their interpretability remains a challenge, especially for intricate medical images such as chest X-rays. In this study, we assess the performance of advanced deep learning models, including InceptionV4 and Vision Transformers, against traditional models such as ResNet50 for classifying chest X-ray images. We evaluate the models on two comprehensive datasets, NIH-CXR-LT and MIMIC-CXR-LT. Our goal is to improve the comprehensibility and usability of these models by applying explainable AI techniques for visualization, contributing to the creation of user-friendly AI tools for medical imaging. Our findings indicate that Vision Transformers attain higher AUC scores while requiring less training time than the other models. By employing explainable AI methods such as Grad-CAM and SHAP, we demonstrate the interpretability of the models, pinpointing the regions of a chest X-ray that each model relies on for its predictions. These techniques also help detect model biases and recognize potential errors, which is crucial for informed clinical decision-making.
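To illustrate the kind of visualization the abstract refers to, the following is a minimal NumPy sketch of the core Grad-CAM combination step. It assumes the feature maps of the last convolutional layer and the gradients of the target class score with respect to those maps have already been extracted from a model (e.g. via framework hooks); the function name and array shapes are our own illustrative choices, not part of the study's code.

```python
import numpy as np

def grad_cam_heatmap(activations, gradients):
    """Combine conv feature maps into a Grad-CAM heatmap.

    activations: (K, H, W) feature maps from the last conv layer.
    gradients:   (K, H, W) gradients of the target class score
                 with respect to those feature maps.
    Returns a (H, W) heatmap normalized to [0, 1].
    """
    # Channel weights alpha_k: global-average-pool the gradients.
    weights = gradients.mean(axis=(1, 2))  # shape (K,)
    # Weighted sum of feature maps, then ReLU to keep only
    # evidence that positively supports the predicted class.
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    # Normalize so the heatmap can be overlaid on the X-ray image.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

In practice the resulting heatmap is upsampled to the input resolution and blended over the chest X-ray, highlighting the lung regions the classifier attended to for its prediction.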