Validating Alzheimer’s Disease Atrophy from Heatmap-based Explainable AI Methods with a Large Meta-Analysis of Neuroimaging Studies
Deep learning has shown great potential in Alzheimer's disease (AD) research, but its complexity makes interpretation and explanation challenging. To address this, heatmap-based explainable artificial intelligence (XAI) methods have emerged as popular tools for visually interpreting deep learning models. However, when no ground truth is available, evaluating and validating the quality of these heatmaps becomes difficult. Our study addressed this challenge by quantifying the overlap between heatmaps generated by deep neural networks for Alzheimer's disease classification and a ground-truth map obtained from a large meta-analysis. Using T1-weighted MRI scans from the ADNI dataset, we trained 3D CNN classifiers and employed three state-of-the-art XAI heatmap methods: Layer-wise Relevance Propagation (LRP), Integrated Gradients (IG), and Guided Grad-CAM (GGC). The heatmaps produced by these methods were compared to a binary brain map derived from a meta-analysis of voxel-based morphometry studies conducted on independent T1-weighted MRI scans. Remarkably, all three heatmap methods captured brain regions that showed significant overlap with the meta-analysis map, with the IG method demonstrating the most promising results. Moreover, all three heatmap methods outperformed linear Support Vector Machine (SVM) models, indicating that applying the latest heatmap techniques to deep nonlinear models can generate more meaningful brain maps than linear and shallow models. In conclusion, our research highlights the potential of heatmap visualization techniques for understanding the effects of Alzheimer's disease on brain regions. By enhancing interpretability and explainability, these methods contribute to the advancement of AD research and hold promise for applications in other neurological conditions.
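The abstract describes quantifying the overlap between an XAI heatmap and a binary meta-analysis map, but does not specify the metric. As a minimal illustrative sketch, one common choice is the Dice coefficient between the binary map and a heatmap binarized at a percentile threshold; the function name `dice_overlap` and the 95th-percentile threshold below are assumptions for illustration, not the paper's reported procedure.

```python
import numpy as np

def dice_overlap(heatmap, mask, percentile=95):
    """Binarize a relevance heatmap at a percentile threshold and
    compute its Dice coefficient with a binary ground-truth mask.

    Hypothetical helper for illustration; the paper's actual overlap
    metric and threshold are not given in the abstract.
    """
    threshold = np.percentile(heatmap, percentile)
    binary_heatmap = heatmap >= threshold
    mask = mask.astype(bool)
    intersection = np.logical_and(binary_heatmap, mask).sum()
    denom = binary_heatmap.sum() + mask.sum()
    if denom == 0:
        return 0.0
    return 2.0 * intersection / denom

# Toy 3D volumes standing in for a heatmap and a meta-analysis map
rng = np.random.default_rng(0)
heatmap = rng.random((16, 16, 16))          # e.g. an IG attribution volume
mask = np.zeros((16, 16, 16), dtype=bool)   # e.g. atrophy regions from meta-analysis
mask[4:12, 4:12, 4:12] = True
print(dice_overlap(heatmap, mask))
```

A percentile threshold (rather than an absolute cutoff) keeps the comparison invariant to the differing relevance scales of LRP, IG, and GGC, since each method produces attributions in its own arbitrary units.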