Improving Trust and Usability of Deep Learning Predictions in Radiology by Making Medical Diagnostics More Explainable
*Amina Mamuda Damo, Hashim Ibrahim Bisallah, Fatimah Binta Abdullahi and Benjamin Okike
Department of Computer Science, University of Abuja, Abuja, Nigeria
*Corresponding Author: amina.damo2020@uniabuja.edu.ng
ABSTRACT
The application of deep learning in radiology has markedly improved diagnostic performance; however, widespread clinical adoption is hindered by the black-box nature of these models, which limits interpretability and undermines trust among healthcare professionals. This study introduces an explainable deep learning framework for brain tumor classification using magnetic resonance imaging (MRI). A convolutional neural network (CNN) was trained and validated on a curated dataset comprising four diagnostic categories: glioma, meningioma, pituitary tumor, and normal brain scans. To address the interpretability challenge, Gradient-weighted Class Activation Mapping (Grad-CAM) was employed to generate visual explanations highlighting the regions most influential in the model’s predictions. The framework achieved high quantitative performance across key metrics, including accuracy, precision, recall, and F1-score. In addition, qualitative assessments by radiologists confirmed that the Grad-CAM visualizations provided clinically meaningful insights, aligning with known diagnostic landmarks and improving trust in the model’s outputs. These findings underscore the value of integrating explainability into deep learning systems for medical imaging, paving the way for safer, more transparent, and clinically acceptable AI-assisted diagnostics.
Keywords: Deep Learning, Grad-CAM, Model Interpretability, Brain Tumor Classification, Medical Imaging.
CITE AS: Amina Mamuda Damo, Hashim Ibrahim Bisallah, Fatimah Binta Abdullahi and Benjamin Okike (2025). Improving Trust and Usability of Deep Learning Predictions in Radiology by Making Medical Diagnostics More Explainable. IAA Journal of Scientific Research 12(2):17-28. https://doi.org/10.59298/IAAJSR/2025/1221728.00
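The Grad-CAM procedure summarized in the abstract weights each final-layer feature map by the gradient of the target-class score with respect to that map, sums the weighted maps, and applies a ReLU to obtain a heatmap of influential regions. The following is a minimal illustrative sketch, not the authors' implementation: it assumes a hypothetical CNN whose head is global average pooling followed by a linear classifier, in which case the channel weights can be computed in closed form from the classifier weights, and it uses random arrays in place of real MRI-derived feature maps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical final-convolutional-layer feature maps for one MRI slice:
# (K channels, H, W). Real values would come from a trained CNN.
fmap = rng.standard_normal((16, 32, 32))
# Hypothetical weights of a global-average-pooling + linear head:
# (n_classes, K), with the four classes from the paper (glioma,
# meningioma, pituitary tumor, normal).
W = rng.standard_normal((4, 16))

def grad_cam(fmap, W, target_class):
    """Grad-CAM heatmap for a GAP + linear head.

    For this head, the gradient of the class-c score with respect to
    feature map k is W[c, k] / (H * W), so the Grad-CAM channel weights
    (the alpha_k in the original method) are the classifier weights up
    to a constant factor.
    """
    k, h, w = fmap.shape
    alpha = W[target_class] / (h * w)                        # (K,)
    cam = (alpha[:, None, None] * fmap).sum(axis=0)          # weighted sum
    cam = np.maximum(cam, 0.0)                               # ReLU
    return cam / (cam.max() + 1e-8)                          # scale to [0, 1]

heatmap = grad_cam(fmap, W, target_class=0)
print(heatmap.shape)  # (32, 32)
```

In practice the heatmap is upsampled to the input resolution and overlaid on the MRI slice so a radiologist can check whether the highlighted region coincides with the suspected lesion.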