Integrating domain knowledge and deep learning for enhanced chest X-ray diagnosis and localization
Abstract
Chest X-ray imaging is central to diagnosing a wide range of medical conditions, including pneumonia, lung cancer, and heart disease. Despite the growing volume of chest X-ray images, their interpretation remains a manual, time-consuming process, contributing to radiologist burnout and diagnostic delays. Integrating domain knowledge with deep learning has the potential to improve the diagnosis, classification, and localization of abnormalities in chest X-rays while also addressing the challenge of model interpretability.
This work proposes a series of novel methods that combine radiomics features with deep learning techniques for chest X-ray diagnosis, classification, and localization. We first introduce a framework that leverages radiomics features and contrastive learning for pneumonia detection, improving both detection performance and interpretability. The second method, ChexRadiNet, combines radiomics features with a lightweight triplet-attention mechanism to improve abnormality classification.
In addition, we present a semi-supervised knowledge-augmented contrastive learning framework that seamlessly integrates radiomic features as a knowledge augmentation for disease classification and localization. This approach leverages Grad-CAM to highlight crucial abnormal regions, extracting radiomic features that act as positive samples for image features generated from the same chest X-ray. Consequently, this framework creates a feedback loop, enabling image and radiomic features to mutually reinforce each other, resulting in robust and interpretable knowledge-augmented representations.
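The pairing described above, where radiomic features extracted from Grad-CAM-highlighted regions act as positives for image features from the same chest X-ray, is an instance of a standard contrastive (InfoNCE-style) objective. A minimal NumPy sketch is given below; the function name, temperature value, and embedding dimensions are illustrative assumptions, not details taken from the dissertation:

```python
import numpy as np

def info_nce_loss(img_emb, rad_emb, temperature=0.1):
    """Contrastive loss pairing image embeddings with radiomics embeddings.

    Row i of img_emb and row i of rad_emb come from the same chest X-ray
    and form a positive pair; all other rows in the batch are negatives.
    """
    # L2-normalize both embedding sets so similarity is cosine similarity
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    rad = rad_emb / np.linalg.norm(rad_emb, axis=1, keepdims=True)
    # Similarity matrix: entry (i, j) compares image i with radiomics j
    logits = img @ rad.T / temperature
    # Row-wise softmax (max-subtraction for numerical stability)
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = exp / exp.sum(axis=1, keepdims=True)
    # Positive pairs lie on the diagonal; minimize their negative log-likelihood
    n = img.shape[0]
    return float(-np.mean(np.log(probs[np.arange(n), np.arange(n)])))
```

When matched image/radiomics pairs are already aligned, the loss is near zero; shuffling the radiomics rows so positives no longer line up drives it up, which is the pressure that makes the two feature sets reinforce each other.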
Finally, the Radiomics-Guided Transformer (RGT) fuses global image information with local, radiomics-guided auxiliary information to localize and classify cardiopulmonary pathologies accurately without requiring bounding-box annotations.
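One common way to fuse global image features with local auxiliary features, in the spirit of the radiomics-guided design described above, is cross-attention in which image tokens query radiomics tokens. The following single-head NumPy sketch is a hypothetical illustration of that fusion pattern; it is not the actual RGT architecture, whose details are not specified in this abstract:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fuse(image_tokens, radiomics_tokens):
    """Fuse per-patch image tokens with radiomics tokens via cross-attention.

    image_tokens:     (num_patches, d) global image representations (queries)
    radiomics_tokens: (num_regions, d) local radiomics features (keys/values)
    Returns one fused vector per image patch, a convex combination of the
    radiomics tokens weighted by query-key similarity.
    """
    d = image_tokens.shape[-1]
    # Scaled dot-product attention scores between patches and regions
    attn = softmax(image_tokens @ radiomics_tokens.T / np.sqrt(d), axis=-1)
    # Each output row mixes radiomics information relevant to that patch
    return attn @ radiomics_tokens
```

In a full model the queries, keys, and values would pass through learned projections and the fused output would feed subsequent transformer blocks; the sketch keeps only the attention arithmetic.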
Experimental results on public datasets such as NIH ChestX-ray, CheXpert, MIMIC-CXR, and the RSNA Pneumonia Detection Challenge demonstrate the effectiveness of our proposed methods, consistently outperforming state-of-the-art models in chest X-ray diagnosis, classification, and localization tasks. By bridging the gap between traditional radiomics and deep learning approaches, this work aims to advance the field of medical image analysis and facilitate more efficient and accurate diagnoses in clinical practice.