Abstract
Breast cancer is a heterogeneous and complex disease characterized by the uncontrolled growth of abnormal cells in breast tissue. Early detection is crucial for effective treatment and improved patient outcomes. Mammography, a primary screening modality, employs X-rays to capture detailed images of the breast and plays a pivotal role in identifying suspicious masses or microcalcifications. Standard visual inspection of a mammogram is time-consuming, subjective, and prone to error, often requiring consensus from two radiologists. Deep learning, particularly convolutional neural networks (CNNs), can automatically identify subtle and complex patterns indicative of malignancy, assisting radiologists. The clinical application of deep learning models to mammograms, however, faces several significant challenges. A primary challenge is confounding learning, wherein a predictive model relies on incorrect information or reasoning to make a decision, even if the decision itself is correct. Another critical concern is clinical acceptance, as the "black-box" nature of deep learning models makes it difficult for radiologists to trust and rely on AI-based diagnostic systems. The theme of this thesis is therefore to develop a "clinically aligned" and interpretable model that addresses two primary goals: (1) mitigating confounding learning by aligning the model's diagnostic reasoning with that of clinical experts, and (2) providing interpretable explanations to enhance collaboration between the model and human decision-makers.
Initially, we analyzed a traditional class activation map (Grad-CAM), which represents deep-learned features in a low-dimensional space, to connect these features with the hand-crafted feature domain. Our findings revealed that texture features, such as mean, entropy, and auto-correlation, showed strong similarities with the deep-learned features. This analysis provides insight into the features leveraged by a baseline model and serves as a foundation for our work. To achieve clinical alignment, this thesis subsequently proposes and evaluates two distinct frameworks.
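The comparison described above can be illustrated with a minimal sketch: compute simple hand-crafted texture maps (here, per-pixel mean and entropy over a sliding window, a simplification of the features named in the abstract) and correlate each with an activation heatmap. The `local_texture_maps` and `similarity` helpers are hypothetical names, and the heatmap is a random stand-in for a real Grad-CAM output; this is not the thesis's actual analysis pipeline.

```python
import numpy as np

def local_texture_maps(img, win=7):
    """Per-pixel mean and entropy over a sliding window.

    A simplified stand-in for hand-crafted texture features
    (mean, entropy) computed on a mammographic patch.
    """
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    h, w = img.shape
    mean_map = np.empty((h, w), dtype=float)
    ent_map = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + win, j:j + win]
            mean_map[i, j] = patch.mean()
            # Histogram-based Shannon entropy of the local patch
            hist, _ = np.histogram(patch, bins=16, range=(0.0, 1.0))
            p = hist / hist.sum()
            p = p[p > 0]
            ent_map[i, j] = -(p * np.log2(p)).sum()
    return mean_map, ent_map

def similarity(cam, feat):
    """Pearson correlation between an activation map and a feature map."""
    a = (cam - cam.mean()) / (cam.std() + 1e-8)
    b = (feat - feat.mean()) / (feat.std() + 1e-8)
    return float((a * b).mean())

rng = np.random.default_rng(0)
img = rng.random((32, 32))        # toy image patch
cam = rng.random((32, 32))        # stand-in for a Grad-CAM heatmap
mean_map, ent_map = local_texture_maps(img)
sim_mean = similarity(cam, mean_map)
sim_ent = similarity(cam, ent_map)
```

A high correlation between a texture map and the heatmap would indicate, as in the analysis above, that the deep model attends to regions where that hand-crafted feature is informative.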
The first framework addresses a specific confounding-learning problem related to calcifications. Analysis of baseline CNNs reveals a tendency to disregard the primary mass and instead prioritize the identification of co-occurring calcifications as a shortcut for malignancy classification. This increases the risk of misclassifying cases where masses are the dominant abnormality. Our work aims to follow the reasoning processes of mammographic readers in detecting clinically relevant pathology features, such as the shape characteristics of the mass. By considering factors beyond calcifications, such as the shape of the mass, the model aims to achieve a more comprehensive understanding of the features associated with malignancy. The results indicate that this approach improves diagnostic accuracy, suggesting that integrating shape features contributes to a more nuanced and balanced decision-making process. Simultaneously, by explaining its reasoning process to humans, the model allows verification that its focus lies on true pathological features, so its output can be checked for plausibility.
The second framework seeks to enforce this shape-feature learning automatically during training. We present a custom loss function designed to steer the model's attention toward the shape characteristics of the mass, making the model inherently clinically aligned. This is achieved by training the model with a loss function that integrates the original mammographic images with pixel-wise annotated data. The framework is accompanied by an interpretability algorithm that explains model predictions in terms of specific clinical characteristics, allowing radiologists to verify the plausibility of the model's reasoning and fostering trust and collaboration.
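One way such an attention-steering loss can be sketched is as a standard classification loss plus a penalty on attention falling outside the pixel-wise mass annotation. The function below is a hypothetical simplification for illustration (the name `clinically_aligned_loss`, the weighting `lam`, and the specific penalty term are assumptions, not the thesis's actual formulation):

```python
import numpy as np

def clinically_aligned_loss(pred_prob, label, attention_map, mask, lam=0.5):
    """Toy combined loss: binary cross-entropy on the malignancy
    prediction plus a term penalising attention that falls outside
    the pixel-wise annotated mass region."""
    eps = 1e-8
    # Standard binary cross-entropy on the scalar prediction
    ce = -(label * np.log(pred_prob + eps)
           + (1 - label) * np.log(1 - pred_prob + eps))
    # Normalise the attention map to a distribution over pixels
    att = attention_map / (attention_map.sum() + eps)
    # Fraction of attention mass outside the annotated region
    outside = att[mask == 0].sum()
    return ce + lam * outside

# Toy usage: identical prediction, attention inside vs. outside the mask
mask = np.zeros((8, 8)); mask[2:6, 2:6] = 1.0
att_inside = np.zeros((8, 8)); att_inside[3, 3] = 1.0
att_outside = np.zeros((8, 8)); att_outside[0, 0] = 1.0
loss_in = clinically_aligned_loss(0.9, 1, att_inside, mask)
loss_out = clinically_aligned_loss(0.9, 1, att_outside, mask)
```

Under this sketch, a model attending inside the annotation incurs a lower loss (`loss_in < loss_out`), which is the mechanism by which training is steered toward the clinically relevant region.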
| Date of Award | 2025 |
|---|---|
| Original language | English |
| Awarding Institution | |
| Supervisor | Reyer Zwiggelaar (Supervisor) |
Keywords
- mammography
- deep learning CADx systems
- convolutional neural networks (CNNs)
- confounding information
- interpretability