Visualizing Predictions - TobiasSchmidtDE/DeepL-MedicalImaging GitHub Wiki
CRM: Class-selective relevance mapping
Class-selective relevance mapping can be used to visualize the learned behavior of the individual models and their ensembles. It localizes discriminative regions of interest (ROIs) and offers an improved explanation of model predictions.
The algorithm measures the contributions of both positive and negative spatial elements in the feature maps of the deepest convolutional layer of the network. A prediction score S_c is calculated at each node c in the output layer. Then the spatial element at (l, m) is removed from the feature maps and the score is recomputed, yielding the reduced prediction score S_c(l, m).
The CRM is then defined as a linear sum of the incremental mean squared errors between S_c and S_c(l, m) over all output nodes c, so locations whose removal changes the predictions most receive the highest relevance.
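The definition above can be sketched in plain NumPy. This is a minimal illustration, not the repository's implementation: it assumes the network ends in global average pooling followed by a dense layer (weights `weights`, biases `biases`), so that removing a spatial element simply subtracts its contribution from the pooled features.

```python
import numpy as np

def class_selective_relevance_map(feature_maps, weights, biases):
    """Sketch of CRM for a GAP + dense head (assumed architecture).

    feature_maps: (H, W, K) activations of the last conv layer
    weights:      (K, C) dense weights mapping pooled features to C scores
    biases:       (C,)   dense biases
    """
    H, W, K = feature_maps.shape
    gap = feature_maps.mean(axis=(0, 1))             # pooled features, (K,)
    base_scores = gap @ weights + biases             # S_c for every class c
    crm = np.zeros((H, W))
    for l in range(H):
        for m in range(W):
            # remove the spatial element (l, m) from every feature map
            gap_lm = gap - feature_maps[l, m] / (H * W)
            scores_lm = gap_lm @ weights + biases    # S_c(l, m)
            # linear sum of squared score changes over all output nodes
            crm[l, m] = np.sum((base_scores - scores_lm) ** 2)
    return crm
```

The resulting map is non-negative and has the spatial resolution of the last convolutional layer; like Grad-CAM it is usually upsampled to the input size for display.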
Code
The implementation of the CRM can be found here; the two methods `generate_crm_combined` and `generate_crm_class` implement the functionality described above.
Demo application
The demo application allows you to test the visualizations interactively; its description can be found here.
On our model
Over all classes
Grad Cam: Gradient-weighted Class Activation Mapping
- a technique for making Convolutional Neural Network based models more transparent by visualizing regions that are "important" for predictions
- Gradient-weighted Class Activation Mapping (Grad-CAM): uses the class-specific gradient information flowing into the final convolutional layer of a CNN to produce a coarse localization map of the important regions in the image.
- Requires no re-training and is broadly applicable to any CNN-based architecture.
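The core computation can be sketched in a few lines. This is an illustrative NumPy version, not the repository's code: it assumes the last-layer activations and the gradient of the class score with respect to them have already been obtained from the framework's autodiff.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Sketch of Grad-CAM given precomputed tensors (assumed inputs).

    feature_maps: (H, W, K) activations of the final conv layer
    gradients:    (H, W, K) gradient of the class score w.r.t. them
    """
    # neuron-importance weights: global average of the gradients per channel
    alphas = gradients.mean(axis=(0, 1))
    # weighted combination of the feature maps
    cam = np.tensordot(feature_maps, alphas, axes=([2], [0]))
    # ReLU keeps only features with a positive influence on the class
    return np.maximum(cam, 0)
```

The coarse map has the spatial resolution of the last convolutional layer and is upsampled to the input image size for display.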
Guided Grad Cam
While Grad-CAM visualizations are class-discriminative and localize relevant image regions well, they lack the ability to show fine-grained importance the way pixel-space gradient visualization methods do. Guided Backpropagation and the Grad-CAM visualizations can therefore be fused via a pointwise multiplication.
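The fusion step itself is simple; a hedged sketch, assuming the coarse CAM's size divides the input resolution evenly (nearest-neighbour upsampling is used here for self-containedness, bilinear interpolation is more common in practice):

```python
import numpy as np

def fuse_guided_grad_cam(guided_backprop, cam):
    """Pointwise fusion of a guided-backprop map with a coarse Grad-CAM.

    guided_backprop: (H, W) or (H, W, C) pixel-space gradient map
    cam:             (h, w) coarse class activation map, h | H and w | W
    """
    H, W = guided_backprop.shape[:2]
    h, w = cam.shape
    # nearest-neighbour upsampling of the CAM to the input resolution
    cam_up = np.repeat(np.repeat(cam, H // h, axis=0), W // w, axis=1)
    if guided_backprop.ndim == 3:
        cam_up = cam_up[..., None]          # broadcast over colour channels
    return guided_backprop * cam_up
```

The product keeps the fine pixel detail of guided backpropagation while the CAM masks out regions that are not class-discriminative.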
Original Source Code (PyTorch)
Grad Cam on Our Model
Prediction: 0.75 Cardiomegaly
CNN Fixations - An unraveling approach to visualize the discriminative image regions
- Visualizes regions that are "important" for predictions, via unraveling the forward pass operation
- The method exploits feature dependencies across the layer hierarchy and uncovers the discriminative image locations that guide the network's predictions. These locations are named CNN Fixations, loosely analogous to human eye fixations.
- Generic method that requires no architectural changes, additional training or gradient computation to compute the important image locations (CNN Fixations)