How to explain the model behavior
This guide shows how to explain the behavior of a model trained in the previous stage. It demonstrates how to generate saliency maps, which highlight the image regions the model paid attention to when predicting a specific category.
To be specific, this tutorial uses as an example the ATSS model trained via otx train and saved as outputs/weights.pth.
Note
This tutorial uses an object detection model as an example; however, for other tasks the functionality remains the same - you just need to replace the input dataset with your own.
For visualization, we use images from the WGISD dataset from the object detection tutorial, together with the trained model.
1. Activate the virtual environment created in the previous step.
source .otx/bin/activate
# or use this line if you created the environment using tox
. venv/otx/bin/activate
2. otx explain returns saliency maps (heatmaps with red-colored areas of focus) at the path specified by --output.
otx explain --input otx-workspace-DETECTION/splitted_dataset/val/ \
--output outputs/explanation \
--load-weights outputs/weights.pth
3. To specify the algorithm used to create saliency maps for classification, we can define the --explain-algorithm parameter (an example command is shown after the list):
activationmap - for the activation map classification algorithm
eigencam - for the Eigen-CAM classification algorithm
classwisesaliencymap - for the Recipro-CAM classification algorithm, this is the default method
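For instance, a minimal sketch of selecting Eigen-CAM for a classification model could look like the command below. The flags are the ones described in this guide, but the classification workspace path is a placeholder - it is not something created earlier in this tutorial:
otx explain --input otx-workspace-CLASSIFICATION/splitted_dataset/val/ \
--output outputs/explanation \
--load-weights outputs/weights.pth \
--explain-algorithm eigencam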
For the detection task, we can choose between the following methods (an example command is shown after the list):
activationmap - for the activation map detection algorithm
classwisesaliencymap - for the DetClassProbabilityMap algorithm (works for single-stage detectors only)
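As an example, the command from step 2 can be extended with the --explain-algorithm parameter to generate activation maps for the detection model trained in this tutorial; the output folder name below is an arbitrary choice:
otx explain --input otx-workspace-DETECTION/splitted_dataset/val/ \
--output outputs/explanation_activationmap \
--load-weights outputs/weights.pth \
--explain-algorithm activationmap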
Note
Learn more about Explainable AI and its algorithms in the XAI explanation section.
4. As a result, we will get a folder with a pair of generated images for each image in --input:
saliency map - where red color means more attention of the model
overlay - where the saliency map is combined with the original image
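Note that otx explain already writes these overlay images for you; the snippet below is only an illustrative sketch of what such blending amounts to, using ImageMagick with hypothetical file names (saliency_map.png and original.jpg are placeholders, not files produced by this tutorial):
# Blend a saliency heatmap with its source image at 50% opacity (illustration only;
# file names are placeholders - otx explain generates the overlays itself).
composite -blend 50 saliency_map.png original.jpg overlay_manual.png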