Overview
Saliency maps provide a 'heat map' showing which parts of the input most influence a model's prediction. If a model classifies an image as a 'dog,' the saliency map should ideally highlight the dog's features (ears, nose, fur).
How it's Created
A saliency map is typically calculated by taking the gradient of the output class score with respect to the input pixels. Pixels with large gradient magnitudes are those that, if changed slightly, would most change the model's confidence in its prediction.
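As a minimal sketch of this idea: the snippet below treats a small, randomly initialized two-layer network as a stand-in classifier and approximates the gradient of the winning class score with central finite differences (a real pipeline would use a trained model and automatic differentiation instead). All names here (`class_score`, `saliency`, the weight matrices) are illustrative, not from any particular library.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 16))   # hypothetical hidden-layer weights
W2 = rng.normal(size=(16, 3))   # hypothetical output weights (3 classes)

def class_score(x, cls):
    """Score for one class: input -> ReLU hidden layer -> logit for `cls`."""
    h = np.maximum(x @ W1, 0.0)
    return (h @ W2)[cls]

def saliency(x, cls, eps=1e-4):
    """|d score / d x_i| per input 'pixel', via central finite differences."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        grad[i] = (class_score(x + e, cls) - class_score(x - e, cls)) / (2 * eps)
    return np.abs(grad)  # magnitude: how sensitive the score is to each pixel

x = rng.normal(size=8)  # a stand-in 'image' with 8 pixels
cls = int(np.argmax([class_score(x, c) for c in range(3)]))  # predicted class
s = saliency(x, cls)
print("predicted class:", cls)
print("most salient pixel:", int(np.argmax(s)))
```

The absolute value in the last step of `saliency` reflects the common convention of plotting gradient magnitude: both positive and negative sensitivity mark a pixel as important to the prediction.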
Use Case
- Debugging models (e.g., discovering that a 'husky' classifier is actually just looking for 'snow' in the background).
- Building trust with users by showing what the AI is 'looking at.'