Overview

Many AI models, especially deep learning ones, are 'black boxes': it is hard to tell why they produced a particular output. Explainable AI (XAI) aims to make these decisions transparent and interpretable to humans.

Importance

Critical in high-stakes fields like medicine, finance, and law, where 'the AI said so' is not an acceptable justification for a decision.

Techniques

  • Feature Importance: Showing which inputs most influenced the output.
  • Saliency Maps: Highlighting parts of an image the model focused on.
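One common, model-agnostic way to compute feature importance is permutation importance: shuffle one input column at a time and measure how much the model's error grows. The sketch below is a minimal illustration of that idea, not any particular library's API; the toy model and data are hypothetical.

```python
import random

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Estimate each feature's importance by shuffling its column
    and measuring the resulting increase in mean squared error."""
    rng = random.Random(seed)

    def mse(rows):
        return sum((model(r) - t) ** 2 for r, t in zip(rows, y)) / len(y)

    baseline = mse(X)
    importances = []
    for j in range(len(X[0])):
        increases = []
        for _ in range(n_repeats):
            column = [row[j] for row in X]
            rng.shuffle(column)  # break the link between feature j and the target
            shuffled = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, column)]
            increases.append(mse(shuffled) - baseline)
        importances.append(sum(increases) / n_repeats)
    return importances

# Toy model: the output depends only on feature 0, never on feature 1.
model = lambda row: 3.0 * row[0]
X = [[float(i), float(i % 5)] for i in range(50)]
y = [model(row) for row in X]

imps = permutation_importance(model, X, y)
# Shuffling feature 0 degrades predictions; shuffling feature 1 changes nothing.
```

A large importance score here means the model relied heavily on that feature, which is exactly the kind of transparency XAI seeks in high-stakes settings.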

Related Terms