Overview
LIME (Local Interpretable Model-agnostic Explanations) helps us understand 'black box' models by observing how the prediction changes when the input data is slightly perturbed.
How it Works
- Pick a specific data point you want to explain.
- Create a new dataset of 'perturbed' samples (slightly modified versions of that point).
- Get predictions for these samples from the complex model.
- Train a simple, interpretable model (such as a linear regression) on this new dataset, weighting each perturbed sample by its proximity to the original point.
- The simple model's weights provide an explanation of the complex model's behavior around that specific point.
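The steps above can be sketched from scratch with NumPy and scikit-learn. This is a minimal illustration, not the `lime` library's actual implementation: the black-box model, dataset, kernel width, and noise scale are all hypothetical choices made for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical black box: a random forest trained on synthetic data where
# only features 0 and 1 matter.
X = rng.normal(size=(500, 4))
y = 3 * X[:, 0] + np.sin(3 * X[:, 1]) + rng.normal(scale=0.1, size=500)
black_box = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Step 1: the specific point we want to explain.
x0 = X[0]

# Step 2: perturbed samples -- Gaussian noise around x0.
Z = x0 + rng.normal(scale=0.5, size=(1000, 4))

# Step 3: black-box predictions for the perturbed samples.
f_Z = black_box.predict(Z)

# Step 4 (proximity weighting): samples closer to x0 count more,
# via an RBF kernel with an arbitrary width of 0.75.
dists = np.linalg.norm(Z - x0, axis=1)
weights = np.exp(-(dists ** 2) / (2 * 0.75 ** 2))

# Step 5: fit a weighted linear surrogate; its coefficients are the
# local explanation of the black box around x0.
surrogate = Ridge(alpha=1.0).fit(Z, f_Z, sample_weight=weights)
print(dict(zip(["f0", "f1", "f2", "f3"], surrogate.coef_.round(2))))
```

Because the true relationship gives feature 0 a slope of 3 while features 2 and 3 are irrelevant, the surrogate's coefficient for `f0` should dominate those of `f2` and `f3`, confirming that the linear fit recovers the local behaviour.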
Comparison with SHAP
LIME is generally faster than SHAP, but its explanations can be less stable (they depend on random sampling of the neighbourhood) and less mathematically rigorous: SHAP's attributions are grounded in Shapley values from cooperative game theory, which come with consistency guarantees.
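The stability point can be demonstrated with a small self-contained experiment: running the same perturbation-and-fit procedure twice with different sampling seeds yields (slightly) different coefficients for the same data point. The synthetic model and all parameter choices here are hypothetical, chosen only to make the effect visible.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

# Hypothetical black box on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = 3 * X[:, 0] + np.sin(3 * X[:, 1]) + rng.normal(scale=0.1, size=500)
black_box = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
x0 = X[0]  # the point being explained

def lime_coefs(seed, n_samples=200):
    """One LIME-style run: perturb around x0, predict, fit a
    proximity-weighted linear surrogate, return its coefficients."""
    r = np.random.default_rng(seed)
    Z = x0 + r.normal(scale=0.5, size=(n_samples, 4))
    dists = np.linalg.norm(Z - x0, axis=1)
    w = np.exp(-(dists ** 2) / (2 * 0.75 ** 2))
    return Ridge(alpha=1.0).fit(Z, black_box.predict(Z), sample_weight=w).coef_

# Different sampling seeds produce different explanations for the same point.
run_a, run_b = lime_coefs(seed=1), lime_coefs(seed=2)
print("max coefficient difference:", np.abs(run_a - run_b).max())
```

Increasing `n_samples` shrinks this run-to-run variance but increases the number of black-box calls, which is exactly the speed/stability trade-off the comparison with SHAP refers to.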