
Alternative Sampling for More Faithful
Explanation Through Local Surrogate Models

How do we typically explain predictions?

We generate a local approximation or surrogate model that can be easily explained.

We start off with the data set we would like to generate explanations for, and pick one of the instances. The background represents the class probabilities output by the complex ML model.
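The local-surrogate idea can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: a hypothetical `black_box` function stands in for the complex ML model, and a linear model is fitted to its outputs near the chosen instance.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in for the complex ML model's class-probability output.
    return 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] - X[:, 1])))

x = np.array([0.5, 1.0])                            # instance to explain
X_local = x + rng.normal(scale=0.3, size=(500, 2))  # local perturbations
y_local = black_box(X_local)                        # query the model

# Fit a linear surrogate by least squares (last column = intercept).
A = np.column_stack([X_local, np.ones(len(X_local))])
coef, *_ = np.linalg.lstsq(A, y_local, rcond=None)
print(coef[:2])  # per-feature weights: the local explanation
```

The fitted weights approximate the model's behaviour around `x`, which is what makes the surrogate easy to explain.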

Our contribution

We present LEMON, an improvement to a popular explanation technique called LIME.
The difference between LIME and LEMON is in the way the synthetic data is sampled:



LIME samples points across the entire feature space, then reweights them according to their proximity to the point being explained.

Hence, LIME finds only a few points (3) in the area of interest.
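The effect is easy to demonstrate. The sketch below (illustrative values, not LIME's exact implementation) samples uniformly over a wide feature range and weights the samples with an exponential kernel; only a small fraction lands near the instance of interest.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.array([0.5, 1.0])                    # instance to explain
X = rng.uniform(-5.0, 5.0, size=(1000, 2))  # sample the whole feature space

dists = np.linalg.norm(X - x, axis=1)
kernel_width = 0.75
weights = np.exp(-(dists ** 2) / kernel_width ** 2)  # proximity weighting

near = np.sum(dists < 1.0)                  # points in the area of interest
print(near, "of", len(X), "samples are near x")
```

Most samples receive a weight of essentially zero, so they contribute nothing to the surrogate fit.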


We suggest instead to define the area of interest first, and then sample directly from the desired distribution.

As a result, we have much more data (15) to train a surrogate model on.
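Sampling directly from a local distribution can be sketched as follows. The Gaussian used here is an illustrative assumption; the exact distribution LEMON samples from is described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.array([0.5, 1.0])                        # instance to explain
X = x + rng.normal(scale=0.4, size=(1000, 2))   # sample near x by design

dists = np.linalg.norm(X - x, axis=1)
near = np.sum(dists < 1.0)                      # points in the area of interest
print(near, "of", len(X), "samples are near x")
```

With the same sampling budget, nearly every point now falls in the region the surrogate is meant to describe, so no samples are wasted on reweighting.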



The improved sampling approach in LEMON yields much more faithful explanations! More details can be found in the paper.

Try it out!

You can try out the LEMON technique for yourself by checking out the code on GitHub.



If you want to refer to our explanation technique, please cite our paper using the following BibTeX entry:

@inproceedings{collaris2023lemon,
  title={{LEMON}: Alternative Sampling for More Faithful Explanation Through Local Surrogate Models},
  author={Collaris, Dennis and Gajane, Pratik and Jorritsma, Joost and van Wijk, Jarke J and Pechenizkiy, Mykola},
  booktitle={Advances in Intelligent Data Analysis XXI: 21st International Symposium on Intelligent Data Analysis (IDA 2023)},