Layer-wise Relevance Propagation (LRP)


By Prof. Hyunseok Oh
https://sddo.gist.ac.kr/
SDDO Lab at GIST

1. Why Do We Need XAI?

  • Deep learning models are 'black boxes' whose internal reasoning cannot be directly interpreted by users.
  • We need to figure out why the model arrives at a certain prediction.

1.png

2. Making Deep Neural Nets Transparent

  • Interpreting models: how to understand the model itself.
  • Explaining decisions: how to figure out why a particular decision was made.

2.png

3. Layer-wise Relevance Propagation (LRP)

  • LRP is a technique that describes the decision of a model by applying Taylor decomposition (a first-order sketch of this decomposition follows the figure below).
  • As its name implies, the relevance $R(x)$ that contributed to the prediction result is computed and propagated backward, layer by layer.
  • In other words, LRP shows how much each pixel of the input layer influenced the prediction result.

3.png
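As a minimal sketch of the Taylor-decomposition view (the root point $\tilde{x}$ and the first-order truncation are illustrative assumptions of this sketch, not details from the slides): expanding the prediction $f(x)$ around a root point $\tilde{x}$ with $f(\tilde{x}) \approx 0$ distributes the output over the input dimensions, and each term can be read as the relevance of one input:

$f(x) \;\approx\; \sum_i \frac{\partial f}{\partial x_i}\Big|_{x = \tilde{x}} \,(x_i - \tilde{x}_i) \;=\; \sum_i R_i(x)$

Summing the relevances (approximately) recovers the prediction, which is the conservation property used in the assumptions below.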

The basic assumptions behind LRP and how it works are as follows (a minimal code sketch follows the list):

  • Each neuron carries a certain amount of relevance.
  • Relevance is redistributed from the output of each neuron back to its inputs.
  • The total relevance is conserved when it is redistributed from layer to layer.
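
A minimal sketch of this layer-by-layer redistribution for a small fully connected ReLU network, written here with the basic LRP-epsilon rule; the layer sizes, random weights, and the epsilon stabilizer are illustrative assumptions, not values from the slides:

    import numpy as np

    def lrp_dense(a_prev, W, b, R_next, eps=1e-6):
        # Redistribute the relevance R_next of a dense layer's outputs
        # back onto its inputs using the LRP-epsilon rule.
        z = a_prev @ W + b                            # pre-activations z_j
        z = z + eps * np.where(z >= 0, 1.0, -1.0)     # stabilizer keeps the division well-defined
        s = R_next / z                                # relevance per unit of pre-activation
        return a_prev * (s @ W.T)                     # relevance assigned to each input neuron

    # Tiny example network (sizes and weights are arbitrary, for illustration only).
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
    W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)

    x  = rng.normal(size=4)                           # "input pixels"
    a1 = np.maximum(0.0, x @ W1 + b1)                 # hidden ReLU activations
    y  = a1 @ W2 + b2                                 # prediction f(x)

    # Relevance starts at the output and is propagated back, layer by layer.
    R2 = y                                            # relevance of the output neuron
    R1 = lrp_dense(a1, W2, b2, R2)                    # relevance of the hidden neurons
    R0 = lrp_dense(x,  W1, b1, R1)                    # relevance of each input

    print(y.item(), R1.sum(), R0.sum())               # all three values are (nearly) equal

The three printed values agree up to the small epsilon stabilizer, which is exactly the conservation property stated above: relevance is neither created nor lost as it flows from the output back to the input.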