
Guided backprop

GuidedBackpropagation

Bases: XAImethod_Base

XAI Method for Guided Backpropagation.

Normally, this class is used internally by aucmedi.xai.decoder.xai_decoder in the AUCMEDI XAI module.

Reference - Implementation #1

Author: Hoa Nguyen
GitHub Profile: https://nguyenhoa93.github.io/
Date: Jul 29, 2020
https://stackoverflow.com/questions/55924331/how-to-apply-guided-backprop-in-tensorflow-2-0

Reference - Implementation #2

Author: Huynh Ngoc Anh
GitHub Profile: https://github.com/experiencor
Date: Jun 23, 2017
https://github.com/experiencor/deep-viz-keras/

Reference - Implementation #3

Author: Tim
Date: Jan 25, 2019
https://stackoverflow.com/questions/54366935/make-a-deep-copy-of-a-keras-model-in-python

Reference - Publication

Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, Martin Riedmiller. 21 Dec 2014. Striving for Simplicity: The All Convolutional Net.
https://arxiv.org/abs/1412.6806

This class provides functionality for running the compute_heatmap function, which computes a Guided Backpropagation heatmap for an image with a given model.
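
For direct use outside of the XAI decoder, a minimal usage sketch could look like the following. The import path follows the source file location shown below; model and image are placeholders for a trained Keras model and a single preprocessed image.

    import numpy as np
    from aucmedi.xai.methods.guided_backprop import GuidedBackpropagation

    # Wrap the trained model (clones it and patches all ReLU activations)
    xai_method = GuidedBackpropagation(model)

    # The image has to be passed as a one-element batch:
    # (1, height, width, channels)
    sample = np.expand_dims(image, axis=0)

    # Compute the normalized heatmap for the class at index 0
    heatmap = xai_method.compute_heatmap(sample, class_index=0)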

Source code in aucmedi/xai/methods/guided_backprop.py
class GuidedBackpropagation(XAImethod_Base):
    """ XAI Method for Guided Backpropagation.

    Normally, this class is used internally by the [aucmedi.xai.decoder.xai_decoder][] in the AUCMEDI XAI module.

    ??? abstract "Reference - Implementation #1"
        Author: Hoa Nguyen <br>
        GitHub Profile: [https://nguyenhoa93.github.io/](https://nguyenhoa93.github.io/) <br>
        Date: Jul 29, 2020 <br>
        [https://stackoverflow.com/questions/55924331/how-to-apply-guided-backprop-in-tensorflow-2-0](https://stackoverflow.com/questions/55924331/how-to-apply-guided-backprop-in-tensorflow-2-0) <br>

    ??? abstract "Reference - Implementation #2"
        Author: Huynh Ngoc Anh <br>
        GitHub Profile: [https://github.com/experiencor](https://github.com/experiencor) <br>
        Date: Jun 23, 2017 <br>
        [https://github.com/experiencor/deep-viz-keras/](https://github.com/experiencor/deep-viz-keras/) <br>

    ??? abstract "Reference - Implementation #3"
        Author: Tim <br>
        Date: Jan 25, 2019 <br>
        [https://stackoverflow.com/questions/54366935/make-a-deep-copy-of-a-keras-model-in-python](https://stackoverflow.com/questions/54366935/make-a-deep-copy-of-a-keras-model-in-python) <br>

    ??? abstract "Reference - Publication"
        Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, Martin Riedmiller. 21 Dec 2014.
        Striving for Simplicity: The All Convolutional Net.
        <br>
        [https://arxiv.org/abs/1412.6806](https://arxiv.org/abs/1412.6806)

    This class provides functionality for running the compute_heatmap function,
    which computes a Guided Backpropagation for an image with a model.
    """
    def __init__(self, model, layerName=None):
        """ Initialization function for creating Guided Backpropagation as XAI Method object.

        Args:
            model (keras.model):               Keras model object.
            layerName (str):                   Not required in Guided Backpropagation, but defined by Abstract Base Class.
        """
        # Create a deep copy of the model
        model_copy = tf.keras.models.clone_model(model)
        model_copy.build(model.input.shape)
        model_copy.compile(optimizer=model.optimizer, loss=model.loss)
        model_copy.set_weights(model.get_weights())

        # Define custom Relu activation function
        @tf.custom_gradient
        def guidedRelu(x):
            def grad(dy):
                return tf.cast(dy>0, "float32") * tf.cast(x>0, "float32") * dy
            return tf.nn.relu(x), grad
        # Replace Relu activation layers with custom Relu activation layer
        layer_dict = [layer for layer in model_copy.layers if hasattr(layer, "activation")]
        for layer in layer_dict:
            if layer.activation == tf.keras.activations.relu:
                layer.activation = guidedRelu
        # Cache class parameters
        self.model = model_copy

    #---------------------------------------------#
    #             Heatmap Computation             #
    #---------------------------------------------#
    def compute_heatmap(self, image, class_index, eps=1e-8):
        """ Core function for computing the Guided Backpropagation for a provided image and for specific classification outcome.

        ???+ attention
            Be aware that the image has to be provided in batch format.

        Args:
            image (numpy.ndarray):              Image matrix encoded as NumPy Array (provided as one-element batch).
            class_index (int):                  Classification index for which the heatmap should be computed.
            eps (float):                        Epsilon to prevent division by zero during intensity normalization.

        The returned heatmap is min-max normalized to the range [0,1].

        ???+ attention
            The shape of the returned heatmap is 2D -> batch and channel axis will be removed.

        Returns:
            heatmap (numpy.ndarray):            Computed Guided Backpropagation for provided image.
        """
        # Compute gradient for desired class index
        with tf.GradientTape() as tape:
            inputs = tf.cast(image, tf.float32)
            tape.watch(inputs)
            preds = self.model(inputs)
            loss = preds[:, class_index]
        gradient = tape.gradient(loss, inputs)
        # Reduce the input gradient to a single channel via channel-wise maximum
        gradient = tf.reduce_max(gradient, axis=-1)
        # Convert to NumPy & Remove batch axis
        heatmap = gradient.numpy()[0,:,:]

        # Intensity normalization to [0,1]
        numer = heatmap - np.min(heatmap)
        denom = (heatmap.max() - heatmap.min()) + eps
        heatmap = numer / denom

        # Return the resulting heatmap
        return heatmap

__init__(model, layerName=None)

Initialization function for creating Guided Backpropagation as XAI Method object.

Parameters:

    model (keras.model): Keras model object. [required]
    layerName (str): Not required in Guided Backpropagation, but defined by the Abstract Base Class. [default: None]
Source code in aucmedi/xai/methods/guided_backprop.py
def __init__(self, model, layerName=None):
    """ Initialization function for creating Guided Backpropagation as XAI Method object.

    Args:
        model (keras.model):               Keras model object.
        layerName (str):                   Not required in Guided Backpropagation, but defined by Abstract Base Class.
    """
    # Create a deep copy of the model
    model_copy = tf.keras.models.clone_model(model)
    model_copy.build(model.input.shape)
    model_copy.compile(optimizer=model.optimizer, loss=model.loss)
    model_copy.set_weights(model.get_weights())

    # Define custom Relu activation function
    @tf.custom_gradient
    def guidedRelu(x):
        def grad(dy):
            return tf.cast(dy>0, "float32") * tf.cast(x>0, "float32") * dy
        return tf.nn.relu(x), grad
    # Replace Relu activation layers with custom Relu activation layer
    layer_dict = [layer for layer in model_copy.layers if hasattr(layer, "activation")]
    for layer in layer_dict:
        if layer.activation == tf.keras.activations.relu:
            layer.activation = guidedRelu
    # Cache class parameters
    self.model = model_copy
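
To illustrate the effect of the patched activation, the following standalone sketch (hypothetical, not part of AUCMEDI) applies the same guided ReLU rule to a small tensor. A gradient is propagated only where both the forward input and the incoming gradient are positive:

    import tensorflow as tf

    @tf.custom_gradient
    def guided_relu(x):
        def grad(dy):
            # Block gradients wherever the upstream gradient (dy)
            # or the forward input (x) is non-positive
            return tf.cast(dy > 0, "float32") * tf.cast(x > 0, "float32") * dy
        return tf.nn.relu(x), grad

    x = tf.constant([-2.0, 3.0])
    with tf.GradientTape() as tape:
        tape.watch(x)
        y = guided_relu(x)
    print(tape.gradient(y, x).numpy())  # [0. 1.] -> the negative input is blocked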

compute_heatmap(image, class_index, eps=1e-08)

Core function for computing the Guided Backpropagation for a provided image and a specific classification outcome.

Attention

Be aware that the image has to be provided in batch format.

Parameters:

    image (numpy.ndarray): Image matrix encoded as NumPy Array (provided as one-element batch). [required]
    class_index (int): Classification index for which the heatmap should be computed. [required]
    eps (float): Epsilon to prevent division by zero during intensity normalization. [default: 1e-08]

The returned heatmap is min-max normalized to the range [0,1].

Attention

The shape of the returned heatmap is 2D -> batch and channel axis will be removed.

Returns:

    heatmap (numpy.ndarray): Computed Guided Backpropagation heatmap for the provided image.

Source code in aucmedi/xai/methods/guided_backprop.py
def compute_heatmap(self, image, class_index, eps=1e-8):
    """ Core function for computing the Guided Backpropagation for a provided image and for specific classification outcome.

    ???+ attention
        Be aware that the image has to be provided in batch format.

    Args:
        image (numpy.ndarray):              Image matrix encoded as NumPy Array (provided as one-element batch).
        class_index (int):                  Classification index for which the heatmap should be computed.
        eps (float):                        Epsilon to prevent division by zero during intensity normalization.

    The returned heatmap is min-max normalized to the range [0,1].

    ???+ attention
        The shape of the returned heatmap is 2D -> batch and channel axis will be removed.

    Returns:
        heatmap (numpy.ndarray):            Computed Guided Backpropagation for provided image.
    """
    # Compute gradient for desired class index
    with tf.GradientTape() as tape:
        inputs = tf.cast(image, tf.float32)
        tape.watch(inputs)
        preds = self.model(inputs)
        loss = preds[:, class_index]
    gradient = tape.gradient(loss, inputs)
    # Reduce the input gradient to a single channel via channel-wise maximum
    gradient = tf.reduce_max(gradient, axis=-1)
    # Convert to NumPy & Remove batch axis
    heatmap = gradient.numpy()[0,:,:]

    # Intensity normalization to [0,1]
    numer = heatmap - np.min(heatmap)
    denom = (heatmap.max() - heatmap.min()) + eps
    heatmap = numer / denom

    # Return the resulting heatmap
    return heatmap
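
As a hedged end-to-end sketch (model and image are again placeholders for a trained Keras model and a preprocessed image of shape (height, width, channels)), the expected input and output shapes are:

    import numpy as np

    xai_method = GuidedBackpropagation(model)
    heatmap = xai_method.compute_heatmap(np.expand_dims(image, axis=0),
                                         class_index=0)

    print(heatmap.shape)  # (height, width): batch and channel axes removed
    print(heatmap.min(), heatmap.max())  # within [0, 1], up to the eps term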