Class Activation Map (CAM)


By Prof. Seungchul Lee
http://iai.postech.ac.kr/
Industrial AI Lab at POSTECH

Table of Contents

  • CNN with a Fully Connected Layer

  • CAM: CNN with a Global Average Pooling

  • CAM with MNIST

  • CAM with NEU

  • Video Lectures

1. CNN with a Fully Connected Layer

The conventional CNN can be conceptually divided into two parts: feature extraction and classification. In the feature extraction stage, convolution layers extract features from the input data so that classification can be performed well. The classification stage then uses these extracted features to decide which class each input belongs to.

When we visually identify images, we do not look at the whole image; instead, we intuitively focus on its most important parts. CNN training is similar to the way humans focus: when the weights are optimized, the more important parts receive higher weights. Generally, however, we cannot recognize this, because a generic CNN passes the extracted features through a fully connected layer, which makes them more abstract.
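This conceptual split appears directly in code. Below is a minimal Keras sketch of the two parts; the layer sizes are illustrative only, not the model used later in this notebook.

import tensorflow as tf

# part 1: feature extraction (convolution + pooling)
feature_extractor = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(32, (3,3), activation = 'relu',
                           input_shape = (28, 28, 1)),
    tf.keras.layers.MaxPool2D((2,2)),
])

# part 2: classification (flatten + fully connected layers)
classifier = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation = 'relu'),
    tf.keras.layers.Dense(10, activation = 'softmax'),
])

# a conventional CNN is the composition of the two parts
cnn = tf.keras.models.Sequential([feature_extractor, classifier])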



1.1. Issues on CNN (or Deep Learning)

  • Deep learning performs well compared with other existing algorithms
  • But it works as a black box

    • A classification result is simply returned without explaining how it was derived → little interpretability
  • When we visually identify images, we do not look at the whole image

  • Instead, we intuitively focus on the most important parts of the image
  • When CNN weights are optimized, the more important parts are given higher weights

  • Class activation map (CAM)

    • Based on the learned weights, we can determine which parts of the image the model focuses on
    • It highlights the importance of each image region to the prediction



2. CAM: CNN with a Global Average Pooling

  • Sheds light on how global average pooling explicitly enables a convolutional neural network to have remarkable localization ability
  • The resulting heatmap is the class activation map, highlighting the importance of each image region to the prediction

A deep learning model is a black box: for a binary classification problem, it simply returns a result of 1 or 0 without explaining how that result was derived. The class activation map (CAM), in contrast, makes the classification result interpretable. By analyzing which parts of the image the model focuses on, we can determine which regions it considers important.

The CAM is obtained from a slightly modified CNN and directly highlights the important parts of the spatial grid of an image, so we can see which regions the model emphasizes. The figure below describes the procedure for class activation mapping.



The feature maps of the last convolution layer can be interpreted as a collection of visual patterns at the spatial locations attended to by the model. The CAM is obtained as a weighted linear sum of these feature maps: since each class has its own weights, the linear combination highlights different spatial locations for different classes and input images. For a given image, $f_k(x,y)$ represents the feature map of unit $k$ in the last convolution layer at spatial location $(x,y)$. For a given class $c$, the class score $S_c$ is expressed as the following equation.


$$S_c = \sum_k \omega_k^c \sum_{x,y} f_k(x,y)= \sum_{x,y} \sum_k \omega_k^c \; f_k(x,y)$$

where $\omega_k^c$ is the weight corresponding to class $c$ for unit $k$. The class activation map for class $c$ is denoted as $M_c$.


$$M_c(x,y) = \sum_k \omega_k^c \; f_k(x,y)$$

$M_c(x,y)$ directly indicates the importance of the activation at spatial grid $(x,y)$ for class $c$. Finally, the softmax output for class $c$ is


$$P_c = \frac{\exp\left(S_c\right)}{\sum_{c'} \exp\left(S_{c'}\right)}$$

In a CNN, the size of the feature map is reduced by the pooling layers, so the CAM is coarser than the input image. By simply up-sampling the CAM to the input size, we can identify the image regions attended to for each label.
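These equations translate almost directly into code. The following NumPy sketch computes $M_c$, $S_c$, and $P_c$ for hypothetical arrays features and weights; the names and random values are illustrative stand-ins, not outputs of a trained model.

import numpy as np
import cv2

# features: (H, W, K) activations f_k(x, y) of the last conv layer
# weights : (K, C) class weights w_k^c, one column per class
features = np.random.rand(7, 7, 64)      # illustrative stand-in
weights  = np.random.rand(64, 2)         # illustrative stand-in

c = 1                                    # class of interest

# M_c(x, y) = sum_k w_k^c f_k(x, y): weighted sum over feature maps
M_c = features @ weights[:, c]           # shape (7, 7)

# S_c = sum_{x, y} M_c(x, y), then softmax over classes
S = np.sum(features.reshape(-1, 64) @ weights, axis = 0)
P = np.exp(S) / np.sum(np.exp(S))

# up-sample the coarse 7 x 7 map to the input resolution
CAM = cv2.resize(M_c, (28, 28))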

3. CAM with MNIST

In [1]:
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import cv2
In [2]:
mnist = tf.keras.datasets.mnist

(train_x, train_y), (test_x, test_y) = mnist.load_data()

train_x, test_x = train_x/255.0, test_x/255.0
train_x = train_x.reshape((train_x.shape[0], 28, 28, 1))
test_x = test_x.reshape((test_x.shape[0], 28, 28, 1))
In [3]:
# digits of 7 and 9 will be used

train_idx = np.hstack((np.where(train_y == 7), np.where(train_y == 9)))[0]
test_idx = np.hstack((np.where(test_y == 7), np.where(test_y == 9)))[0]

train_imgs   = train_x[train_idx]
train_labels = train_y[train_idx]
test_imgs    = test_x[test_idx]
test_labels  = test_y[test_idx]

n_train      = train_imgs.shape[0]
n_test       = test_imgs.shape[0]

print ("The number of train images: {}, shape: {}".format(n_train, train_imgs.shape))
print ("The number of test images: {}, shape: {}".format(n_test, test_imgs.shape))
The number of train images: 12214, shape: (12214, 28, 28, 1)
The number of test images: 2037, shape: (2037, 28, 28, 1)
In [4]:
# binary classification
# label 7 to 0 
# label 9 to 1

train_y = (train_labels == 9).astype(int)
test_y = (test_labels == 9).astype(int)
In [5]:
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(filters = 32, 
                           kernel_size = (3,3), 
                           activation = 'relu',
                           padding = 'SAME',
                           input_shape = (28, 28, 1)),
    
    tf.keras.layers.MaxPool2D((2,2)),
    
    tf.keras.layers.Conv2D(filters = 64, 
                           kernel_size = (3,3), 
                           activation = 'relu',
                           padding = 'SAME'),
    
    tf.keras.layers.MaxPool2D((2,2)),
    
    tf.keras.layers.GlobalAveragePooling2D(),
    
    tf.keras.layers.Dense(2, activation = 'softmax', use_bias = False)
])
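Note the CAM-specific design of this model: the usual fully connected head is replaced by global average pooling followed by a single bias-free dense layer, so the dense weights play the role of $\omega_k^c$ and can later be multiplied directly onto the feature maps. A quick sanity check of the weight shape (a sketch, run after building the model above):

# dense kernel maps 64 pooled features to 2 classes; use_bias = False
# keeps S_c a pure weighted sum, matching the CAM equations
w = model.layers[-1].get_weights()[0]
print(w.shape)    # expected: (64, 2)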
In [6]:
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 28, 28, 32)        320       
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 14, 14, 32)        0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 14, 14, 64)        18496     
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 7, 7, 64)          0         
_________________________________________________________________
global_average_pooling2d (Gl (None, 64)                0         
_________________________________________________________________
dense (Dense)                (None, 2)                 128       
=================================================================
Total params: 18,944
Trainable params: 18,944
Non-trainable params: 0
_________________________________________________________________
In [7]:
model.compile(optimizer = 'adam', 
              loss = 'sparse_categorical_crossentropy', 
              metrics = ['accuracy'])
In [8]:
model.fit(train_imgs, train_y, epochs = 10)
Epoch 1/10
382/382 [==============================] - 4s 11ms/step - loss: 0.4545 - accuracy: 0.8003
Epoch 2/10
382/382 [==============================] - 4s 12ms/step - loss: 0.1998 - accuracy: 0.9348
Epoch 3/10
382/382 [==============================] - 4s 11ms/step - loss: 0.1577 - accuracy: 0.9468
Epoch 4/10
382/382 [==============================] - 4s 11ms/step - loss: 0.1272 - accuracy: 0.9564
Epoch 5/10
382/382 [==============================] - 4s 11ms/step - loss: 0.1116 - accuracy: 0.9636
Epoch 6/10
382/382 [==============================] - 4s 11ms/step - loss: 0.1010 - accuracy: 0.9645
Epoch 7/10
382/382 [==============================] - 4s 11ms/step - loss: 0.0957 - accuracy: 0.9673
Epoch 8/10
382/382 [==============================] - 4s 10ms/step - loss: 0.0854 - accuracy: 0.9708
Epoch 9/10
382/382 [==============================] - 4s 10ms/step - loss: 0.0773 - accuracy: 0.9736
Epoch 10/10
382/382 [==============================] - 4s 10ms/step - loss: 0.0741 - accuracy: 0.9738
Out[8]:
<tensorflow.python.keras.callbacks.History at 0x1e7b385e9b0>
In [9]:
# accuracy test
test_loss, test_acc = model.evaluate(test_imgs,  test_y)

print('loss = {}, Accuracy = {} %'.format(round(test_loss,8), round(test_acc*100)))
64/64 [==============================] - 0s 3ms/step - loss: 0.0709 - accuracy: 0.9779
loss = 0.07086747, Accuracy = 98 %
In [10]:
# get the last max pooling layer (7 x 7 x 64 feature maps) and the dense layer weights (64 x 2)
conv_layer = model.get_layer(index = 3)
fc_layer = model.layers[5].get_weights()[0]

# class activation map: project the feature maps onto the class weights
activation_map = tf.matmul(conv_layer.output, fc_layer)
CAM_model = tf.keras.Model(inputs = model.inputs, outputs = activation_map)
In [11]:
# pick a random test image, get its predicted class, and compute the CAM
test_img = test_imgs[np.random.choice(test_imgs.shape[0], 1)]
pred = np.argmax(model.predict(test_img), axis = 1)

CAM = CAM_model.predict(test_img)
attention = CAM[:,:,:,pred]                       # CAM of the predicted class
attention = np.abs(np.reshape(attention,(7,7)))

large_test_x = cv2.resize(test_img.reshape(28,28), (28*5, 28*5))
large_attention = cv2.resize(attention, (28*5, 28*5))
plt.figure(figsize = (10,10))
plt.subplot(2,2,1)
plt.imshow(large_test_x, 'gray')
plt.axis('off')
plt.subplot(2,2,2)
plt.imshow(large_attention, 'jet', alpha = 0.5)
plt.axis('off')
plt.subplot(2,2,4)
plt.imshow(large_test_x, 'gray')
plt.imshow(large_attention, 'jet', alpha = 0.5)
plt.axis('off')
plt.show()
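The same overlay procedure is reused for the NEU dataset below, so it can be convenient to wrap it in a small helper. This is a sketch, assuming a model and CAM_model built as above; the function name show_cam is hypothetical.

def show_cam(model, CAM_model, img, img_size, scale = 5):
    # predicted class for a single-image batch of shape (1, H, W, 1)
    pred = np.argmax(model.predict(img), axis = 1)[0]

    # CAM of the predicted class, up-sampled to the display size
    cam = np.abs(CAM_model.predict(img)[0, :, :, pred])
    big_img = cv2.resize(img.reshape(img_size, img_size),
                         (img_size*scale, img_size*scale))
    big_cam = cv2.resize(cam, (img_size*scale, img_size*scale))

    # overlay the CAM on the input image
    plt.imshow(big_img, 'gray')
    plt.imshow(big_cam, 'jet', alpha = 0.5)
    plt.axis('off')
    plt.show()

# example: show_cam(model, CAM_model, test_imgs[[0]], 28)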

4. CAM with NEU

Download NEU steel surface defects images and labels

In [12]:
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import cv2
In [13]:
train_x = np.load('./data_files/NEU_train_imgs.npy')
train_y= np.load('./data_files/NEU_train_labels.npy')
test_x = np.load('./data_files/NEU_test_imgs.npy')
test_y = np.load('./data_files/NEU_test_labels.npy')

n_train = train_x.shape[0]
n_test = test_x.shape[0]

print ("The number of training images : {}, shape : {}".format(n_train, train_x.shape))
print ("The number of testing images : {}, shape : {}".format(n_test, test_x.shape))
The number of training images : 1500, shape : (1500, 200, 200, 1)
The number of testing images : 300, shape : (300, 200, 200, 1)
In [14]:
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(filters = 32, 
                           kernel_size = (3,3), 
                           activation = 'relu',
                           padding = 'SAME',
                           input_shape = (200, 200, 1)),

    tf.keras.layers.MaxPool2D((2,2)),
    
    tf.keras.layers.Conv2D(filters = 64, 
                           kernel_size = (3,3), 
                           activation = 'relu',
                           padding = 'SAME'),
    
    tf.keras.layers.MaxPool2D((2,2)),
    
    tf.keras.layers.Conv2D(filters = 64, 
                           kernel_size = (3,3), 
                           activation = 'relu',
                           padding = 'SAME'),
    
    tf.keras.layers.MaxPool2D((2,2)),
    
    tf.keras.layers.Conv2D(filters = 64, 
                           kernel_size = (3,3), 
                           activation = 'relu',
                           padding = 'SAME'),
    
    tf.keras.layers.GlobalAveragePooling2D(),
    
    tf.keras.layers.Dense(6, activation = 'softmax')
])
In [15]:
model.summary()
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_2 (Conv2D)            (None, 200, 200, 32)      320       
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 100, 100, 32)      0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 100, 100, 64)      18496     
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 50, 50, 64)        0         
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 50, 50, 64)        36928     
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 25, 25, 64)        0         
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 25, 25, 64)        36928     
_________________________________________________________________
global_average_pooling2d_1 ( (None, 64)                0         
_________________________________________________________________
dense_1 (Dense)              (None, 6)                 390       
=================================================================
Total params: 93,062
Trainable params: 93,062
Non-trainable params: 0
_________________________________________________________________
In [16]:
model.compile(optimizer = 'adam', 
              loss = 'sparse_categorical_crossentropy', 
              metrics = ['accuracy'])
In [17]:
model.fit(train_x, train_y, epochs = 15)
Epoch 1/15
47/47 [==============================] - 26s 543ms/step - loss: 1.7344 - accuracy: 0.1827
Epoch 2/15
47/47 [==============================] - 25s 538ms/step - loss: 1.4146 - accuracy: 0.3660
Epoch 3/15
47/47 [==============================] - 24s 520ms/step - loss: 0.8264 - accuracy: 0.7027
Epoch 4/15
47/47 [==============================] - 24s 513ms/step - loss: 0.5519 - accuracy: 0.7947
Epoch 5/15
47/47 [==============================] - 24s 514ms/step - loss: 0.4909 - accuracy: 0.8193
Epoch 6/15
47/47 [==============================] - 24s 507ms/step - loss: 0.4245 - accuracy: 0.8380
Epoch 7/15
47/47 [==============================] - 24s 505ms/step - loss: 0.5365 - accuracy: 0.8053
Epoch 8/15
47/47 [==============================] - 24s 506ms/step - loss: 0.3669 - accuracy: 0.8633
Epoch 9/15
47/47 [==============================] - 24s 516ms/step - loss: 0.3724 - accuracy: 0.8587
Epoch 10/15
47/47 [==============================] - 25s 528ms/step - loss: 0.4424 - accuracy: 0.8267
Epoch 11/15
47/47 [==============================] - 26s 550ms/step - loss: 0.3099 - accuracy: 0.8780
Epoch 12/15
47/47 [==============================] - 26s 545ms/step - loss: 0.2981 - accuracy: 0.8907
Epoch 13/15
47/47 [==============================] - 27s 570ms/step - loss: 0.3194 - accuracy: 0.8687
Epoch 14/15
47/47 [==============================] - 26s 544ms/step - loss: 0.2984 - accuracy: 0.8820
Epoch 15/15
47/47 [==============================] - 25s 541ms/step - loss: 0.4547 - accuracy: 0.8287
Out[17]:
<tensorflow.python.keras.callbacks.History at 0x1e7b9eecc50>
In [18]:
# accuracy test
test_loss, test_acc = model.evaluate(test_x, test_y)
10/10 [==============================] - 1s 118ms/step - loss: 0.8632 - accuracy: 0.6733
In [19]:
# get the last convolution layer (25 x 25 x 64 feature maps) and the dense layer weights
conv_layer = model.get_layer(index = 6)
fc_layer = model.layers[8].get_weights()[0]   # dense kernel (64 x 6); the CAM ignores the bias

# Class activation map 
activation_map = tf.matmul(conv_layer.output, fc_layer)
CAM_model = tf.keras.Model(inputs = model.inputs, outputs = activation_map)
In [20]:
# select one test image and compute the CAM of its predicted class
test_idx = [10]
test_image = test_x[test_idx] 

pred = np.argmax(model.predict(test_image), axis = 1)
predCAM = CAM_model.predict(test_image)

attention = predCAM[:,:,:,pred]                     # CAM of the predicted class
attention = np.abs(np.reshape(attention,(25,25)))

resized_attention = cv2.resize(attention,
                               (200*5, 200*5), 
                               interpolation = cv2.INTER_CUBIC)

resized_test_x = cv2.resize(test_image.reshape(200,200), 
                            (200*5, 200*5),
                            interpolation = cv2.INTER_CUBIC)

plt.figure(figsize = (10,15))
plt.subplot(3,2,1)
plt.imshow(test_x[test_idx].reshape(200,200), 'gray')
plt.axis('off')
plt.subplot(3,2,2)
plt.imshow(attention)
plt.axis('off')
plt.subplot(3,2,3)
plt.imshow(resized_test_x, 'gray')
plt.axis('off')
plt.subplot(3,2,4)
plt.imshow(resized_attention, 'jet', alpha = 0.5)
plt.axis('off')
plt.subplot(3,2,6)
plt.imshow(resized_test_x, 'gray')
plt.imshow(resized_attention, 'jet', alpha = 0.5)
plt.axis('off')
plt.show()

5. Video Lectures

In [21]:
%%html
<center><iframe src="https://www.youtube.com/embed/-e4H5Fqm4-w?rel=0" 
width="420" height="315" frameborder="0" allowfullscreen></iframe></center>
In [22]:
%%html
<center><iframe src="https://www.youtube.com/embed/yaDqxQA7rzA?rel=0" 
width="420" height="315" frameborder="0" allowfullscreen></iframe></center>
In [23]:
%%javascript
$.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js')