Class Activation Map (CAM)
Attention
Visualizing and Understanding Convolutional Networks
A conventional CNN can be conceptually divided into two parts: feature extraction and classification. In the feature extraction stage, convolutions extract features from the input data so that classification can be performed well. The classification stage then uses the extracted features to decide which class each input belongs to.
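To make this two-part view concrete, here is a minimal Keras sketch of the split (the layer sizes are arbitrary and chosen only for illustration, not taken from the models used later):

import tensorflow as tf

# part 1: feature extraction (convolution + pooling)
feature_extractor = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3,3), activation = 'relu',
                           padding = 'SAME', input_shape = (28, 28, 1)),
    tf.keras.layers.MaxPool2D((2,2))
])

# part 2: classification (fully connected layers on the extracted features)
classifier = tf.keras.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation = 'softmax')
])

# the full CNN chains the two parts together
model = tf.keras.Sequential([feature_extractor, classifier])

The CAM construction below exploits exactly this separation: the spatial feature maps come from the first part, and the class weights come from the second.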
When we visually identify images, we do not look at the whole image; instead, we intuitively focus on the most important parts. CNN training behaves similarly: when the weights are optimized, the more important parts of the input receive higher weights. In general, however, we cannot observe this directly, because a generic CNN passes the features extracted by the convolution layers through fully connected layers, which makes them more abstract.
- When we visually identify images, we do not look at the whole image.
- When the CNN weights are optimized, the more important parts are given higher weights.
- But the network works as a black box.
Class Activation Map (CAM)
A deep learning model is a black box: for a binary classification problem, it simply returns 1 or 0 for a given input, without revealing how the result was derived. The class activation map (CAM), in contrast, makes the classification result interpretable. It shows which parts of the image the model is focusing on, and by analyzing those parts we can infer which regions of the image the model considers important.
The class activation map is produced by a slightly modified CNN. It directly highlights the important parts of the spatial grid of an image, so we can see which regions the model emphasizes. The figure below describes the procedure for class activation mapping.
The feature maps of the last convolution layer can be interpreted as a collection of spatial locations the model focuses on. The CAM is obtained by taking a weighted linear sum of these feature maps; since each class has its own weights, the linear combination yields class-specific spatial locations for a given input image. For a given image, $f_k(x,y)$ represents the activation of unit $k$ in the last convolution layer at spatial location $(x,y)$. For a given class $c$, the class score $S_c$ is expressed as the following equation.
$$S_c = \sum_k \omega_k^c \sum_{x,y} f_k(x,y)= \sum_{x,y} \sum_k \omega_k^c \; f_k(x,y)$$
where $\omega_k^c$ is the weight corresponding to class $c$ for unit $k$. The class activation map for class $c$ is denoted as $M_c$.
$$M_c(x,y) = \sum_k \omega_k^c \; f_k(x,y)$$
$M_c(x,y)$ directly indicates the importance of the activation at spatial location $(x,y)$ for class $c$. Finally, the softmax output for class $c$ is
$$P_c = \frac{\exp\left(S_c\right)}{\sum_{c'} \exp\left(S_{c'}\right)}$$
Note that the inner sum $\sum_{x,y} f_k(x,y)$ is, up to a constant factor, exactly what a global average pooling (GAP) layer computes; this is why the models below place a GlobalAveragePooling2D layer between the last convolution layer and the dense output layer. In a CNN, the spatial size of the feature maps is reduced by the pooling layers, so the CAM is smaller than the input image. By simple up-sampling, we can identify the image regions the model attends to for each label.
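To make the equations concrete, here is a minimal NumPy sketch that computes $M_c$, $S_c$, and $P_c$ for hypothetical feature maps and weights (the 7 × 7 × 64 shape mirrors the MNIST model built below; the random values stand in for real activations and trained weights):

import numpy as np

f = np.random.rand(7, 7, 64)     # f_k(x,y): 7 x 7 feature maps of 64 units
w = np.random.rand(64, 2)        # w_k^c: class weights for 2 classes

c = 1                            # class of interest
M_c = f @ w[:, c]                # M_c(x,y) = sum_k w_k^c f_k(x,y), shape (7, 7)
S = f.sum(axis = (0, 1)) @ w     # S_c = sum_k w_k^c sum_{x,y} f_k(x,y), shape (2,)
P = np.exp(S) / np.exp(S).sum()  # softmax over the class scores

# simple up-sampling of the 7 x 7 CAM back to the 28 x 28 input grid
# (each cell repeated as a 4 x 4 block)
M_c_up = np.kron(M_c, np.ones((4, 4)))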
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import cv2
# load MNIST and scale pixel values to [0, 1]
mnist = tf.keras.datasets.mnist
(train_x, train_y), (test_x, test_y) = mnist.load_data()
train_x, test_x = train_x/255.0, test_x/255.0
train_x = train_x.reshape((train_x.shape[0], 28, 28, 1))
test_x = test_x.reshape((test_x.shape[0], 28, 28, 1))
# digits of 7 and 9 will be used
train_idx = np.hstack((np.where(train_y == 7), np.where(train_y == 9)))[0]
test_idx = np.hstack((np.where(test_y == 7), np.where(test_y == 9)))[0]
train_imgs = train_x[train_idx]
train_labels = train_y[train_idx]
test_imgs = test_x[test_idx]
test_labels = test_y[test_idx]
n_train = train_imgs.shape[0]
n_test = test_imgs.shape[0]
print ("The number of train images: {}, shape: {}".format(n_train, train_imgs.shape))
print ("The number of test images: {}, shape: {}".format(n_test, test_imgs.shape))
# binary classification
# label 7 to 0
# label 9 to 1
train_y = (train_labels == 9).astype(int)
test_y = (test_labels == 9).astype(int)
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(filters = 32,
                           kernel_size = (3,3),
                           activation = 'relu',
                           padding = 'SAME',
                           input_shape = (28, 28, 1)),
    tf.keras.layers.MaxPool2D((2,2)),
    tf.keras.layers.Conv2D(filters = 64,
                           kernel_size = (3,3),
                           activation = 'relu',
                           padding = 'SAME'),
    tf.keras.layers.MaxPool2D((2,2)),
    tf.keras.layers.GlobalAveragePooling2D(),
    # no bias, so the dense weights are exactly the CAM weights w_k^c
    tf.keras.layers.Dense(2, activation = 'softmax', use_bias = False)
])
model.summary()
model.compile(optimizer = 'adam',
              loss = 'sparse_categorical_crossentropy',
              metrics = ['accuracy'])
model.fit(train_imgs, train_y, epochs = 10)
# accuracy test
test_loss, test_acc = model.evaluate(test_imgs, test_y)
print('loss = {}, Accuracy = {} %'.format(round(test_loss,8), round(test_acc*100)))
# get the last max pooling layer (7 x 7 x 64 feature maps)
# and the weights of the fully connected layer (64 x 2)
conv_layer = model.get_layer(index = 3)
fc_layer = model.layers[5].get_weights()[0]
# class activation map: project the feature maps onto the class weights
activation_map = tf.matmul(conv_layer.output, fc_layer)
CAM_model = tf.keras.Model(inputs = model.inputs, outputs = activation_map)
# pick a random test image, predict its class, and compute its CAM
test_img = test_imgs[np.random.choice(test_imgs.shape[0], 1)]
pred = np.argmax(model.predict(test_img), axis = 1)
CAM = CAM_model.predict(test_img)
attention = CAM[:,:,:,pred]
attention = np.abs(np.reshape(attention,(7,7)))
# up-sample the image and the 7 x 7 attention map for visualization
large_test_x = cv2.resize(test_img.reshape(28,28), (28*5, 28*5))
large_attention = cv2.resize(attention, (28*5, 28*5))
plt.figure(figsize = (10,10))
plt.subplot(2,2,1)
plt.imshow(large_test_x, 'gray')
plt.axis('off')
plt.subplot(2,2,2)
plt.imshow(large_attention, 'jet', alpha = 0.5)
plt.axis('off')
plt.subplot(2,2,4)
plt.imshow(large_test_x, 'gray')
plt.imshow(large_attention, 'jet', alpha = 0.5)
plt.axis('off')
plt.show()
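One small refinement worth noting (an assumption on my part, not part of the code above): the raw attention values are unbounded, so normalizing the map to [0, 1] before overlaying keeps the color scale comparable across test images.

# optional: min-max normalize the attention map before overlaying
attention_norm = (attention - attention.min()) / (attention.max() - attention.min() + 1e-8)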
Download the NEU steel surface defect images and labels
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import cv2
train_x = np.load('./data_files/NEU_train_imgs.npy')
train_y= np.load('./data_files/NEU_train_labels.npy')
test_x = np.load('./data_files/NEU_test_imgs.npy')
test_y = np.load('./data_files/NEU_test_labels.npy')
n_train = train_x.shape[0]
n_test = test_x.shape[0]
print ("The number of training images : {}, shape : {}".format(n_train, train_x.shape))
print ("The number of testing images : {}, shape : {}".format(n_test, test_x.shape))
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(filters = 32,
                           kernel_size = (3,3),
                           activation = 'relu',
                           padding = 'SAME',
                           input_shape = (200, 200, 1)),
    tf.keras.layers.MaxPool2D((2,2)),
    tf.keras.layers.Conv2D(filters = 64,
                           kernel_size = (3,3),
                           activation = 'relu',
                           padding = 'SAME'),
    tf.keras.layers.MaxPool2D((2,2)),
    tf.keras.layers.Conv2D(filters = 64,
                           kernel_size = (3,3),
                           activation = 'relu',
                           padding = 'SAME'),
    tf.keras.layers.MaxPool2D((2,2)),
    tf.keras.layers.Conv2D(filters = 64,
                           kernel_size = (3,3),
                           activation = 'relu',
                           padding = 'SAME'),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(6, activation = 'softmax')
])
model.summary()
model.compile(optimizer = 'adam',
              loss = 'sparse_categorical_crossentropy',
              metrics = ['accuracy'])
model.fit(train_x, train_y, epochs = 15)
# accuracy test
test_loss, test_acc = model.evaluate(test_x, test_y)
print('loss = {}, Accuracy = {} %'.format(round(test_loss, 8), round(test_acc*100)))
# get the last convolution layer (25 x 25 x 64 feature maps)
# and the weights of the fully connected layer (64 x 6)
conv_layer = model.get_layer(index = 6)
fc_layer = model.layers[8].get_weights()[0]
# Class activation map
activation_map = tf.matmul(conv_layer.output, fc_layer)
CAM_model = tf.keras.Model(inputs = model.inputs, outputs = activation_map)
test_idx = [10]
test_image = test_x[test_idx]
pred = np.argmax(model.predict(test_image), axis = 1)
predCAM = CAM_model.predict(test_image)
attention = predCAM[:,:,:,pred]
attention = np.abs(np.reshape(attention,(25,25)))
resized_attention = cv2.resize(attention,
                               (200*5, 200*5),
                               interpolation = cv2.INTER_CUBIC)
resized_test_x = cv2.resize(test_image.reshape(200,200),
                            (200*5, 200*5),
                            interpolation = cv2.INTER_CUBIC)
plt.figure(figsize = (10,15))
plt.subplot(3,2,1)
plt.imshow(test_x[test_idx].reshape(200,200), 'gray')
plt.axis('off')
plt.subplot(3,2,2)
plt.imshow(attention)
plt.axis('off')
plt.subplot(3,2,3)
plt.imshow(resized_test_x, 'gray')
plt.axis('off')
plt.subplot(3,2,4)
plt.imshow(resized_attention, 'jet', alpha = 0.5)
plt.axis('off')
plt.subplot(3,2,6)
plt.imshow(resized_test_x, 'gray')
plt.imshow(resized_attention, 'jet', alpha = 0.5)
plt.axis('off')
plt.show()
%%html
<center><iframe src="https://www.youtube.com/embed/-e4H5Fqm4-w?rel=0"
width="420" height="315" frameborder="0" allowfullscreen></iframe></center>
%%html
<center><iframe src="https://www.youtube.com/embed/yaDqxQA7rzA?rel=0"
width="420" height="315" frameborder="0" allowfullscreen></iframe></center>