Deep Learning for Mechanical Engineering

Homework 05

Due: Friday, 10/13/2023, 23:59


Instructor: Prof. Seungchul Lee
http://iailab.kaist.ac.kr/
Industrial AI Lab at KAIST
  • For your handwritten solutions, please scan or take a picture of them. Alternatively, you can write them in markdown if you prefer.

  • Only .ipynb files will be graded for your code.

    • Ensure that your NAME and student ID are included in your .ipynb files. ex) IljeokKim_20202467_HW05.ipynb
  • Compress all the files into a single .zip file.

    • In the .zip file's name, include your NAME and student ID. ex) DogyeomPark_20202467_HW05.zip
    • Submit this .zip file on KLMS.
  • Do not submit a printed version of your code, as it will not be graded.

Problem 1: Rotating Machinery Diagnosis with Logistic Regression

Mechanical systems inevitably vibrate during operation, so vibration analysis is one of the oldest and most widely used methods for diagnosing them. In this problem, we will apply the logistic regression algorithm to detect anomalies in a rotating machine.

Data information

  • File format: npy
  • Information: signal, label
  • Sampling rate: 12,800 Hz
  • File length (time): 0.78 sec
  • Labels
    • A (Normal)
    • B (Abnormal)


Download the datasets

In [ ]:
import tensorflow as tf
import numpy as np
import scipy.stats
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
%matplotlib inline
In [ ]:
from google.colab import drive
drive.mount('/content/drive')
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
In [ ]:
## data load

signal = np.load('/content/drive/MyDrive/DL_Colab/DL_data/rotating_machinery_signal.npy')
label = np.load('/content/drive/MyDrive/DL_Colab/DL_data/rotating_machinery_label.npy')

print(signal.shape)
print(label.shape)
(2000, 10000)
(2000,)
In [ ]:
m = label.shape[0]

for _ in range(2):
  idx = np.random.randint(m)
  plt.plot(signal[idx])

  if label[idx] == True:
    plt.title('B (Abnormal)')
  else:
    plt.title('A (Normal)')

  plt.show()

Now, we need to extract features from the raw signal.

  • Use the following statistical features
    • peak, RMS, kurtosis, crest factor, impulse factor, shape factor, skewness, SMR, peak-peak

The equations are as follows:


$$
\begin{align*}
\text{Peak Value } &= \max_{n = 1, \dots, N} \lvert x_n \rvert \\ \\
\text{RMS } &= \sqrt{\frac{1}{N}\sum_{n=1}^N x_n^2} \\ \\
\text{Kurtosis } &= \frac{\frac{1}{N}\sum_{n=1}^N (x_n-\bar{x})^4}{\text{Var}^2} \\ \\
\text{Var } &= \frac{1}{N}\sum_{n=1}^N (x_n-\bar{x})^2 \\ \\
\text{Crest Factor } &= \frac{\text{Peak}}{\text{RMS}} \\ \\
\text{Impulse Factor } &= \frac{\text{Peak}}{\text{Mean}} \\ \\
\text{Mean } &= \frac{1}{N}\sum_{n=1}^N \lvert x_n \rvert \\ \\
\text{Shape Factor } &= \frac{\text{RMS}}{\text{Mean}} \\ \\
\text{Skewness } &= \frac{\frac{1}{N}\sum_{n=1}^N(x_n-\bar{x})^3}{\text{Var}^{3/2}} \\ \\
\text{SMR } &= \left(\frac{1}{N}\sum_{n=1}^N\sqrt{\lvert x_n \rvert} \right)^2 \\ \\
\text{Peak-Peak Value } &= \max_{n = 1, \dots, N} x_n - \min_{n = 1, \dots, N} x_n
\end{align*}
$$

$$x_n: \text{signal data}, \qquad N: \text{number of samples}$$


The feature extraction function has been provided below for your convenience. It takes a signal x as input and returns the features above stacked into a single feature vector.

In [ ]:
def extfeat(x):
    fvector = []

    # time domain feature
    peak = np.max(np.abs(x))
    fvector.append(peak)

    rms = np.sqrt(np.mean(x**2))
    fvector.append(rms)

    kurtosis = scipy.stats.kurtosis(x)
    fvector.append(kurtosis)

    crest_factor = fvector[0]/fvector[1]
    fvector.append(crest_factor)

    impulse_factor = fvector[0]/(np.sum(np.abs(x))/len(x))
    fvector.append(impulse_factor)

    shape_factor = fvector[1]/(np.sum(np.abs(x))/len(x))
    fvector.append(shape_factor)

    skewness = scipy.stats.skew(x)
    fvector.append(skewness)

    smr = (np.sum(np.sqrt(np.abs(x)))/len(x))**2
    fvector.append(smr)

    pp = np.max(x) - np.min(x)
    fvector.append(pp)

    return fvector

feature_name = ['Peak', 'RMS', 'Kurtosis', 'Crest Factor', 'Impulse Factor','Shape Factor','Skewness','SMR', 'Peak-Peak']
In [ ]:
feature_record = []

for idx in range(m):
    feature_record.append(extfeat(signal[idx]))

feature_record = np.array(feature_record)

train_x, test_x, train_y, test_y = train_test_split(feature_record, label, test_size = 1/4, shuffle = True, random_state = 42)

1) Design your logistic regression model using the scikit-learn library and train it.

In [ ]:
## your code here
#
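If you need a starting point, here is a minimal sketch using scikit-learn's LogisticRegression (clf is an illustrative name; train_x and train_y come from the split above, and max_iter is only set to avoid convergence warnings):

from sklearn.linear_model import LogisticRegression

# fit a logistic regression classifier on the nine extracted features
clf = LogisticRegression(max_iter = 1000)
clf.fit(train_x, train_y)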

2) Compute the test accuracy.

In [ ]:
## your code here
#
Accuracy : 97.8%
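One way to obtain this number, assuming the fitted classifier clf from the sketch in part 1:

from sklearn.metrics import accuracy_score

# accuracy on the held-out test split
pred_y = clf.predict(test_x)
print('Accuracy : {:.1f}%'.format(accuracy_score(test_y, pred_y)*100))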

3) Configure an autoencoder model with the same structure as depicted in the figure below, and train it.

In [ ]:
## your code here
#
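Since the figure is not reproduced here, the sketch below simply assumes a fully connected 9-6-2-6-9 layout with a 2-dimensional latent space (convenient for the visualizations in parts 5-7); adjust the layer sizes to match the figure.

# assumed architecture: 9 features -> 6 -> 2 (latent) -> 6 -> 9
autoencoder = tf.keras.models.Sequential([
    tf.keras.Input(shape = (9,)),
    tf.keras.layers.Dense(6, activation = 'relu'),
    tf.keras.layers.Dense(2, activation = None),    # 2D latent space
    tf.keras.layers.Dense(6, activation = 'relu'),
    tf.keras.layers.Dense(9, activation = None)     # reconstruction
])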
In [ ]:
autoencoder.compile(optimizer = tf.keras.optimizers.Adam(0.001),
                    loss = 'mean_squared_error',
                    metrics = ['mse'])
In [ ]:
# Train Model & Evaluate Test Data
training = autoencoder.fit(train_x, train_x, batch_size = 32, epochs = 250, verbose = 0)

4) Compute MSE loss for the test dataset.

In [ ]:
## your code here
#
16/16 [==============================] - 0s 2ms/step - loss: 0.0877 - mse: 0.0877
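One way to obtain this, assuming the autoencoder sketched above (the reconstruction target is the input itself):

# mean squared reconstruction error on the test features
test_loss, test_mse = autoencoder.evaluate(test_x, test_x, batch_size = 32)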

5) Visualize the training dataset in the latent space.

In [ ]:
## your code here
#
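A possible sketch, assuming the Sequential layout above; encoder is an illustrative sub-model that maps inputs to the 2D latent coordinates:

# encoder: from the input up to the 2D latent layer (second Dense layer)
encoder = tf.keras.models.Model(inputs = autoencoder.input,
                                outputs = autoencoder.layers[1].output)

train_latent = encoder.predict(train_x)

plt.figure(figsize = (6, 6))
plt.scatter(train_latent[train_y == 0, 0], train_latent[train_y == 0, 1], label = 'A (Normal)')
plt.scatter(train_latent[train_y == 1, 0], train_latent[train_y == 1, 1], label = 'B (Abnormal)')
plt.xlabel('Latent 1')
plt.ylabel('Latent 2')
plt.legend()
plt.show()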

6) Design a logistic regression model to find a linear boundary between normal and abnormal instances within the latent space.

In [ ]:
## your code here
#
Out[ ]:
LogisticRegression()
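A minimal sketch, reusing the latent coordinates train_latent from part 5:

from sklearn.linear_model import LogisticRegression

# linear boundary in the 2D latent space
clf_latent = LogisticRegression()
clf_latent.fit(train_latent, train_y)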

7) Visualize the test dataset within the latent space along with the linear boundary that separates normal and abnormal instances.

In [ ]:
## your code here
#
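One way to draw the boundary is to solve w1*z1 + w2*z2 + b = 0 for z2 using the fitted coefficients, assuming clf_latent and encoder from the sketches above:

test_latent = encoder.predict(test_x)

# decision boundary of the latent-space classifier
w = clf_latent.coef_[0]
b = clf_latent.intercept_[0]
z1 = np.linspace(test_latent[:, 0].min(), test_latent[:, 0].max(), 100)
z2 = -(w[0]*z1 + b)/w[1]

plt.figure(figsize = (6, 6))
plt.scatter(test_latent[test_y == 0, 0], test_latent[test_y == 0, 1], label = 'A (Normal)')
plt.scatter(test_latent[test_y == 1, 0], test_latent[test_y == 1, 1], label = 'B (Abnormal)')
plt.plot(z1, z2, 'k--', label = 'Linear boundary')
plt.xlabel('Latent 1')
plt.ylabel('Latent 2')
plt.legend()
plt.show()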

Problem 2: Autoencoder

PCA (Principal Component Analysis)

  • PCA is one of the oldest and most widely used dimensionality reduction algorithms. Its idea is to reduce the dimensionality of a dataset, while preserving as much 'variability' as possible. (but you don't need to fully understand the PCA algorithm for this problem.)

An autoencoder is closely related to PCA in the sense of dimensionality reduction, so in this problem we will reproduce PCA with an autoencoder. PCA is a linear dimensionality reduction method, whereas an autoencoder normally uses non-linear activation functions; therefore, an autoencoder without non-linear activation functions can be regarded as PCA.

Now we have 3D data. Run the cell below to load the data and plot it in 3D.


Download the datasets

In [ ]:
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
In [ ]:
data = np.load('/content/drive/MyDrive/DL_Colab/DL_data/pca_autoencoder.npy')

fig = plt.figure(figsize = (8, 8))
ax = fig.add_subplot(111, projection = '3d')
ax.scatter(data[:,0], data[:,1], data[:,2])
plt.show()

train_x, test_x = data[:100], data[100:]

The PCA result in 2D is shown below.

In [ ]:
from sklearn.decomposition import PCA

pca = PCA(n_components = 2)
pca.fit(data)
result = pca.transform(data)

plt.figure(figsize = (6, 6))
plt.plot(result[:,0], result[:,1], 'o')
plt.axis('equal')
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.show()
In [ ]:
pca.fit(test_x)
result = pca.transform(test_x)

plt.figure(figsize = (6, 6))
plt.plot(result[:,0], result[:,1], 'o')
plt.axis('equal')
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.show()

(1) Design your linear autoencoder model using an ANN. (You may freely design your own network structure.)

  • Hint: you must use (activation = None)
In [ ]:
## your code here
#
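A minimal sketch of one possible choice, a 3-2-3 layout with no activation functions (i.e., purely linear layers):

import tensorflow as tf

# linear autoencoder: 3 -> 2 (latent) -> 3, no non-linear activations
autoencoder = tf.keras.models.Sequential([
    tf.keras.Input(shape = (3,)),
    tf.keras.layers.Dense(2, activation = None),
    tf.keras.layers.Dense(3, activation = None)
])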
In [ ]:
autoencoder.compile(optimizer = tf.keras.optimizers.Adam(0.001),
                    loss = 'mean_squared_error')
In [ ]:
training = autoencoder.fit(train_x, train_x, epochs = 1000, verbose = 0)

(2) After training your model, plot the data in the latent space. (You should obtain a result similar to PCA!)

In [ ]:
## your code here
#
4/4 [==============================] - 0s 2ms/step
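Assuming the 3-2-3 layout sketched above, the training data can be projected onto the 2D latent space and plotted much like the PCA scatter:

# project the training data into the 2D latent space
encoder = tf.keras.models.Model(inputs = autoencoder.input,
                                outputs = autoencoder.layers[0].output)
latent = encoder.predict(train_x)

plt.figure(figsize = (6, 6))
plt.plot(latent[:, 0], latent[:, 1], 'o')
plt.axis('equal')
plt.xlabel('Latent 1')
plt.ylabel('Latent 2')
plt.show()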

Problem 3: Autoencoder for Face Data

The encoder part of an autoencoder is well known as a dimensionality reduction operator. In this problem, you will implement an autoencoder for face data. The given dataset consists of 100 pictures of human faces, each of size (50, 40), and we will apply an autoencoder to this dataset.

Download the datasets

In [ ]:
data = np.load('/content/drive/MyDrive/DL_Colab/DL_data/pca_faces.npy')

print(data.shape)
(100, 50, 40)

(a) Plot one random face out of the 100 pictures. You might want to run the cell multiple times to see what kinds of faces are in the dataset.

In [ ]:
## your code here
#
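A minimal sketch (idx is an illustrative name):

# show one randomly chosen face
idx = np.random.randint(data.shape[0])

plt.figure(figsize = (4, 5))
plt.imshow(data[idx], 'gray')
plt.axis('off')
plt.title('Face {}'.format(idx))
plt.show()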

(b) Apply the autoencoder to the dataset. Build your model with the following structure:

  • first encoder: 500
  • second encoder: 300
  • latent node: 8
  • first decoder: 300
  • second decoder: 500
  • activation = 'relu'
In [ ]:
train_face = data.reshape([100, 50*40])
print(train_face.shape)
(100, 2000)
In [ ]:
## your code here
#
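A sketch following the layer sizes listed above; the final 2,000-unit reconstruction layer is an assumption implied by the flattened input size:

autoencoder = tf.keras.models.Sequential([
    tf.keras.Input(shape = (2000,)),
    tf.keras.layers.Dense(500, activation = 'relu'),   # first encoder
    tf.keras.layers.Dense(300, activation = 'relu'),   # second encoder
    tf.keras.layers.Dense(8, activation = 'relu'),     # latent node
    tf.keras.layers.Dense(300, activation = 'relu'),   # first decoder
    tf.keras.layers.Dense(500, activation = 'relu'),   # second decoder
    tf.keras.layers.Dense(2000, activation = None)     # reconstruction of the 50*40 image
])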
In [ ]:
autoencoder.compile(optimizer = tf.keras.optimizers.Adam(0.001),
                    loss = 'mean_squared_error')
In [ ]:
training = autoencoder.fit(train_face, train_face, epochs = 500, verbose = 0)

(c) Plot a randomly selected input image alongside its corresponding reconstructed image.

In [ ]:
## your code here
#
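One way to place the input and its reconstruction side by side, assuming the model sketched above:

idx = np.random.randint(train_face.shape[0])
recon = autoencoder.predict(train_face[idx:idx + 1])

fig, axes = plt.subplots(1, 2, figsize = (8, 5))
axes[0].imshow(train_face[idx].reshape(50, 40), 'gray')
axes[0].set_title('Original')
axes[0].axis('off')
axes[1].imshow(recon.reshape(50, 40), 'gray')
axes[1].set_title('Reconstructed')
axes[1].axis('off')
plt.show()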

(d) Reconstruct the image of the individual wearing a crown using your autoencoder, and discuss the result of the reconstructed face. Please note that the reconstruction performance may not be optimal.

In [ ]:
test = train_face[94]

## your code here
#

(e) Reconstruct the image of the individual wearing sunglasses using your autoencoder, and discuss the result of the reconstructed face. Please note that the reconstruction performance may not be optimal.

In [ ]:
test_face = train_face[28]

## your code here
#