Deep Learning for Mechanical Engineering

Homework 04

Due Wed., 10/04/2023, 4:00 PM


Instructor: Prof. Seungchul Lee
http://iailab.kaist.ac.kr/
Industrial AI Lab at KAIST
  • For your handwritten solutions, scan or photograph them (you may write them in markdown if you prefer).

  • For your code, only the .ipynb file will be graded.

    • Please write your NAME and student ID in your .ipynb file name. ex) IljeokKim_20202467_HW02.ipynb
  • Please compress all the files into a single .zip file.

    • Please write your NAME and student ID in your .zip file name. ex) DogyeomPark_20202467_HW02.zip
    • Submit it to KLMS.
  • Do not submit a printed version of your code. It will not be graded.

Problem 1: Understanding Backpropagation¶

In this problem, we are going to compute the gradient using the chain rule and dynamic programming, and update the weights $\omega \rightarrow \omega^+$. The weights are updated through one pass of back-propagation, and the resulting error is compared with the error before the update.





Neural Network Model

  • The artificial neural network consists of an input layer, a hidden layer, and an output layer.
  • All neurons ($h_1$, $h_2$, $\sigma_1$, and $\sigma_2$) in the hidden and output layers use the sigmoid function as their activation function.
  • The red numbers are the initial weight values, the blue numbers are the input values, and the ground truth means the actual target values.
  • The loss function is the mean squared error (MSE). Use 1/2 MSE for calculation convenience, i.e., $E = \frac{1}{2}\sum(\text{target} - \text{output})^2$
  • The learning rate is set to 0.9.

Step 1: Forward Propagation¶

(1) [hand written] Write and calculate $z_1$, $z_2$, $h_1$, $h_2$, $z_3$, $z_4$, $\sigma_1$, $\sigma_2$, and $E_{\text{total}}$ of forward propagation.
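These quantities are to be computed by hand, but a small NumPy sketch is handy for checking the arithmetic. All numeric values below are placeholders (substitute the red weight values and blue input values from the figure), and the wiring of $\omega_1, \dots, \omega_8$ is an assumption to be matched against the figure:

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Placeholder values -- replace with the numbers given in the figure
x1, x2 = 0.1, 0.2                      # blue: input values
w1, w2, w3, w4 = 0.3, 0.25, 0.4, 0.35  # red: input -> hidden weights
w5, w6, w7, w8 = 0.45, 0.4, 0.7, 0.6   # red: hidden -> output weights
t1, t2 = 0.4, 0.6                      # ground truth (target values)

# Hidden layer (assumed wiring: w1, w2 feed h1 and w3, w4 feed h2)
z1 = w1*x1 + w2*x2
z2 = w3*x1 + w4*x2
h1, h2 = sigmoid(z1), sigmoid(z2)

# Output layer (assumed wiring: w5, w6 feed sigma1 and w7, w8 feed sigma2)
z3 = w5*h1 + w6*h2
z4 = w7*h1 + w8*h2
s1, s2 = sigmoid(z3), sigmoid(z4)

# 1/2 MSE over the two outputs
E_total = 0.5*((t1 - s1)**2 + (t2 - s2)**2)
print(h1, h2, s1, s2, E_total)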

Step 2: Backpropagation 1¶





(2) [hand written] Update $\omega_5$, $\omega_6$, $\omega_7$, $\omega_8$ $\rightarrow$ $\omega_5^+$, $\omega_6^+$, $\omega_7^+$, $\omega_8^+$ via back-propagation.
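For checking: with the 1/2 MSE loss and sigmoid outputs, the chain rule gives, e.g., $\frac{\partial E}{\partial \omega_5} = \frac{\partial E}{\partial \sigma_1}\frac{\partial \sigma_1}{\partial z_3}\frac{\partial z_3}{\partial \omega_5} = -(t_1 - \sigma_1)\,\sigma_1(1-\sigma_1)\,h_1$. A sketch continuing the Step 1 variables, under the same wiring assumption:

lr = 0.9   # learning rate

# Output-layer deltas: dE/dz = -(target - output) * sigmoid'(z)
d1 = -(t1 - s1) * s1 * (1 - s1)
d2 = -(t2 - s2) * s2 * (1 - s2)

# Gradient descent step: w+ = w - lr * dE/dw
w5p = w5 - lr * d1 * h1
w6p = w6 - lr * d1 * h2
w7p = w7 - lr * d2 * h1
w8p = w8 - lr * d2 * h2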

Step 3: Backpropagation 2¶





(3) [hand written] Update $\omega_1$, $\omega_2$, $\omega_3$, $\omega_4$ $\rightarrow$ $\omega_1^+$, $\omega_2^+$, $\omega_3^+$, $\omega_4^+$ via back-propagation.
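Here the error signals from both outputs are accumulated at each hidden neuron; this is where dynamic programming pays off, since the deltas d1 and d2 from Step 2 are reused rather than recomputed. Continuing the sketch (note the updates use the old $\omega_5, \dots, \omega_8$):

# Hidden-layer deltas: back-propagate both output deltas through the old weights
dh1 = (d1*w5 + d2*w7) * h1 * (1 - h1)
dh2 = (d1*w6 + d2*w8) * h2 * (1 - h2)

# Update the input -> hidden weights
w1p = w1 - lr * dh1 * x1
w2p = w2 - lr * dh1 * x2
w3p = w3 - lr * dh2 * x1
w4p = w4 - lr * dh2 * x2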

Step 4: Check the Result for Weight Update¶





(4) [hand written] Write and calculate $E_{\text{total}}$ with the updated weights, and compare it to the previous error.
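To check the comparison, rerun the Step 1 forward pass with the updated weights; the new error should be smaller than the old one:

# Forward pass with the updated weights
z1 = w1p*x1 + w2p*x2
z2 = w3p*x1 + w4p*x2
h1, h2 = sigmoid(z1), sigmoid(z2)
z3 = w5p*h1 + w6p*h2
z4 = w7p*h1 + w8p*h2
s1, s2 = sigmoid(z3), sigmoid(z4)

E_total_new = 0.5*((t1 - s1)**2 + (t2 - s2)**2)
print(E_total_new)   # expect a value smaller than the previous E_total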

Problem 2: Multi-Layer Perceptron¶

You will do binary classification for nonlinearly separable data using an MLP. Plot the given data first.

In [1]:
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
%matplotlib inline

N = 200          # points per Gaussian cluster
M = 2*N          # points per class (two clusters each)
gamma = 0.01     # cluster variance

# Four Gaussian clusters in an XOR-like arrangement
G0 = np.random.multivariate_normal([0, 0], gamma*np.eye(2), N)
G1 = np.random.multivariate_normal([1, 1], gamma*np.eye(2), N)
G2 = np.random.multivariate_normal([0, 1], gamma*np.eye(2), N)
G3 = np.random.multivariate_normal([1, 0], gamma*np.eye(2), N)

# Class A (label 1): G0 and G1, Class B (label 0): G2 and G3
train_X = np.vstack([G0, G1, G2, G3])
train_y = np.vstack([np.ones([M,1]), np.zeros([M,1])])

train_X = np.asmatrix(train_X)
train_y = np.asmatrix(train_y)

print(train_X.shape)
print(train_y.shape)

plt.figure(figsize = (8, 6))
plt.plot(train_X[:M,0], train_X[:M,1], 'b.', alpha = 0.4, label = 'A')
plt.plot(train_X[M:,0], train_X[M:,1], 'r.', alpha = 0.4, label = 'B')
plt.axis('equal')
plt.xlim([-1, 2]); plt.ylim([-1, 2]); 
plt.grid(alpha = 0.15)
plt.legend(fontsize = 12)
plt.show()
(800, 2)
(800, 1)

(1) Design a perceptron model which has a single layer, and train it to report the accuracy.

  • Hidden layer with no nonlinear activation function
In [2]:
model = tf.keras.models.Sequential([    
    ## your code here
    
    
])

model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense (Dense)                (None, 1)                 3         
=================================================================
Total params: 3
Trainable params: 3
Non-trainable params: 0
_________________________________________________________________
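One definition consistent with the summary above (3 parameters = 2 weights + 1 bias; the sigmoid output is an assumption, since any single Dense(1) layer matches the parameter count):

model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(1, activation = 'sigmoid', input_shape = (2,))
])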
In [3]:
## Your code here 
#
Epoch 1/30
25/25 [==============================] - 0s 593us/step - loss: 5.7641 - accuracy: 0.3988
Epoch 2/30
25/25 [==============================] - 0s 498us/step - loss: 3.9330 - accuracy: 0.7175
Epoch 3/30
25/25 [==============================] - 0s 582us/step - loss: 3.8491 - accuracy: 0.7475
Epoch 4/30
25/25 [==============================] - 0s 603us/step - loss: 3.8303 - accuracy: 0.7500
Epoch 5/30
25/25 [==============================] - 0s 579us/step - loss: 3.8254 - accuracy: 0.7500
Epoch 6/30
25/25 [==============================] - 0s 504us/step - loss: 3.8224 - accuracy: 0.7500
Epoch 7/30
25/25 [==============================] - 0s 477us/step - loss: 3.8204 - accuracy: 0.7500
Epoch 8/30
25/25 [==============================] - 0s 485us/step - loss: 3.8191 - accuracy: 0.7500
Epoch 9/30
25/25 [==============================] - 0s 513us/step - loss: 3.8184 - accuracy: 0.7500
Epoch 10/30
25/25 [==============================] - 0s 523us/step - loss: 3.8176 - accuracy: 0.7500
Epoch 11/30
25/25 [==============================] - 0s 490us/step - loss: 3.8172 - accuracy: 0.7500
Epoch 12/30
25/25 [==============================] - 0s 483us/step - loss: 3.8170 - accuracy: 0.7500
Epoch 13/30
25/25 [==============================] - 0s 439us/step - loss: 3.8167 - accuracy: 0.7500
Epoch 14/30
25/25 [==============================] - 0s 469us/step - loss: 3.8163 - accuracy: 0.7500
Epoch 15/30
25/25 [==============================] - 0s 516us/step - loss: 3.8162 - accuracy: 0.7500
Epoch 16/30
25/25 [==============================] - 0s 496us/step - loss: 3.8161 - accuracy: 0.7500
Epoch 17/30
25/25 [==============================] - 0s 483us/step - loss: 3.8159 - accuracy: 0.7500
Epoch 18/30
25/25 [==============================] - 0s 458us/step - loss: 3.8157 - accuracy: 0.7500
Epoch 19/30
25/25 [==============================] - 0s 473us/step - loss: 3.8155 - accuracy: 0.7500
Epoch 20/30
25/25 [==============================] - 0s 530us/step - loss: 3.8153 - accuracy: 0.7500
Epoch 21/30
25/25 [==============================] - 0s 509us/step - loss: 3.8151 - accuracy: 0.7500
Epoch 22/30
25/25 [==============================] - 0s 501us/step - loss: 3.8152 - accuracy: 0.7500
Epoch 23/30
25/25 [==============================] - 0s 479us/step - loss: 3.8151 - accuracy: 0.7500
Epoch 24/30
25/25 [==============================] - 0s 531us/step - loss: 3.8150 - accuracy: 0.7500
Epoch 25/30
25/25 [==============================] - 0s 547us/step - loss: 3.8148 - accuracy: 0.7500
Epoch 26/30
25/25 [==============================] - 0s 514us/step - loss: 3.8146 - accuracy: 0.7500
Epoch 27/30
25/25 [==============================] - 0s 553us/step - loss: 3.8147 - accuracy: 0.7500
Epoch 28/30
25/25 [==============================] - 0s 534us/step - loss: 3.8145 - accuracy: 0.7500
Epoch 29/30
25/25 [==============================] - 0s 473us/step - loss: 3.8145 - accuracy: 0.7500
Epoch 30/30
25/25 [==============================] - 0s 480us/step - loss: 3.8144 - accuracy: 0.7500
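A compile/fit sketch that could produce a log like the one above (the optimizer and loss are assumptions; 25 steps per epoch corresponds to the default batch size of 32 on 800 samples):

model.compile(optimizer = 'adam',
              loss = 'binary_crossentropy',
              metrics = ['accuracy'])
model.fit(np.asarray(train_X), np.asarray(train_y), epochs = 30)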

(2) Plot the classifier (decision boundary).

In [4]:
## Your code here
#
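A minimal sketch for this cell: a single Dense(1) model classifies by the sign of $\omega_1 x_1 + \omega_2 x_2 + b$, so the boundary is the line where this expression is zero (variable names here are illustrative):

w, b = model.layers[0].get_weights()   # w: shape (2, 1), b: shape (1,)

x1p = np.linspace(-1, 2, 100)
x2p = -(w[0,0]*x1p + b[0]) / w[1,0]    # solve w1*x1 + w2*x2 + b = 0 for x2

plt.figure(figsize = (8, 6))
plt.plot(train_X[:M,0], train_X[:M,1], 'b.', alpha = 0.4, label = 'A')
plt.plot(train_X[M:,0], train_X[M:,1], 'r.', alpha = 0.4, label = 'B')
plt.plot(x1p, x2p, 'k', label = 'boundary')
plt.xlim([-1, 2]); plt.ylim([-1, 2])
plt.legend(fontsize = 12)
plt.show()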

(3) What is the highest accuracy you can get? Discuss the result.

In [5]:
## write down your discussion here

#
#

(4) Modify the perceptron model to have 2 layers, and train it to report the accuracy.

  • Hidden layer: sigmoid function
In [6]:
## Your code here

model = tf.keras.models.Sequential([    
    ## your code here
    
    
])

model.summary()
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_1 (Dense)              (None, 2)                 6         
_________________________________________________________________
dense_2 (Dense)              (None, 1)                 3         
=================================================================
Total params: 9
Trainable params: 9
Non-trainable params: 0
_________________________________________________________________
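A definition consistent with the summary above (the hidden Dense(2) layer contributes $2 \times 2 + 2 = 6$ parameters and the output Dense(1) layer $2 + 1 = 3$; the sigmoid on the output layer is an assumption):

model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(2, activation = 'sigmoid', input_shape = (2,)),
    tf.keras.layers.Dense(1, activation = 'sigmoid')
])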
In [7]:
## Your code here
#
Epoch 1/20
25/25 [==============================] - 0s 606us/step - loss: 0.6984 - accuracy: 0.5263
Epoch 2/20
25/25 [==============================] - 0s 499us/step - loss: 0.6441 - accuracy: 0.7462
Epoch 3/20
25/25 [==============================] - 0s 573us/step - loss: 0.5590 - accuracy: 0.7462
Epoch 4/20
25/25 [==============================] - 0s 580us/step - loss: 0.4670 - accuracy: 0.7887
Epoch 5/20
25/25 [==============================] - 0s 590us/step - loss: 0.3169 - accuracy: 0.9875
Epoch 6/20
25/25 [==============================] - 0s 611us/step - loss: 0.1685 - accuracy: 0.9987
Epoch 7/20
25/25 [==============================] - 0s 485us/step - loss: 0.0993 - accuracy: 1.0000
Epoch 8/20
25/25 [==============================] - 0s 458us/step - loss: 0.0688 - accuracy: 0.9987
Epoch 9/20
25/25 [==============================] - 0s 457us/step - loss: 0.0510 - accuracy: 1.0000
Epoch 10/20
25/25 [==============================] - 0s 499us/step - loss: 0.0408 - accuracy: 1.0000
Epoch 11/20
25/25 [==============================] - 0s 563us/step - loss: 0.0339 - accuracy: 1.0000
Epoch 12/20
25/25 [==============================] - 0s 550us/step - loss: 0.0285 - accuracy: 1.0000
Epoch 13/20
25/25 [==============================] - 0s 498us/step - loss: 0.0256 - accuracy: 0.9987
Epoch 14/20
25/25 [==============================] - 0s 475us/step - loss: 0.0215 - accuracy: 1.0000
Epoch 15/20
25/25 [==============================] - 0s 0s/step - loss: 0.0186 - accuracy: 1.0000
Epoch 16/20
25/25 [==============================] - 0s 501us/step - loss: 0.0168 - accuracy: 1.0000
Epoch 17/20
25/25 [==============================] - 0s 582us/step - loss: 0.0152 - accuracy: 1.0000
Epoch 18/20
25/25 [==============================] - 0s 574us/step - loss: 0.0137 - accuracy: 1.0000
Epoch 19/20
25/25 [==============================] - 0s 548us/step - loss: 0.0128 - accuracy: 1.0000
Epoch 20/20
25/25 [==============================] - 0s 498us/step - loss: 0.0116 - accuracy: 1.0000

(5) Plot two linear classification boundaries in the input space.

In [8]:
## your code here
#
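A sketch: each hidden neuron defines one line $\omega_{1k} x_1 + \omega_{2k} x_2 + b_k = 0$ in the input space, read off the first layer's trained weights (names illustrative):

w, b = model.layers[0].get_weights()   # w: shape (2, 2), b: shape (2,)

x1p = np.linspace(-1, 2, 100)
plt.figure(figsize = (8, 6))
plt.plot(train_X[:M,0], train_X[:M,1], 'b.', alpha = 0.4, label = 'A')
plt.plot(train_X[M:,0], train_X[M:,1], 'r.', alpha = 0.4, label = 'B')
for k in range(2):
    x2p = -(w[0,k]*x1p + b[k]) / w[1,k]
    plt.plot(x1p, x2p, 'k', label = 'boundary {}'.format(k + 1))
plt.xlim([-1, 2]); plt.ylim([-1, 2])
plt.legend(fontsize = 12)
plt.show()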

(6) Plot one linear classification boundary in z space (i.e., in the space of hidden-layer values).

In [9]:
## your code here
#
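A sketch: the hidden activations (z space) can be obtained with a sub-model, and the output layer then defines a single line $v_1 z_1 + v_2 z_2 + c = 0$ in that space (names illustrative):

# Sub-model that outputs the hidden-layer activations
hidden = tf.keras.models.Model(inputs = model.input,
                               outputs = model.layers[0].output)
Z = hidden.predict(np.asarray(train_X))

v, c = model.layers[1].get_weights()   # v: shape (2, 1), c: shape (1,)

z1p = np.linspace(0, 1, 100)
z2p = -(v[0,0]*z1p + c[0]) / v[1,0]

plt.figure(figsize = (8, 6))
plt.plot(Z[:M,0], Z[:M,1], 'b.', alpha = 0.4, label = 'A')
plt.plot(Z[M:,0], Z[M:,1], 'r.', alpha = 0.4, label = 'B')
plt.plot(z1p, z2p, 'k', label = 'boundary')
plt.legend(fontsize = 12)
plt.show()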

Problem 3: ANN Regression¶

In this problem, you are asked to use TensorFlow to implement the linear regression algorithm. By doing this, we hope you become familiar with the syntax of TensorFlow.

In [10]:
# Data Generation

m = 5000

# Noisy samples around the line y = 0.8x + 2
data_x = np.linspace(-3, 3, m)
data_y = 0.8*data_x + 2 + np.random.randn(m)*0.3

plt.figure(figsize = (10,8))
plt.plot(data_x, data_y, '.', alpha = 0.4)
plt.axis('equal')
plt.show()

We will build the simplest ANN model, shown in the following figure, in order to find the best line fit (i.e., linear regression) for the given training data set. Note that there is no hidden layer, and both the input and output layers have only one neuron.




(1) Define the AI model.

In [11]:
model = tf.keras.models.Sequential([    
    ## your code here
    
    
])

model.summary()
Model: "sequential_2"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_3 (Dense)              (None, 1)                 2         
=================================================================
Total params: 2
Trainable params: 2
Non-trainable params: 0
_________________________________________________________________
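A definition consistent with the summary above (2 parameters = 1 weight + 1 bias; no activation, so the layer is the linear map $\hat{y} = \omega x + b$):

model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(1, input_shape = (1,))
])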
In [12]:
## Your code here 
#
Epoch 1/10
157/157 [==============================] - 0s 511us/step - loss: 0.2967
Epoch 2/10
157/157 [==============================] - 0s 505us/step - loss: 0.0925
Epoch 3/10
157/157 [==============================] - 0s 452us/step - loss: 0.0929
Epoch 4/10
157/157 [==============================] - 0s 385us/step - loss: 0.0953
Epoch 5/10
157/157 [==============================] - 0s 395us/step - loss: 0.0919
Epoch 6/10
157/157 [==============================] - 0s 395us/step - loss: 0.0928
Epoch 7/10
157/157 [==============================] - 0s 380us/step - loss: 0.0923
Epoch 8/10
157/157 [==============================] - 0s 374us/step - loss: 0.0924
Epoch 9/10
157/157 [==============================] - 0s 386us/step - loss: 0.0955
Epoch 10/10
157/157 [==============================] - 0s 381us/step - loss: 0.0949
Out[12]:
<tensorflow.python.keras.callbacks.History at 0x1b0fec46e48>
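A compile/fit sketch consistent with the log above (the optimizer is an assumption; 157 steps per epoch corresponds to the default batch size of 32 on 5000 samples, and the converged loss near 0.09 matches the noise variance $0.3^2$):

model.compile(optimizer = 'adam', loss = 'mse')
model.fit(data_x.reshape(-1, 1), data_y, epochs = 10)   # reshape to (m, 1) for the Dense layer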

(2) Find the estimated weight and bias from the trained model.

In [13]:
## Your code here
#
w_hat : 0.8228356242179871
b_hat : 1.99810791015625
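A sketch for reading off the trained parameters; the printed values should be close to the true slope 0.8 and intercept 2:

w_hat, b_hat = model.layers[0].get_weights()   # kernel (1, 1) and bias (1,)
print('w_hat :', w_hat[0,0])
print('b_hat :', b_hat[0])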

(3) Plot the linear regression.

In [14]:
## Your code here
#
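A plotting sketch using the fitted parameters from (2):

plt.figure(figsize = (10, 8))
plt.plot(data_x, data_y, '.', alpha = 0.4, label = 'data')
plt.plot(data_x, w_hat[0,0]*data_x + b_hat[0], 'r', label = 'regression')
plt.axis('equal')
plt.legend(fontsize = 12)
plt.show()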