AI for Mechanical Engineering

Artificial Neural Networks (ANN)

Problem 01

  1. Explain why the perceptron cannot solve the XOR problem.

  2. An ANN (artificial neural network) is also called an MLP (multilayer perceptron). Explain why the MLP is able to solve the XOR problem.
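
For intuition, a minimal numpy sketch (with hand-picked, assumed weights; not a substitute for the written explanation): a single hard-threshold unit draws one line in the input plane, which cannot separate XOR, while a second layer of the same units can combine two lines and solve it.

In [ ]:
import numpy as np

step = lambda z: (z >= 0).astype(int)     # hard-threshold activation

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

# Hidden layer: unit 1 computes OR(x1, x2), unit 2 computes AND(x1, x2).
W = np.array([[1, 1],
              [1, 1]])                    # 2 inputs x 2 hidden units
b = np.array([-1, -2])                    # OR and AND thresholds
H = step(X @ W + b)

# Output layer: OR AND (NOT AND) = XOR, now a single linear cut in h-space.
v = np.array([1, -2])
y = step(H @ v - 1)
print(y)                                  # [0 1 1 0], i.e., XOR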

Problem 02

  1. (Choose correct answers) Jonathan has now switched to multilayer neural networks and notices that the training error goes down and converges to a local minimum. Then, when he tests on new data, the test error is abnormally high. What is probably going wrong, and what do you recommend he do? (3 choices)

a) The training data size is not large enough. Collect a larger training data and retrain it.

b) Play with learning rate and add regularization term to the objective function.

c) Use a different initialization and train the network several times. Use the average of predictions from all nets to predict test data.

d) Use the same training data but add two more hidden layers.


  2. True or false for each of the following statements. (Correct +1, Wrong -1)

a) A single perceptron can solve a linearly inseparable problem with a kernel function.

b) Gradient descent trains neural networks to the global optimum.


  3. (Choose all the correct answers) Jonathan is trying to solve the XOR problem using a multilayer perceptron (MLP) with the ReLU activation function. However, as he trains the MLP model, the results vary from run to run: they are correct in some runs and wrong in others. What is probably going wrong, and what do you recommend he do?

a) The training data set is not large enough. Collect more training data and re-train.

b) The number of perceptron layers is too large. Remove some layers and re-train.

c) The number of perceptron layers is too small. Add more layers and re-train.

d) The learning rate is too high. Reduce the learning rate and re-train.


  4. Explain the difference between the sigmoid (or hyperbolic tangent) and the rectified linear unit (ReLU) activation functions in gradient backpropagation.
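
For intuition, a minimal numpy sketch comparing the two derivatives (the sample points are arbitrary):

In [ ]:
import numpy as np

z = np.array([-10., -1., 0., 1., 10.])    # sample pre-activations

# Sigmoid: the derivative s(z)(1 - s(z)) is at most 0.25 and vanishes
# for large |z|, so gradients shrink as they pass through many layers.
s = 1 / (1 + np.exp(-z))
print(s * (1 - s))                        # ~4.5e-05 at z = +/-10

# ReLU: the derivative is exactly 1 for z > 0 and 0 otherwise, so
# active units pass the gradient through without attenuation.
print((z > 0).astype(float))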

Problem 03

To deal with a nonlinearly distributed data set, we need to design an appropriate kernel that maps the original data so that it becomes linearly separable. However, in artificial neural networks, this step is not necessary. Discuss why. (Hint: see the following figure.)


Problem 04

Suppose a multilayer perceptron that has an input layer with 10 neurons, a hidden layer with 50 neurons, and an output layer with 3 neurons. The nonlinear activation function for every neuron is ReLU. Write your answers to the following questions; a shape-checking sketch follows the list.

  1. Size of input $X$?

  2. Size of weights and biases ($W_h, b_h$) for the hidden layer?

  3. Size of weights and biases ($W_o, b_o$) for the output layer?

  4. Size of output $Y$?
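
A shape-checking sketch (assuming a column-vector convention for a single sample; with row vectors, all shapes transpose):

In [ ]:
import numpy as np

x = np.random.randn(10, 1)             # input X: 10 x 1 (one sample)

W_h = np.random.randn(50, 10)          # hidden weights W_h: 50 x 10
b_h = np.random.randn(50, 1)           # hidden biases  b_h: 50 x 1
h = np.maximum(0, W_h @ x + b_h)       # ReLU; h: 50 x 1

W_o = np.random.randn(3, 50)           # output weights W_o: 3 x 50
b_o = np.random.randn(3, 1)            # output biases  b_o: 3 x 1
y = np.maximum(0, W_o @ h + b_o)       # output Y: 3 x 1

print(h.shape, y.shape)                # (50, 1) (3, 1)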

Problem 05

To train neural networks, backpropagation is used. Briefly explain what backpropagation is. In your discussion, use keywords such as recursive, memoized, dynamic programming, and chain rule.
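
As a toy illustration of those keywords (a sketch, assuming a 1D chain of three sigmoids): the forward pass memoizes every intermediate activation, and the backward pass reuses them recursively through the chain rule, which is exactly dynamic programming.

In [ ]:
import numpy as np

sigmoid = lambda z: 1 / (1 + np.exp(-z))

x = 0.5
a1 = sigmoid(x)          # forward pass: memoize a1
a2 = sigmoid(a1)         # memoize a2
a3 = sigmoid(a2)         # memoize a3 (network output; let the loss L = a3)

# Backward pass: dL/dx is built recursively right-to-left by the chain
# rule, reusing the memoized activations instead of recomputing them.
g = 1.0                  # dL/da3
g *= a3 * (1 - a3)       # da3/da2, from memoized a3
g *= a2 * (1 - a2)       # da2/da1, from memoized a2
g *= a1 * (1 - a1)       # da1/dx,  from memoized a1
print(g)                 # dL/dx in one backward sweep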

Problem 06

Build an ANN model that receives three binary-valued (i.e., $0$ or $1$) inputs $x_1, x_2, x_3$, outputs $1$ if exactly two of the inputs are $1$, and outputs $0$ otherwise. All of the units use a hard-threshold activation function:


$$f(z) = \begin{cases} 1 \quad \text{if } z \geq 0\\ 0 \quad \text{if } z < 0 \end{cases} $$

Suggest one possible set of weights and biases that correctly implements this function; a verification sketch follows the notation below.




Denote by

  • $\mathbf{W}_{2 \times 3}$ and $\mathbf{V}_{1 \times 2}$ the weight matrices connecting the input to the hidden layer and the hidden layer to the output, respectively.

  • $\mathbf{b}^{(1)}_{2 \times 1}$ and $\mathbf{b}^{(2)}_{1 \times 1}$ the bias vectors at the hidden layer and output, respectively.

  • $x_{3 \times 1}$ and $h_{2 \times 1}$ the node values at the input and hidden layer, respectively.
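
One possible (by no means unique) choice, verified below in the notation above: $h_1$ fires when at least two inputs are 1, $h_2$ fires when all three are 1, and the output fires for $h_1$ AND NOT $h_2$.

In [ ]:
import numpy as np
from itertools import product

step = lambda z: (z >= 0).astype(int)   # hard threshold from the problem

W  = np.array([[1, 1, 1],
               [1, 1, 1]])              # 2 x 3: both units sum the inputs
b1 = np.array([[-2], [-3]])             # h1: sum >= 2, h2: sum >= 3
V  = np.array([[1, -2]])                # output: h1 AND NOT h2
b2 = np.array([[-1]])

for bits in product([0, 1], repeat=3):
    x = np.array(bits).reshape(3, 1)
    h = step(W @ x + b1)
    y = step(V @ h + b2)
    print(bits, int(y))                 # 1 exactly when two inputs are 1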

Problem 07

In this problem, we compute the gradient using the chain rule and dynamic programming, and update the weights $\omega \rightarrow \omega^+$. The weights are updated through one backpropagation pass, and the resulting error is compared with the error before the update. (A numerical sketch with assumed values follows Step 4.)




Neural Network Model

  • The artificial neural network structure: an input layer, a hidden layer, and an output layer.
  • All neurons ($h_1$, $h_2$, $\sigma_1$, and $\sigma_2$) in the hidden and output layers use the sigmoid function as the activation function.
  • The red numbers are the initial weight values, the blue numbers are the input values, and the ground truth gives the actual target values.
  • The loss function is the mean squared error (MSE). Use $\frac{1}{2}$ MSE for calculation convenience, i.e., $E = \frac{1}{2}\sum(\text{target} - \text{output})^2$.
  • The learning rate is set to 0.9.

Step 1: Forward Propagation

  1. [hand written] Write and calculate $z_1$, $z_2$, $h_1$, $h_2$, $z_3$, $z_4$, $\sigma_1$, $\sigma_2$, and $E_{\text{total}}$ in the forward propagation.

Step 2: Backpropagation 1




  1. [hand written] Update $\omega_5$, $\omega_6$, $\omega_7$, $\omega_8$ $\rightarrow$ $\omega_5^+$, $\omega_6^+$, $\omega_7^+$, $\omega_8^+$ via backpropagation.

Step 3: Backpropagation 2




  1. [hand written] Update $\omega_1$, $\omega_2$, $\omega_3$, $\omega_4$ $\rightarrow$ $\omega_1^+$, $\omega_2^+$, $\omega_3^+$, $\omega_4^+$ via backpropagation.

Step 4: Check the Result for Weight Update




  1. [hand written] Write and calculate $E_{\text{total}}$ with the updated weights, and compare it to the previous error.
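
Since the figure's red and blue numbers are not reproduced here, the sketch below uses assumed placeholder values for the inputs, targets, and initial weights; substitute the figure's numbers to check a hand calculation. The structure (sigmoid everywhere, $\frac{1}{2}$ MSE, learning rate 0.9, output-layer deltas reused by the hidden layer) follows the problem.

In [ ]:
import numpy as np

sigmoid = lambda z: 1 / (1 + np.exp(-z))

# ALL values below are assumed placeholders, not the figure's numbers.
x1, x2 = 0.05, 0.10                      # inputs (blue in the figure)
t1, t2 = 0.01, 0.99                      # ground truth
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30  # input -> hidden weights (red)
w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55  # hidden -> output weights (red)
lr = 0.9                                 # learning rate from the problem

def forward(w1, w2, w3, w4, w5, w6, w7, w8):
    h1 = sigmoid(w1*x1 + w2*x2)              # z1 -> h1
    h2 = sigmoid(w3*x1 + w4*x2)              # z2 -> h2
    o1 = sigmoid(w5*h1 + w6*h2)              # z3 -> sigma1
    o2 = sigmoid(w7*h1 + w8*h2)              # z4 -> sigma2
    E = 0.5*((t1 - o1)**2 + (t2 - o2)**2)    # 1/2 MSE
    return h1, h2, o1, o2, E

h1, h2, o1, o2, E = forward(w1, w2, w3, w4, w5, w6, w7, w8)
print('E before:', E)

# Step 2: output-layer deltas (chain rule), memoized for reuse below
d1 = (o1 - t1) * o1 * (1 - o1)
d2 = (o2 - t2) * o2 * (1 - o2)
w5p, w6p = w5 - lr*d1*h1, w6 - lr*d1*h2
w7p, w8p = w7 - lr*d2*h1, w8 - lr*d2*h2

# Step 3: hidden-layer deltas reuse d1, d2 (dynamic programming)
e1 = (d1*w5 + d2*w7) * h1 * (1 - h1)
e2 = (d1*w6 + d2*w8) * h2 * (1 - h2)
w1p, w2p = w1 - lr*e1*x1, w2 - lr*e1*x2
w3p, w4p = w3 - lr*e2*x1, w4 - lr*e2*x2

# Step 4: the total error decreases after one update
print('E after: ', forward(w1p, w2p, w3p, w4p, w5p, w6p, w7p, w8p)[-1])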

Problem 08

  1. Classify the given four points into two classes in the 2D plane using a single-layer structure as shown below. Plot the linear boundary even if it fails to classify them.

Note that bias units are not indicated here.



In [ ]:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

x_data = np.array([[0, 0], [1, 1], [0, 1], [1, 0]], dtype=np.float32)
y_data = np.array([[0], [0], [1], [1]], dtype=np.float32)

plt.figure(figsize = (8,6))
plt.scatter(x_data[:2,0], x_data[:2,1], marker='+', s=100, label='A')
plt.scatter(x_data[2:,0], x_data[2:,1], marker='x', s=100, label='B')
plt.axis('equal')
plt.ylim([-0.5, 1.5]);
plt.grid(alpha=0.15);
plt.legend();
plt.show()
In [ ]:
## write your code here
#
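
One possible completion of the cell above (a sketch; the training settings are assumptions): a single sigmoid unit yields one linear boundary, which necessarily fails on this XOR-patterned data.

In [ ]:
import tensorflow as tf

# one unit => one linear decision boundary; the sigmoid output keeps
# the boundary linear while making binary cross-entropy well-defined
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(1, activation='sigmoid', input_shape=(2,))
])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit(x_data, y_data, epochs=500, verbose=0)

# plot the (failing) line w1*x1 + w2*x2 + b = 0
w, b = model.layers[0].get_weights()
xs = np.linspace(-0.5, 1.5, 10)
plt.figure(figsize=(8, 6))
plt.scatter(x_data[:2, 0], x_data[:2, 1], marker='+', s=100, label='A')
plt.scatter(x_data[2:, 0], x_data[2:, 1], marker='x', s=100, label='B')
plt.plot(xs, -(w[0, 0]*xs + b[0])/w[1, 0], 'k--', label='boundary')
plt.ylim([-0.5, 1.5]); plt.grid(alpha=0.15); plt.legend(); plt.show()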
  2. Classify the given four points in the 2D plane using two layers as shown below (the number of neurons in the output layer can be changed to one). A combined sketch covering parts 2-5 appears after part 5.

Note that bias units are not indicated here, and you can use either one-hot encoding or sparse_categorical_crossentropy.



In [ ]:
## write your code here
#
  3. The first layer can be seen as a kernel function $\phi$. Show the location of the four points on the 2D plane after the first layer.
In [ ]:
## write your code here
#
  4. Visualize the kernel space on the 2D plane.

Hint: Make 2D grid points and apply the kernel.

In [ ]:
## write your code here
#
  5. Plot the decision boundary on the kernel space.
In [ ]:
## write your code here
#
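
A combined sketch for parts 2-5 (assuming a 2-unit sigmoid hidden layer so that the learned kernel space $\phi(x)$ is itself 2D; with an unlucky random initialization training can fail, cf. Problem 02, so re-run if needed):

In [ ]:
import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(2, activation='sigmoid', input_shape=(2,)),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer=tf.keras.optimizers.Adam(0.1),
              loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x_data, y_data, epochs=1000, verbose=0)

# phi = the first layer alone: the learned "kernel" map
phi = tf.keras.Model(model.input, model.layers[0].output)
H = phi.predict(x_data, verbose=0)            # the four points after phi

# map a 2D grid through phi to visualize the kernel space
gx, gy = np.meshgrid(np.linspace(-0.5, 1.5, 30),
                     np.linspace(-0.5, 1.5, 30))
G = phi.predict(np.c_[gx.ravel(), gy.ravel()], verbose=0)

# the second layer is linear in phi-space, so its decision boundary
# there is the straight line v1*h1 + v2*h2 + c = 0
v, c = model.layers[1].get_weights()
h1 = np.linspace(0, 1, 10)

plt.figure(figsize=(6, 6))
plt.scatter(G[:, 0], G[:, 1], s=5, alpha=0.3, label='mapped grid')
plt.scatter(H[:2, 0], H[:2, 1], marker='+', s=150, label='A')
plt.scatter(H[2:, 0], H[2:, 1], marker='x', s=150, label='B')
plt.plot(h1, -(v[0, 0]*h1 + c[0])/v[1, 0], 'k--', label='boundary')
plt.xlabel('$h_1$'); plt.ylabel('$h_2$'); plt.legend(); plt.show()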

Problem 09

You will do binary classification for nonlinearly separable data using an MLP. Plot the given data first.

In [ ]:
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
%matplotlib inline

N = 200
M = 2*N
gamma = 0.01

G0 = np.random.multivariate_normal([0, 0], gamma*np.eye(2), N)
G1 = np.random.multivariate_normal([1, 1], gamma*np.eye(2), N)
G2 = np.random.multivariate_normal([0, 1], gamma*np.eye(2), N)
G3 = np.random.multivariate_normal([1, 0], gamma*np.eye(2), N)

train_X = np.vstack([G0, G1, G2, G3])
train_y = np.vstack([np.ones([M,1]), np.zeros([M,1])])

train_X = np.asmatrix(train_X)
train_y = np.asmatrix(train_y)

print(train_X.shape)
print(train_y.shape)

plt.figure(figsize = (6, 4))
plt.plot(train_X[:M,0], train_X[:M,1], 'b.', alpha = 0.4, label = 'A')
plt.plot(train_X[M:,0], train_X[M:,1], 'r.', alpha = 0.4, label = 'B')
plt.axis('equal')
plt.xlim([-1, 2]); plt.ylim([-1, 2]);
plt.grid(alpha = 0.15)
plt.legend(fontsize = 12)
plt.show()
(800, 2)
(800, 1)
  1. Design a perceptron model that has a single layer, train it, and report the accuracy.
  • Hidden layer with no nonlinear activation function
In [ ]:
model = tf.keras.models.Sequential([
    ## your code here


])

model.summary()
Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 dense (Dense)               (None, 1)                 3         
                                                                 
=================================================================
Total params: 3 (12.00 Byte)
Trainable params: 3 (12.00 Byte)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
In [ ]:
## your code here
#
Epoch 1/30
25/25 [==============================] - 2s 4ms/step - loss: 4.0770 - accuracy: 0.6438
Epoch 2/30
25/25 [==============================] - 0s 5ms/step - loss: 3.8435 - accuracy: 0.7487
Epoch 3/30
25/25 [==============================] - 0s 6ms/step - loss: 3.8267 - accuracy: 0.7475
Epoch 4/30
25/25 [==============================] - 0s 8ms/step - loss: 3.8226 - accuracy: 0.7487
Epoch 5/30
25/25 [==============================] - 0s 8ms/step - loss: 3.8208 - accuracy: 0.7487
Epoch 6/30
25/25 [==============================] - 0s 5ms/step - loss: 3.8201 - accuracy: 0.7487
Epoch 7/30
25/25 [==============================] - 0s 6ms/step - loss: 3.8192 - accuracy: 0.7487
Epoch 8/30
25/25 [==============================] - 0s 7ms/step - loss: 3.8189 - accuracy: 0.7487
Epoch 9/30
25/25 [==============================] - 0s 6ms/step - loss: 3.8183 - accuracy: 0.7487
Epoch 10/30
25/25 [==============================] - 0s 6ms/step - loss: 3.8179 - accuracy: 0.7487
Epoch 11/30
25/25 [==============================] - 0s 7ms/step - loss: 3.8177 - accuracy: 0.7487
Epoch 12/30
25/25 [==============================] - 0s 7ms/step - loss: 3.8179 - accuracy: 0.7487
Epoch 13/30
25/25 [==============================] - 0s 4ms/step - loss: 3.8176 - accuracy: 0.7487
Epoch 14/30
25/25 [==============================] - 0s 5ms/step - loss: 3.8169 - accuracy: 0.7487
Epoch 15/30
25/25 [==============================] - 0s 5ms/step - loss: 3.8166 - accuracy: 0.7487
Epoch 16/30
25/25 [==============================] - 0s 8ms/step - loss: 3.8167 - accuracy: 0.7487
Epoch 17/30
25/25 [==============================] - 0s 7ms/step - loss: 3.8167 - accuracy: 0.7487
Epoch 18/30
25/25 [==============================] - 0s 7ms/step - loss: 3.8165 - accuracy: 0.7487
Epoch 19/30
25/25 [==============================] - 0s 6ms/step - loss: 3.8161 - accuracy: 0.7487
Epoch 20/30
25/25 [==============================] - 0s 6ms/step - loss: 3.8161 - accuracy: 0.7487
Epoch 21/30
25/25 [==============================] - 0s 8ms/step - loss: 3.8160 - accuracy: 0.7487
Epoch 22/30
25/25 [==============================] - 0s 8ms/step - loss: 3.8158 - accuracy: 0.7487
Epoch 23/30
25/25 [==============================] - 0s 7ms/step - loss: 3.8157 - accuracy: 0.7487
Epoch 24/30
25/25 [==============================] - 0s 9ms/step - loss: 3.8158 - accuracy: 0.7487
Epoch 25/30
25/25 [==============================] - 0s 6ms/step - loss: 3.8159 - accuracy: 0.7487
Epoch 26/30
25/25 [==============================] - 0s 5ms/step - loss: 3.8155 - accuracy: 0.7487
Epoch 27/30
25/25 [==============================] - 0s 4ms/step - loss: 3.8156 - accuracy: 0.7487
Epoch 28/30
25/25 [==============================] - 0s 6ms/step - loss: 3.8154 - accuracy: 0.7487
Epoch 29/30
25/25 [==============================] - 0s 6ms/step - loss: 3.8154 - accuracy: 0.7487
Epoch 30/30
25/25 [==============================] - 0s 5ms/step - loss: 3.8151 - accuracy: 0.7487
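
One possible completion consistent with the summary above (a sketch; the optimizer and loss are assumptions, so the loss values will differ from the log above, while the accuracy plateau should not): a single sigmoid unit, whose decision boundary in the input plane is a straight line.

In [ ]:
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(1, activation='sigmoid', input_shape=(2,))
])

model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit(train_X, train_y, epochs=30, batch_size=32)

# accuracy plateaus near 0.75: the four clusters form an XOR pattern,
# so a single line can separate at most three of them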
  2. Plot the classifier (decision boundary).
In [ ]:
## your code here
#
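
A plotting sketch, assuming the trained single-layer model from the previous part:

In [ ]:
# the single unit defines the line w1*x1 + w2*x2 + b = 0
w, b = model.layers[0].get_weights()
x1 = np.linspace(-1, 2, 10)

plt.figure(figsize=(6, 4))
plt.plot(train_X[:M, 0], train_X[:M, 1], 'b.', alpha=0.4, label='A')
plt.plot(train_X[M:, 0], train_X[M:, 1], 'r.', alpha=0.4, label='B')
plt.plot(x1, -(w[0, 0]*x1 + b[0])/w[1, 0], 'k--', label='boundary')
plt.axis('equal'); plt.xlim([-1, 2]); plt.ylim([-1, 2])
plt.grid(alpha=0.15); plt.legend(fontsize=12); plt.show()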
  3. What is the highest accuracy you can get? Discuss the result.
In [ ]:
## write down your discussion here

#
#
#
  4. Design a perceptron model that has two layers, train it, and report the accuracy.
  • Hidden layer: sigmoid function
In [ ]:
model = tf.keras.models.Sequential([
    ## your code here



])

model.summary()
Model: "sequential_1"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 dense_1 (Dense)             (None, 2)                 6         
                                                                 
 dense_2 (Dense)             (None, 1)                 3         
                                                                 
=================================================================
Total params: 9 (36.00 Byte)
Trainable params: 9 (36.00 Byte)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
In [ ]:
## your code here
#
Epoch 1/20
25/25 [==============================] - 2s 6ms/step - loss: 0.7013 - accuracy: 0.5387
Epoch 2/20
25/25 [==============================] - 0s 5ms/step - loss: 0.7047 - accuracy: 0.5038
Epoch 3/20
25/25 [==============================] - 0s 6ms/step - loss: 0.6853 - accuracy: 0.5487
Epoch 4/20
25/25 [==============================] - 0s 5ms/step - loss: 0.6251 - accuracy: 0.7462
Epoch 5/20
25/25 [==============================] - 0s 4ms/step - loss: 0.5232 - accuracy: 0.7425
Epoch 6/20
25/25 [==============================] - 0s 4ms/step - loss: 0.4311 - accuracy: 0.7500
Epoch 7/20
25/25 [==============================] - 0s 6ms/step - loss: 0.2902 - accuracy: 0.9025
Epoch 8/20
25/25 [==============================] - 0s 4ms/step - loss: 0.1552 - accuracy: 0.9987
Epoch 9/20
25/25 [==============================] - 0s 8ms/step - loss: 0.0881 - accuracy: 1.0000
Epoch 10/20
25/25 [==============================] - 0s 6ms/step - loss: 0.0580 - accuracy: 1.0000
Epoch 11/20
25/25 [==============================] - 0s 5ms/step - loss: 0.0434 - accuracy: 1.0000
Epoch 12/20
25/25 [==============================] - 0s 5ms/step - loss: 0.0336 - accuracy: 1.0000
Epoch 13/20
25/25 [==============================] - 0s 4ms/step - loss: 0.0272 - accuracy: 1.0000
Epoch 14/20
25/25 [==============================] - 0s 4ms/step - loss: 0.0231 - accuracy: 1.0000
Epoch 15/20
25/25 [==============================] - 0s 6ms/step - loss: 0.0196 - accuracy: 1.0000
Epoch 16/20
25/25 [==============================] - 0s 5ms/step - loss: 0.0168 - accuracy: 1.0000
Epoch 17/20
25/25 [==============================] - 0s 6ms/step - loss: 0.0148 - accuracy: 1.0000
Epoch 18/20
25/25 [==============================] - 0s 6ms/step - loss: 0.0132 - accuracy: 1.0000
Epoch 19/20
25/25 [==============================] - 0s 5ms/step - loss: 0.0118 - accuracy: 1.0000
Epoch 20/20
25/25 [==============================] - 0s 5ms/step - loss: 0.0108 - accuracy: 1.0000
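
One possible completion matching the summary above (a sketch; the output activation and compile settings are assumptions):

In [ ]:
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(2, activation='sigmoid', input_shape=(2,)),
    tf.keras.layers.Dense(1, activation='sigmoid')
])

model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit(train_X, train_y, epochs=20, batch_size=32)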
  5. Plot the two linear classification boundaries in the input space.
In [ ]:
## your code here
#
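
A sketch, assuming the trained two-layer model above: each hidden unit contributes one line in the input space, and together the two lines carve out the XOR regions.

In [ ]:
# each hidden unit k defines the line W[0,k]*x1 + W[1,k]*x2 + b[k] = 0
W, b = model.layers[0].get_weights()      # W: 2 x 2, b: (2,)
x1 = np.linspace(-1, 2, 10)

plt.figure(figsize=(6, 4))
plt.plot(train_X[:M, 0], train_X[:M, 1], 'b.', alpha=0.4, label='A')
plt.plot(train_X[M:, 0], train_X[M:, 1], 'r.', alpha=0.4, label='B')
for k in range(2):
    plt.plot(x1, -(W[0, k]*x1 + b[k])/W[1, k], 'k--')
plt.axis('equal'); plt.xlim([-1, 2]); plt.ylim([-1, 2])
plt.grid(alpha=0.15); plt.legend(fontsize=12); plt.show()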