AI for Mechanical Engineering

Artificial Neural Networks (ANN)

Problem 01

  1. Explain why the perceptron cannot solve the XOR problem.

  2. An ANN (Artificial Neural Network) is also called an MLP (Multilayer Perceptron). Explain why an MLP is able to solve the XOR problem.

Problem 02

  1. (Choose the correct answers) Jonathan has now switched to multilayer neural networks and notices that the training error goes down and converges to a local minimum. However, when he tests on new data, the test error is abnormally high. What is probably going wrong, and what do you recommend he do? (Choose 3.)

a) The training data set is not large enough. Collect a larger training data set and retrain.

b) Play with the learning rate and add a regularization term to the objective function.

c) Use a different initialization and train the network several times. Use the average of the predictions from all nets to predict the test data.

d) Use the same training data but add two more hidden layers.


  1. Answer true or false for the following statements. (Correct: +1, Wrong: -1)

a) A single perceptron can solve a linearly inseparable problem with a kernel function.

b) Gradient descent trains neural networks to the global optimum.


  1. (Choose all the correct answers) Jonathan is trying to solve the XOR problem using a multilayer perceptron (MLP) with the ReLU activation function. However, as he trains the MLP model, the results vary from run to run: they are correct in some runs and wrong in others. What is probably going wrong, and what do you recommend he do?

a) The training data set is not large enough. Collect more training data points and re-train.

b) The number of perceptron layers is too large. Remove some layers and re-train.

c) The number of perceptron layers is too small. Add more layers and re-train.

d) The learning rate is too high. Reduce the learning rate and re-train.


  1. Explain the difference between the sigmoid (or hyperbolic tangent) and rectified linear unit (ReLU) activation functions in gradient backpropagation.

Problem 03

To deal with a nonlinearly distributed data set, we need to design an appropriate kernel that maps the original data to a space where it is linearly separable. However, in artificial neural networks, this step is not necessary. Discuss why. (Hint: see the following figure.)


Problem 04

Consider a multilayer perceptron that has an input layer with 10 neurons, a hidden layer with 50 neurons, and an output layer with 3 neurons. The nonlinear activation function for every neuron is ReLU. Answer the following questions (a shape-check sketch follows the list).

  1. Size of input $X$?

  2. Size of weights and biases ($W_h, b_h$) for the hidden layer?

  3. Size of weights and biases ($W_o, b_o$) for the output layer?

  4. Size of output $Y$?
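The following is a minimal sketch for sanity-checking your answers: it builds the same MLP in Keras and prints the stored parameter shapes. Note that Keras stores a Dense weight matrix as (number of inputs, number of units); some textbook conventions use the transpose.

import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(50, activation='relu', input_shape=(10,)),
    tf.keras.layers.Dense(3, activation='relu')
])

# print the weight and bias shapes of each layer
for layer in model.layers:
    W, b = layer.get_weights()
    print(layer.name, '| W:', W.shape, '| b:', b.shape)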

Problem 05

To train neural networks, backpropagation is used. Briefly explain what backpropagation is. In your discussion, use keywords such as recursive, memorized, dynamic programming, and chain rule.

Problem 06

Build an ANN model that receives three binary-valued (i.e., $0$ or $1$) inputs $x_1, x_2, x_3$ and outputs $1$ if exactly two of the inputs are $1$, and $0$ otherwise. All of the units use a hard threshold activation function:


$$\phi(z) = \begin{cases} 1 \quad \text{if } z \geq 0\\ 0 \quad \text{if } z < 0 \end{cases} $$

Suggest one possible set of weights and biases that correctly implements this function (a brute-force checker sketch follows the notation list below).




Denote by

  • $\mathbf{W}_{2 \times 3}$ and $\mathbf{V}_{1 \times 2}$: weight matrices connecting the input to the hidden layer and the hidden layer to the output, respectively.

  • $\mathbf{b}^{(1)}_{2 \times 1}$ and $\mathbf{b}^{(2)}_{1 \times 1}$: bias vectors at the hidden layer and output, respectively.

  • $x_{3 \times 1}$ and $h_{2 \times 1}$: node values at the input and hidden layer, respectively.
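Once you have a candidate, a brute-force check over all $2^3$ inputs settles correctness immediately. Below is a minimal checker sketch; the zero-valued parameters are placeholders to be replaced with your answer.

import itertools
import numpy as np

W  = np.zeros((2, 3))   # your W (2x3) -- placeholder
b1 = np.zeros((2, 1))   # your b^(1) (2x1) -- placeholder
V  = np.zeros((1, 2))   # your V (1x2) -- placeholder
b2 = np.zeros((1, 1))   # your b^(2) (1x1) -- placeholder

step = lambda z: (z >= 0).astype(float)   # hard threshold activation

for bits in itertools.product([0, 1], repeat=3):
    x = np.array(bits, dtype=float).reshape(3, 1)
    h = step(W @ x + b1)                  # hidden layer
    y = step(V @ h + b2)                  # output
    print(bits, '->', int(y[0, 0]), '(expected', int(sum(bits) == 2), ')')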

Problem 07

In this problem, we are going to compute the gradients using the chain rule and dynamic programming, and update the weights $\omega \rightarrow \omega^+$. After one pass of backpropagation, we recompute the total error and compare it with the error before the update.




Neural Network Model

  • The artificial neural network structure: an input layer, a hidden layer, and an output layer.
  • All neurons ($h_1$, $h_2$, $\sigma_1$, and $\sigma_2$) in the hidden and output layers use the sigmoid function as the activation function.
  • The red numbers are the initial weight values, the blue numbers are the input values, and the ground truth values are the targets.
  • The loss function is the mean squared error (MSE). Use $\frac{1}{2}$ MSE for calculation convenience; that is, $E = \frac{1}{2}\sum(\text{target} - \text{output})^2$.
  • Learning rate is set to 0.9.

Step 1: Forward Propagation

  1. [Hand-written] Write out and calculate $z_1$, $z_2$, $h_1$, $h_2$, $z_3$, $z_4$, $\sigma_1$, $\sigma_2$, and $E_{\text{total}}$ in forward propagation (a placeholder sketch follows).
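A minimal numeric sketch of this step is below. The values are placeholders only; substitute the blue inputs, red initial weights, and ground-truth targets from the figure. Note that the wiring ($\omega_1, \omega_2$ feeding $h_1$, etc.) is an assumption about the figure.

import numpy as np

sigmoid = lambda z: 1/(1 + np.exp(-z))

x = np.array([0.0, 0.0])    # inputs x1, x2 (blue numbers) -- placeholders
w = np.zeros(8)             # w1..w8 stored as w[0]..w[7] (red numbers) -- placeholders
t = np.array([0.0, 0.0])    # ground-truth targets -- placeholders

z1 = w[0]*x[0] + w[1]*x[1];  h1 = sigmoid(z1)
z2 = w[2]*x[0] + w[3]*x[1];  h2 = sigmoid(z2)
z3 = w[4]*h1 + w[5]*h2;      s1 = sigmoid(z3)   # sigma_1
z4 = w[6]*h1 + w[7]*h2;      s2 = sigmoid(z4)   # sigma_2

E_total = 0.5*((t[0] - s1)**2 + (t[1] - s2)**2)
print(z1, z2, h1, h2, z3, z4, s1, s2, E_total)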

Step 2: Backpropagation 1




  1. [Hand-written] Update $\omega_5$, $\omega_6$, $\omega_7$, $\omega_8$ $\rightarrow$ $\omega_5^+$, $\omega_6^+$, $\omega_7^+$, $\omega_8^+$ by backpropagation (the chain-rule pattern is sketched below).
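For reference, assuming $\omega_5$ connects $h_1$ to $z_3$ and $\sigma_1 = \text{sigmoid}(z_3)$ as in the figure, the chain rule gives

$$\frac{\partial E_{\text{total}}}{\partial \omega_5} = \frac{\partial E_{\text{total}}}{\partial \sigma_1} \cdot \frac{\partial \sigma_1}{\partial z_3} \cdot \frac{\partial z_3}{\partial \omega_5} = -(\text{target}_1 - \sigma_1) \cdot \sigma_1 (1 - \sigma_1) \cdot h_1, \qquad \omega_5^+ = \omega_5 - 0.9 \, \frac{\partial E_{\text{total}}}{\partial \omega_5}$$

The same pattern applies to $\omega_6$, $\omega_7$, and $\omega_8$, and the intermediate terms can be memorized for reuse in Step 3.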

Step 3: Backpropagation 2




  1. [Hand-written] Update $\omega_1$, $\omega_2$, $\omega_3$, $\omega_4$ $\rightarrow$ $\omega_1^+$, $\omega_2^+$, $\omega_3^+$, $\omega_4^+$ by backpropagation.

Step 4: Check the Result for Weight Update




  1. [Hand-written] Write out and calculate $E_{\text{total}}$ with the updated weights, and compare it to the previous error.

Problem 08

  1. Classify the given four points into two classes in the 2D plane using a single-layer structure as shown below. Plot the linear boundary even if it fails to classify them (a minimal model sketch follows the starter cell).

Note that bias units are not indicated here.



In [ ]:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

x_data = np.array([[0, 0], [1, 1], [0, 1], [1, 0]], dtype=np.float32)
y_data = np.array([[0], [0], [1], [1]], dtype=np.float32)

plt.figure(figsize = (8,6))
plt.scatter(x_data[:2,0], x_data[:2,1], marker='+', s=100, label='A')
plt.scatter(x_data[2:,0], x_data[2:,1], marker='x', s=100, label='B')
plt.axis('equal')
plt.ylim([-0.5, 1.5]);
plt.grid(alpha=0.15);
plt.legend();
plt.show()
In [ ]:
## write your code here
#
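A minimal sketch of one possible single-layer model is below; the optimizer, loss, and epoch count are illustrative choices. Because the labels form the XOR pattern, this linear boundary must misclassify at least one point.

import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(1, activation='sigmoid', input_shape=(2,))
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x_data, y_data, epochs=1000, verbose=0)

# linear boundary w1*x1 + w2*x2 + b = 0 from the learned weights
w, b = model.layers[0].get_weights()
x_line = np.linspace(-0.5, 1.5, 10)
plt.plot(x_line, -(w[0,0]*x_line + b[0])/w[1,0], 'k--', label='boundary')
plt.legend()
plt.show()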
  1. Classify the given four points in the 2D plane using two layers as shown below (the number of neurons in the output layer can be changed to one).

Note that bias units are not indicated here, and you can use either one-hot encoding or sparse_categorical_crossentropy.



In [ ]:
## write your code here
#
  1. The first layer can be seen as a kernel function $\phi$. Show the locations of the four points in the 2D plane after the first layer.
In [ ]:
## write your code here
#
  1. Visualize the kernel space in the 2D plane.

Hint: make 2D grid points and apply the kernel (a sketch follows the cell below).

In [ ]:
## write your code here
#
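A sketch of the hint, assuming `model` is your trained two-layer network from above and its first layer plays the role of the kernel map $\phi$:

xx, yy = np.meshgrid(np.linspace(-0.5, 1.5, 50), np.linspace(-0.5, 1.5, 50))
grid = np.stack([xx.ravel(), yy.ravel()], axis=1).astype(np.float32)

# sub-model that outputs the hidden-layer (kernel-space) values
phi = tf.keras.Model(inputs=model.inputs, outputs=model.layers[0].output)
Z = phi.predict(grid)      # grid points mapped into the 2D kernel space

plt.scatter(Z[:,0], Z[:,1], s=2, alpha=0.3)
plt.show()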
  1. Plot the decision boundary in the kernel space.
In [ ]:
## write your code here
#

Problem 09

You will do binary classification for nonlinearly separable data using an MLP. Plot the given data first.

In [ ]:
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
%matplotlib inline

N = 200
M = 2*N
gamma = 0.01

G0 = np.random.multivariate_normal([0, 0], gamma*np.eye(2), N)
G1 = np.random.multivariate_normal([1, 1], gamma*np.eye(2), N)
G2 = np.random.multivariate_normal([0, 1], gamma*np.eye(2), N)
G3 = np.random.multivariate_normal([1, 0], gamma*np.eye(2), N)

train_X = np.vstack([G0, G1, G2, G3])
train_y = np.vstack([np.ones([M,1]), np.zeros([M,1])])

train_X = np.asmatrix(train_X)
train_y = np.asmatrix(train_y)

print(train_X.shape)
print(train_y.shape)

plt.figure(figsize = (6, 4))
plt.plot(train_X[:M,0], train_X[:M,1], 'b.', alpha = 0.4, label = 'A')
plt.plot(train_X[M:,0], train_X[M:,1], 'r.', alpha = 0.4, label = 'B')
plt.axis('equal')
plt.xlim([-1, 2]); plt.ylim([-1, 2]);
plt.grid(alpha = 0.15)
plt.legend(fontsize = 12)
plt.show()
(800, 2)
(800, 1)
  1. Design a perceptron model that has a single layer, and train it to show its accuracy (a compile-and-fit sketch follows the summary below).
  • Hidden layer with no nonlinear activation function
In [ ]:
model = tf.keras.models.Sequential([
    ## your code here


])

model.summary()
Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 dense (Dense)               (None, 1)                 3         
                                                                 
=================================================================
Total params: 3 (12.00 Byte)
Trainable params: 3 (12.00 Byte)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
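A minimal sketch of the missing pieces, consistent with the summary above (a single Dense unit with 3 parameters and, per the bullet, no nonlinear activation); the optimizer, loss, and epoch count are illustrative:

# inside the Sequential: tf.keras.layers.Dense(1, input_shape=(2,))   # linear output
model.compile(optimizer='adam',
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=['accuracy'])
model.fit(train_X, train_y, epochs=30)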
In [ ]:
## your code here
#
Epoch 1/30
25/25 [==============================] - 2s 4ms/step - loss: 4.0770 - accuracy: 0.6438
Epoch 2/30
25/25 [==============================] - 0s 5ms/step - loss: 3.8435 - accuracy: 0.7487
Epoch 3/30
25/25 [==============================] - 0s 6ms/step - loss: 3.8267 - accuracy: 0.7475
Epoch 4/30
25/25 [==============================] - 0s 8ms/step - loss: 3.8226 - accuracy: 0.7487
Epoch 5/30
25/25 [==============================] - 0s 8ms/step - loss: 3.8208 - accuracy: 0.7487
Epoch 6/30
25/25 [==============================] - 0s 5ms/step - loss: 3.8201 - accuracy: 0.7487
Epoch 7/30
25/25 [==============================] - 0s 6ms/step - loss: 3.8192 - accuracy: 0.7487
Epoch 8/30
25/25 [==============================] - 0s 7ms/step - loss: 3.8189 - accuracy: 0.7487
Epoch 9/30
25/25 [==============================] - 0s 6ms/step - loss: 3.8183 - accuracy: 0.7487
Epoch 10/30
25/25 [==============================] - 0s 6ms/step - loss: 3.8179 - accuracy: 0.7487
Epoch 11/30
25/25 [==============================] - 0s 7ms/step - loss: 3.8177 - accuracy: 0.7487
Epoch 12/30
25/25 [==============================] - 0s 7ms/step - loss: 3.8179 - accuracy: 0.7487
Epoch 13/30
25/25 [==============================] - 0s 4ms/step - loss: 3.8176 - accuracy: 0.7487
Epoch 14/30
25/25 [==============================] - 0s 5ms/step - loss: 3.8169 - accuracy: 0.7487
Epoch 15/30
25/25 [==============================] - 0s 5ms/step - loss: 3.8166 - accuracy: 0.7487
Epoch 16/30
25/25 [==============================] - 0s 8ms/step - loss: 3.8167 - accuracy: 0.7487
Epoch 17/30
25/25 [==============================] - 0s 7ms/step - loss: 3.8167 - accuracy: 0.7487
Epoch 18/30
25/25 [==============================] - 0s 7ms/step - loss: 3.8165 - accuracy: 0.7487
Epoch 19/30
25/25 [==============================] - 0s 6ms/step - loss: 3.8161 - accuracy: 0.7487
Epoch 20/30
25/25 [==============================] - 0s 6ms/step - loss: 3.8161 - accuracy: 0.7487
Epoch 21/30
25/25 [==============================] - 0s 8ms/step - loss: 3.8160 - accuracy: 0.7487
Epoch 22/30
25/25 [==============================] - 0s 8ms/step - loss: 3.8158 - accuracy: 0.7487
Epoch 23/30
25/25 [==============================] - 0s 7ms/step - loss: 3.8157 - accuracy: 0.7487
Epoch 24/30
25/25 [==============================] - 0s 9ms/step - loss: 3.8158 - accuracy: 0.7487
Epoch 25/30
25/25 [==============================] - 0s 6ms/step - loss: 3.8159 - accuracy: 0.7487
Epoch 26/30
25/25 [==============================] - 0s 5ms/step - loss: 3.8155 - accuracy: 0.7487
Epoch 27/30
25/25 [==============================] - 0s 4ms/step - loss: 3.8156 - accuracy: 0.7487
Epoch 28/30
25/25 [==============================] - 0s 6ms/step - loss: 3.8154 - accuracy: 0.7487
Epoch 29/30
25/25 [==============================] - 0s 6ms/step - loss: 3.8154 - accuracy: 0.7487
Epoch 30/30
25/25 [==============================] - 0s 5ms/step - loss: 3.8151 - accuracy: 0.7487
  1. Plot the classifier (decision boundary).
In [ ]:
## your code here
#
  1. What is the highest accuracy you can get? Discuss the result.
In [ ]:
## write down your discussion here

#
#
#
  1. Design a perceptron model that has 2 layers, and train it to show its accuracy.
  • Hidden layer: sigmoid function
In [ ]:
model = tf.keras.models.Sequential([
    ## your code here



])

model.summary()
Model: "sequential_1"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 dense_1 (Dense)             (None, 2)                 6         
                                                                 
 dense_2 (Dense)             (None, 1)                 3         
                                                                 
=================================================================
Total params: 9 (36.00 Byte)
Trainable params: 9 (36.00 Byte)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
In [ ]:
## your code here
#
Epoch 1/20
25/25 [==============================] - 2s 6ms/step - loss: 0.7013 - accuracy: 0.5387
Epoch 2/20
25/25 [==============================] - 0s 5ms/step - loss: 0.7047 - accuracy: 0.5038
Epoch 3/20
25/25 [==============================] - 0s 6ms/step - loss: 0.6853 - accuracy: 0.5487
Epoch 4/20
25/25 [==============================] - 0s 5ms/step - loss: 0.6251 - accuracy: 0.7462
Epoch 5/20
25/25 [==============================] - 0s 4ms/step - loss: 0.5232 - accuracy: 0.7425
Epoch 6/20
25/25 [==============================] - 0s 4ms/step - loss: 0.4311 - accuracy: 0.7500
Epoch 7/20
25/25 [==============================] - 0s 6ms/step - loss: 0.2902 - accuracy: 0.9025
Epoch 8/20
25/25 [==============================] - 0s 4ms/step - loss: 0.1552 - accuracy: 0.9987
Epoch 9/20
25/25 [==============================] - 0s 8ms/step - loss: 0.0881 - accuracy: 1.0000
Epoch 10/20
25/25 [==============================] - 0s 6ms/step - loss: 0.0580 - accuracy: 1.0000
Epoch 11/20
25/25 [==============================] - 0s 5ms/step - loss: 0.0434 - accuracy: 1.0000
Epoch 12/20
25/25 [==============================] - 0s 5ms/step - loss: 0.0336 - accuracy: 1.0000
Epoch 13/20
25/25 [==============================] - 0s 4ms/step - loss: 0.0272 - accuracy: 1.0000
Epoch 14/20
25/25 [==============================] - 0s 4ms/step - loss: 0.0231 - accuracy: 1.0000
Epoch 15/20
25/25 [==============================] - 0s 6ms/step - loss: 0.0196 - accuracy: 1.0000
Epoch 16/20
25/25 [==============================] - 0s 5ms/step - loss: 0.0168 - accuracy: 1.0000
Epoch 17/20
25/25 [==============================] - 0s 6ms/step - loss: 0.0148 - accuracy: 1.0000
Epoch 18/20
25/25 [==============================] - 0s 6ms/step - loss: 0.0132 - accuracy: 1.0000
Epoch 19/20
25/25 [==============================] - 0s 5ms/step - loss: 0.0118 - accuracy: 1.0000
Epoch 20/20
25/25 [==============================] - 0s 5ms/step - loss: 0.0108 - accuracy: 1.0000
  1. Plot two linear classification boundaries in the input space.
In [ ]:
## your code here
#
  1. Plot one linear classification boundary in the z space (i.e., the values in the hidden layer; a sketch for extracting them follows the cell below).
In [ ]:
## your code here
#
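A sketch for extracting the hidden-layer values (the z space), assuming `model` is the trained 2-layer network from above:

# sub-model that outputs the hidden-layer activations
hidden = tf.keras.Model(inputs=model.inputs, outputs=model.layers[0].output)
Z = hidden.predict(np.asarray(train_X))

plt.figure(figsize = (6, 4))
plt.plot(Z[:M,0], Z[:M,1], 'b.', alpha = 0.4, label = 'A')
plt.plot(Z[M:,0], Z[M:,1], 'r.', alpha = 0.4, label = 'B')
plt.legend(fontsize = 12)
plt.show()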

Problem 10

Here, we will solve the same problem that we covered in class.

In [ ]:
# training data generation

m = 1000
x1 = 10*np.random.rand(m, 1) - 5
x2 = 8*np.random.rand(m, 1) - 4

g = - 0.5*(x1-1)**2 + 2*x2 + 5

C1 = np.where(g >= 0)[0]
C0 = np.where(g < 0)[0]
N = C1.shape[0]
M = C0.shape[0]
m = N + M

X1 = np.hstack([x1[C1], x2[C1]])
X0 = np.hstack([x1[C0], x2[C0]])

train_X = np.vstack([X1, X0])
train_X = np.asmatrix(train_X)

train_y = np.vstack([np.ones([N,1]), np.zeros([M,1])])

plt.figure(figsize = (6, 4))
plt.plot(x1[C1], x2[C1], 'ro', alpha = 0.4, label = 'C1')
plt.plot(x1[C0], x2[C0], 'bo', alpha = 0.4, label = 'C0')
plt.legend(loc = 1, fontsize = 15)
plt.xlabel(r'$x_1$', fontsize = 15)
plt.ylabel(r'$x_2$', fontsize = 15)
plt.xlim([-5, 5])
plt.ylim([-4, 4])
plt.show()
  1. Classify the data given above in the 2D plane with a hidden layer of size 3 as shown below (the number of neurons in the output layer can be changed to one).




In [ ]:
## write your code here
#
  1. Plot the data in the Z plane (hidden layer).
In [ ]:
## write your code here
#
  1. Plot the 2D hyperplane that separates the data into C1 and C0.
In [ ]:
## write your code here
#

Problem 11

With the dataset below, you are asked to apply an ANN to a multiclass classification problem. Design your own ANN structure and plot the linear classification boundaries.

In [ ]:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

## generate three simulated clusters

mu1 = np.array([1, 7])
SIGMA1 = 0.8*np.array([[1, 1.5],
                       [1.5, 3]])
X1 = np.random.multivariate_normal(mu1, SIGMA1, 100)

mu2 = np.array([3, 4])
SIGMA2 = 0.3*np.array([[1, 0],
                       [0, 1]])
X2 = np.random.multivariate_normal(mu2, SIGMA2, 100)

mu3 = np.array([7, 5])
SIGMA3 = 0.3*np.array([[1, -1],
                       [-1, 2]])
X3 = np.random.multivariate_normal(mu3, SIGMA3, 50)

plt.figure(figsize = (6, 4))
plt.title('Generated Data', fontsize=15)
plt.plot(X1[:,0], X1[:,1], '.')
plt.plot(X2[:,0], X2[:,1], '.')
plt.plot(X3[:,0], X3[:,1], '.')
plt.xlabel('$X_1$', fontsize = 15)
plt.ylabel('$X_2$', fontsize = 15)
plt.axis('equal')
plt.grid(alpha = 0.3)
plt.axis([-2, 10, 1, 12])
plt.show()
In [ ]:
## write your code here
#

Problem 12

We are going to do 3-class classification. The class of each digit is determined by the remainder when the digit is divided by 3 (a one-line mapping sketch follows the data-loading cell).

(ex., 0 $\Rightarrow$ class 0, 1 $\Rightarrow$ class 1, 2 $\Rightarrow$ class 2, 3 $\Rightarrow$ class 0, 4 $\Rightarrow$ class 1, $\cdots$)

  1. Plot random images and their labels.
In [ ]:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

mnist_train_images = np.load('./data_files/mnist_train_images_rev.npy')
mnist_train_labels = np.load('./data_files/mnist_train_labels_rev.npy')
mnist_test_images = np.load('./data_files/mnist_test_images_rev.npy')
mnist_test_labels = np.load('./data_files/mnist_test_labels_rev.npy')

train_labels = mnist_train_labels
train_imgs = mnist_train_images
test_labels = mnist_test_labels
test_imgs = mnist_test_images
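The cell above keeps the raw labels. Assuming the `_rev` files store the original digit labels 0-9, a minimal sketch of the 3-class mapping described above is:

# class = digit mod 3 (e.g., 0 -> 0, 1 -> 1, 2 -> 2, 3 -> 0, 4 -> 1, ...)
train_labels = train_labels % 3
test_labels = test_labels % 3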
In [ ]:
## write your code here
#
  1. Make your own ANN model that classifies MNIST images into 3 classes.
In [ ]:
## write your code here
#
  1. Train your model and check its accuracy.
In [ ]:
## write your code here
#

Problem 13

In this problem, we want to conduct regression on nonlinearly distributed data using a multilayer perceptron.

Use an MLP (or ANN) to find a regression curve, and then plot it with the data (a minimal sketch follows the starter cell).

In [ ]:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.neural_network import MLPRegressor
%matplotlib inline

# 10 data points
n = 10
x = np.linspace(-4.5, 4.5, 10).reshape(-1,1)
y = np.array([0.9819, 0.7973, 1.9737, 0.1838, 1.3180, -0.8361, -0.6591, -2.4701, -2.8122, -6.2512]).reshape(-1,1)

plt.figure(figsize = (6, 4))
plt.plot(x, y, 'o', label = 'Data')
plt.xlabel('X', fontsize = 15)
plt.ylabel('Y', fontsize = 15)
plt.grid(alpha = 0.3)
plt.show()
In [ ]:
## write your code here
#
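The plotting cell below assumes a fitted regressor named `reg`. A minimal sketch using sklearn's MLPRegressor, with illustrative hidden-layer sizes, solver, and iteration budget:

# lbfgs tends to work well on tiny data sets like this one
reg = MLPRegressor(hidden_layer_sizes=(20, 20), activation='tanh',
                   solver='lbfgs', max_iter=5000, random_state=0)
reg.fit(x, y.ravel())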
In [ ]:
xp = np.arange(-5, 5, 0.01).reshape(-1, 1)
yp = reg.predict(xp)

plt.figure(figsize = (10, 8))
plt.plot(x, y, 'o', label = 'Data')
plt.plot(xp, yp, 'r-', label = 'Regression')
plt.xlabel('X', fontsize = 15)
plt.ylabel('Y', fontsize = 15)
plt.grid(alpha = 0.3)
plt.legend(fontsize=12)
plt.show()

Problem 14

Rotating Machinery Diagnosis with Logistic Regression

Mechanical systems vibrate when operating, so vibration analysis is one of the most popular and conventional ways to diagnose a mechanical system. In this problem, we are going to use the logistic regression algorithm to identify abnormal behavior of a mechanical system.

Data information

  • File format: npy

  • Information: signal, label

  • Sampling rate: 12,800 Hz

  • File length (time): 0.78 sec

  • Labels based on one-hot encoding

    • A (Normal): [1,0]
    • B (Abnormal): [0,1]

Data Download Link

In [ ]:
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import IPython.display as ipd
import scipy.stats
import scipy
%matplotlib inline

# data load
data = {
    'signal': np.load('./data_files/signal.npy'),
    'label': np.load('./data_files/label.npy')
}

idx_A = []
idx_B = []
for i in range(len(data['label'])):
    if data['label'][i] == False:
        idx_A.append(i)
    else:
        idx_B.append(i)

# one-hot encoding
data['label'] = tf.keras.utils.to_categorical(data['label'])

Run the cells below to hear the sounds of A and B.

In [ ]:
fs = 12800
signal_A = data['signal'][np.random.choice(idx_A),:]
ipd.Audio(signal_A, rate = fs)
In [ ]:
signal_B = data['signal'][np.random.choice(idx_B),:]
ipd.Audio(signal_B, rate = fs)
  1. Plot 2 randomly selected vibration signals.
In [ ]:
## Your code here
#

Now, we need to extract features from the raw signal.

  • Use the following statistical features:
    • peak, RMS, kurtosis, crest factor, impulse factor, shape factor, skewness, SMR, peak-peak

The equations are as follows:

$$\begin{align*} \text{Peak Value } &= \max_{n = 1, \dots, N} \lvert x_n \rvert \\ \\ \text{RMS } &= \sqrt{\frac{1}{N}\sum_{n=1}^N x_n^2} \\ \\ \text{Kurtosis } &= \frac{\frac{1}{N}\sum_{n=1}^N (x_n-\bar{x})^4}{\text{Var}^2} \\ \\ \text{Var } &= \frac{1}{N}\sum_{n=1}^N (x_n-\bar{x})^2 \\ \\ \text{Crest Factor } &= \frac{\text{Peak}}{\text{RMS}} \\ \\ \text{Impulse Factor } &= \frac{\text{Peak}}{\text{Mean}} \\ \\ \text{Mean } &= \frac{1}{N}\sum_{n=1}^N \lvert x_n \rvert \\ \\ \text{Shape Factor } &= \frac{\text{RMS}}{\text{Mean}} \\ \\ \text{Skewness } &= \frac{\frac{1}{N}\sum_{n=1}^N(x_n-\bar{x})^3}{\text{Var}^{3/2}} \\ \\ \text{SMR } &= \left(\frac{1}{N}\sum_{n=1}^N\sqrt{\lvert x_n \rvert} \right)^2 \\ \\ \text{Peak-Peak Value } &= \max_{n = 1, \dots, N} x_n - \min_{n = 1, \dots, N} x_n \end{align*}$$

$$x_n: \text{signal data}$$

The function for feature extraction is already prepared for you. Its input is the signal $x$ and its output is the horizontally stacked feature vector.
In [ ]:
def extfeat(x):
    fvector = []

    # time domain feature
    peak = np.max(np.abs(x))
    fvector.append(peak)

    rms = np.sqrt(np.mean(x**2))
    fvector.append(rms)

    # note: scipy returns excess (Fisher) kurtosis by default;
    # fisher=False matches the Pearson definition given above
    kurtosis = scipy.stats.kurtosis(x, fisher=False)
    fvector.append(kurtosis)

    crest_factor = fvector[0]/fvector[1]
    fvector.append(crest_factor)

    impulse_factor = fvector[0]/(np.sum(np.abs(x))/len(x))
    fvector.append(impulse_factor)

    shape_factor = fvector[1]/(np.sum(np.abs(x))/len(x))
    fvector.append(shape_factor)

    skewness = scipy.stats.skew(x)
    fvector.append(skewness)

    smr = (np.sum(np.sqrt(np.abs(x)))/len(x))**2
    fvector.append(smr)

    pp = np.max(x) - np.min(x)
    fvector.append(pp)

    return fvector

feature_name = ['Peak', 'RMS', 'Kurtosis', 'Crest Factor', 'Impulse Factor','Shape Factor','Skewness','SMR', 'Peak-Peak']
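A usage sketch: extfeat takes one raw signal and returns the 9 features in the order of feature_name. The first signal is used here purely for illustration.

fv = extfeat(data['signal'][0])
for name, value in zip(feature_name, fv):
    print('%s: %.4f' % (name, value))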
  1. Print out the features of a randomly selected signal. You can use the extfeat function above (see the usage sketch).
In [ ]:
## Your code here
#
  1. Split the data into training and test sets with your own split ratio (a sketch follows the cell below).
In [ ]:
## Your code here
#
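A minimal sketch using scikit-learn's train_test_split; the 80/20 ratio and the feature matrix built with extfeat are illustrative choices.

from sklearn.model_selection import train_test_split

X = np.array([extfeat(s) for s in data['signal']])   # one feature row per signal
train_x, test_x, train_y, test_y = train_test_split(X, data['label'],
                                                    test_size=0.2, random_state=0)
print(train_x.shape, test_x.shape)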
  1. Design your logistic regression model using an ANN and train it.
In [ ]:
## Your code here
#
  1. Print out the learned $\omega$.
In [ ]:
## Your code here
#
  1. Compute the test accuracy. You should use the learned $\omega$ to predict labels for the unseen test data.

Note: the test accuracy may be somewhat low (around 80%).

In [ ]:
## Your code here
#