Explain why the perceptron cannot solve the XOR problem.
An ANN (Artificial Neural Network) is also called an MLP (Multilayer Perceptron). Explain why the MLP is able to solve the XOR problem.
a) The training data set is not large enough. Collect a larger training data set and retrain.
b) Tune the learning rate and add a regularization term to the objective function.
c) Use a different initialization and train the network several times. Use the average of the predictions from all nets to predict the test data.
d) Use the same training data but add two more hidden layers.
a) A single perceptron can solve a linearly inseparable problem with a kernel function.
b) Gradient descent trains neural networks to the global optimum.
a) The training data set is not large enough. Collect more training data points and re-train.
b) The number of perceptron layers is too large. Remove some perceptron layers and re-train.
c) The number of perceptron layers is too small. Add more perceptron layers and re-train.
d) The learning rate is too high. Reduce the learning rate and re-train.
To deal with a nonlinearly distributed data set, we need to design an appropriate kernel that maps the original data so that it becomes linearly separable. However, in artificial neural networks, this step is not necessary. Discuss why. (Hint: see the following figure.)
Suppose a multi-layer perceptron has an input layer with 10 neurons, a hidden layer with 50 neurons, and an output layer with 3 neurons. The non-linear activation function for every neuron is ReLU. Write your answers to the following questions.
Size of input $X$?
Size of weights and biases ($W_h, b_h$) for the hidden layer?
Size of weights and biases ($W_o, b_o$) for the output layer?
Size of output $Y$?
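If you want to sanity-check your answers, the following minimal Keras sketch (not part of the original problem, and assuming fully-connected Dense layers for the stated 10-50-3 architecture) prints the weight and bias shapes of each layer.
import tensorflow as tf
# hypothetical 10-50-3 MLP, used only to inspect parameter shapes
model = tf.keras.models.Sequential([
    tf.keras.Input(shape=(10,)),                   # input X has 10 features
    tf.keras.layers.Dense(50, activation='relu'),  # hidden layer: W_h, b_h
    tf.keras.layers.Dense(3, activation='relu'),   # output layer: W_o, b_o
])
for layer in model.layers:
    W, b = layer.get_weights()
    print(layer.name, 'W:', W.shape, 'b:', b.shape)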
To train neural networks, backpropagation is used. Briefly explain what backpropagation is. In your discussion, use keywords such as recursive, memoized, dynamic programming, chain rule, etc.
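As a purely illustrative reference, the NumPy sketch below assumes a toy 2-2-1 sigmoid network with the $\frac{1}{2}$ MSE error used later in this assignment: the forward pass memorizes the intermediate activations, and the backward pass applies the chain rule layer by layer, reusing each layer's delta recursively in dynamic-programming fashion.
import numpy as np
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))
# hypothetical toy network: 2 inputs -> 2 hidden -> 1 output, sigmoid units
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 2)), np.zeros(2)
W2, b2 = rng.normal(size=(1, 2)), np.zeros(1)
x, t = np.array([1.0, 0.0]), np.array([1.0])
# forward pass: memorize the intermediate activations h and y
h = sigmoid(W1 @ x + b1)
y = sigmoid(W2 @ h + b2)
E = 0.5 * np.sum((t - y) ** 2)              # 1/2 MSE error
# backward pass: chain rule applied recursively, reusing memorized values
delta2 = (y - t) * y * (1 - y)              # gradient at the output pre-activation
dW2, db2 = np.outer(delta2, h), delta2
delta1 = (W2.T @ delta2) * h * (1 - h)      # built from delta2 (dynamic programming)
dW1, db1 = np.outer(delta1, x), delta1
print('error before update:', E)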
Build an ANN model that receives three binary-valued (i.e., $0$ or $1$) inputs $x_1, x_2, x_3$, outputs $1$ if exactly two of the inputs are $1$, and outputs $0$ otherwise. All of the units use a hard threshold activation function:
Suggest one possible set of weights and biases that correctly implements this function.
Denote by
$\mathbf{W}_{2 \times 3}$ and $\mathbf{V}_{1 \times 2}$ the weight matrices connecting the input and hidden layer, and the hidden layer and output, respectively.
$\mathbf{b}^{(1)}_{2 \times 1}$ and $\mathbf{b}^{(2)}_{1 \times 1}$ the bias vectors at the hidden layer and output, respectively.
$x_{3 \times 1}$ and $h_{2 \times 1}$ the node values at the input and hidden layer, respectively.
In this problem, we compute the gradient using the chain rule and dynamic programming, and update the weights $\omega \rightarrow \omega^{+}$. After one back-propagation update, the error is compared with the error before the update.
Neural Network Model
We use $\frac{1}{2}$ MSE for calculation convenience, e.g., $E = \frac{1}{2}\sum(\text{target} - \text{output})^2$; the factor $\frac{1}{2}$ cancels the $2$ produced when differentiating the squared term. Note that bias units are not indicated here.
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# XOR data: class A = {(0,0), (1,1)} with label 0, class B = {(0,1), (1,0)} with label 1
x_data = np.array([[0, 0], [1, 1], [0, 1], [1, 0]], dtype=np.float32)
y_data = np.array([[0], [0], [1], [1]], dtype=np.float32)
plt.figure(figsize = (8,6))
plt.scatter(x_data[:2,0], x_data[:2,1], marker='+', s=100, label='A')
plt.scatter(x_data[2:,0], x_data[2:,1], marker='x', s=100, label='B')
plt.axis('equal')
plt.ylim([-0.5, 1.5]);
plt.grid(alpha=0.15);
plt.legend();
plt.show()
## write your code here
#
Note that bias units are not indicated here, and you can use either one-hot encoding or sparse_categorical_crossentropy.
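The note above refers to two interchangeable label/loss setups. The sketch below (reusing the XOR labels from the cell above; the model.compile calls are commented out since the model is yours to build) shows what each option looks like.
import numpy as np
import tensorflow as tf
# labels from the XOR data above: two classes, 0 and 1
y_data = np.array([[0], [0], [1], [1]], dtype=np.float32)
# option 1: keep integer labels and use sparse_categorical_crossentropy
# model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# option 2: one-hot encode the labels and use categorical_crossentropy
y_onehot = tf.keras.utils.to_categorical(y_data, num_classes=2)
# model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
print(y_onehot)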
## write your code here
#
## write your code here
#
Hint: Make 2d grid points and apply the kernel.
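One possible way to follow the hint (a sketch, not the required solution): build a mesh grid over the input region, evaluate a kernel feature on every grid point, and show it as a filled contour. The feature $(x_1 - x_2)^2$ used here is just one hand-picked choice that happens to separate the XOR-style classes.
import numpy as np
import matplotlib.pyplot as plt
# 2-D grid over the input region (range and resolution are arbitrary choices)
xs = np.linspace(-0.5, 1.5, 200)
ys = np.linspace(-0.5, 1.5, 200)
XX, YY = np.meshgrid(xs, ys)
grid = np.c_[XX.ravel(), YY.ravel()]       # shape (40000, 2)
def kernel(X):
    # hypothetical feature map; (x1 - x2)^2 is 0 on class A points and 1 on class B points
    return (X[:, 0] - X[:, 1]) ** 2
Z = kernel(grid).reshape(XX.shape)
plt.figure(figsize=(6, 4))
plt.contourf(XX, YY, Z, levels=20, alpha=0.5)
plt.colorbar()
plt.show()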
## write your code here
#
## write your code here
#
You will do binary classification for nonlinearly separable data using an MLP. Plot the given data first.
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
%matplotlib inline
N = 200                     # points per cluster
M = 2*N                     # points per class
gamma = 0.01                # cluster variance
# four Gaussian clusters in an XOR-like layout
G0 = np.random.multivariate_normal([0, 0], gamma*np.eye(2), N)
G1 = np.random.multivariate_normal([1, 1], gamma*np.eye(2), N)
G2 = np.random.multivariate_normal([0, 1], gamma*np.eye(2), N)
G3 = np.random.multivariate_normal([1, 0], gamma*np.eye(2), N)
train_X = np.vstack([G0, G1, G2, G3])
# class A (label 1): G0 and G1; class B (label 0): G2 and G3
train_y = np.vstack([np.ones([M,1]), np.zeros([M,1])])
train_X = np.asmatrix(train_X)
train_y = np.asmatrix(train_y)
print(train_X.shape)
print(train_y.shape)
plt.figure(figsize = (6, 4))
plt.plot(train_X[:M,0], train_X[:M,1], 'b.', alpha = 0.4, label = 'A')
plt.plot(train_X[M:,0], train_X[M:,1], 'r.', alpha = 0.4, label = 'B')
plt.axis('equal')
plt.xlim([-1, 2]); plt.ylim([-1, 2]);
plt.grid(alpha = 0.15)
plt.legend(fontsize = 12)
plt.show()
model = tf.keras.models.Sequential([
## your code here
])
model.summary()
## your code here
#
## your code here
#
## write down your discussion here
#
#
#
model = tf.keras.models.Sequential([
## your code here
])
model.summary()
## your code here
#
## your code here
#