Machine Learning for Mechanical Engineering

Optimization

Prof. Seungchul Lee
http://iailab.kaist.ac.kr/
Industrial AI Lab at KAIST
• For your handwritten solutions, please scan or take a picture of them. Alternatively, you can write them in markdown if you prefer.

• Ensure that your NAME and student ID are included in your .ipynb files. ex) SeungchulLee_20241234_HW01.ipynb
• Compress all the files into a single .zip file.

• In the .zip file's name, include your NAME and student ID.

ex) SeungchulLee_20241234_HW01.zip

• Submit this .zip file on KLMS.
• Do not submit a printed version of your code, as it will not be graded.

# Problem 1¶

A healthy diet contains $m$ different nutrients in quantities at least equal to $b_1, \cdots, b_m$. We can compose such a diet by choosing nonnegative quantities $x_1, \cdots, x_n$ of $n$ different foods. One unit quantity of food $j$ contains an amount $a_{ij}$ of nutrient $i$ and has a cost of $c_j$. We want to determine the cheapest diet that satisfies the nutritional requirements. This problem can be formulated as the following linear program:

\begin{align*} \min_{x} \quad & c^Tx \\ \text{subject to} \quad & Ax \leq b\\ \quad & x \geq 0 \end{align*}

Find $c$, $A$, and $b$.

# Problem 2¶

Find the solution $D$ of the optimization problem with the following objective function and constraints. Here, $\lVert \cdot \rVert_2$ denotes the Euclidean norm (or distance).

(Hint: you do not have to find the optimal value of $Z$)

\begin{align*} D= \min_Z \;\; & \lVert X_1 - Z \rVert_2 + \lVert X_2 - Z \rVert_2 \\ \\ \text{subject to} \; & Z \in \text{span} \{V_1, V_2\} \end{align*}

# Problem 3¶

(a) Find the solution by taking derivatives of the following objective function:

$$\min_{x} \,\,\, (2x_1 - 1)^2 + (-x_1 + x_2)^2 + (2x_2 +1)^2$$

(b) Formulate the corresponding least-squares problem (i.e., find one possible choice of $A$ and $b$), and solve it using cvxpy.

$$\min_{x} \, \lVert Ax - b\rVert_2 ^2$$
In [ ]:
# write your code here
#


# Problem 4¶

Solve the following optimization problem:

\begin{align*} \min &\;\; \sqrt[4]{(x_1 - 3x_2 - 2x_3+ 5)^2 + (4x_1 + 2x_2 +3x_3 - 8)^2}\\\\ \text{subject to } &\;\; 2x_1 +3x_2 + x_3 \geq 32 \\ &\;\; x_1 + x_2 \leq 10 \\ &\;\; x_2 + x_3 \leq 27 \\ &\;\; x_3 + x_1 \leq 20 \\ &\;\; x_1, x_2, x_3 \geq 0 \end{align*}

(a) Rewrite the objective function in matrix form.

In [ ]:
import cvxpy as cvx
import numpy as np

# f =
# H =

# A =
# b =
# lb =


(b) Solve the optimization problem using cvxpy.

In [ ]:
# write your code here
#


(c) Find the minimum point of the objective function.

In [ ]:
# write your code here
# x_1 =
# x_2 =
# x_3 =
# f_min =


# Problem 5¶

In this problem we use least-squares to fit a circle to $m$ given points $(u_i, v_i)$ in the plane, as shown in the figure.

We use $(u_c, v_c)$ to denote the center of the circle and $R$ for its radius. A point $(u,v)$ is on the circle if $(u-u_c)^2 + (v-v_c)^2 = R^2$. We can therefore formulate the fitting problem as

$$\text{minimize } \sum_{i=1}^{m} \left( (u_i-u_c)^2 + (v_i-v_c)^2 - R^2 \right)^2$$

with variables $u_c$, $v_c$, and $R$.

(a) Show that this can be written as a linear least-squares problem if we make a change of variables and use as variables $u_c, v_c$ and $\omega = u_c^2 + v_c^2 -R^2$.

(b) Find $A$, $b$, and $x$ in the equivalent linear least-squares formulation.

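To see the change of variables in action, here is a numpy sketch on synthetic data. The circle center, radius, noise level, and the particular $A$, $b$ construction below are illustrative assumptions, not the required written answer.

```python
import numpy as np

# Synthetic data: noisy points near a circle of center (1, 2), radius 3.
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 50)
u = 1 + 3 * np.cos(t) + 0.01 * rng.standard_normal(50)
v = 2 + 3 * np.sin(t) + 0.01 * rng.standard_normal(50)

# With w = uc^2 + vc^2 - R^2, each residual
#   (u - uc)^2 + (v - vc)^2 - R^2 = u^2 + v^2 - 2*u*uc - 2*v*vc + w
# is linear in (uc, vc, w), so one possible choice is
# rows of A = [2u, 2v, -1] and b = u^2 + v^2.
A = np.column_stack([2 * u, 2 * v, -np.ones_like(u)])
b = u**2 + v**2
uc, vc, w = np.linalg.lstsq(A, b, rcond=None)[0]
R = np.sqrt(uc**2 + vc**2 - w)

print(uc, vc, R)  # close to (1, 2, 3)
```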
# Problem 6¶

Solve the following optimization problem:

\begin{align*} \text{minimize} \quad & \frac{1}{2}x^TPx+q^Tx+r \\ \text{subject to} \quad & -1 \leq x_i \leq 1, \quad i = 1,2,3,\\ \end{align*}

where

$$P = \begin{bmatrix} 13 & 12 & -2\\ 12 & 17 & 6\\ -2 & 6 & 12\end{bmatrix} , \quad\quad q = \begin{bmatrix} -22.0\\ -14.5\\ 13.0\end{bmatrix} , \quad\quad r = 1.$$

(a) Show that $x^* = (1,\frac{1}{2},-1)$ is optimal for the optimization problem.

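One way to check part (a) numerically is to evaluate the gradient $\nabla f(x^*) = Px^* + q$ and verify the sign conditions at the active bounds; a minimal numpy sketch:

```python
import numpy as np

P = np.array([[13.0, 12.0, -2.0],
              [12.0, 17.0,  6.0],
              [-2.0,  6.0, 12.0]])
q = np.array([-22.0, -14.5, 13.0])

x_star = np.array([1.0, 0.5, -1.0])
g = P @ x_star + q  # gradient of (1/2) x^T P x + q^T x + r at x*

print(g)  # [-1.  0.  2.]
# Optimality conditions for the box -1 <= x_i <= 1:
#   x_i = +1 (upper bound active): g_i <= 0   (here g_1 = -1)
#   -1 < x_i < 1 (no bound active): g_i  = 0  (here g_2 =  0)
#   x_i = -1 (lower bound active): g_i >= 0   (here g_3 =  2)
```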
(b) Solve the above optimization problem using cvxpy.

In [ ]:
# write your code here
#


# Problem 7 (Steepest Descent Method)¶

Consider the following optimization problem:

\begin{align*} \min \quad f(X) & = \frac{1}{2}X^TAX-b^TX \qquad (\text{where}\; A = A^T)\\ \end{align*}

Here is the steepest descent algorithm:

\begin{align*} &\text{Steepest Descent Algorithm} \; \\ &1 \;\;\; \text{Initialize at } X_0, \text{ and set } i \leftarrow 0\\ &2 \;\;\; \text{At iteration } i\\ &3 \;\;\;\;\;\; d_i = \nabla f(X_i) = AX_i - b. \; \text{If} \; d_i = 0 \; \text{then stop.} \\ &4 \;\;\;\;\;\; \text{Choose step-size }\; \alpha_i = \frac{d_i^T d_i}{d_i^T A d_i} \\ &5 \;\;\;\;\;\; \text{Update } X_{i+1} \leftarrow X_i - \alpha_i d_i \\ \end{align*}

(a) Show that $\alpha_i = \frac{d_i^T d_i}{d_i^T A d_i}$ is the best step-size.

(b) Determine the minimum of

$$f(x,y) = x^2+y^2+2x+4$$

$\quad \,$Use $X_0 = \begin{bmatrix} x_0 \\ y_0 \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \end{bmatrix}$ as an initial point and apply the steepest descent algorithm by hand.

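A minimal numpy sketch of the algorithm applied to part (b)'s function, assuming the identification $A = 2I$, $b = (-2, 0)^T$ (the additive constant 4 does not affect the minimizer):

```python
import numpy as np

# f(x, y) = x^2 + y^2 + 2x + 4 written as (1/2) X^T A X - b^T X + 4
# gives A = 2*I and b = (-2, 0).
A = 2 * np.eye(2)
b = np.array([-2.0, 0.0])

def f(X):
    return 0.5 * X @ A @ X - b @ X + 4

X = np.array([2.0, 1.0])           # initial point X_0
for _ in range(100):
    d = A @ X - b                  # gradient at X_i
    if np.allclose(d, 0):
        break
    alpha = (d @ d) / (d @ A @ d)  # exact line-search step size
    X = X - alpha * d              # update X_{i+1}

print(X, f(X))  # converges to (-1, 0) with f = 3
```

Because $A$ is a multiple of the identity here, the exact line search reaches the minimizer in a single step; verify this against your hand computation.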
# Problem 8¶

A linear program is in canonical form if it is of the form:

\begin{align*} \text{minimize} \quad &c^Tx\\ \text{subject to} \quad & Ax\leq b\\ \quad & \;\;\, x\geq 0\\ \quad & \;\;\, x\in S\\\\ \end{align*}

A linear program is in standard form if it is of the form:

\begin{align*} \text{minimize} \quad &c^Tx\\ \text{subject to} \quad & Ax = b\\ \quad & \;\;\, x\geq 0\\ \quad & \;\;\, x\in S\\\\ \end{align*}

(a) Show that a linear program in canonical form can be transformed into one in standard form.

\begin{align*} Ax \leq b, \; x\geq 0 \quad \implies \quad \begin{bmatrix} A & I \end{bmatrix} \begin{bmatrix} x\\ \omega \end{bmatrix}=b,\quad \begin{bmatrix} x\\ \omega \end{bmatrix} \geq 0 \end{align*}

(Hint: $\omega$ is a vector of slack variables.)

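A quick numpy sanity check of the slack-variable construction, with a hypothetical $A$, $b$, and feasible $x$:

```python
import numpy as np

# Hypothetical canonical-form data with a feasible point x.
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([10.0, 12.0])
x = np.array([2.0, 3.0])           # A @ x = [8, 9] <= b

# Slack variables turn the inequalities into equalities.
w = b - A @ x                      # w = [2, 3] >= 0
A_std = np.hstack([A, np.eye(2)])  # [A  I]
z = np.concatenate([x, w])         # [x; w]

print(A_std @ z)  # equals b
```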
# Problem 9¶

Python provides the `linprog` function in the `scipy.optimize` module to solve linear programs of the following form.

\begin{align*} \min \quad &c^Tx\\ \text{subject to} \quad &A_{ub}x \leq b_{ub}\\ &A_{eq}x = b_{eq}\\ & l \leq x \leq u \end{align*}

(a) Solve the following optimization problem using `linprog`:

\begin{align*} \max \quad & x_1 + x_2 \\\\ \text{subject to} \quad & 2x_1 + x_2 \leq 29 \\ & x_1 + 2x_2 \leq 25 \\ & x_1 \geq 2 \\ & x_2 \geq 5 \end{align*}
In [ ]:
import numpy as np
from scipy.optimize import linprog

#

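As a template (a toy LP, not the problem above): `linprog` minimizes, so a maximization is handled by negating $c$ and negating the optimal value afterwards.

```python
import numpy as np
from scipy.optimize import linprog

# Toy example: maximize x + 2y
# subject to x + y <= 4, x <= 3, x, y >= 0.
c = np.array([-1.0, -2.0])         # negate to maximize
A_ub = np.array([[1.0, 1.0],
                 [1.0, 0.0]])
b_ub = np.array([4.0, 3.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimum at (0, 4) with value 8
```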

# Problem 10¶

(a) Show that the given constrained optimization problem can be transformed into an unconstrained one by the quadratic penalty method:

\begin{align*} \text{minimize} & \quad f(x) & \text{minimize} & \quad f(x) + \sum_{i=1}^{l} \gamma_i\{\max(0, g_i(x))\}^2 + \sum_{i=1}^{m} \mu_i \{h_i(x)\}^2\\ \text{subject to} & \quad g(x) \leq 0 \quad \qquad \implies \quad & \text{subject to} & \quad x \in S\\ &\quad h(x) = 0\\ &\quad x\in S \end{align*}

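A toy 1-D illustration of the quadratic penalty method (the problem below is an illustrative assumption, not part of the exercise): minimize $x^2$ subject to $x \geq 1$, i.e. $g(x) = 1 - x \leq 0$. The penalized minimizer is $\gamma/(1+\gamma)$, which approaches the true solution $x = 1$ as $\gamma$ grows.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Penalized objective: x^2 + gamma * max(0, 1 - x)^2.
def penalized(x, gamma):
    return x**2 + gamma * max(0.0, 1.0 - x)**2

for gamma in [1.0, 10.0, 100.0, 1000.0]:
    res = minimize_scalar(lambda x: penalized(x, gamma))
    print(gamma, res.x)  # minimizer approaches 1 as gamma increases
```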
(b) Show that the given multi-objective optimization problem can be transformed into a single-objective one by goal programming:

\begin{align*} \min \quad f_1(x), f_2(x), \cdots, f_m(x) \qquad \Longrightarrow \quad \qquad \min \quad &\beta \\\\ \text{subject to}\quad &f_1(x), f_2(x),\cdots, f_m(x)\leq \beta \end{align*}
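
A toy illustration of this epigraph transformation (illustrative objectives, not from the exercise): minimize $\max(f_1, f_2)$ with $f_1(x) = x$ and $f_2(x) = 1 - x$, solved as an LP in the variables $(x, \beta)$.

```python
import numpy as np
from scipy.optimize import linprog

# Minimize beta subject to f_i(x) <= beta, variables z = [x, beta].
c = np.array([0.0, 1.0])
A_ub = np.array([[ 1.0, -1.0],   #      x - beta <= 0
                 [-1.0, -1.0]])  # -x - beta <= -1, i.e. 1 - x <= beta
b_ub = np.array([0.0, -1.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)  # x = 0.5, beta = 0.5
```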