Optimization


By Prof. Seungchul Lee
http://iai.postech.ac.kr/
Industrial AI Lab at POSTECH

Table of Contents

0. Video Lectures

In [2]:
%%html 
<center><iframe src="https://www.youtube.com/embed/AjEGqE9UOvo?rel=0" 
width="560" height="315" frameborder="0" allowfullscreen></iframe></center>

1. Optimization

  • an important tool in 1) engineering problem solving and 2) decision science
  • to optimize: to choose the best option from a set of feasible alternatives

3 key components

  1. objective
  2. decision variable or unknown
  3. constraints

Procedures

  1. The process of identifying objective, variables, and constraints for a given problem is known as "modeling"
  2. Once the model has been formulated, an optimization algorithm can be used to find its solution.

In mathematical expression


$$\begin{align*} \min_{x} \quad &f(x) \\ \text{subject to} \quad &g_i(x) \leq 0, \qquad i=1,\cdots,m \end{align*} $$

$\;\;\; $where

  • $x=\begin{bmatrix}x_1 \\ \vdots \\ x_n\end{bmatrix} \in \mathbb{R}^n$ is the decision variable
  • $f: \mathbb{R}^n \rightarrow \mathbb{R}$ is an objective function
  • Feasible region: $\mathcal{C} = \{x: g_i(x) \leq 0, \quad i=1, \cdots,m\}$



Remarks) equivalent formulations

$$\begin{align*} \min_{x} f(x) \quad&\leftrightarrow \quad \max_{x} -f(x)\\ \quad g_i(x) \leq 0\quad&\leftrightarrow \quad -g_i(x) \geq 0\\ h(x) = 0 \quad&\leftrightarrow \quad \begin{cases} h(x) \leq 0 \quad \text{and} \\ h(x) \geq 0 \end{cases} \end{align*} $$
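
As a small illustration of these equivalences (an example constructed here for clarity), a maximization problem with an equality constraint can be rewritten in the standard minimization form:

$$\begin{align*} \max_{x} \; -\left(x_1^2 + x_2^2\right) \;\; \text{subject to} \;\; x_1 + x_2 = 1 \quad \leftrightarrow \quad \min_{x} \; x_1^2 + x_2^2 \;\; \text{subject to} \;\; \begin{cases} x_1 + x_2 - 1 \leq 0 \\ -(x_1 + x_2 - 1) \leq 0 \end{cases} \end{align*}$$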

2. Solving Optimization Problems

  • Starting with the unconstrained, one-dimensional case



  • To find the minimum point $x^*$, we can look at the derivative of the function, $f'(x)$:
    • Any location where $f'(x) = 0$ is a "flat" (stationary) point of the function
  • For convex problems, this is guaranteed to be a minimum
  • Generalization for multivariate function $f:\mathbb{R}^n \rightarrow \ \mathbb{R}$

    • The gradient of $f$ must be zero
$$ \nabla _x f(x) = 0$$
  • The gradient is an $n$-dimensional vector containing the partial derivatives with respect to each dimension


$$ x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} \quad \quad \quad \quad \nabla _x f(x) = \begin{bmatrix} \frac{\partial f(x)}{\partial x_1} \\ \vdots\\ \frac{\partial f(x)}{\partial x_n} \end{bmatrix} $$


  • For continuously differentiable $f$ and unconstrained optimization, the optimal point must satisfy $\nabla _x f(x^*)=0$

2.1. Analytic Approach

  • Direct solution

    • In some cases, it is possible to analytically compute $x^*$ such that $ \nabla _x f(x^*)=0$


$$ \begin{align*} f(x) &= 2x_1^2+ x_2^2 + x_1 x_2 -6 x_1 -5 x_2\\\\ \Longrightarrow \nabla _x f(x) &= \begin{bmatrix} 4x_1+x_2-6\\ 2x_2 + x_1 -5 \end{bmatrix} = \begin{bmatrix}0\\0 \end{bmatrix}\\\\ \therefore x^* &= \begin{bmatrix} 4 & 1\\ 1 & 2 \end{bmatrix} ^{-1} \begin{bmatrix} 6 \\ 5\\ \end{bmatrix} = \begin{bmatrix} 1 \\ 2\\ \end{bmatrix} \end{align*} $$
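
This solution can be checked numerically by solving the linear system $\nabla_x f(x) = 0$ with NumPy (a minimal sketch; the matrix and vector below simply restate the system above):

In [ ]:
import numpy as np

# gradient condition:  [[4, 1], [1, 2]] x = [6, 5]
A = np.array([[4, 1],
              [1, 2]])
b = np.array([[6],
              [5]])

x_star = np.linalg.solve(A, b)
print(x_star)       # expected: [[1.], [2.]]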

  • Note: Matrix derivatives

Examples

  • Affine function $g(x) = a^Tx + b$
$$\nabla g(x) = a$$
  • Quadratic function $g(x) = x^T P x + q^T x + r,\qquad P = P^T$
$$\nabla g(x) = 2Px + q$$
  • $g(x) = \lVert Ax - b \rVert ^2 = x^TA^TAx - 2b^TAx + b^Tb$
$$\nabla g(x) = 2A^TAx-2A^Tb$$
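
These identities can be sanity-checked by comparing the analytic gradient with a finite-difference approximation (a minimal sketch; $A$, $b$, and $x$ below are arbitrary random data used only for the check):

In [ ]:
import numpy as np

np.random.seed(0)
A = np.random.randn(5, 3)
b = np.random.randn(5, 1)
x = np.random.randn(3, 1)

g = lambda x: np.linalg.norm(A @ x - b)**2

# analytic gradient from the identity above
grad_analytic = 2*A.T @ A @ x - 2*A.T @ b

# central finite-difference approximation
eps = 1e-6
grad_fd = np.zeros((3, 1))
for i in range(3):
    e = np.zeros((3, 1))
    e[i] = eps
    grad_fd[i] = (g(x + e) - g(x - e)) / (2*eps)

print(np.allclose(grad_analytic, grad_fd, atol=1e-4))   # True if the identity holds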



Note: Revisit Least-Square Solution of $J(x) = \lVert Ax - y \rVert ^2$


$$ \begin{align*} J(x) &= (Ax-y)^T(Ax-y)\\ &=(x^TA^T - y^T)(Ax - y)\\ &=x^TA^TAx - x^TA^Ty - y^TAx + y^Ty\\\\ \frac{\partial J}{\partial x} &= A^TAx + (A^TA)^Tx - A^Ty - (y^TA)^T \\ &= 2A^TAx - 2A^Ty = 0\\\\ &\Rightarrow (A^TA)x = A^Ty\\\\ \therefore x^* &= (A^TA)^{-1}A^Ty \end{align*} $$
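
The closed-form solution can also be verified against NumPy's least-squares routine (a minimal sketch with small example data chosen for illustration):

In [ ]:
import numpy as np

A = np.array([[1., 1.],
              [1., 2.],
              [1., 3.]])
y = np.array([[1.],
              [2.],
              [2.]])

# normal-equation solution  x* = (A^T A)^{-1} A^T y
x_star = np.linalg.inv(A.T @ A) @ A.T @ y

# NumPy's built-in least-squares solver
x_lstsq, *_ = np.linalg.lstsq(A, y, rcond=None)

print(x_star.ravel())     # both approaches give the same least-squares solution
print(x_lstsq.ravel())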

2.2. Iterative Approach

  • Iterative methods

    • More commonly, the condition that the gradient equals zero has no analytical solution, requiring iterative methods





  • The gradient points in the direction of "steepest ascent" for function $f$

2.2.1. Gradient Descent

  • This motivates the gradient descent algorithm, which repeatedly takes steps in the direction of the negative gradient


$$ x \leftarrow x - \alpha \nabla _x f(x) \quad \quad \text{for some step size } \alpha > 0$$



  • Gradient Descent
$$\text{Repeat : } x \leftarrow x - \alpha \nabla _x f(x) \quad \quad \text{for some step size } \alpha > 0$$
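
As a concrete illustration of this update rule, here is a minimal sketch on a simple one-dimensional convex function, $f(x) = (x-2)^2$ (chosen here only for illustration):

In [ ]:
f = lambda x: (x - 2)**2      # simple convex objective
df = lambda x: 2*(x - 2)      # its derivative

x = 0.0                       # initial guess
alpha = 0.2                   # step size

for k in range(20):
    x = x - alpha*df(x)       # gradient descent update

print(x, f(x))                # x approaches the minimizer x* = 2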



  • Gradient Descent in Higher Dimension
$$\text{Repeat : } x \leftarrow x - \alpha \nabla _x f(x)$$






2.2.2. Choosing Step Size

  • Learning rate: the step size $\alpha$ is also called the learning rate; its value determines how fast (or whether) the iterates converge, as the sketch below illustrates
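
If $\alpha$ is too small, convergence is slow; if it is too large, the iterates overshoot the minimum and can diverge. A minimal sketch on the same one-dimensional function $f(x) = (x-2)^2$ (the step sizes below are chosen only for illustration):

In [ ]:
df = lambda x: 2*(x - 2)              # derivative of f(x) = (x - 2)^2

def run_gd(alpha, x0=0.0, iters=20):
    x = x0
    for k in range(iters):
        x = x - alpha*df(x)           # gradient descent update
    return x

for alpha in [0.01, 0.2, 1.1]:        # too small, moderate, too large
    print(alpha, run_gd(alpha))       # 0.01: slow progress, 0.2: near x* = 2, 1.1: diverges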



2.2.3. Where Will We Converge?




  • Random initialization
  • Multiple trials
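
For non-convex objectives, gradient descent converges to a local minimum that depends on the initial point, which is why random initialization and multiple trials are used in practice. A minimal sketch on an illustrative non-convex one-dimensional function:

In [ ]:
import numpy as np

f = lambda x: x**4 - 3*x**2 + x       # non-convex, with two local minima
df = lambda x: 4*x**3 - 6*x + 1       # its derivative

np.random.seed(1)
for trial in range(5):
    x = np.random.uniform(-2, 2)      # random initialization
    for k in range(200):
        x = x - 0.01*df(x)            # gradient descent update
    print(f"trial {trial}: x = {x:.3f}, f(x) = {f(x):.3f}")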

Example


$$ \begin{align*} \min& \quad (x_1-3)^{2} + (x_2-3)^{2}\\\\ =\min& \quad \frac{1}{2} \begin{bmatrix} x_1 & x_2 \end{bmatrix} \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} - \begin{bmatrix} 6 & 6 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + 18 \end{align*} $$



$$ \begin{align*} f &= \frac{1}{2}X^THX + g^TX \\ \nabla f &= HX+g \end{align*} $$
  • update rule
$$ X_{i+1} = X_{i} - \alpha_i \nabla f(X_i)$$
In [1]:
import numpy as np
In [3]:
x =  np.zeros((2,1))
print(x)
[[0.]
 [0.]]
In [8]:
H = np.matrix([[2, 0],[0, 2]])   # Hessian of the quadratic objective
g = -np.matrix([[6],[6]])        # linear term

x = np.zeros((2,1))              # initial guess
alpha = 0.1                      # step size (learning rate)

for i in range(10):
    df = H*x + g                 # gradient of f at the current point
    x = x - alpha*df             # gradient descent update

print(x)
[[2.67787745]
 [2.67787745]]
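
As a check, the iterative result can be compared with the closed-form solution of $\nabla f = HX + g = 0$, i.e., $X^* = -H^{-1}g$ (a short sketch reusing the $H$ and $g$ defined in the cell above):

In [ ]:
# closed-form minimizer from the gradient condition HX + g = 0
x_star = np.linalg.solve(H, -g)
print(x_star)      # expected: [[3.], [3.]]; the gradient descent iterates approach this point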
In [2]:
%%javascript
$.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js')