Linear Programming (LP)


By Prof. Seungchul Lee
http://iailab.kaist.ac.kr/
Industrial AI Lab at KAIST

Table of Contents


1. Linear Programming (LP)
2. More Examples
3. How to Solve LP (Optional)
4. Simplex Algorithm In Detail

In [ ]:
from IPython.display import YouTubeVideo
YouTubeVideo('maFgxXOWO9k', width = "560", height = "315")
Out[ ]:

Standard Form


$$ \begin{align*} \text{minimize} \quad &c^Tx\\ \text{subject to}\quad & Ax=b\\ &x \geq0 \\ &x \in \mathbb{R}^n \end{align*} $$


Canonical Form


$$ \begin{align*} \text{minimize} \quad &c^Tx\\ \text{subject to}\quad & Ax \leq b\\ &x \geq0 \\ &x \in \mathbb{R}^n \end{align*} $$


1.1. From Canonical to Standard Form Transformation

Start with an Example


$$ \begin{align*} \text{minimize} \quad &-x_1\\ \text{subject to} \quad & x_1 + x_2 \leq 1.5\\ & x_1, x_2 \geq 0\\ & x_1, x_2 \in \mathbb{R} \end{align*} $$


  • Graphical approach

  • Equivalent Transformation to Standard Form

$$ \begin{align*} \text{minimize} \quad &-x_1\\ \text{subject to} \quad & x_1 + x_2 + \color{red}{\omega} = 1.5\\ & x_1, x_2, \color{red}{\omega} \geq 0\\ & x_1,x_2, \color{red}{\omega} \in \mathbb{R} \end{align*} $$


  • Slack variable $\omega$

  • Graphical approach (Dimension increased)


Canonical → Standard Form Transformation


$$\begin{align*} \text{minimize}\quad & c^Tx & \text{minimize} \quad & c^Tx\\ \text{subject to} \quad & Ax \leq b \quad \qquad \Longrightarrow & \quad \text{subject to} \quad & Ax=b\\ \quad & x \geq 0 && x \geq 0\\ \quad & x \in S && x\in S\\\\ \end{align*}$$


$$ \begin{align*} Ax \leq b, \quad x\geq 0 \quad \Longrightarrow \quad \begin{bmatrix} A & I \end{bmatrix} \begin{bmatrix} x\\ \color{red}{\omega} \end{bmatrix}=b,\quad \begin{bmatrix} x\\ \color{red}{\omega} \end{bmatrix} \geq 0 \end{align*}$$
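As a quick numerical sketch (added here for illustration, not part of the original notes), the transformation simply appends an identity block to $A$:

In [ ]:
## A minimal sketch: canonical -> standard form by appending slack variables,
## i.e., building [A | I] for the toy constraint x1 + x2 <= 1.5

import numpy as np

A = np.array([[1., 1.]])            # canonical constraint: x1 + x2 <= 1.5
b = np.array([1.5])

m, _ = A.shape
A_std = np.hstack([A, np.eye(m)])   # [A | I]: x1 + x2 + w = 1.5

print(A_std)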


The Procedure of Optimization

  1. The process of identifying the objective function, variables, and constraints for a given problem (known as "modeling")

  2. Once the model has been formulated, an optimization algorithm can be used to find its solution

Modeling Example

  • A manufacturer produces two parts, P and Q, with machines A and B.
  • Time to produce each unit of P
    • on machine A: 50 min
    • on machine B: 30 min
  • Time to produce each unit of Q
    • on machine A: 24 min
    • on machine B: 33 min
  • Working plan for a week
    • 40 hrs of work on machine A
    • 35 hrs of work on machine B
  • The week starts with
    • stock of 30 units of P
    • stock of 90 units of Q
    • demand of 75 units of P
    • demand of 95 units of Q

Question: how should production be planned so that the week ends with the maximum stock?


Solution

Define decision variables

  • $x$ = units of P to be produced
  • $y$ = units of Q to be produced

Optimization Problem (strictly speaking, this is an integer program (IP))


$$\begin{align*} \max \quad &(30 + x - 75) + (90 + y - 95) \\\\ \text{subject to} \quad & 50x + 24y \leq 40 \times 60 \\ & 30x + 33y \leq 35 \times 60 \\ & x \geq 75 - 30 \\ & y \geq 95 - 90 \\ & \color{red}{x,y \in \mathbb{Z}} \end{align*}$$
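Although the model above is an integer program, a quick sketch (not in the original notes) of its LP relaxation with linprog already suggests the plan; the fractional optimum can then be rounded to a feasible integer plan:

In [ ]:
## LP relaxation of the production planning model (a sketch; the true problem is an IP)

import numpy as np
from scipy.optimize import linprog

# maximize (30 + x - 75) + (90 + y - 95) = x + y - 50  =>  minimize -(x + y)
c = [-1, -1]

A_ub = [[50, 24],     # machine A: 50x + 24y <= 40*60
        [30, 33]]     # machine B: 30x + 33y <= 35*60
b_ub = [40*60, 35*60]

bounds = [(75 - 30, None), (95 - 90, None)]   # x >= 45, y >= 5

res = linprog(c, A_ub = A_ub, b_ub = b_ub, bounds = bounds)

print("plan (x, y):", res.x)                  # LP optimum: x = 45, y = 6.25 (fractional)
print("end-of-week stock:", -res.fun - 50)
# rounding y down to 6 keeps both machine constraints satisfied,
# giving a feasible integer plan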

1.2. Solving LP using Python

In [ ]:
from IPython.display import YouTubeVideo
YouTubeVideo('maFgxXOWO9k?si=5eWT-ojNbd2Bi-Ea&start=801', width = "560", height = "315")
Out[ ]:

linprog in scipy.optimize

linprog solves linear programming problems of the following form:


$$\begin{align*} \min_{x} \quad & c^Tx \\ \text{subject to} \quad & A_{ub}x \leq b_{ub}\\ & A_{eq}x = b_{eq}\\ & l \leq x \leq u \end{align*} $$


Now we no longer need to worry about whether the problem is in standard or canonical form, as linprog can handle both.


Let’s consider the following LP example.


$$\begin{align*} \min \quad &-5x_1-4x_2-6x_3\\\\ \text{subject to} \quad & x_1-x_2+x_3 \leq 20\\ &3x_1+2x_2+4x_3 \leq 42\\ &3x_1+2x_2 \leq 30\\ &0 \leq x_1\\ &0 \leq x_2\\ &0 \leq x_3\\ \end{align*}$$


In [ ]:
## linprog coding example with bounds

import numpy as np
from scipy.optimize import linprog

# Define the objective function coefficients
c = -np.array([5, 4, 6])

# Define the inequality constraint matrix and RHS
A_ub = np.array([[1, -1, 1],
                 [3, 2, 4],
                 [3, 2, 0]])

b_ub = np.array([20, 42, 30])

bounds = [(0, None), (0, None), (0, None)]

# Solve the linear program
result = linprog(c, A_ub = A_ub, b_ub = b_ub, bounds = bounds)

# Print the results
print("Optimal value:", result.fun)
print("Optimal x:", result.x)
Optimal value: -78.0
Optimal x: [ 0. 15.  3.]
In [ ]:
## linprog coding example without bounds

import numpy as np
from scipy.optimize import linprog

# Define the objective function coefficients
c = -np.array([5, 4, 6])

# Define the inequality constraint matrix and RHS
A_ub = np.array([[1, -1, 1],
                 [3, 2, 4],
                 [3, 2, 0],
                 [-1, 0, 0],
                 [0, -1, 0],
                 [0, 0, -1]])

b_ub = np.array([20, 42, 30, 0, 0, 0])

# Solve the linear program
result = linprog(c, A_ub = A_ub, b_ub = b_ub)

# Print the results
print("Optimal value:", result.fun)
print("Optimal x:", result.x)
Optimal value: -78.0
Optimal x: [ 0. 15.  3.]
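(Note that linprog already uses default bounds of $(0, \text{None})$, i.e., $x \geq 0$, for every decision variable, so the explicit $-x_i \leq 0$ rows above simply illustrate how nonnegativity can alternatively be folded into $A_{ub}$.)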

CVXPY

We can also use CVXPY since linear programming is a subset of convex optimization problems.

In [ ]:
import cvxpy as cp
import numpy as np

# Define the variables
x = cp.Variable([3, 1])

# Define the objective function
f = np.array([-5, -4, -6])
objective = cp.Minimize(f @ x)

# Define the constraints
A = np.array([[1, -1, 1],
              [3, 2, 4],
              [3, 2, 0]])
b = np.array([[20], [42], [30]])

constraints = [A @ x <= b, x >= 0]

# Create and solve the problem
prob = cp.Problem(objective, constraints)
prob.solve()

# Print the results
print("Optimal value:", prob.value)
print("Optimal x:", x.value)
Optimal value: -77.9999999850724
Optimal x: [[2.74980481e-10]
 [1.50000000e+01]
 [3.00000000e+00]]

2. More Examples

In [ ]:
from IPython.display import YouTubeVideo
YouTubeVideo('maFgxXOWO9k?si=gEBtrBZkUHLR7sdx&start=1249', width = "560", height = "315")
Out[ ]:

Now we know what linear programming (LP) is and how to find solutions using numerical computations. The next step, of course, is to explore the theory behind the algorithm. However, let's first look at more examples, and then examine the steps of solving LP problems in detail.

2.1. Example 1


$$ \begin{align*} \max \; & \; 3x_1 + \frac{3}{2}x_2 \quad \quad \leftarrow \text{objective function}\\ \\ \text{subject to} \; & -1 \leq x_1 \leq 2 \quad \leftarrow \text{constraints}\\ & \quad 0 \leq x_2 \leq 3 \end{align*} $$

Method 1: graphical approach


$$ 3 x_1 + 1.5 x_2 = C \qquad \Rightarrow \qquad x_2 = -2 x_1 + \frac{2}{3}C $$




Method 2: linprog in Python

  • Need to convert to the form accepted by linprog

$$ \begin{align*} \min \quad &c^Tx\\ \text{subject to} \quad &A_{ub}x \leq b_{ub}\\ &A_{eq}x = b_{eq}\\ & l \leq x \leq u \end{align*} $$



$$ \begin{array}{Icr}\begin{align*} \min \quad - & 3x_1 - 1.5 x_2\\ \\ \text{subject to} \quad - & 1 \leq x_1 \leq 2\\ & 0 \leq x_2 \leq 3\\ \end{align*}\end{array} \qquad \implies \qquad \begin{array}{I} \quad \quad \min \quad \begin{bmatrix} -3\\ -1.5 \end{bmatrix}^T \begin{bmatrix} x_1\\ x_2 \end{bmatrix}\\ \\ \text{subject to}\quad \begin{bmatrix} -1\\ 0 \end{bmatrix} \leq \begin{bmatrix} x_1\\ x_2 \end{bmatrix} \leq \begin{bmatrix} 2\\ 3 \end{bmatrix} \end{array} $$

In [ ]:
from scipy.optimize import linprog

c = [-3, -1.5]

x1_bounds = (-1, 2)
x2_bounds = (0, 3)

res = linprog(c, bounds = [x1_bounds, x2_bounds])

print(res.x)
print(res.fun)
[2. 3.]
-10.5

Method 3: CVXPY


$$ \begin{array}{Icr}\begin{align*} \max_{x} \quad & 3x_1 + {3 \over 2}x_2 \\ \text{subject to} \quad & -1 \leq x_1 \leq 2 \\ & \quad 0 \leq x_2 \leq 3 \end{align*}\end{array} \quad\implies\quad \begin{array}{I} \quad \min_{x} \quad & - \begin{bmatrix} 3 \\ 3 / 2 \end{bmatrix}^T \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \\ \text{subject to} \quad & \begin{bmatrix} -1 \\ 0 \end{bmatrix} \leq \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \leq \begin{bmatrix} 2 \\ 3 \\ \end{bmatrix} \end{array} $$


In [ ]:
import numpy as np
import cvxpy as cvx

f = np.array([[3], [3/2]])
lb = np.array([[-1], [0]])
ub = np.array([[2], [3]])

x = cvx.Variable([2, 1])

obj = cvx.Minimize(-f.T @ x)
constraints = [lb <= x, x <= ub]

prob = cvx.Problem(obj, constraints)
result = prob.solve()

print(x.value)
print(result)
[[1.99999999]
 [2.99999999]]
-10.499999966365493

2.2. Example 2


$$ \begin{array}{Icr}\begin{align*} \max \quad & x_1 + x_2 \\\\ \text{subject to} \quad & 2x_1 + x_2 \leq 29 \\ & x_1 + 2x_2 \leq 25 \\ & x_1 \geq 2 \\ & x_2 \geq 5 \end{align*}\end{array} \qquad\implies\qquad \begin{array}{Icl} \min \quad & - \begin{bmatrix} 1 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \\\\ \text{subject to} \quad & \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \leq \begin{bmatrix} 29 \\ 25 \end{bmatrix} \\ & \begin{bmatrix} 2 \\ 5 \end{bmatrix} \leq \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \leq \begin{bmatrix} \infty \\ \infty \end{bmatrix} \end{array} $$

linprog in Python


$$ \begin{align*} \min \quad &c^Tx\\ \text{subject to} \quad &A_{ub}x \leq b_{ub}\\ &A_{eq}x = b_{eq}\\ & l \leq x \leq u \end{align*} $$

In [ ]:
import numpy as np
from scipy.optimize import linprog

c = [-1, -1]
A = [[2, 1],
     [1, 2]]
b = [29, 25]

x0_bounds = (2, None)
x1_bounds = (5, None)

res = linprog(c, A_ub = A, b_ub = b, bounds = [x0_bounds, x1_bounds])

print(res.x)
print(-res.fun)
[11.  7.]
18.0

CVXPY in Python

In [ ]:
import numpy as np
import cvxpy as cvx

f = np.array([[-1], [-1]])
A = np.array([[2, 1],
              [1, 2]])
b = np.array([[29], [25]])
lb = np.array([[2], [5]])

x = cvx.Variable([2, 1])

obj = cvx.Minimize(f.T @ x)
constraints = [A@x <= b, lb <= x]

prob = cvx.Problem(obj, constraints)
result = prob.solve()

# Display results
print(x.value)

print(-result)
[[11.]
 [ 7.]]
17.999999998643816

3. How to Solve LP (Optional)

Note: the section below has not been fully completed.

3.1. Proposed Algorithm for LP


  • Find all extreme points

  • Compute the objective function at all extreme (or corner) points

  • Find the $\min$


  • Let's do it by hand

$$\begin{align*} \text{minimize} \quad & -x_1\\ \text{subject to} \quad & x_1+x_2 \leq 1.5\\ & x_2 \leq 1\\ &x_1, x_2 \geq 0 \end{align*}$$


$$ \begin{align*} x_1 + x_2 + \omega_1 & = 1.5\\ x_2 + \omega_2 & = 1 \\ x_1, x_2, \omega_1, \omega_2 & \geq 0 \end{align*} $$


$$ \begin{bmatrix}1 & 1 & 1 & 0\\0 & 1 & 0 & 1\end{bmatrix} \begin{bmatrix}x_1\\x_2\\\omega_1\\\omega_2\end{bmatrix} = \begin{bmatrix}1.5\\1\end{bmatrix} $$


3.2. How Does a Computer Find Corners?


$Ax = b$, where $A \in \mathbb{R}^{m \times n}$ with $m < n$


$$\underset{m \times n}{\begin{bmatrix} & & & & \\ & & A & & \\ & & & & \end{bmatrix}} \; \underset{n \times 1}{\begin{bmatrix} \\ \\ x \\ \\ \\ \end{bmatrix}} \; = \; \underset{m \times 1}{\begin{bmatrix} \\ b \\ \\ \end{bmatrix}}$$


If we pick $m$ linearly independent columns of $A$ (why? they yield a unique solution for the corresponding basic variables):


$$\underset{m \times n}{\begin{bmatrix} & & & & \\ & &A& & \\ & & & & \end{bmatrix}} \;\;\;\; \implies \quad \underset{m \times m}{\begin{bmatrix} & & \\ &B& \\ & & \end{bmatrix}}$$


3.3. Basis and Basic Solution


  • A basis $\beta=\{a_{I_B(1)},a_{I_B(2)},\cdots,a_{I_B(m)}\}$ is a set of $m$ linearly independent columns of $A$, where $I_B$ is the set of indices of the columns in $\beta$, and $I_B(i)$ is the $i$-th element of $I_B$. Since $m \lt n$, the choice of $\beta$ is not unique.

  • The basic matrix $B=(a_{I_B(i)})$ is obtained by rearranging the columns of $A$ such that $A=(B \mid N)$.

  • A basic solution is $x=(x_B \mid x_N)$, where $x_B=B^{-1}b$ is the vector of basic variables and $x_N=0$ is the vector of non-basic variables.

  • A basic feasible solution (bfs) is a basic solution with $x \in F$, where $F$ denotes the feasible set.

  • Each vertex (extreme point) of the polytope defined by $F$ corresponds to a bfs.


$$\begin{align*} \beta_1 = \{\alpha_1,\alpha_2\} &\implies B_1 = \begin{bmatrix}1 & 1\\0 & 1\end{bmatrix}, \quad \begin{bmatrix}x_1\\x_2\end{bmatrix} = B^{-1}_1 \begin{bmatrix}1.5\\1\end{bmatrix} = \begin{bmatrix}0.5\\1\end{bmatrix} \implies D\\\\ \beta_2 = \{\alpha_1,\alpha_3\} &\implies B_2 = \begin{bmatrix}1 & 1\\0 & 0\end{bmatrix}, \quad \begin{bmatrix}x_1\\\omega_1\end{bmatrix} = B^{-1}_2 \begin{bmatrix}1.5\\1\end{bmatrix} \;\; (\times) \;\; \text{Why?} \; B_2 \; \text{is singular}\\\\ \beta_3 = \{\alpha_1,\alpha_4\} &\implies B_3 = \begin{bmatrix}1 & 0\\0 & 1\end{bmatrix}, \quad \begin{bmatrix}x_1\\\omega_2\end{bmatrix} = B^{-1}_3 \begin{bmatrix}1.5\\1\end{bmatrix} = \begin{bmatrix}1.5\\1\end{bmatrix} \implies \begin{bmatrix}x_1\\x_2\end{bmatrix} = \begin{bmatrix}1.5\\0\end{bmatrix} \implies A\\\\ \beta_4 = \{\alpha_2,\alpha_3\} &\implies B_4 = \begin{bmatrix}1 & 1\\1 & 0\end{bmatrix}, \quad \begin{bmatrix}x_2\\\omega_1\end{bmatrix} = B^{-1}_4 \begin{bmatrix}1.5\\1\end{bmatrix} = \begin{bmatrix}1\\0.5\end{bmatrix} \implies \begin{bmatrix}x_1\\x_2\end{bmatrix} = \begin{bmatrix}0\\1\end{bmatrix} \implies C\\\\ \beta_5 = \{\alpha_2,\alpha_4\} &\implies B_5 = \begin{bmatrix}1 & 0\\1 & 1\end{bmatrix}, \quad \begin{bmatrix}x_2\\\omega_2\end{bmatrix} = B^{-1}_5 \begin{bmatrix}1.5\\1\end{bmatrix} = \begin{bmatrix}1.5\\-0.5\end{bmatrix} \implies \begin{bmatrix}x_1\\x_2\end{bmatrix} = \begin{bmatrix}0\\1.5\end{bmatrix} \implies B \;\; (\text{infeasible: } \omega_2 < 0)\\\\ \beta_6 = \{\alpha_3,\alpha_4\} &\implies B_6 = \begin{bmatrix}1 & 0\\0 & 1\end{bmatrix}, \quad \begin{bmatrix}\omega_1\\\omega_2\end{bmatrix} = B^{-1}_6 \begin{bmatrix}1.5\\1\end{bmatrix} = \begin{bmatrix}1.5\\1\end{bmatrix} \implies \begin{bmatrix}x_1\\x_2\end{bmatrix} = \begin{bmatrix}0\\0\end{bmatrix} \implies O\\\\ \end{align*}$$


Basic Feasible Solutions


$$\begin{aligned} A = \begin{bmatrix}1.5\\0\end{bmatrix} \quad &\implies \quad J=-1.5\\\\ C = \begin{bmatrix}0\\1\end{bmatrix} \quad &\implies \quad J=0\\\\ D = \begin{bmatrix}0.5\\1\end{bmatrix} \quad &\implies \quad J=-0.5\\\\ O = \begin{bmatrix}0\\0\end{bmatrix} \quad &\implies \quad J=0\\\\ \end{aligned}$$
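The enumeration above can be checked numerically. Below is a brute-force sketch (assumed, not from the original notes) of the proposed algorithm: enumerate all $\binom{n}{m}$ bases, keep the feasible basic solutions, and take the minimum.

In [ ]:
## Brute-force corner enumeration for the example (a sketch of the proposed algorithm)

import numpy as np
from itertools import combinations

A = np.array([[1., 1, 1, 0],
              [0., 1, 0, 1]])
b = np.array([1.5, 1.])
c = np.array([-1., 0, 0, 0])       # minimize -x1; slack variables cost 0

m, n = A.shape
best_x, best_J = None, np.inf

for idx in combinations(range(n), m):
    B = A[:, idx]
    if np.linalg.matrix_rank(B) < m:
        continue                    # singular basis (beta_2): not a basis
    x_B = np.linalg.solve(B, b)
    if np.any(x_B < 0):
        continue                    # basic but infeasible (beta_5): skip
    x = np.zeros(n)
    x[list(idx)] = x_B              # non-basic variables stay at 0
    if c @ x < best_J:
        best_x, best_J = x, c @ x

print("optimal bfs:", best_x)       # point A: x1 = 1.5
print("optimal value:", best_J)     # J = -1.5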


3.4. Drawbacks of the Above Algorithm


  • Inverse matrix computation

  • Exhaustive search

    • For $n$ decision variables and $\mathrm{rank}(A) = m$
    • The maximum number of bfs in $F$ (checked for the toy example in the snippet below) is

$$O \left( \binom{n}{m} \right) = O \left( \frac{n!}{m!\,(n-m)!} \right) \leq O\left(2^n\right)$$
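For the example above ($n = 4$, $m = 2$) this count is only $6$, matching $\beta_1, \dots, \beta_6$, but it explodes quickly (an illustrative snippet, not in the original notes):

In [ ]:
from math import comb

print(comb(4, 2))     # 6 candidate bases: beta_1, ..., beta_6 above
print(comb(100, 50))  # already astronomically many for a modest LP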


  • Can you propose smarter ways?

$$\begin{align*} \text{maximize} \quad & 2x_1+x_2\\ \text{subject to} \quad & 2x_1-x_2\leq8\\ &x_1+2x_2\leq14\\ &-x_1+x_2\leq4\\ &x_1,x_2\geq0 \end{align*}$$


  • Keep moving the objective contour in the direction of $c$ (along the gradient direction)

  • The objective function is linear


  • Move from one extreme point to an adjacent one

  • Adjacent extreme points share all but one basis column $\alpha_i$



3.5. Simplex Algorithm


  • Basic idea: any LP that has an optimal solution has an optimal bfs → starting from an initial bfs, iterate over "adjacent" bfs that give lower cost.

  • Input: $A \in \mathbb{R}^{m \times n}, b \in \mathbb{R}^m, c \in \mathbb{R}^n$, and a bfs $x_0 \in \mathbb{R}^n$ of the LP in standard form defined by $A$, $b$, and $c$.

  • Output: a solution $x^*$ of the given LP


$$\begin{align*} &\text{Primal-Simplex} \; (A, b, c^T, x_0)\\ &1 \;\;\; x \leftarrow x_0 \;\; \text{where} \; x_0 \; \text{is a bfs}\\ &2 \;\;\; V \leftarrow \{y \mid c^Ty < c^Tx, \; y \; \text{is a bfs adjacent to} \; x\}\\ &3 \;\;\; \textbf{while} \; V \neq \emptyset\\ &4 \;\;\;\;\;\;\; \textbf{do} \; x \leftarrow y^* \; \text{where} \; c^Ty^* = \min_{y \in V} \; c^Ty\\ &5 \;\;\;\;\;\;\;\;\;\;\, V \leftarrow \{y \mid c^Ty < c^Tx, \; y \; \text{is a bfs adjacent to} \; x\}\\ &6 \;\;\; \textbf{return} \; x\\ \end{align*}$$
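The pseudocode can be turned into a compact revised-simplex sketch (an illustrative implementation under the standard-form conventions above, not the notebook's official code). It maintains a basis index set, computes the reduced costs $r^T = c_N^T - c_B^T B^{-1} N$, and pivots with the minimum-ratio test:

In [ ]:
## A compact simplex sketch (illustrative only; assumes a starting bfs is given)

import numpy as np

def simplex(A, b, c, basis):
    m, n = A.shape
    basis = list(basis)
    while True:
        B = A[:, basis]
        x_B = np.linalg.solve(B, b)                   # current bfs: x_B = B^{-1} b
        nonbasis = [j for j in range(n) if j not in basis]
        y = np.linalg.solve(B.T, c[basis])            # y = B^{-T} c_B
        r = c[nonbasis] - A[:, nonbasis].T @ y        # reduced costs r^T = c_N^T - c_B^T B^{-1} N
        if np.all(r >= -1e-9):                        # r >= 0: current bfs is optimal
            x = np.zeros(n)
            x[basis] = x_B
            return x, c @ x
        e = nonbasis[int(np.argmin(r))]               # entering variable: most negative r_j
        d = np.linalg.solve(B, A[:, e])               # direction B^{-1} alpha_e
        if np.all(d <= 1e-9):
            raise ValueError("LP is unbounded")
        ratios = np.divide(x_B, d, out = np.full(m, np.inf), where = d > 1e-9)
        basis[int(np.argmin(ratios))] = e             # minimum-ratio test picks the leaving variable

# the example from Section 3.1, starting from the slack basis (point O)
A = np.array([[1., 1, 1, 0],
              [0., 1, 0, 1]])
b = np.array([1.5, 1.])
c = np.array([-1., 0, 0, 0])

x, z = simplex(A, b, c, basis = [2, 3])
print(x, z)   # expected: x = [1.5, 0, 0, 1], z = -1.5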


4. Simplex Algorithm In Detail

4.1. One Iteration of Simplex


Begin with $\hat{x}$, a basic feasible solution of the LP in standard form


$$A \hat{x} = \begin{bmatrix}B & N\end{bmatrix} \begin{bmatrix}\hat{x}_B\\ \hat{x}_N\end{bmatrix} = b$$


$$\begin{align*} A\hat{x} &= B\hat{x}_B + N\hat{x}_N = b && B: \text{invertible matrix}\\ \Rightarrow \; & B\hat{x}_B = b && \hat{x}_N = \vec{0}: \text{non-basic variables}\\\\ \therefore \; &\hat{x}_B = B^{-1}b && \hat{x}_B: \text{basic variables} \end{align*}$$


$$\begin{align*} Z &= C^T x\\ &= \begin{bmatrix}C^T_B & C^T_N\end{bmatrix} \begin{bmatrix}x_B\\x_N\end{bmatrix} \end{align*}$$


$$\begin{align*} \hat{Z} &= C^T_B\hat{x}_B\\ &= C^T_BB^{-1}b \end{align*}$$


The value of $\hat{Z}$ depends on which independent columns of $A$ are selected $\Longleftrightarrow$ each choice corresponds to an extreme point (corner)


What happens to $Z$ as $x_N$ takes on positive entries?

For any feasible $x$




$$\begin{align*} Ax=b &=Bx_B+Nx_N\\ x_B &=B^{-1}b-B^{-1}Nx_N\\\\ Z = C^Tx &= C^T_Bx_B+C^T_Nx_N\\ &= C^T_B(B^{-1}b-B^{-1}Nx_N)+C^T_Nx_N\\ &= C^T_BB^{-1}b+(C^T_N-C^T_BB^{-1}N)x_N\\ &= \hat{Z}+r^Tx_N\\\\ \implies Z-\hat{Z} &= r^Tx_N \end{align*}$$

where $r^T = C^T_N-C^T_BB^{-1}N$ is the vector of reduced costs.




This shows how $Z$ changes as we move away from $\hat{x}$ and permit $x_N$ to take positive entries.


$$Z-\hat{Z}=r^Tx_N$$


$$\text{If} \; r \geq 0, \; \text{then} \; Z-\hat{Z}=r^Tx_N \geq 0 \; \text{for every feasible} \; x, \; \text{so no} \; Z \; \text{can be smaller than} \; \hat{Z} \implies \hat{x} \; \text{is optimal.}$$


If $r$ has some negative entries, select the index $j$ such that $r_j \lt 0$ and $r_j$ is the most negative entry of $r$.




$$\begin{align*} x_B&=B^{-1}b-B^{-1}Nx_N\\ &=B^{-1}b-B^{-1}\begin{bmatrix}\cdots&\alpha_e&\cdots\end{bmatrix} \begin{bmatrix}0\\\vdots\\x_e\\\vdots\\0\end{bmatrix}\;\leftarrow\;\text{the non-basic variable corresponding to}\;r_j\;\text{is the entering variable}\;x_e\\ &=B^{-1}b-B^{-1}\alpha_ex_e\\ \end{align*}$$

where $\alpha_e$ is the column of $A$ corresponding to $x_e$.


Then, we need to find the leaving variable in $B$ (or $x_B$).


Since $x_B \geq 0$ must be maintained, we can increase $x_e$ only until some entry of $x_B$ hits zero.

When $x_e = \frac{\left(B^{-1}b \right)_i}{\left(B^{-1}\alpha_e \right)_i}$ (for $\left(B^{-1}\alpha_e\right)_i > 0$), the $i$-th entry of $x_B$ becomes zero.

The first basic variable to reach zero $\implies$ the leaving variable (the minimum-ratio test).


4.2. Back to Example


$$\begin{align*} \min \quad &-x_1\\ \text{subject to} \quad &x_1+x_2\leq1.5\\ &x_2\leq1\\ &x_1,x_2\geq0\\\\ \end{align*}$$


$$\begin{align*} x_1 + x_2 + s_1 &= 1.5\\ x_2 + s_2 &= 1 \\ x_1, x_2, s_1, s_2 &\geq 0 \end{align*}$$


$$ \implies \quad \begin{bmatrix}1&1&1&0\\0&1&0&1\end{bmatrix} \begin{bmatrix}x_1\\x_2\\s_1\\s_2\end{bmatrix} = \begin{bmatrix}1.5\\1\end{bmatrix}$$


4.3. Example


$$s_1 \rightarrow x_3, \quad s_2 \rightarrow x_4$$


Start from point $C$


$$\begin{align*} Ax = \begin{bmatrix}1&1&1&0\\0&1&0&1\end{bmatrix} \begin{bmatrix}x_1\\x_2\\x_3\\x_4\end{bmatrix} = \begin{bmatrix}1.5\\1\end{bmatrix} = b, \qquad C = \begin{bmatrix}-1\\0\\0\\0\end{bmatrix}\end{align*}$$

$$\beta_4 = \{\alpha_2,\alpha_3\}$$


$$\begin{align*} B &= \begin{bmatrix}1&1\\1&0\end{bmatrix} &N& = \begin{bmatrix}1&0\\0&1\end{bmatrix}\\ x_B &= \begin{bmatrix}x_2\\x_3\end{bmatrix} &x_N& = \begin{bmatrix}x_1\\x_4\end{bmatrix}\\ C_B &= \begin{bmatrix}0\\0\end{bmatrix} &C_N& = \begin{bmatrix}-1\\0\end{bmatrix} \end{align*}$$


$$\begin{align*} \hat{x}_B &= B^{-1}b= \begin{bmatrix}0&1\\1&-1\end{bmatrix} \begin{bmatrix}1.5\\1\end{bmatrix} = \begin{bmatrix}1\\0.5\end{bmatrix}\\ \hat{Z} &= C^T_B\hat{x}_B= \begin{bmatrix}0&0\end{bmatrix} \begin{bmatrix}1\\0.5\end{bmatrix} = 0 \end{align*}$$




$$r^T=C^T_N-C^T_BB^{-1}N=\begin{bmatrix}-1&0\end{bmatrix} - \begin{bmatrix}0&0\end{bmatrix} B^{-1}N = \begin{bmatrix}-1&0\end{bmatrix}$$

The entries of $r$ correspond to $x_1$ and $x_4$ (columns $\alpha_1$ and $\alpha_4$); the most negative entry is $r_1 = -1 \implies x_1$ ($\alpha_1$) is the entering variable.


$$\begin{align*}x_B&=B^{-1}b-B^{-1}\alpha_ex_e\\ \begin{bmatrix}x_2\\x_3\end{bmatrix}&=\begin{bmatrix}1\\0.5\end{bmatrix} - \begin{bmatrix}0&1\\1&-1\end{bmatrix} \begin{bmatrix}1\\0\end{bmatrix}x_1\\ &= \begin{bmatrix}1\\0.5-x_1\end{bmatrix}\end{align*}$$


$$x_3=\arg\min \left\{\frac{1}{0},\ \frac{0.5}{1}\right\}$$

The ratios correspond to $x_2$ ($\alpha_2$) and $x_3$ ($\alpha_3$); the minimum is attained at $x_3 \implies x_3$ ($\alpha_3$) is the leaving variable.


iteration 2 ($C \rightarrow D$)


$$\begin{align*} \beta_1&=\{\alpha_1,\alpha_2\}&\;&\mu_1=\{\alpha_3,\alpha_4\}\\ B&=\begin{bmatrix}1 & 1\\0 & 1\end{bmatrix}&\;&N=\begin{bmatrix}1 & 0\\0 & 1\end{bmatrix}&\;&C_B=\begin{bmatrix}-1\\0\end{bmatrix}\\ x_B&=\begin{bmatrix}x_1\\x_2\end{bmatrix}&\;&x_N=\begin{bmatrix}x_3\\x_4\end{bmatrix}&\;&C_N=\begin{bmatrix}0\\0\end{bmatrix} \end{align*}$$


$$\begin{align*} \hat{x}_B &= B^{-1}b= \begin{bmatrix}1&-1\\0&1\end{bmatrix} \begin{bmatrix}1.5\\1\end{bmatrix} = \begin{bmatrix}0.5\\1\end{bmatrix}\\ \hat{Z} &= C^T_B\hat{x}_B= \begin{bmatrix}-1&0\end{bmatrix} \begin{bmatrix}0.5\\1\end{bmatrix} = -0.5 \; (< 0 \; \text{at the previous iteration}) \end{align*}$$




$$r^T=C^T_N-C^T_BB^{-1}N=\begin{bmatrix}0 & 0\end{bmatrix}-\begin{bmatrix}-1&0\end{bmatrix} \begin{bmatrix}1 & -1\\0 & 1\end{bmatrix}\begin{bmatrix}1 & 0\\0 & 1\end{bmatrix}=\begin{bmatrix}1 & -1\end{bmatrix}$$

The entries of $r$ correspond to $x_3$ and $x_4$ (columns $\alpha_3$ and $\alpha_4$); the only negative entry is the second $\implies x_4$ ($\alpha_4$) is the entering variable.


$$\begin{align*} x_B &= B^{-1}b - B^{-1}\alpha_ex_e\\ \begin{bmatrix}x_1\\x_2\end{bmatrix} &= \begin{bmatrix}0.5\\1\end{bmatrix} - \begin{bmatrix}1&-1\\0&1\end{bmatrix} \begin{bmatrix}0\\1\end{bmatrix} x_4\\ &= \begin{bmatrix}0.5+x_4\\1-x_4\end{bmatrix} \end{align*}$$


leaving variable $x_2$ ($\alpha_2$)

iteration 3 ($D \rightarrow A$)


$$\begin{align*} \beta_3 &= \{\alpha_1,\alpha_4\}&\;&\mu_3=\{\alpha_2,\alpha_3\}\\ B&=\begin{bmatrix}1 & 0\\0 & 1\end{bmatrix}&\;&N=\begin{bmatrix}1 & 1\\1 & 0\end{bmatrix}&\;&C_B=\begin{bmatrix}-1\\0\end{bmatrix}\\ x_B&=\begin{bmatrix}x_1\\x_4\end{bmatrix}&\;&x_N=\begin{bmatrix}x_2\\x_3\end{bmatrix}&\;&C_N=\begin{bmatrix}0\\0\end{bmatrix} \end{align*}$$


$$\begin{align*} \hat{x}_B&=B^{-1}b=\begin{bmatrix}1 & 0\\0 & 1\end{bmatrix}\begin{bmatrix}1.5\\1\end{bmatrix}=\begin{bmatrix}1.5\\1\end{bmatrix}\\ \hat{Z} &= C^T_B\hat{x}_B= \begin{bmatrix}-1 & 0\end{bmatrix} \begin{bmatrix}1.5\\1\end{bmatrix}=-1.5\; (< -0.5\;\text{at the previous iteration}) \end{align*}$$




$$\begin{align*} r^T=C^T_N-C^T_BB^{-1}N=\begin{bmatrix}0 & 0\end{bmatrix} - \begin{bmatrix}-1 & 0\end{bmatrix} \begin{bmatrix}1 & 0\\0 & 1\end{bmatrix} \begin{bmatrix}1 & 1\\1 & 0\end{bmatrix} = \;&\begin{bmatrix}1 & 1\end{bmatrix}\\ &\text{all positive} \implies \text{stop!}\end{align*}$$

All reduced costs are positive, so the current bfs is optimal: point $A$ with $x^* = (1.5,\ 0)$ and $Z^* = -1.5$.

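As a sanity check (a sketch, not from the original notes), the three iterations above can be verified numerically by recomputing $\hat{x}_B$, $\hat{Z}$, and $r^T$ for each visited basis:

In [ ]:
## Numerical check of iterations 1-3 (bases C, D, A with 0-indexed columns)

import numpy as np

A = np.array([[1., 1, 1, 0],
              [0., 1, 0, 1]])
b = np.array([1.5, 1.])
c = np.array([-1., 0, 0, 0])

for basis in ([1, 2], [0, 1], [0, 3]):          # C: {a2,a3}, D: {a1,a2}, A: {a1,a4}
    B = A[:, basis]
    N = np.delete(A, basis, axis = 1)
    c_B, c_N = c[basis], np.delete(c, basis)
    x_B = np.linalg.solve(B, b)                 # basic variables
    r = c_N - c_B @ np.linalg.solve(B, N)       # reduced costs
    print(basis, x_B, c_B @ x_B, r)
# expected Z: 0, -0.5, -1.5 and r: [-1 0], [1 -1], [1 1]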

In [ ]:
%%javascript
$.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js')