Fixed-Point Iteration


By Prof. Seungchul Lee
http://iai.postech.ac.kr/
Industrial AI Lab at POSTECH

Source

  • By Prof. David J. Malan, Harvard University
  • By Prof. Erik Demaine, MIT (online lecture)

Table of Contents

1. Fixed-Point Iteration

1.1. Numerical approach

For the given equation:

$$f(x) = 0 \implies x = g(x)$$

Remark: this rearrangement is always achievable, for example:

$$ \begin{align*} x & = f(x) + x = g(x)\\ x &= -f(x) + x = g(x) \end{align*} $$
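
For instance, $f(x) = x^2 - 2 = 0$ (the $\sqrt{2}$ example revisited below) admits several fixed-point forms:

$$ \begin{align*} x &= \frac{2}{x} = g(x)\\ x &= \frac{1}{2}\left(x + \frac{2}{x}\right) = g(x) \end{align*} $$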

Goal: numerically find the solution of $x = g(x)$

Main idea:

  • Make a guess of the solution, $x_k$

  • If $g$ is 'nice', then $g(x_k)$ will hopefully be closer to the answer than $x_k$. If so, we can iterate

Iteration algorithm

  1. Choose an initial point $x_0$
  2. Iterate $x_{k+1} = g(x_k)$ until a stopping criterion is met (see the sketch below)
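
A minimal sketch of this loop in Python (the helper name fixed_point and the tolerance-based stopping rule are illustrative choices, not taken from the lecture):

import numpy as np

def fixed_point(g, x0, tol=1e-10, max_iter=100):
    # iterate x_{k+1} = g(x_k) until successive iterates agree within tol
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# e.g., solve x = cos(x)
print(fixed_point(np.cos, 0.3))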

Convergence check (or analysis)

$\quad$ Let $r$ be the exact solution, $r = g(r)$

$$ \begin{align*} x_{k+1} &= g(x_k)\\ \text{error:} \quad e_k &= x_k - r\\ e_{k+1} &= x_{k+1} - r = g(x_k) - g(r)\\ &= g'(\eta)(x_k - r) = g'(\eta)\,e_k, \quad \eta \text{ between } x_k \text{ and } r \;\text{(mean value theorem)}\\ \\ \implies \lvert e_{k+1}\rvert &\leq \lvert g'(\eta)\rvert\,\lvert e_k \rvert \end{align*} $$


$$\text{If } \lvert g'(\eta) \rvert < 1 \text{ near the solution, the error decreases and the iteration converges.}$$

1) Example of $x = \cos(x)$

In [1]:
# Computational Thinking: how to solve cos(x) = x

import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
In [2]:
x = np.linspace(-2*np.pi, 2*np.pi, 100)
y = np.cos(x)

plt.figure(figsize = (8,6))
plt.plot(x, y, linewidth = 2)
plt.plot(x, x, linewidth = 2)
# plt.xlim(-2*np.pi, 2*np.pi)
plt.axvline(x=0, color = 'k', linestyle = '--')
plt.axhline(y=0, color = 'k', linestyle = '--')
plt.legend(['cos(x)','x'])
plt.axis('equal')
plt.ylim([-1,1])
plt.show()
In [3]:
# naive approach

x = 0.3
print (np.cos(x))
print (np.cos(np.cos(x)))
print (np.cos(np.cos(np.cos(x))))
print (np.cos(np.cos(np.cos(np.cos(x)))))
print (np.cos(np.cos(np.cos(np.cos(np.cos(x))))))
print (np.cos(np.cos(np.cos(np.cos(np.cos(np.cos(x)))))))
print (np.cos(np.cos(np.cos(np.cos(np.cos(np.cos(np.cos(x))))))))
0.955336489125606
0.5773340444711864
0.8379206831271269
0.6690097308223832
0.7844362247423562
0.7077866472756374
0.7598027552852303
In [4]:
# better way

x = 0.3

for i in range(24):
    tmp = np.cos(x)
    x = np.cos(tmp)
    
print (x)
0.7390851312923253
In [5]:
# better way

x = np.zeros((24, 1))
x[0] = 0.3

for i in range(23):
    x[i+1] = np.cos(x[i])
    
print (x)
[[0.3       ]
 [0.95533649]
 [0.57733404]
 [0.83792068]
 [0.66900973]
 [0.78443622]
 [0.70778665]
 [0.75980276]
 [0.72497188]
 [0.74851807]
 [0.73269821]
 [0.74337234]
 [0.73619044]
 [0.74103194]
 [0.73777234]
 [0.73996881]
 [0.73848959]
 [0.73948617]
 [0.73881493]
 [0.73926712]
 [0.73896253]
 [0.73916771]
 [0.73902951]
 [0.7391226 ]]
In [6]:
# better way

x = 10
for i in range(24):
    x = np.cos(x)

print (x)
0.7390735444682907

2) Example of $\sqrt{2} \;(x^2 = 2)$

In [7]:
# Use an idea of a fixed point

x = 2
for i in range(10):
    x = 2/x
    print (x)
1.0
2.0
1.0
2.0
1.0
2.0
1.0
2.0
1.0
2.0
In [8]:
x = np.linspace(-3, 3, 100)
y = 2/x

plt.figure(figsize = (8,6))
plt.plot(x, y, linewidth = 2)
plt.plot(x, x, linewidth = 2)
plt.axvline(x=0, color = 'k', linestyle = '--')
plt.axhline(y=0, color = 'k', linestyle = '--')
plt.legend(['2/x','x'])
plt.axis('equal')
plt.ylim([-1,1])
plt.show()
In [9]:
# how to overcome the oscillation:
# use the fixed-point idea plus a kind of 'damping' (averaging the current guess with 2/x)

x = 3
for i in range(10):
    x = (x + 2/x)/2
    print (x)
1.8333333333333333
1.4621212121212122
1.414998429894803
1.4142137800471977
1.4142135623731118
1.414213562373095
1.414213562373095
1.414213562373095
1.414213562373095
1.414213562373095
In [10]:
x = np.linspace(-4, 4, 100)
y = (x + 2/x)/2

plt.figure(figsize = (8,6))
plt.plot(x, y, linewidth = 2)
plt.plot(x, x, linewidth = 2)
plt.axvline(x=0, color = 'k', linestyle = '--')
plt.axhline(y=0, color = 'k', linestyle = '--')
plt.axis('equal')
plt.ylim([-1,1])
plt.show()

Think about why the two iterations behave so differently (hint: compare $\lvert g'(x)\rvert$ at the fixed point $x = \sqrt{2}$).
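
A quick numerical check of that hint (a sketch, not part of the original notebook): evaluate $\lvert g'(x)\rvert$ at the fixed point $x = \sqrt{2}$ for both choices of $g$.

import numpy as np

r = np.sqrt(2)

# g1(x) = 2/x          ->  g1'(x) = -2/x**2,         |g1'(r)| is about 1 (no contraction: the iterates just oscillate)
# g2(x) = (x + 2/x)/2  ->  g2'(x) = (1 - 2/x**2)/2,  |g2'(r)| is about 0 (very fast convergence)
print(abs(-2/r**2))
print(abs((1 - 2/r**2)/2))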

1.2. System of Linear Equations


$$ \begin{align*} 4x_1 - x_2 + x_3 &= 7 \\ 4x_1 - 8x_2 + x_3 &= -21 \\ -2x_1 + x_2 + 5x_3 &= 15 \end{align*} $$
In [11]:
import numpy as np
import matplotlib.pyplot as plt

# matrix inverse

A = np.array([[4, -1, 1], [4, -8, 1], [-2, 1, 5]])
b = np.array([[7, -21, 15]]).T

x = np.linalg.inv(A).dot(b)

print (x)
[[2.]
 [4.]
 [3.]]

This direct (matrix-inverse) solution is only practical for small problems; many iterative methods exist for large problems.

$$ \begin{align*} 4x_1 - x_2 + x_3 &= 7 & x_1 &= \frac{1}{4}x_2 - \frac{1}{4}x_3 + \frac{7}{4}\\ 4x_1 - 8x_2 + x_3 &= -21 & \implies \quad x_2 &= \frac{1}{2}x_1 + \frac{1}{8}x_3 + \frac{21}{8}\\ -2x_1 + x_2 + 5x_3 &= 15 & x_3 &= \frac{2}{5}x_1 - \frac{1}{5}x_2 + \frac{15}{5} \end{align*} $$


  • In a matrix form


$$\begin{bmatrix} x_1\\ x_2\\ x_3\\ \end{bmatrix} = \begin{bmatrix} 0 &\frac{1}{4} & - \frac{1}{4}\\ \frac{1}{2} & 0 & \frac{1}{8}\\ \frac{2}{5} & -\frac{1}{5} & 0\\ \end{bmatrix} \begin{bmatrix} x_1\\ x_2\\ x_3\\ \end{bmatrix} + \begin{bmatrix} \frac{7}{4}\\ \frac{21}{8}\\ 3\\ \end{bmatrix} $$


  • iteration


$$ \begin{bmatrix} x_1^{(k+1)}\\ x_2^{(k+1)}\\ x_3^{(k+1)}\\ \end{bmatrix} = \begin{bmatrix} 0 &\frac{1}{4} & - \frac{1}{4}\\ \frac{1}{2} & 0 & \frac{1}{8}\\ \frac{2}{5} & -\frac{1}{5} & 0\\ \end{bmatrix} \begin{bmatrix} x_1^{(k)}\\ x_2^{(k)}\\ x_3^{(k)}\\ \end{bmatrix} + \begin{bmatrix} \frac{7}{4}\\ \frac{21}{8}\\ 3\\ \end{bmatrix} $$
In [12]:
# Iterative way

A = np.array(([[0, 1/4, -1/4 ], 
               [4/8, 0, 1/8], 
               [2/5, -1/5, 0]]))
b = np.array([[7/4, 21/8, 15/5]]).T

# initial point
x = np.array([[1, 1, 2]]).T

for i in range(20):
    x = A @ x + b
    
print (x)
[[2.]
 [4.]
 [3.]]

Remark) try this one

$$ \begin{align*} x_1 &= -3x_1 + x_2 - x_3 + 7\\ x_2 &= 4x_1 - 7x_2 + x_3 + 21\\ x_3 &= 2x_1 - x_2 - 4x_3 + 15 \end{align*} $$


$$\begin{bmatrix} x_1\\ x_2\\ x_3\\ \end{bmatrix} \leftarrow \begin{bmatrix} -3 & 1 & -1 \\ 4 & -7 & 1 \\ 2 & -1 & -4 \end{bmatrix} \begin{bmatrix} x_1\\ x_2\\ x_3\\ \end{bmatrix} + \begin{bmatrix} 7\\ 21\\ 15\\ \end{bmatrix} $$
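
A sketch of the suggested experiment (this cell is not in the original notebook; the iteration matrix and offset are the ones displayed above):

import numpy as np

A = np.array([[-3, 1, -1],
              [4, -7, 1],
              [2, -1, -4]])
b = np.array([[7, 21, 15]]).T

# initial point
x = np.array([[1, 1, 2]]).T

for i in range(10):
    x = A.dot(x) + b

print(x)
print(np.max(np.abs(np.linalg.eigvals(A))))   # spectral radius well above 1

Although $(2, 4, 3)$ is still a fixed point of this map, the iterates move away from it, for the reason analyzed next.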


Convergence check


$$x \leftarrow Ax + b$$


$$\begin{align*} x_{k+1} &= Ax_k +b\\ &= A(Ax_{k-1} + b) + b = A^2x_{k-1} + Ab + b\\ & \vdots\\ &= A^{k+1}x_0 + A^kb + \cdots + Ab + b \end{align*}$$

The iteration can converge only if $A^k \to 0$ as $k \to \infty$, i.e., if every eigenvalue of $A$ has magnitude less than 1 (spectral radius $< 1$), which the following cells check.
In [13]:
# think about why this one does not work

A = np.array(([[3, 1, -1 ], 
               [4, 7, 1], 
               [2, -1, -4]]))
b = np.array([[7, 21, 15]]).T

# initial point
x = np.array([[1, 2, 2]]).T

for i in range(10):
    x = A.dot(x) + b
    
print (x)
[[1076845340]
 [ 660465147]
 [-237408283]]
In [14]:
# stability, check eigenvalue of A

A = np.array(([[3, 1, -1 ], 
               [4, 7, 1], 
               [2, -1, -4]]))

np.linalg.eig(A)
Out[14]:
(array([ 7.82147369,  1.65496639, -3.47644008]),
 array([[-0.21213573, -0.71387137,  0.17464301],
        [-0.97612458,  0.60136269, -0.15942549],
        [ 0.04668226, -0.3588183 ,  0.97163951]]))
In [15]:
# stability, check eigenvalue of A

A = np.array(([[0, 1/4, -1/4 ], 
               [4/8, 0, 1/8], 
               [2/5, -1/5, 0]]))

np.linalg.eig(A)
Out[15]:
(array([ 0.33471648+0.j        , -0.16735824+0.28987297j,
        -0.16735824-0.28987297j]),
 array([[-0.52873647+0.j        , -0.10993842+0.35839967j,
         -0.10993842-0.35839967j],
        [-0.83865529+0.j        ,  0.40608435-0.36739725j,
          0.40608435+0.36739725j],
        [-0.13074806+0.j        ,  0.74804945+0.j        ,
          0.74804945-0.j        ]]))
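
A compact way to summarize the two eigenvalue checks above (a sketch, not in the original notebook): compare the spectral radius $\max_i \lvert \lambda_i \rvert$ of each iteration matrix with 1.

import numpy as np

A_bad  = np.array([[3, 1, -1],
                   [4, 7, 1],
                   [2, -1, -4]])
A_good = np.array([[0, 1/4, -1/4],
                   [1/2, 0, 1/8],
                   [2/5, -1/5, 0]])

for A in (A_bad, A_good):
    rho = np.max(np.abs(np.linalg.eigvals(A)))
    print(rho, '-> converges' if rho < 1 else '-> diverges')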