Output Feedback Control and PID Control

By Prof. Seungchul Lee
http://iai.postech.ac.kr/
Industrial AI Lab at POSTECH

# 1. Feedback Control

• Consider the constant gain plant $G$

• Input $u(t)$
• Output $y(t)$
• So far, we have learned about the dynamics of the plant $G$

• Control problem
• Make the output of this plant track a desired reference trajectory $r(t)$

## 1.1. Open Loop Control

• The simplest solution to the tracking problem is to use a constant pre-compensator $C$
• $C = \frac{1}{G}$
• Then $y(t) = r(t)$

• Do you see any problems with this solution?
• Model uncertainty
• Disturbance
• Consider the important practical issue of model uncertainty.
• It is always the case that the true system we wish to control will deviate from the nominal model used in control design
• Suppose uncertain plant $G'$ and disturbance $d(t)$

• Open loop controller $C=\frac{1}{G}$

$$y(t) = \frac{G'}{G} r(t) + d(t)$$

## 1.2. Closed Loop Control (= Negative Feedback Control)

• An alternative solution is to purchase a sensor and use feedback control
• Use a constant gain compensator $K$, that multiplies the measured tracking error $e(t) = r(t) - y(t)$

\begin{align*} u &= Ke\\ e &= r - y\\ y &= Gu \end{align*}

\begin{align*} e = \frac{1}{1+GK}r \\\\ y = \frac{GK}{1+GK}r \end{align*}
• As $\lvert GK \rvert \rightarrow \infty, \; e(t) \rightarrow 0$ and $y(t) \rightarrow r(t)$
• Feedback controller
• Use of feedback with sufficiently high gain provides an approximate solution to the tracking problem even in the presence of system uncertainty and disturbance

\begin{align*} e = \frac{1}{1+G'K}r - \frac{1}{1+G'K}d\\\\ y = \frac{G'K}{1+G'K}r + \frac{1}{1+G'K}d \end{align*}
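To see the high-gain effect numerically, here is a minimal Python sketch (not from the original notebook) of the constant-gain loop; the perturbed plant $G' = 1$ and disturbance $d = 0.2$ are illustrative assumptions:

```python
# Closed loop on a constant-gain plant: y = G'K/(1+G'K) r + 1/(1+G'K) d
# The values Gp = 1, r = 1, d = 0.2 are assumptions for illustration.
def closed_loop(Gp, K, r, d):
    y = Gp * K / (1 + Gp * K) * r + 1 / (1 + Gp * K) * d
    e = r - y
    return y, e

r, d = 1.0, 0.2
for K in (1, 10, 1000):
    y, e = closed_loop(Gp=1.0, K=K, r=r, d=d)
    print(f"K={K:5d}  y={y:.4f}  e={e:+.4f}")
```

As $K$ grows, the tracking error shrinks even though both the plant and the disturbance are "wrong" from the controller's point of view.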

## 1.3. Transfer Function for Closed Loop Systems

• Feedback changes the system transfer function
• Change system dynamics
• Change poles and zeros
• Might change system stability

• Compensator $K$ in the forward path:

\begin{align*} E &= R-Y \\ Y &= KGE \\\\ Y &= KG(R-Y) \\ &= KGR - KGY \\\\ (1+KG)Y &= KGR \\\\ H = \frac{Y}{R} &= \frac{KG}{1+KG} \end{align*}

• Gain $K$ in the feedback path:

\begin{align*} E &= R-KY \\ Y &= GE \\\\ Y &= G(R-KY) \\ &= GR - KGY \\\\ (1+KG)Y &= GR \\\\ H = \frac{Y}{R} &= \frac{G}{1+KG} \end{align*}

• Example 1
$$G(s) = \frac{1}{s+1}, \quad \text{pole at } -1$$

\begin{align*} H(s) = \frac{KG}{1+KG} = \frac{\frac{K}{s+1}}{1+\frac{K}{s+1}} = \frac{K}{s+1+K}, \quad \text{new pole at } -(1+K) \end{align*}
• Example 2
$$G(s) = \frac{1}{s-1}, \quad \text{pole at } +1 \;\text{(unstable)}$$

\begin{align*} H(s) = \frac{KG}{1+KG} = \frac{\frac{K}{s-1}}{1+\frac{K}{s-1}} = \frac{K}{s-1+K}, \quad \text{new pole at } (1-K) \end{align*}

If $K>1$, the closed loop system $H(s)$ becomes stable
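Both pole shifts can be checked numerically. A minimal sketch using `numpy.roots`; the helper `closed_loop_pole` and the gain values are assumptions for illustration:

```python
import numpy as np

# Closed loop of G(s) = 1/(s + a) under unity feedback with gain K:
# H(s) = K / (s + a + K), so the characteristic polynomial is s + a + K.
def closed_loop_pole(a, K):
    return np.roots([1, a + K])[0]

print(closed_loop_pole(a=1, K=5))     # Example 1: pole moves from -1 to -6
print(closed_loop_pole(a=-1, K=5))    # Example 2: unstable pole at +1 moved to -4
print(closed_loop_pole(a=-1, K=0.5))  # K < 1 leaves Example 2 unstable
```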

## 1.4. Visualize Feedback

If $G = \frac{1}{s}$

$$H = \frac{Y}{R} = \frac{1}{1+s} = 1-s+s^2-s^3 + \cdots$$

• Feedback is autoregressive: a finite plant inside a loop produces an infinite-length response
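The alternating series coefficients can be reproduced with a short long-division sketch (the helper name is hypothetical):

```python
# Formal power-series expansion of 1/(1+s) by long division:
# (1+s) * c_k * s^k leaves remainder -c_k * s^(k+1), so signs alternate.
def series_inverse_one_plus_s(n):
    coeffs = []
    rem = 1.0
    for _ in range(n):
        coeffs.append(rem)
        rem = -rem
    return coeffs

print(series_inverse_one_plus_s(5))   # [1.0, -1.0, 1.0, -1.0, 1.0]
```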

# 2. Open Loop vs. Closed Loop

• Suppose that there is a car
• input: force $u(t)$
• output: velocity $y(t)$
• transfer function $G(s)$

• Suppose that there is a car driving with a sine wave disturbance $d$.

## 2.1. Open loop

• Block diagram
• reference velocity $r(t)$ that is desired by a driver
• assume the force produced by an engine is $u = Kr$

• The system input $u$

$$u = Kr$$

• Calculating $y$

$$y = Gu + d = GKr + d$$

• Error

\begin{align*} e &= r - y = r - GKr - d\\\\ &= r(1 - GK) - d \end{align*}

## 2.2. Closed loop

• Think about how we drive
• We step on the gas or put the brake on based on the desired speed and the current one
• We care about the difference $r-y$
• The term "negative feedback" comes from the $-y$

• Now assume $K$ is a controller
• The system input $u$

$$u = K(r-y)$$

• Calculating $y$

\begin{align*} y &= GK(r-y) + d \\\\ &= KGr-KGy + d \\\\ (1+KG)y &= KGr + d \\\\ \therefore y &= {KGr + d \over 1 + KG} = \frac{KG}{1+KG}r + \frac{1}{1+KG}d \end{align*}

• Error
• When $K$ is large, the error becomes small

$$e = r - y = r - {KGr+d \over 1+KG} = {r - d \over 1 + KG} = \frac{1}{1 + KG}r - \frac{1}{1+KG}d$$

• Benefits of Feedback
• Stability
• Uncertainty
• Disturbance

## 2.3. Model Uncertainty and Disturbance

• Let us suppose that the predicted model is $G(s) = 2$, and actually $G(s) = 1$.
• Let's design the $K$ value when the desired output speed is 100 km/h.

### 2.3.1. Open loop

• In the open loop model, $y$ is

$$y_{\text{model}} = G_{\text{model}}Kr + d = 2Kr + d$$

• In order for the model output $y$ to reach the reference input of $100$ (ignoring the disturbance)

$$K = 0.5$$

• Then the actual $y$ is

$$y_{\text{true}} = G_{\text{true}}Kr + d = Kr + d$$

• The model and true errors are

\begin{align*} e_{\text{model}} &= r - y_{\text{model}} = r(1 - G_{\text{model}}K) - d = r(1 - 2K) - d\\\\ e_{\text{true}} &= r - y_{\text{true}} = r(1 - G_{\text{true}}K) - d = r(1 - K) - d \end{align*}

• The discrepancy from model uncertainty is

$$y_{\text{model}} - y_{\text{true}} = Kr$$

• The open loop system cannot change the poles of the system (Stability)

• Predicting the model incorrectly has a critical impact on the speed (Uncertainty, Low Robustness)

• The disturbance directly affects the system (Disturbance rejection)

### 2.3.2. Closed loop

• In the closed loop model, $y$ is

$$y_{\text{model}} = {KG_{\text{model}}r + d \over 1 + KG_{\text{model}}} = {2Kr + d \over 1 + 2K} = \frac{2K}{1+2K}r + \frac{1}{1+2K}d$$

• In order for $y$ to approach the reference input of $100$, a larger $K$ is better.

$$K = 100$$

• Then the actual $y$ is

$$y_{\text{true}} = {KG_{\text{true}}r + d \over 1 + KG_{\text{true}}} = {Kr + d \over 1 + K}= \frac{K}{1+K}r + \frac{1}{1+K}d$$

• The model and true errors are

\begin{align*} e_{\text{model}} &= r - y_{\text{model}} = {r - d \over 1 + KG_{\text{model}}} = {r - d \over 1 + 2K} = \frac{1}{1+2K}r - \frac{1}{1+2K}d \\\\ e_{\text{true}} &= r - y_{\text{true}} = {r - d \over 1 + KG_{\text{true}}} = {r - d \over 1 + K}= \frac{1}{1+K}r - \frac{1}{1+K}d \end{align*}

• The discrepancy from model uncertainty (assume $d=0$) is

$$y_{\text{model}} - y_{\text{true}} = \frac{K}{2K^2+3K+1}r \;\approx \;0$$
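The open- and closed-loop numbers above can be reproduced directly; a Python sketch using the values from the text ($G_{\text{model}}=2$, $G_{\text{true}}=1$, $r=100$, $d=0$):

```python
# Open vs. closed loop under model mismatch (values from the text).
r, d = 100.0, 0.0
G_model, G_true = 2.0, 1.0

# Open loop: K chosen so the *model* tracks exactly (G_model * K = 1)
K_open = 1 / G_model                   # = 0.5
y_open_true = G_true * K_open * r + d

# Closed loop: just pick a large gain
K_cl = 100.0
y_cl_model = (K_cl * G_model * r + d) / (1 + K_cl * G_model)
y_cl_true  = (K_cl * G_true  * r + d) / (1 + K_cl * G_true)

print(y_open_true)                    # 50.0 -- half the desired speed
print(y_cl_model, y_cl_true)          # both close to 100 despite the wrong model
```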

• The closed loop system can change the poles of the system (Stability)

• Model uncertainty has a reduced impact on the speed (Uncertainty, Robustness)

• The disturbance has little effect on the system (Disturbance rejection)

# 3. PID Control

For the car model

• velocity $y$
• input force $u$

$$\dot{y} = \frac{c}{m}u$$

In a block diagram

In the Laplace domain, $G(s) = \dfrac{Y(s)}{U(s)} = \dfrac{c}{ms}$

We want to achieve

$$y \rightarrow r \quad \text{as} \quad t \rightarrow \infty \,\,(e=r-y \rightarrow 0)$$

## 3.1. P Control

The proportional term produces an output value that is proportional to the current error value. The proportional response can be adjusted by multiplying the error by a constant $k_P$, called the proportional gain constant.

$$u = k_P e$$

• Small error yields small control signals

• Nice and smooth

• So-called proportional regulation (P regulator)

A high proportional gain results in a large change in the output for a given change in the error. If the proportional gain is too high, the system can become unstable. In contrast, a small gain results in a small output response to a large input error, and a less responsive or less sensitive controller. Industrial practice indicates that the proportional term should contribute the bulk of the output change.

In [1]:
c = 1;
m = 1;

G = tf(c/m,[1 0]);

k = 5;
C = k;

Gcl = feedback(C*G,1,-1);

x0 = 0;
t = linspace(0,5,100);
r = 70*ones(size(t));   % reference

[y,tout] = lsim(Gcl,r,t,x0);

plot(tout,y, 'linewidth', 2), hold on
plot(tout,70*ones(size(tout)),'--k'), hold off
xlabel('t'), ylim([0,75])


What if the true system differs from the model?

Suppose the "real" model is augmented to include a wind resistance term:

\begin{align*} \dot{y} &= \frac{c}{m}u - \gamma y \\\\ u & = k_P e = k_P (r - y) \end{align*}

At steady state

\begin{align*} \dot{y} = 0 &= \frac{c}{m}u-\gamma y = \frac{c}{m}k_P(r-y) -\gamma y \\\\ \implies y &= \frac{ck_P}{ck_P+m\gamma}r \end{align*}

• The steady-state error is the difference between the desired final output and the actual one. Because a non-zero error is required to generate the proportional control action, a proportional controller generally operates with a steady-state error.
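The steady-state offset can be checked with a simple forward-Euler simulation of $\dot{y} = \frac{c}{m}k_P(r-y) - \gamma y$; this is a sketch, not the notebook's `lsim` call, with the same parameter values assumed ($c=m=\gamma=1$, $k_P=5$, $r=70$):

```python
# Euler simulation of the P-controlled car with wind resistance:
#   dy/dt = (c/m)*kP*(r - y) - gamma*y
c, m, gamma = 1.0, 1.0, 1.0
kP, r = 5.0, 70.0

dt, y = 1e-3, 0.0
for _ in range(int(5 / dt)):               # simulate 5 seconds
    y += dt * ((c / m) * kP * (r - y) - gamma * y)

y_ss = c * kP / (c * kP + m * gamma) * r   # predicted steady state = 5/6 * 70
print(y, y_ss)                             # both well below r = 70
```

The simulated trajectory settles at the predicted $y_{ss} \approx 58.33$, not at the reference $70$: the steady-state error of pure P control.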
In [2]:
gamma = 1;
Gtr = tf(c/m,[1 gamma]);
C = k;

Gcl = feedback(C*Gtr,1,-1);

x0 = 0;
t = linspace(0,5,100);
r = 70*ones(size(t));

[y,tout] = lsim(Gcl,r,t,x0);

plot(tout,y, 'linewidth', 2), hold on
plot(tout,70*ones(size(tout)),'--k'), hold off
xlabel('t'), ylim([0,75])


## 3.2. PI Control

• Stability
• Tracking
• Robustness

The contribution from the integral term is proportional to both the magnitude of the error and its duration. The integral term sums the instantaneous error over time, giving the accumulated offset that should have been corrected previously. The accumulated error is then multiplied by the integral gain ($k_I$) and added to the controller output.

The integral term accelerates the movement of the process towards setpoint and eliminates the residual steady-state error that occurs with a pure proportional controller. However, since the integral term responds to accumulated errors from the past, it can cause the present value to overshoot the setpoint value.

$$u(t) = k_P \, e(t) + k_I \int_0^t e(\tau)d\tau$$
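As a sketch of how the PI law is computed in discrete time (forward Euler on the car-with-drag model; the gains mirror the cell below, and the loop structure is an assumption for illustration):

```python
# Discrete PI loop on dy/dt = (c/m)*u - gamma*y, with c = m = gamma = 1.
c, m, gamma = 1.0, 1.0, 1.0
kP, kI, r = 5.0, 5.0, 70.0

dt, y, integral = 1e-3, 0.0, 0.0
for _ in range(int(20 / dt)):
    e = r - y
    integral += e * dt                 # accumulated error
    u = kP * e + kI * integral         # PI control law
    y += dt * ((c / m) * u - gamma * y)

print(y)   # approaches r = 70: the integral term removes the P-only offset
```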

In [3]:
Gtr = tf(c/m,[1 gamma]);

kP = 5;
kI = 5;
C = tf([kP kI],[1 0]);

Gcl = feedback(C*Gtr,1,-1);

x0 = 0;
t = linspace(0,5,100);
r = 70*ones(size(t));

[y,tout] = lsim(Gcl,r,t,x0);

plot(tout,y, 'linewidth', 2), hold on
plot(tout,70*ones(size(tout)),'--k'), hold off
xlabel('t'), ylim([0,75])


## 3.3. PID Control

The derivative of the process error is calculated by determining the slope of the error over time and multiplying this rate of change by the derivative gain $k_D$.

Derivative action predicts system behavior and thus improves the settling time and stability of the system. Practical implementations of the D term include additional low-pass filtering of the derivative to limit high-frequency gain and noise. Even so, derivative action is seldom used in practice because of its variable impact on system stability in real-world applications.

$$u(t) = k_P \, e(t) + k_I \int_0^t e(\tau)d\tau + k_D \frac{d e(t)}{d t}$$
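A minimal discrete-time PID sketch with a backward-difference derivative; the `PID` class and gain values are hypothetical, chosen to match the car-with-drag example:

```python
# Minimal discrete PID: u = kP*e + kI*sum(e)*dt + kD*(e - e_prev)/dt
class PID:
    def __init__(self, kP, kI, kD, dt):
        self.kP, self.kI, self.kD, self.dt = kP, kI, kD, dt
        self.integral = 0.0
        self.prev_e = 0.0

    def update(self, e):
        self.integral += e * self.dt
        derivative = (e - self.prev_e) / self.dt
        self.prev_e = e
        return self.kP * e + self.kI * self.integral + self.kD * derivative

# Drive the car-with-drag model dy/dt = u - y (c = m = gamma = 1) to r = 70.
dt, y, r = 1e-3, 0.0, 70.0
ctrl = PID(kP=1.0, kI=2.0, kD=0.1, dt=dt)
for _ in range(int(20 / dt)):
    y += dt * (ctrl.update(r - y) - y)

print(y)   # integral action again drives y toward r
```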

PID: by far the most used low-level controller

• P: contributes to stability, medium-rate responsiveness

• I: tracking and disturbance rejection, slow-rate responsive, may cause oscillations

• D: fast-rate responsiveness, sensitive to noise

Feedback has a remarkable ability to fight uncertainty in model parameters !

The goal of this problem is to show how each of the gains $k_P$, $k_I$, and $k_D$ contributes to obtaining the common goals of:

• Fast rise time

• Minimal overshoot

$k_P = 1,k_I = 0, k_D = 0$

In [4]:
Gtr = tf(c/m,[1 gamma]);

kP = 1;
kI = 0;
kD = 0;
C = tf([kD kP kI],[1 0]);

Gcl = feedback(C*Gtr,1,-1);

x0 = 0;
t = linspace(0,5,100);
r = 70*ones(size(t));

[y,tout] = lsim(Gcl,r,t,x0);

plot(tout,y, 'linewidth', 2), hold on
plot(tout,70*ones(size(tout)),'--k'), hold off
xlabel('t'), ylim([0,75])


$k_P = 1, k_I = 1, k_D = 0$

In [5]:
Gtr = tf(c/m,[1 gamma]);

kP = 1;
kI = 1;
kD = 0;
C = tf([kD kP kI],[1 0]);

Gcl = feedback(C*Gtr,1,-1);

x0 = 0;
t = linspace(0,5,100);
r = 70*ones(size(t));

[y,tout] = lsim(Gcl,r,t,x0);

plot(tout,y, 'linewidth', 2), hold on
plot(tout,70*ones(size(tout)),'--k'), hold off
xlabel('t'), ylim([0,75])


$k_P = 1, k_I = 10, k_D = 0$

In [6]:
Gtr = tf(c/m,[1 gamma]);

kP = 1;
kI = 10;
kD = 0;
C = tf([kD kP kI],[1 0]);

Gcl = feedback(C*Gtr,1,-1);

x0 = 0;
t = linspace(0,5,100);
r = 70*ones(size(t));

[y,tout] = lsim(Gcl,r,t,x0);

plot(tout,y, 'linewidth', 2), hold on
plot(tout,70*ones(size(tout)),'--k'), hold off
xlabel('t'), ylim([0,75])


$k_P = 1, k_I = 2, k_D = 0.1$

In [7]:
Gtr = tf(c/m,[1 gamma]);

kP = 1;
kI = 2;
kD = 0.1;
C = tf([kD kP kI],[1 0]);

Gcl = feedback(C*Gtr,1,-1);

x0 = 0;
t = linspace(0,5,100);
r = 70*ones(size(t));

[y,tout] = lsim(Gcl,r,t,x0);

plot(tout,y, 'linewidth', 2), hold on
plot(tout,70*ones(size(tout)),'--k'), hold off
xlabel('t'), ylim([0,75])


Another Example

$$G(s) = \frac{1}{s^2 + 10s + 20}$$
In [8]:
s = tf('s');
G = 1/(s^2 + 10*s + 20);

x0 = 0;
t = linspace(0,2,100);
r = 1*ones(size(t));

[y,tout] = lsim(G,r,t,x0);

plot(tout,y, 'linewidth', 2), hold on
plot(tout,1*ones(size(tout)),'--k'), hold off
xlabel('t'), ylim([0,2])


In [9]:
kP = 300;
C = pid(kP,0,0);
Gcl = feedback(C*G,1,-1);

x0 = 0;
t = linspace(0,2,100);
r = 1*ones(size(t));

[y,tout] = lsim(Gcl,r,t,x0);

plot(tout,y, 'linewidth', 2), hold on
plot(tout,1*ones(size(tout)),'--k'), hold off
xlabel('t'), ylim([0,2])


In [10]:
kP = 300;
kD = 10;
C = pid(kP,0,kD);
Gcl = feedback(C*G,1,-1);

x0 = 0;
t = linspace(0,2,100);
r = 1*ones(size(t));

[y,tout] = lsim(Gcl,r,t,x0);

plot(tout,y, 'linewidth', 2), hold on
plot(tout,1*ones(size(tout)),'--k'), hold off
xlabel('t'), ylim([0,2])


In [11]:
kP = 30;
kI = 70;
C = pid(kP,kI);
Gcl = feedback(C*G,1);

x0 = 0;
t = linspace(0,2,100);
r = 1*ones(size(t));

[y,tout] = lsim(Gcl,r,t,x0);

plot(tout,y, 'linewidth', 2), hold on
plot(tout,1*ones(size(tout)),'--k'), hold off
xlabel('t'), ylim([0,2])


In [12]:
kP = 350;
kI = 300;
kD = 50;
C = pid(kP,kI,kD);
Gcl = feedback(C*G,1,-1);

x0 = 0;
t = linspace(0,2,100);
r = 1*ones(size(t));

[y,tout] = lsim(Gcl,r,t,x0);

plot(tout,y, 'linewidth', 2), hold on
plot(tout,1*ones(size(tout)),'--k'), hold off
xlabel('t'), ylim([0,2])


## 3.4. General Tips for Designing a PID Controller

When you are designing a PID controller for a given system, follow the steps shown below to obtain a desired response.

• Obtain an open-loop response and determine what needs to be improved

• Add a proportional control to improve the rise time

• Add a derivative control to reduce the overshoot

• Adjust each of the gains $k_P$, $k_I$, and $k_D$ until you obtain a desired overall response.

Lastly, please keep in mind that you do not need to implement all three controllers (proportional, derivative, and integral) into a single system, if not necessary. For example, if a PI controller meets the given requirements (like the above example), then you don't need to implement a derivative controller on the system. Keep the controller as simple as possible.

# 4. MKC System as Closed Loop

• mass

$$m\ddot{y} = u$$

• mass and spring