Control:
Output Feedback Control and PID Control
Table of Contents
References
- Control Bootcamp by Prof. Steve Brunton
- Control of Mobile Robots by Prof. Magnus Egerstedt
1. Feedback Control
Control theory is concerned with influencing the behavior of dynamical systems through external inputs. In this chapter, we begin with the fundamental idea of using feedback to control a system and achieve desired behavior, such as tracking a reference input.
We start by considering a simple plant and gradually build up to closed-loop feedback control, comparing its robustness to that of open-loop strategies.
Consider a plant represented by a scalar transfer function $G$. The plant receives an input $u(t)$ and produces an output $y(t)$. The control objective is for the output $y(t)$ to track a given reference input $r(t)$ as closely as possible.
1.1. Open-Loop Control
Basic Idea
In the simplest setting, we assume perfect knowledge of the plant model. Suppose the plant is a constant gain $G > 0$. Then a straightforward way to achieve perfect tracking is to pre-compensate for this gain:
$$ C = \frac{1}{G} \quad \Rightarrow \quad y(t) = G C r(t) = r(t) $$
This approach, known as open-loop control, directly adjusts the input without considering the actual system output.
Limitations
Despite its simplicity, open-loop control suffers from two major drawbacks:
Model Uncertainty: The actual plant, denoted $G'$, may differ from the nominal model $G$ used in the controller design.
Disturbances: The system may be affected by unknown external inputs or noise, denoted by $d(t)$.
Analysis Under Uncertainty and Disturbance
Suppose the plant is actually $G'$ and a disturbance $d(t)$ enters the system. The output under open-loop control becomes:
$$ y(t) = G' C r(t) + d(t) = \frac{G'}{G} r(t) + d(t) $$
If $G' \neq G$, the output deviates from $r(t)$, and any disturbance $d(t)$ directly passes to the output without attenuation.
Open-loop control is sensitive to model inaccuracies and external disturbances.
1.2. Closed-Loop Control (Negative Feedback Control)
Concept
To overcome the limitations of open-loop control, we introduce feedback. By measuring the system output and comparing it to the reference, we can correct deviations in real time.
Define the tracking error as:
$$ e(t) = r(t) - y(t) $$
Let the controller be a constant gain $K$. The control input is then:
$$ u(t) = K e(t) = K (r(t) - y(t)) $$
The plant responds as:
$$ y(t) = G u(t) = G K (r(t) - y(t)) $$
Solving for $y(t)$:
$$ (1 + G K) y(t) = G K r(t) \quad \Rightarrow \quad y(t) = \frac{G K}{1 + G K} r(t) $$
And the error is:
$$ e(t) = \frac{1}{1 + G K} r(t) $$
As the gain $K$ increases:
$$ \lim_{K \to \infty} e(t) = 0, \quad \lim_{K \to \infty} y(t) = r(t) $$
Thus, feedback can drive the error to zero even when the system model is imperfect.
Robustness to Uncertainty and Disturbance
Let the actual plant be $G'$, and let an additive disturbance $d(t)$ affect the output. The system equations become:
$$ y(t) = G' K (r(t) - y(t)) + d(t) \quad \Rightarrow \quad (1 + G' K) y(t) = G' K r(t) + d(t) $$
Solving:
$$ y(t) = \frac{G' K}{1 + G' K} r(t) + \frac{1}{1 + G' K} d(t) $$
$$ e(t) = \frac{1}{1 + G' K} r(t) - \frac{1}{1 + G' K} d(t) $$
As $K \to \infty$:
Reference tracking improves: $y(t) \to r(t)$
Disturbance rejection improves: the disturbance term $\frac{1}{1 + G' K} d(t) \to 0$, so $e(t) \to 0$ for any bounded $d(t)$
Key Benefit:
Feedback mitigates the effects of both model mismatch and disturbance.
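This claim can be checked with a quick numerical sketch in MATLAB (the plant gains, disturbance, and controller gains below are chosen only for illustration):
G_nom = 2;                          % nominal plant gain used for the design
G_act = 1.5;                        % actual plant gain (model mismatch)
d = 10;                             % constant disturbance
r = 100;                            % reference
y_ol = G_act/G_nom*r + d            % open loop (C = 1/G_nom): y = 85, far from r
K = [1 10 100 1000];                % closed-loop gains
y_cl = (G_act*K*r + d)./(1 + G_act*K)   % approaches r = 100 as K grows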
1.3. Closed-Loop Transfer Function
General Formulation
Feedback modifies the system's transfer characteristics. Consider:
- Controller: $C(s) = K$
- Plant: $G(s)$
- Input: $r(s)$
- Output: $y(s)$
- Error: $e(s) = r(s) - y(s)$
Then,
$$ y(s) = G(s) C(s) e(s) = G K (r(s) - y(s)) $$
Solving:
$$ (1 + G K) y(s) = G K r(s) \quad \Rightarrow \quad H(s) = \frac{y(s)}{r(s)} = \frac{G K}{1 + G K} $$
This is the closed-loop transfer function, which governs the input-output behavior of the feedback system.
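In MATLAB, this closed-loop transfer function is formed with the feedback command, which the later examples in these notes also use; a minimal sketch with an arbitrary plant and gain:
G = tf(1,[1 1]);              % example plant G(s) = 1/(s+1)
K = 5;                        % proportional controller C(s) = K
H = feedback(K*G, 1, -1)      % H(s) = GK/(1+GK) = 5/(s+6)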
The controller can also be placed in the feedback path.
This is a slightly different structure from the standard configuration and must be analyzed accordingly.
From the block diagram, we observe that the feedback path applies gain before subtraction:
$$ E(s) = R(s) - K Y(s) $$
The output of the plant is given by:
$$ Y(s) = G(s) E(s) $$
Substitute $E(s)$:
$$ Y(s) = G(s) \left(R(s) - K Y(s)\right) $$
$$ Y(s) = G R - G K Y \quad \Rightarrow \quad (1 + G K) Y(s) = G R(s) $$
Dividing both sides by $R(s)$:
$$ H(s) = \frac{Y(s)}{R(s)} = \frac{G(s)}{1 + G(s) K} $$
Remarks
In both configurations the denominator is $1 + G(s)K$, so the closed-loop poles are the same; the numerator, however, is $G(s)$ rather than $G(s)K$, because the gain now acts in the feedback path instead of the forward path.
The denominator $1 + G(s)K$ determines the closed-loop poles, and thus directly affects system stability and dynamics.
Example 1: Stable Plant
$$ \begin{align*} G(s) &= \frac{1}{s + 1}\\ H(s) &= \frac{KG}{1+KG} = \frac{\frac{K}{s+1}}{1+\frac{K}{s+1}} = \frac{K}{s+1+K} \end{align*} $$
- Original pole at $s = -1$
- New pole at $s = -(1 + K)$ (moves further left with increasing $K$)
- More negative pole $\rightarrow$ faster and more stable response
Example 2: Unstable Plant
$$ \begin{align*} G(s) &= \frac{1}{s - 1} \\ H(s) &= \frac{KG}{1+KG} = \frac{\frac{K}{s-1}}{1+\frac{K}{s-1}} = \frac{K}{s-1+K} \end{align*} $$
- Original pole at $s = +1$ (unstable)
- New pole at $s = 1 - K$
- If $K > 1$, the closed-loop pole $1 - K$ is negative $\rightarrow$ the closed-loop system is stabilized
Feedback can stabilize an open-loop unstable system if the gain is sufficiently high.
Feedback alters the location of system poles, thus affecting stability and transient response.
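Both pole locations can be verified numerically; a small sketch with an illustrative gain:
K = 5;
G_stable = tf(1,[1 1]);               % open-loop pole at s = -1
pole(feedback(K*G_stable,1))          % closed-loop pole at -(1+K) = -6
G_unstable = tf(1,[1 -1]);            % open-loop pole at s = +1 (unstable)
pole(feedback(K*G_unstable,1))        % closed-loop pole at 1-K = -4 (stabilized)
pole(feedback(0.5*G_unstable,1))      % K = 0.5 < 1: pole at +0.5, still unstable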
1.4. Feedback as an Autoregressive System
Feedback introduces internal memory by recursively using past output values. Consider:
$$ G(s) = \frac{1}{s}, \quad C = 1 \quad \Rightarrow \quad H(s) = \frac{GC}{1 + GC} = \frac{1}{1 + s} $$
Expanding the closed loop as a geometric series in the loop gain $GC = 1/s$:
$$ H(s) = \frac{1/s}{1 + 1/s} = \frac{1}{s} - \frac{1}{s^2} + \frac{1}{s^3} - \cdots $$
Each term corresponds to one more pass around the loop, that is, one more integration of past signals. This infinite series indicates autoregressive behavior: the output depends on its own past values.
Conclusion: Feedback systems inherently possess memory, enabling long-term correction and robustness.
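The series can be checked in the frequency domain: above 1 rad/s, where the expansion in $1/s$ converges, a truncated sum of repeated integrators already matches $H(s)$ closely. A minimal sketch:
s = tf('s');
H = 1/(s + 1);                % closed-loop system
Hn = 0;                       % partial sum 1/s - 1/s^2 + 1/s^3 - ...
for n = 1:5
    Hn = Hn + (-1)^(n+1)/s^n;
end
bode(H, Hn, {2, 100})         % the two responses overlap at high frequencies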
2. Open-Loop vs. Closed-Loop Control
To illustrate the fundamental difference between open-loop and closed-loop control, consider a simple real-world example: a car.
- Input: Force applied by the engine, $u(t)$
- Output: Velocity of the car, $y(t)$
- System: Transfer function of the car’s dynamics, $G(s)$
Suppose a disturbance, such as wind or slope, adds a component $d(t)$ to the velocity.
The goal is for the car’s velocity $y(t)$ to track a desired reference velocity $r(t)$.
2.1. Open-Loop Control
In an open-loop system, the input is generated based solely on the reference signal.
The control law does not account for the actual output.
The driver applies a force proportional to the reference:
$$ u = K r $$
Then the system output becomes:
$$ y = G u + d = G K r + d $$
The tracking error is:
$$ \begin{aligned} e &= r - y = r - G K r - d \\ &= r(1 - G K) - d \end{aligned} $$
Observations
- If the gain $K$ is chosen to match $G^{-1}$, perfect tracking is achieved only under ideal conditions (no disturbance, perfect model).
- Model mismatch or disturbance directly degrades performance.
2.2. Closed-Loop Control
In closed-loop control, the control input is adjusted based on the tracking error
$$e = r - y$$
This mimics how humans drive: they adjust acceleration or braking based on the difference between the desired and actual speed.
Let the controller apply:
$$ u = K(r - y) $$
Then the system becomes:
$$ \begin{align*} y &= GK(r-y) + d \\ &= KGr-KGy + d \\ (1+KG)y &= KGr + d \\ \therefore y &= \frac{KGr + d}{1 + KG} = \frac{KG}{1+KG}r + \frac{1}{1+KG}d \end{align*} $$
The corresponding tracking error is:
$$e = r - y = r - {KGr+d \over 1+KG} = {r - d \over 1 + KG} = \frac{1}{1 + KG}r - \frac{1}{1+KG}d$$
Benefits of Feedback
- Stability enhancement
- Robustness to model uncertainty
- Disturbance rejection
As $K \to \infty$:
- $y \to r$
- $e \to 0$
- $d$ is suppressed
2.3. Model Uncertainty and Disturbance
Let’s assume the nominal model is $G_{\text{model}} = 2$, but the true plant is $G_{\text{true}} = 1$.
We compare open-loop and closed-loop performance under this discrepancy, with the desired velocity set to $r = 100$ km/h.
2.3.1. Open-Loop Control
We design the controller based on the model:
$$ y_{\text{model}} = G_{\text{model}} K r + d = 2 K r + d $$
To make $y_{\text{model}} = 100$ (taking $d = 0$), set:
$$ K = 0.5 $$
Then, the actual output becomes:
$$ y_{\text{true}} = G_{\text{true}} K r + d = 0.5 r + d $$
The tracking errors are:
$$ \begin{aligned} e_{\text{model}} &= r(1 - G_{\text{model}} K) - d = r(1 - 2 \cdot 0.5) - d = -d \\ e_{\text{true}} &= r(1 - G_{\text{true}} K) - d = r(1 - 0.5) - d = 0.5 r - d \end{aligned} $$
Model discrepancy results in a mismatch:
$$ y_{\text{model}} - y_{\text{true}} = K r = 50 \ \text{km/h} $$
Conclusion: Open-loop control is highly sensitive to model error and offers no attenuation of disturbances.
2.3.2. Closed-Loop Control
Design based on the model:
$$ y_{\text{model}} = \frac{K G_{\text{model}} r + d}{1 + K G_{\text{model}}} = \frac{2 K r + d}{1 + 2 K} $$
Choose a large gain, say:
$$ K = 100 $$
Then:
$$ y_{\text{model}} = \frac{200 r + d}{201}, \quad y_{\text{true}} = \frac{100 r + d}{101} $$
The tracking errors become:
$$ \begin{aligned} e_{\text{model}} &= r - y_{\text{model}} = \frac{1}{1 + 2K} r - \frac{1}{1 + 2K} d \\ e_{\text{true}} &= r - y_{\text{true}} = \frac{1}{1 + K} r - \frac{1}{1 + K} d \end{aligned} $$
For $d = 0$, the discrepancy is:
$$ y_{\text{model}} - y_{\text{true}} = \frac{200 r}{201} - \frac{100 r}{101} = \frac{K}{2K^2 + 3K + 1} r = \frac{100\,r}{20301} \approx 0.5 \ \text{km/h} \quad (r = 100) $$
Conclusion: With high-gain feedback, closed-loop performance is robust to model error and attenuates disturbances.
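The numbers in this example can be reproduced with a short calculation (a minimal sketch using the values above):
r = 100;                      % desired velocity [km/h]
G_model = 2; G_true = 1;      % nominal vs. actual plant gain
d = 0;                        % no disturbance in this comparison
K_ol = 0.5;                   % open-loop gain designed from the model
y_ol = G_true*K_ol*r + d      % 50 km/h: a 50 km/h tracking error
K_cl = 100;                   % high-gain feedback
y_cl = (G_true*K_cl*r + d)/(1 + G_true*K_cl)   % about 99 km/h: roughly 1 km/h error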
3. PID Control
To regulate a car’s velocity using force input, consider the following simplified plant model:
- Output: velocity $y(t)$
- Input: force $u(t)$
The car's dynamics are governed by:
$$ \dot{y}(t) = \frac{c}{m} u(t) $$
where:
- $m$: mass of the car
- $c$: control effectiveness constant
Taking the Laplace Transform (assuming zero initial conditions):
$$ Y(s) = \frac{c}{m s} U(s) $$
The control objective is to design a control law $u(t)$ such that:
$$ \lim_{t \to \infty} y(t) = r(t), \quad \text{or} \quad \lim_{t \to \infty} e(t) = 0 \quad \text{where} \quad e(t) = r(t) - y(t) $$
3.1. Proportional (P) Control
In proportional control, the control signal is proportional to the instantaneous error:
$$ u(t) = k_P \, e(t) $$
- $k_P$: proportional gain
- A small error results in a small control effort
- Provides simple, smooth, and immediate correction
Trade-offs
- If $k_P$ is too small, the system responds sluggishly
- If $k_P$ is too large, the system may overshoot or become unstable
- Proportional control alone cannot eliminate steady-state error in the presence of persistent disturbances
c = 1;                          % control effectiveness
m = 1;                          % mass
G = tf(c/m,[1 0]);              % nominal plant: c/(m s)
k = 5;                          % proportional gain
C = k;
Gcl = feedback(C*G,1,-1);       % closed loop: kG/(1+kG)
x0 = 0;
t = linspace(0,5,100);
r = 70*ones(size(t)); % reference
[y,tout] = lsim(Gcl,r,t,x0);
plot(tout,y, 'linewidth', 2), hold on
plot(tout,70*ones(size(tout)),'--k'), hold off
xlabel('t'), ylim([0,75])
Example: P Control with Model Mismatch
Suppose the true system includes aerodynamic drag (e.g., wind resistance):
$$ \dot{y} = \frac{c}{m} u - \gamma y, \quad \text{with} \quad u = k_P (r - y) $$
At steady state ($\dot{y} = 0$):
$$ \frac{c}{m} k_P (r - y) = \gamma y \quad \Rightarrow \quad y = \frac{c k_P}{c k_P + m \gamma} r $$
The steady-state error is:
$$ e_{\text{ss}} = r - y = \frac{m \gamma}{c k_P + m \gamma} r $$
Conclusion: P control yields a non-zero steady-state error in the presence of model uncertainty or constant disturbance.
gamma = 1;                      % drag coefficient (unmodeled dynamics)
Gtr = tf(c/m,[1 gamma]);        % true plant including aerodynamic drag
C = k;
Gcl = feedback(C*Gtr,1,-1);
x0 = 0;
t = linspace(0,5,100);
r = 70*ones(size(t));
[y,tout] = lsim(Gcl,r,t,x0);
% predicted steady state: c*k/(c*k + m*gamma)*70 = 5/6*70, about 58.3 (nonzero error)
plot(tout,y, 'linewidth', 2), hold on
plot(tout,70*ones(size(tout)),'--k'), hold off
xlabel('t'), ylim([0,75])
3.2. Proportional-Integral (PI) Control
To eliminate steady-state error, we introduce an integral term:
$$ u(t) = k_P \, e(t) + k_I \int_0^t e(\tau) d\tau $$
- $k_I$: integral gain
- Integrates error over time, eliminating persistent deviations
Benefits
- Guarantees zero steady-state error for constant references and disturbances
- Improves tracking accuracy and disturbance rejection
Caveats
- Integral action may cause overshoot and oscillations
- Introduces a pole at the origin, potentially affecting stability margins
Gtr = tf(c/m,[1 gamma]);
kP = 5;
kI = 5;
C = tf([kP kI],[1 0]);          % PI controller: (kP*s + kI)/s
Gcl = feedback(C*Gtr,1,-1);
x0 = 0;
t = linspace(0,5,100);
r = 70*ones(size(t));
[y,tout] = lsim(Gcl,r,t,x0);
plot(tout,y, 'linewidth', 2), hold on
plot(tout,70*ones(size(tout)),'--k'), hold off
xlabel('t'), ylim([0,75])
3.3. Proportional-Integral-Derivative (PID) Control
The full PID controller includes a derivative term:
$$ u(t) = k_P \, e(t) + k_I \int_0^t e(\tau) d\tau + k_D \frac{d e(t)}{dt} $$
- Reacts to the rate of error change, improving damping and responsiveness
Practical Considerations
- The D-term is rarely used alone and is often filtered to suppress high-frequency noise (a filtered form is sketched after this list)
- Most industrial applications use P or PI controllers due to their effectiveness and simplicity
- Full PID is used when precise tuning of transient behavior is required
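As a side note on filtering, MATLAB's pid object accepts a derivative filter time constant as a fourth argument; a minimal sketch (the gains and filter constant are illustrative only):
kP = 1; kI = 2; kD = 0.1;
Tf = 0.01;                    % first-order derivative filter time constant
C_filt = pid(kP, kI, kD, Tf)  % kP + kI/s + kD*s/(Tf*s + 1)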
$k_P = 1, k_I = 0, k_D = 0$
Gtr = tf(c/m,[1 gamma]);
kP = 1;
kI = 0;
kD = 0;
C = tf([kD kP kI],[1 0]);       % PID controller: (kD*s^2 + kP*s + kI)/s
Gcl = feedback(C*Gtr,1,-1);
x0 = 0;
t = linspace(0,5,100);
r = 70*ones(size(t));
[y,tout] = lsim(Gcl,r,t,x0);
plot(tout,y, 'linewidth', 2), hold on
plot(tout,70*ones(size(tout)),'--k'), hold off
xlabel('t'), ylim([0,75])
$k_P = 1, k_I = 1, k_D = 0$
Gtr = tf(c/m,[1 gamma]);
kP = 1;
kI = 1;
kD = 0;
C = tf([kD kP kI],[1 0]);
Gcl = feedback(C*Gtr,1,-1);
x0 = 0;
t = linspace(0,5,100);
r = 70*ones(size(t));
[y,tout] = lsim(Gcl,r,t,x0);
plot(tout,y, 'linewidth', 2), hold on
plot(tout,70*ones(size(tout)),'--k'), hold off
xlabel('t'), ylim([0,75])
$k_P = 1, k_I = 10, k_D = 0$
Gtr = tf(c/m,[1 gamma]);
kP = 1;
kI = 10;
kD = 0;
C = tf([kD kP kI],[1 0]);
Gcl = feedback(C*Gtr,1,-1);
x0 = 0;
t = linspace(0,5,100);
r = 70*ones(size(t));
[y,tout] = lsim(Gcl,r,t,x0);
plot(tout,y, 'linewidth', 2), hold on
plot(tout,70*ones(size(tout)),'--k'), hold off
xlabel('t'), ylim([0,75])
$k_P = 1, k_I = 2, k_D = 0.1$
Gtr = tf(c/m,[1 gamma]);
kP = 1;
kI = 2;
kD = 0.1;
C = tf([kD kP kI],[1 0]);
Gcl = feedback(C*Gtr,1,-1);
x0 = 0;
t = linspace(0,5,100);
r = 70*ones(size(t));
[y,tout] = lsim(Gcl,r,t,x0);
plot(tout,y, 'linewidth', 2), hold on
plot(tout,70*ones(size(tout)),'--k'), hold off
xlabel('t'), ylim([0,75])
Another Example
$$G(s) = \frac{1}{s^2 + 10s + 20}$$
s = tf('s');
G = 1/(s^2 + 10*s + 20);
x0 = 0;
t = linspace(0,2,100);
r = 1*ones(size(t));
[y,tout] = lsim(G,r,t,x0);
plot(tout,y, 'linewidth', 2), hold on
plot(tout,1*ones(size(tout)),'--k'), hold off
xlabel('t'), ylim([0,2])
kP = 300;
C = pid(kP,0,0);
Gcl = feedback(C*G,1,-1);
x0 = 0;
t = linspace(0,2,100);
r = 1*ones(size(t));
[y,tout] = lsim(Gcl,r,t,x0);
plot(tout,y, 'linewidth', 2), hold on
plot(tout,1*ones(size(tout)),'--k'), hold off
xlabel('t'), ylim([0,2])
kP = 300;
kD = 10;
C = pid(kP,0,kD);
Gcl = feedback(C*G,1,-1);
x0 = 0;
t = linspace(0,2,100);
r = 1*ones(size(t));
[y,tout] = lsim(Gcl,r,t,x0);
plot(tout,y, 'linewidth', 2), hold on
plot(tout,1*ones(size(tout)),'--k'), hold off
xlabel('t'), ylim([0,2])
kP = 30;
kI = 70;
C = pid(kP,kI);
Gcl = feedback(C*G,1);
x0 = 0;
t = linspace(0,2,100);
r = 1*ones(size(t));
[y,tout] = lsim(Gcl,r,t,x0);
plot(tout,y, 'linewidth', 2), hold on
plot(tout,1*ones(size(tout)),'--k'), hold off
xlabel('t'), ylim([0,2])
kP = 350;
kI = 300;
kD = 50;
C = pid(kP,kI,kD);
Gcl = feedback(C*G,1,-1);
x0 = 0;
t = linspace(0,2,100);
r = 1*ones(size(t));
[y,tout] = lsim(Gcl,r,t,x0);
plot(tout,y, 'linewidth', 2), hold on
plot(tout,1*ones(size(tout)),'--k'), hold off
xlabel('t'), ylim([0,2])
3.4. Guidelines for PID Controller Design
When designing a PID controller, the following process is recommended:
(1) Analyze the open-loop response of the system
(2) Introduce proportional control ($k_P$) to improve rise time
(3) Add derivative control ($k_D$) to reduce overshoot and improve damping
(4) Add integral control ($k_I$) to eliminate steady-state error
(5) Tune $k_P$, $k_I$, and $k_D$ iteratively to balance performance criteria:
- Rise time
- Overshoot
- Settling time
- Steady-state error
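Following these guidelines, stepinfo can be used to quantify each criterion while iterating on the gains; a minimal sketch using the second example plant above (the two gain sets are illustrative):
s = tf('s');
G = 1/(s^2 + 10*s + 20);
C1 = pid(300, 0, 0);                  % P only: fast, but overshoot and steady-state error
C2 = pid(350, 300, 50);               % full PID: fast, damped, zero steady-state error
S1 = stepinfo(feedback(C1*G,1))       % RiseTime, Overshoot, SettlingTime, ...
S2 = stepinfo(feedback(C2*G,1))
dcgain(feedback(C2*G,1))              % DC gain of 1 confirms zero steady-state error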
4. MKC System as Closed Loop
Mechanical systems involving mass ($m$), damping ($c$), and stiffness ($k$) provide intuitive physical examples of second-order dynamical systems. These systems can be naturally interpreted in terms of feedback control, where the dynamics of the spring and damper contribute feedback forces that regulate motion.
4.1. Mass-Only System
Consider a body of mass $m$ subjected to an external force input $u(t)$:
$$ m\ddot{y}(t) = u(t) $$
Transfer function (Laplace domain):
$$ \frac{Y(s)}{U(s)} = \frac{1}{m s^2} $$
- Pure double integrator with two poles at the origin; not asymptotically stable (a constant force produces an unbounded displacement)
- No natural regulation (restoring or damping) of motion
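A one-line check of this behavior (illustrative mass value):
m = 1;
G0 = tf(1,[m 0 0]);           % force-to-position: double integrator 1/(m s^2)
step(G0, 5)                   % position grows without bound under a constant force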
4.2. Mass-Spring System (MK)
Introducing a spring with stiffness $k$ adds a restoring force $-k y(t)$. The equation of motion becomes:
$$ m\ddot{y}(t) + k y(t) = u(t) \quad \text{or} \quad m\ddot{y}(t) = u(t) - k y(t) $$
- This system exhibits oscillatory behavior with no energy loss
- The spring introduces proportional feedback, which pulls the mass back toward equilibrium
This diagram rearranges the MK system into a feedback loop, revealing the system's internal structure:
$$ G(s) = \frac{1}{ms^2}, \quad \text{Feedback: } k $$
The external input $u(t)$ is compared to the feedback force $k y(t)$, which comes from the spring.
The error signal is $u(t) - k y(t)$, which drives the double integrator (i.e., the mass).
This feedback structure interprets the spring as providing proportional negative feedback, where the force is proportional to displacement.
Interpretation
The spring force $-k y(t)$ can be viewed as proportional control (P control) acting on the mass.
The mass $m$ acts as the plant with dynamics $\frac{1}{ms^2}$, representing double integration from force to position.
This system self-regulates position through feedback from the spring.
This feedback formulation offers insight into the system's stability and dynamic behavior using control-theoretic tools.
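This interpretation can be verified directly: closing a proportional-feedback loop around the bare mass reproduces the MK transfer function. A small sketch with illustrative parameter values:
m = 1; k = 4;
s = tf('s');
G_mass = 1/(m*s^2);                   % plant: the mass alone (double integrator)
G_loop = feedback(G_mass, k);         % spring modeled as proportional feedback
G_mk = 1/(m*s^2 + k);                 % standard MK transfer function
impulse(G_loop, 'b', G_mk, 'r--', 10) % identical undamped oscillation at sqrt(k/m) rad/s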
4.3. Mass-Spring-Damper System (MKC)
Adding damping (with coefficient $c$), the equation becomes:
$$ m\ddot{y}(t) + c\dot{y}(t) + k y(t) = u(t) \quad \text{or} \quad m\ddot{y}(t) = u(t) - c\dot{y}(t) - k y(t) $$
This is the canonical form of a second-order linear system.
Transfer function:
$$ \frac{Y(s)}{U(s)} = \frac{1}{m s^2 + c s + k} $$
We now revisit the MKC system using two common block diagram configurations—one in single-block form and one in feedback form. These perspectives are mathematically equivalent but reveal different structural insights.
This block diagram shows the transfer function of the mass-spring-damper system as a single linear block:
$$ \frac{1}{m s^2 + c s + k} $$
- Input: External force $u(t)$
- Output: Displacement $y(t)$
- This is a canonical second-order system, often used for time-domain or frequency response analysis.
Feedback Structure Representation
This more detailed block diagram reveals the internal feedback structure of the MKC system:
$$m\ddot{y} = u - ky - c\dot{y}$$
Forward Path:
- A pure double integrator: $\frac{1}{m s^2}$
(represents Newton's law: $m \ddot{y} = F_{\text{net}}$)
Feedback Path:
- Proportional feedback: $k y(t)$ (spring force)
- Derivative feedback: $c\,s\,Y(s)$, i.e., $c \dot{y}(t)$ (damping force)
Interpretation:
The spring and damper forces are subtracted from the input force $u(t)$ to produce the net force acting on the mass:
$$ u(t) - k y(t) - c \dot{y}(t) $$
This feedback structure illustrates that the physical dynamics of the MKC system naturally implement PD control:
- The spring provides proportional feedback
- The damper provides derivative feedback
Another block diagram configuration shows how the MKC system can be visualized as a feedback control system:
The input force $u(t)$ drives the system
The motion $y(t)$ is fed back through two paths:
- Spring path: $k y(t)$ (proportional feedback)
- Damper path: $c \dot{y}(t)$ (derivative feedback)
- The net force on the mass is:
$$ u(t) - c\dot{y}(t) - k y(t) $$
- This is then passed through the plant dynamics $1 / (m s)$ and an integrator to produce $y(t)$
Key Insight:
- The spring acts as a proportional controller (P)
- The damper acts as a derivative controller (D)
- The mass acts as an integrator
Hence, the MKC system behaves like a built-in PD-controlled loop using the laws of physics.
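The same check works for the full MKC loop: building the nested velocity and position loops with feedback reproduces the single-block transfer function. A small sketch with illustrative parameter values:
m = 1; c = 2; k = 5;
s = tf('s');
inner = feedback(1/(m*s), c);         % velocity loop: damper as derivative feedback
G_loop = feedback(inner*(1/s), k);    % position loop: spring as proportional feedback
G_mkc = 1/(m*s^2 + c*s + k);          % single-block MKC transfer function
step(G_loop, 'b', G_mkc, 'r--', 8)    % the two step responses coincide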
from IPython.display import YouTubeVideo
YouTubeVideo('-fNoz5K5FHA', width = "560", height = "315")
Control Theory by Brian Douglas
from IPython.display import YouTubeVideo
YouTubeVideo('UR0hOmjaHp0', width = "560", height = "315")
%%javascript
$.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js')