Control:

Stability

Table of Contents





1. Stability of Open-Loop Systems

Consider a rational transfer function of the form


$$ G(s) = \frac{N(s)}{D(s)}, $$


where $N(s)$ and $D(s)$ are polynomials in $s$. The roots of the denominator polynomial $D(s)$ are called the poles of the transfer function and determine the system's stability properties.


Stability Criterion

A linear time-invariant (LTI) system is said to be asymptotically stable if all poles of its transfer function lie strictly in the left-half of the complex $s$-plane (i.e., all poles have negative real parts).


Equivalently, the system is stable if and only if the roots of the characteristic equation


$$ D(s) = 0 $$


all lie in the open left-half plane (LHP).



Example: First-Order System

Consider the simple first-order transfer function:


$$ G(s) = \frac{1}{s + a}, $$


with inverse Laplace transform


$$ \mathscr{L}^{-1}\left\{G(s)\right\} = e^{-at} u(t). $$


  • If $a > 0$, then the pole is located at $s = -a$ in the LHP. The exponential term $e^{-at}$ decays to zero as $t \to \infty$, and the system is stable.

  • If $a < 0$, then the pole is located in the right-half plane (RHP). The response grows exponentially without bound, and the system is unstable.
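

As a quick numerical illustration of the first-order case (the value $a = 2$ below is chosen arbitrarily), the pole location and stability can be checked in MATLAB:

In [ ]:
a = 2;               % any a > 0 gives a stable first-order system
G = tf(1, [1 a]);    % G(s) = 1/(s + a)
pole(G)              % single pole at s = -a = -2, in the LHP
isstable(G)          % returns 1 (true) since all poles have negative real parts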



Example: Higher-Order System

Consider a system with three poles:


$$ G(s) = \frac{1}{(s + 1)(s + 3)(s - 2)}. $$


This system can be decomposed into partial fractions:


$$ G(s) = \frac{A}{s + 1} + \frac{B}{s + 3} + \frac{C}{s - 2}, $$


with inverse Laplace transform


$$ \mathscr{L}^{-1}\left\{G(s)\right\} = A e^{-t} + B e^{-3t} + C e^{2t}. $$


The terms $A e^{-t}$ and $B e^{-3t}$ decay over time, but the term $C e^{2t}$ grows without bound. Since this exponential term dominates for large $t$, the overall system response diverges.
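

The partial-fraction coefficients and pole locations can be verified numerically; a minimal MATLAB sketch using the built-in residue function:

In [ ]:
den = conv(conv([1 1], [1 3]), [1 -2]);   % (s+1)(s+3)(s-2) = s^3 + 2s^2 - 5s - 6
[r, p, k] = residue(1, den)               % residues r at the poles p = {-1, -3, 2}
% the residue at the RHP pole s = 2 is nonzero, so the C e^{2t} term is present
% and the response diverges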



Even if most poles of a system lie in the left-half plane, the presence of a single pole in the right-half plane is sufficient to render the system unstable. Therefore, for open-loop stability, all poles must lie strictly in the LHP.





2. Routh-Hurwitz Criterion

In analyzing the stability of linear time-invariant (LTI) systems, it is often necessary to determine whether the poles of a system's transfer function lie entirely in the left-half of the complex plane. For low-order systems, this can be done by solving the characteristic polynomial directly. However, for higher-order systems, solving for the roots becomes computationally intensive or even analytically infeasible.


The Routh-Hurwitz criterion offers a practical alternative: it enables us to assess the number of roots of a polynomial that lie in the right-half plane without explicitly calculating them.


Example Motivation

Consider the following transfer function:


$$ G(s) = \frac{1}{s^4 + 3s^3 - 5s^2 + s + 2} $$


The characteristic polynomial is


$$ D(s) = s^4 + 3s^3 - 5s^2 + s + 2 $$


Note that the coefficients do not all have the same sign, which by itself indicates instability: if every pole were in the open left-half plane, then $D(s)$ would be a product of factors $(s - p_i)$ with $\text{Re}(p_i) < 0$ (in conjugate pairs where complex), and all of its coefficients would be positive. A negative coefficient therefore implies at least one pole in the right-half plane.
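

A quick numerical check with MATLAB's roots confirms this:

In [ ]:
roots([1 3 -5 1 2])   % at least one root has a positive real part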


However, the converse is not true. That is, even if all coefficients of a characteristic polynomial are positive, the system may still be unstable.


Example with Positive Coefficients

Consider the transfer function:


$$ G(s) = \frac{1}{s^4 + 2s^3 + 3s^2 + 10s + 8} $$


Although all coefficients are positive, this does not guarantee that all poles lie in the left-half plane. A Routh-Hurwitz analysis is required to determine the actual pole locations.


To further illustrate this, factor the denominator as follows:


$$ \begin{align*} G(s) &= \frac{1}{(s^2 - s + 4)(s + 2)(s + 1)} \end{align*} $$


The quadratic factor $s^2 - s + 4$ has complex conjugate roots $s = \tfrac{1}{2} \pm j\tfrac{\sqrt{15}}{2}$, whose real parts are positive. Hence, coefficient signs alone are not a sufficient condition for stability.


The Routh-Hurwitz criterion provides a necessary and sufficient test for stability; more generally, it determines the number of right-half plane poles through a tabular computation using the coefficients of the characteristic polynomial.


2.1. Normal Case: Constructing the Routh Array

The Routh-Hurwitz criterion provides a systematic way to determine the number of roots of a characteristic polynomial that lie in the right-half complex plane (RHP), without explicitly computing the roots.


Given a characteristic polynomial of degree six:


$$ D(s) = a_6 s^6 + a_5 s^5 + a_4 s^4 + a_3 s^3 + a_2 s^2 + a_1 s + a_0 $$


we construct the Routh array, a tabular structure filled using the polynomial's coefficients.



The number of roots in the RHP is equal to the number of sign changes in the first column of the Routh array.


Construction Method

(1) The first row is formed from the even-powered coefficients:


$$a_6,\ a_4,\ a_2,\ a_0$$


(2) The second row is formed from the odd-powered coefficients:


$$a_5,\ a_3,\ a_1$$


(3) Subsequent rows are computed using a recursive formula involving determinants of elements in the two preceding rows (the general formula is sketched after this list).

(4) Continue until the entire array is filled.
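

Concretely, if the two most recently completed rows are $x_1,\ x_2,\ x_3,\ \ldots$ (two rows up) and $y_1,\ y_2,\ y_3,\ \ldots$ (the row directly above), then each entry of the new row is


$$ z_k = \frac{y_1 x_{k+1} - x_1 y_{k+1}}{y_1} = -\frac{1}{y_1} \begin{vmatrix} x_1 & x_{k+1} \\ y_1 & y_{k+1} \end{vmatrix}, \qquad k = 1, 2, \ldots $$


where entries missing at the end of a row are treated as zero.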


Once the array is complete, inspect the signs of the first column from top to bottom. Each change in sign indicates one RHP root.


Example

Consider the transfer function:


$$ G(s) = \frac{1}{s^4 + 2s^3 + 3s^2 + 10s + 8} $$


The corresponding characteristic polynomial is:


$$ D(s) = s^4 + 2s^3 + 3s^2 + 10s + 8 $$


Construct the Routh array:
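

Filling in the array for this polynomial (a quick hand computation using the recursion above) gives:


$$ \begin{array}{c|ccc} s^4 & 1 & 3 & 8 \\ s^3 & 2 & 10 & 0 \\ s^2 & -2 & 8 & \\ s^1 & 18 & & \\ s^0 & 8 & & \end{array} $$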



In the first column: $1,\ 2,\ -2,\ 18,\ 8$. There are two sign changes (from $2$ to $-2$, and from $-2$ to $18$), so the polynomial has two roots in the right-half plane.


Even though all the coefficients of the polynomial are positive, the system is not stable. This illustrates the power of the Routh-Hurwitz criterion: it detects instability without solving for the roots.


2.2. Special Case 1: Zero in a Row

In some cases, a zero appears in the first column of the Routh array while non-zero elements remain in the same row. This situation prevents the direct computation of the next row using the standard Routh array procedure, as division by zero is undefined.


Key Observations

  • When a zero appears in the first column while the rest of the row is not entirely zero, the system cannot be asymptotically stable.
  • The presence of the zero causes a breakdown in the recursive computation of subsequent rows.


Practical Approach

To proceed, we replace the zero with a small positive number $\epsilon$ and continue filling the array symbolically.

  • After completing the array with $\epsilon$ in place, we analyze the sign changes in the first column.
  • Finally, we take the limit as $\epsilon \to 0^+$ to determine the true number of sign changes.

This method gives the correct count of right-half plane (RHP) poles, even though the array includes a symbolic placeholder.






Stability Insight

Even without completing the table:

  • The presence of a zero in the first column, followed by a non-zero element in the same row, guarantees instability.
  • Completing the array confirms this by resulting in a sign change in the first column, indicating at least one RHP root.

  • Zero in first column + non-zero in row → Unstable
  • Use $\epsilon$ substitution if you wish to determine the exact number of RHP roots
  • This is a common pathological case that highlights the robustness of the Routh-Hurwitz method

This technique is essential when analyzing higher-order systems where numerical root-finding is infeasible.
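

As an illustration (the polynomial below is a hypothetical example, not one from this text), $D(s) = s^4 + s^3 + 2s^2 + 2s + 3$ produces a zero in the first column of its Routh array; with the $\epsilon$ substitution the first column becomes $1,\ 1,\ \epsilon,\ (2\epsilon - 3)/\epsilon,\ 3$, which shows two sign changes as $\epsilon \to 0^+$. A quick numerical check:

In [ ]:
% hypothetical example polynomial: s^4 + s^3 + 2s^2 + 2s + 3
roots([1 1 2 2 3])   % two of the four roots should have positive real parts,
                     % matching the two sign changes predicted by the epsilon method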


2.3. Special Case 2: Entire Row of Zeros

The second special case in the Routh-Hurwitz criterion occurs when an entire row of the Routh array becomes zero. This situation requires special handling to continue the analysis.


Let the characteristic polynomial be:


$$ D(s) = s^5 + 2s^4 + 6s^3 + 10s^2 + 8s + 12 $$



Suppose that the Routh array constructed from this polynomial yields a row of zeros. To resolve this, define an auxiliary polynomial using the row directly above the zero row.


In this case, the row above the zero row is:


$$ 6s^2 + 12 \quad \Rightarrow \quad P(s) = s^2 + 2 $$



This auxiliary polynomial is then used to continue the Routh array: the zero row is replaced by the coefficients of its derivative, $\frac{dP(s)}{ds} = 2s$, and the remaining rows are computed as usual. Note that:

  • $P(s)$ is a factor of $D(s)$
  • The roots of $P(s)$ indicate the presence of purely imaginary poles
  • This typically corresponds to a marginally stable system

We can factor the original polynomial as:


$$ \begin{align*} D(s) &= P(s) \cdot R(s) \\ s^5 + 2s^4 + 6s^3 + 10s^2 + 8s + 12 &= \underbrace{(s^2 + 2)}_{\text{marginally stable}}\;\underbrace{(s^3 + 2s^2 + 4s + 6)}_{\text{stable}} \end{align*} $$

This decomposition reveals:

  • The factor $s^2 + 2$ corresponds to poles on the imaginary axis (marginally stable)
  • The remaining factor $R(s)$ can be analyzed separately for additional stability information


To fully determine the stability of the system, apply the Routh-Hurwitz criterion to $R(s)$.

  • If $R(s)$ has all roots in the left-half plane, the system is said to be marginally stable due to the imaginary roots of $P(s)$.
  • If $R(s)$ has right-half plane roots, the system is unstable.
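

A minimal MATLAB check of this decomposition (polyder returns the coefficients of the derivative of the auxiliary polynomial, which replace the zero row):

In [ ]:
P = [6 0 12];             % row above the zero row: 6s^2 + 12
polyder(P)                % [12 0], i.e. 12s, replaces the zero row
roots([1 2 6 10 8 12])    % D(s): includes the pair +-j*sqrt(2) coming from P(s)
roots([1 2 4 6])          % R(s) = s^3 + 2s^2 + 4s + 6: all roots in the open LHP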

In [ ]:
from IPython.display import YouTubeVideo
YouTubeVideo('WBCZBOB3LCA', width = "560", height = "315")
In [ ]:
from IPython.display import YouTubeVideo
YouTubeVideo('oMmUPvn6lP8', width = "560", height = "315")
In [ ]:
from IPython.display import YouTubeVideo
YouTubeVideo('wGC5C_7Yy-s', width = "560", height = "315")




3. Stability with State-Space Representation

To understand stability in the context of state-space systems, it is helpful to begin with scalar systems and then generalize to vector and matrix equations.


3.1. Scalar Case

Consider the simple first-order differential equation:


$$ \dot{x} = a x $$


Its solution is:


$$ x(t) = e^{a t} x(0) $$


The behavior of the system depends on the sign of the scalar coefficient $a$:


$$ \begin{cases} a > 0 & \text{unstable (exponential growth)} \\ a < 0 & \text{asymptotically stable (exponential decay)} \\ a = 0 & \text{critically stable (constant)} \end{cases} $$



3.2. State-Space Generalization

Now consider the linear time-invariant (LTI) system in state-space form:


$$ \begin{aligned} \dot{x} &= A x + B u \\ y &= C x + D u \end{aligned} $$


In this context, stability depends on the properties of the state matrix $A$. Unlike the scalar case, we cannot directly assess stability using the inequality $A > 0$ or $A < 0$, since $A$ is a matrix. Instead, we examine the eigenvalues of $A$.


Role of Eigenvalues

Suppose $v$ is an eigenvector of $A$ with corresponding eigenvalue $\lambda$:


$$ A v = \lambda v $$


This implies that in the direction of $v$, the dynamics behave like a scalar system with coefficient $\lambda$. Thus, the stability of the system is governed by the real parts of the eigenvalues of $A$:


$$ \begin{cases} \text{Re}(\lambda_i) < 0 \ \text{for all } i & \text{asymptotically stable} \\ \text{Re}(\lambda_i) > 0 \ \text{for some } i & \text{unstable} \\ \text{Re}(\lambda_i) \leq 0 \ \text{for all } i, \ \text{with only simple eigenvalues on the imaginary axis} & \text{marginally (critically) stable} \end{cases} $$


To determine the stability of a linear system described by state-space equations, examine the eigenvalues of the state matrix $A$. The sign of their real parts directly determines the qualitative behavior of the system's natural response.
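

A minimal MATLAB sketch (the matrix $A$ below is an arbitrary example, not one from this text):

In [ ]:
A = [0 1; -2 -3];       % example state matrix with eigenvalues -1 and -2
lambda = eig(A)         % eigenvalues of A
all(real(lambda) < 0)   % returns 1 (true): all real parts negative -> asymptotically stable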





4. Stability of Closed-Loop Systems

When analyzing the stability of a closed-loop system, it is often useful to relate it to the properties of the corresponding open-loop system. Even though the closed-loop system involves feedback, its stability can be studied using the open-loop transfer function.


4.1. Root Locus: Stability in the Time Domain



Consider a standard unity-feedback configuration, where the closed-loop transfer function is given by


$$ H(s) = \frac{K G(s)}{1 + K G(s)} $$


The characteristic equation that determines the closed-loop poles is the denominator:


$$ 1 + K G(s) = 0 $$


Solving this equation gives the locations of the closed-loop poles as the gain $K$ varies. This leads to the concept of the root locus, which is the set of all possible closed-loop pole locations in the complex $s$-plane as $K$ ranges from $0$ to $\infty$.


To lie on the root locus, a complex number $s^*$ must satisfy both of the following conditions:


$$ \begin{cases} |K G(s^*)| = 1 \quad \Rightarrow \quad K = \frac{1}{|G(s^*)|} \\ \angle K G(s^*) = (2k+1)\pi \quad \text{for some integer } k \end{cases} $$


These are simply the magnitude and angle conditions for $K G(s^*) = -1$, that is, for $s^*$ to be a root of the characteristic equation and hence a closed-loop pole.


Stability Criterion

The stability of the closed-loop system depends on the location of the poles:

  • If all closed-loop poles lie in the left-half plane (LHP), the system is asymptotically stable
  • If any pole lies on the right-half plane (RHP), the system is unstable
  • If poles lie on the imaginary axis and are not repeated (with all remaining poles in the LHP), the system is marginally stable

The distance of the poles from the imaginary axis also provides insight into how "close" the system is to becoming unstable. Poles near the imaginary axis result in slow dynamics or oscillatory behavior, while poles far into the LHP lead to faster, more damped responses.


Thus, the root locus method provides a powerful graphical tool for selecting a suitable value of $K$ that ensures desired stability and transient performance.
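

A minimal root-locus sketch in MATLAB (the plant below is an assumed example, not one from this text):

In [ ]:
s = tf('s');
G = 1/(s*(s+1)*(s+2));   % assumed open-loop transfer function
rlocus(G)                % closed-loop pole locations as K varies from 0 to infinity
% for this G, the branches cross the imaginary axis at K = 6 (the Routh-Hurwitz limit),
% so the closed loop is stable only for 0 < K < 6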



4.2. Relative Stability: Frequency-Domain Perspective

The stability of a closed-loop system can also be inferred from the frequency response of the open-loop transfer function. This approach provides not only a binary answer to stability, but also a measure of how stable the system is — referred to as relative stability.



Suppose the Bode plot of the open-loop transfer function $G(s)$ is given. The question is:

Can we determine the stability of the closed-loop system from this open-loop frequency response?



Phase Crossover and Gain Crossover

Instability occurs in feedback systems when the phase shift around the loop reaches $180^\circ$ (or $\pi$ radians), and the magnitude of the loop transfer function is greater than 1.
In this situation, the feedback becomes positive rather than negative, leading to signal amplification and eventual instability.


This motivates two key definitions:

  • Phase crossover frequency: the frequency $\omega_{\text{pc}}$ where the phase of $G(j\omega)$ is $-180^\circ$
  • Gain crossover frequency: the frequency $\omega_{\text{gc}}$ where the magnitude $|G(j\omega)|$ is 1 (or 0 dB)


Gain Margin and Phase Margin

To assess how close the system is to instability, we define two measures:

  • Gain margin (GM):
    The reciprocal of the magnitude at the phase crossover frequency

$$ \text{Gain Margin (dB)} = -20 \log_{10} |G(j\omega_{\text{pc}})| $$


  • Phase margin (PM):
    The difference between $-180^\circ$ and the phase of $G(j\omega)$ at the gain crossover frequency

$$ \text{Phase Margin} = \angle G(j\omega_{\text{gc}}) + 180^\circ $$


Both margins must be positive for the closed-loop system to be stable. Furthermore:

  • A larger gain margin means the system can tolerate a greater increase in gain $K$ before becoming unstable.
  • A larger phase margin means the system can tolerate more phase delay (e.g., from dynamics or computation) before becoming unstable.

These margins provide an intuitive and quantitative sense of relative stability, and they can be directly estimated from open-loop Bode plots — making them essential tools for robust control design.


More intuitively,

  • Gain margin indicates how much you can increase the loop gain $K$ before the system goes unstable
  • Phase margin indicates the amount of phase lag (time delay) you can add before the system goes unstable


by Prof. Richard Hill from the University of Detroit Mercy

In [ ]:
from IPython.display import YouTubeVideo
YouTubeVideo('Ky8In4J0v4w', width = "560", height = "315")

4.3. Relative Stability in MATLAB

Open-loop transfer function:

$$G(s) = \frac{1}{s^3+2s^2+s} = \frac{1}{s(s+1)^2}$$
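

As a hand check of the numbers that margin reports below: the phase of $G(j\omega)$ is $-90^\circ - 2\arctan\omega$, which reaches $-180^\circ$ at $\omega = 1$ rad/s, where

$$ |G(j1)| = \frac{1}{1 \cdot (1 + 1)} = \frac{1}{2} \quad\Rightarrow\quad \text{Gain margin} = 2 \;(\approx 6.02\ \text{dB}). $$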

In [ ]:
G = tf(1,[1 2 1 0]);


Gm - Gain margin

Pm - Phase margin

Wcg - frequency at which the gain margin is measured (the $-180^\circ$ phase crossover frequency)

Wcp - frequency at which the phase margin is measured (the 0 dB gain crossover frequency)

In [ ]:
margin(G)
[Gm,Pm,Wcg,Wcp] = margin(G)
Gm =

     2


Pm =

   21.3877


Wcg =

     1


Wcp =

    0.6823


In [ ]:
20*log10(2)   % the gain margin of 2 expressed in dB
ans =

    6.0206



Is the closed-loop system with unity negative feedback stable?

In [ ]:
step(feedback(G,1,-1))   % closed-loop step response: it settles, so the closed loop is stable


Gain margin


$$CG(s) = 2 \times \frac{1}{s^3+2s^2+s}$$

In [ ]:
GmG = Gm*G;      % increase the loop gain by the gain margin (a factor of 2)
margin(GmG)      % the gain margin is now 0 dB: the system sits on the stability boundary

In [ ]:
step(feedback(GmG,1,-1))   % sustained oscillation: closed-loop poles on the imaginary axis
xlim([0,50])


$$CG(s) = 4 \times \frac{1}{s^3+2s^2+s}$$

In [ ]:
GmG = 4*G;       % loop gain larger than the gain margin allows
margin(GmG)

In [ ]:
step(feedback(GmG,1,-1))   % unstable: the closed-loop response diverges
xlim([0,30])
ylim([-100,100])


Phase margin

  • Add additional phase lag equal to the phase margin $\Phi_m$ (a frequency-independent stand-in for a time delay)

$$CG(s) = e^{-j \Phi_m} \times \frac{1}{s^3+2s^2+s}$$

In [ ]:
PmG = exp(-j*Pm/180*pi)*G   % rotate G by the phase margin (Pm is in degrees)
margin(PmG)
PmG =
 
  (0.9311-0.3647i)
  ----------------
  s^3 + 2 s^2 + s
 
Continuous-time transfer function.


In [ ]:
FG = feedback(PmG,1,-1);

t = linspace(0,100,1000);
u = ones(size(t));

[y, tout] = lsim(FG,u,t,0);
plot(tout, abs(y))   % plot the magnitude of the complex-valued response





5. Stability via the Nyquist Plot

The Nyquist plot provides a powerful graphical method for assessing the stability and robustness of a feedback system directly from the open-loop transfer function $G(s)$.


The Nyquist plot is a parametric plot of $G(j\omega)$ in the complex plane as the frequency $\omega$ varies from $-\infty$ to $\infty$; the branch for negative frequencies is the mirror image, about the real axis, of the branch for positive frequencies.
It represents the frequency response of the open-loop system and encodes both magnitude and phase in a single curve.


5.1. Key Stability Metrics from the Nyquist Plot

Gain Margin

The gain margin is defined as the inverse of the magnitude of the open-loop transfer function at the frequency where the phase is $-180^\circ$:


$$ K_m = \frac{1}{|G(j\omega)|} \quad \text{when} \quad \angle G(j\omega) = -180^\circ $$


  • $K_m$ represents the maximum gain that the system can tolerate before becoming unstable.
  • From the Nyquist plot, $K_m$ is the reciprocal of the distance from the origin to the point where the curve crosses the negative real axis: if the crossing occurs at $-\tfrac{1}{K_m} + j0$, the loop gain can be increased by a factor of $K_m$ before that crossing reaches the critical point $-1 + j0$.

Phase Margin

The phase margin $\Phi_m$ is the amount of additional phase lag that can be tolerated before the system becomes unstable. It is defined as:


$$ \Phi_m = \angle G(j\omega) + 180^\circ \quad \text{at} \quad |G(j\omega)| = 1 $$


  • On the Nyquist plot, this is the angle between the negative real axis and the line from the origin to the point where the curve crosses the unit circle (magnitude = 1): rotating the plot by $\Phi_m$ would move that crossing onto the critical point $-1 + j0$.
  • A larger phase margin indicates that the system can tolerate more time delay or dynamic lag without losing stability.

Stability Conditions

A closed-loop system with unity feedback (assuming the open-loop transfer function has no right-half-plane poles) is stable if:

  • The Nyquist plot does not encircle the critical point $-1 + j0$
  • Both the gain margin $K_m$ and phase margin $\Phi_m$ are positive

The Nyquist plot is especially valuable for systems with time delays or high-order dynamics, where algebraic techniques (such as factoring or root locus) are less tractable.



$$G(s) = \frac{20}{(s+1)(s+2)(s+3)}$$
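

As a quick hand check of the gain margin computed below: on the imaginary axis,

$$ (j\omega + 1)(j\omega + 2)(j\omega + 3) = (6 - 6\omega^2) + j\,\omega\,(11 - \omega^2), $$

which is real and equal to $-60$ at $\omega = \sqrt{11} \approx 3.317$ rad/s, so $G(j\sqrt{11}) = -\tfrac{1}{3}$ and the gain margin is $3$.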

In [ ]:
s = tf('s');
G = 20/((s+1)*(s+2)*(s+3));

In [ ]:
margin(G)
[Gm,Pm,Wcg,Wcp] = margin(G)
Gm =

    3.0000


Pm =

   44.4630


Wcg =

    3.3166


Wcp =

    1.8382


In [ ]:
nyquist(G)   % Nyquist plot of the open loop; it crosses the negative real axis at -1/3

axis([-6,12,-7,7])
axis equal

In [ ]:
nyquist(Gm*G)   % gain increased by Gm = 3: the plot now passes through -1 + j0

axis([-6,12,-7,7])
axis equal

In [ ]:
% root locus: the closed-loop poles cross into the RHP when the gain exceeds the gain margin

rlocus(G)

axis equal

In [ ]:
PmG = exp(-j*Pm/180*pi)*G

margin(PmG)
PmG =
 
      (14.27-14.01i)
  ----------------------
  s^3 + 6 s^2 + 11 s + 6
 
Continuous-time transfer function.


In [ ]:
nyquist(PmG)

axis([-6,12,-7,7])
axis equal

In [ ]:
%%javascript
$.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js')