Dynamical Systems:
Linear Transforms
Table of Contents
1. Linear Transforms: Fourier and Laplace Transforms¶
1.1. Inner Product and Orthogonal Decomposition¶
The concept of an inner product is fundamental in linear algebra and signal analysis. For real-valued vectors $\vec{x}, \vec{y} \in \mathbb{R}^n$, the inner product (or dot product) is defined as
$$ \vec{x}\cdot\vec{y} = \langle \vec{x}, \vec{y} \rangle = \vec{x}^T \vec{y} = \sum_{i=1}^n x_i y_i $$
This quantity serves as a measure of similarity between two vectors. Two vectors are said to be orthogonal if their inner product is zero:
$$ \langle \vec{x}, \vec{y} \rangle = 0 \quad \Longrightarrow \quad \vec{x} \perp \vec{y}. $$
Suppose $\{\hat{b}_i\}_{i=1}^n$ is an orthogonal basis of $\mathbb{R}^n$. Then any vector $\vec{x} \in \mathbb{R}^n$ can be uniquely expressed as a linear combination:
$$\vec{x} = c_1\hat{b}_1 + c_2\hat{b}_2 + \cdots + c_n\hat{b}_n$$
To determine the coefficient $c_i$, we take the inner product with $\hat{b}_i$; by orthogonality, every cross term $\langle \hat{b}_j, \hat{b}_i \rangle$ with $j \neq i$ vanishes:
$$ \begin{align*} \langle \vec{x}, \hat{b}_i \rangle &= c_1\langle \hat{b}_1, \hat{b}_i \rangle + \cdots + c_i\langle \hat{b}_i, \hat{b}_i \rangle + \cdots + c_n \langle \hat{b}_n, \hat{b}_i \rangle = c_i \langle \hat{b}_i, \hat{b}_i \rangle\\ \\ \therefore c_i & = {\langle \vec{x}, \hat{b}_i \rangle \over \langle \hat{b}_i, \hat{b}_i \rangle} \end{align*} $$
This expression reveals that $c_i$ measures the amount of $\hat{b}_i$ contained in $\vec{x}$, or equivalently, the projection of $\vec{x}$ onto the direction of $\hat{b}_i$.
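In code, the projection formula is a one-liner per coefficient. Below is a minimal NumPy sketch, using an assumed orthogonal (but not orthonormal) basis of $\mathbb{R}^2$:

```python
import numpy as np

# Hypothetical orthogonal basis of R^2 (not normalized)
b1 = np.array([1.0, 1.0])
b2 = np.array([1.0, -1.0])
x = np.array([3.0, 1.0])

# Projection coefficients: c_i = <x, b_i> / <b_i, b_i>
c1 = np.dot(x, b1) / np.dot(b1, b1)
c2 = np.dot(x, b2) / np.dot(b2, b2)

# Reconstruction from the coefficients recovers x exactly
x_rec = c1 * b1 + c2 * b2
assert np.allclose(x_rec, x)
```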
1.2. Complex Inner Product and the Hermitian Form¶
In complex vector spaces, the inner product is generalized using the Hermitian transpose. For complex-valued vectors $\vec{x}, \vec{y} \in \mathbb{C}^n$, the inner product is defined as
$$ \langle \vec{x}, \vec{y} \rangle = \vec{y}^H \vec{x} = \sum_{i=1}^n \overline{y}_i x_i, $$
where $\vec{y}^H$ denotes the conjugate (Hermitian) transpose of $\vec{y}$, and $\overline{y}_i$ the complex conjugate of its $i$-th component.
1.3. Fourier Transform as Projection onto Sinusoidal Basis¶
Let $x(t)$ be a real or complex-valued signal. The complex exponential $e^{j\omega t}$ serves as a natural basis function in continuous-time signal processing. We can ask: how much of the frequency component $e^{j\omega t}$ is contained in $x(t)$?
The answer lies in computing the inner product:
$$ \langle x(t), e^{j\omega t} \rangle = \int_{-\infty}^\infty x(t) \cdot \overline{e^{j\omega t}} \, dt = \int_{-\infty}^\infty x(t) e^{-j\omega t} \, dt. $$
This expression defines the Fourier Transform of $x(t)$:
Definition (Fourier Transform):
$$ X(j\omega) = \int_{-\infty}^\infty x(t) e^{-j\omega t} \, dt. $$
The transform $X(j\omega)$ represents the frequency spectrum of $x(t)$, describing the signal's content at each frequency $\omega$.
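The definition can be checked numerically. The sketch below (assuming NumPy) approximates the integral for the test signal $x(t) = e^{-|t|}$, whose Fourier transform is known in closed form to be $2/(1+\omega^2)$:

```python
import numpy as np

# Numerical check for x(t) = e^{-|t|}:  X(jw) = 2 / (1 + w^2)
t = np.linspace(-50, 50, 200001)
dt = t[1] - t[0]
x = np.exp(-np.abs(t))

def fourier(w):
    # Riemann-sum approximation of  ∫ x(t) e^{-jwt} dt
    return np.sum(x * np.exp(-1j * w * t)) * dt

for w in (0.0, 1.0, 2.5):
    assert abs(fourier(w) - 2 / (1 + w**2)) < 1e-3
```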
1.4. Laplace Transform as Generalized Exponential Projection¶
To generalize the Fourier Transform, we consider projecting a signal onto exponentially decaying sinusoids of the form:
$$ e^{-\sigma t} e^{j\omega t} = e^{-\sigma t + j\omega t} $$
Then we can write:
$$ \begin{align*} \left\langle x(t), e^{(-\sigma + j\omega) t} \right\rangle &= \int \overline{e^{(-\sigma+j \omega) t}}\, x(t) \, dt \\ & = \int x(t)\,e^{(-\sigma-j\omega)t}\,dt \\ & = \int x(t)\,e^{-st}\,dt = X(s) \end{align*} $$
where
$$s = \sigma + j\omega$$
Although this is not a Hermitian inner product in the strict functional-analytic sense (since $e^{-st} \notin L^2([0, \infty))$ for many values of $s$), we can interpret it informally as a projection of $x(t)$ onto the complex exponential $e^{-st}$. This projection provides information about how much of this exponential mode is present in the signal.
Definition (Laplace Transform):
$$ X(s) = \int_0^\infty x(t) e^{-st} \, dt, \qquad s \in \mathbb{C}. $$
This transformation encodes not only the frequency content of $x(t)$ (via $\omega$), but also its growth or decay rate (via $\sigma$), making it particularly suitable for analyzing systems with transient and steady-state behavior.
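The definition can be verified symbolically. A minimal SymPy sketch, checking the standard pair $\mathscr{L}\{e^{-at}\} = 1/(s+a)$ for an assumed decay rate $a > 0$:

```python
import sympy as sp

# Symbolic check of the definition: L{e^{-a t}} = 1/(s + a), Re(s) > -a
t, s, a = sp.symbols('t s a', positive=True)
X = sp.laplace_transform(sp.exp(-a*t), t, s, noconds=True)
assert sp.simplify(X - 1/(s + a)) == 0
```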
2. Laplace Transform and the Eigenfunction Property of LTI Systems¶
2.1. Laplace Transform¶
Convolution: General Response of an LTI System
For a Linear Time-Invariant (LTI) system with impulse response $h(t)$, the system's output $y(t)$ to an input $x(t)$ is given by convolution:
$$ \begin{align*} y(t) &= H\{x(t)\} \\ &= (h * x)(t) = \int_{-\infty}^{\infty} h(\tau)\, x(t - \tau)\, d\tau \end{align*} $$
This equation expresses the output as a weighted sum of shifted impulse responses, where the input $x(t)$ acts as a weighting function.
Special Input: Exponential Function
Let the input be a complex exponential function of the form:
$$ x(t) = e^{st}, \quad \text{where} \quad s = \sigma + j\omega $$
Here, $s$ is a complex number:
- $\sigma$ determines exponential growth/decay (real part)
- $\omega$ determines oscillatory behavior (imaginary part)
We now evaluate the system response to this input:
$$ \begin{align*} y(t) &= \int_{-\infty}^{\infty} h(\tau)\, x(t - \tau)\, d\tau \\ &= \int_{-\infty}^{\infty} h(\tau)\, e^{s(t - \tau)}\, d\tau \\ &= e^{st} \int_{-\infty}^{\infty} h(\tau)\, e^{-s\tau}\, d\tau \end{align*} $$
This result reveals a crucial property:
- The output is simply the input $e^{st}$ scaled by a complex number that depends on $s$.
Definition: Laplace Transform
We now define the Laplace Transform of the impulse response $h(t)$ as:
$$ H(s) = \int_{-\infty}^{\infty} h(\tau)\, e^{-s\tau}\, d\tau $$
This defines the transfer function of the system in the complex frequency domain. Substituting back:
$$ y(t) = H(s)\, e^{st} $$
2.2. Eigenfunction and Eigenvalue Interpretation¶
We have shown that:
$$ H\{e^{st}\} = H(s)\, e^{st} $$
This means:
- The complex exponential $e^{st}$ is an eigenfunction of any LTI system.
- The associated eigenvalue is the transfer function $H(s)$.
This result has deep implications:
- LTI systems act diagonally on the space of exponential signals
- All dynamics of the system are captured in the frequency response $H(s)$
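The eigenfunction property can also be checked numerically. The sketch below uses the assumed impulse response $h(t) = e^{-t}u(t)$, so $H(s) = 1/(s+1)$, and an arbitrary test point $s$ with $\mathrm{Re}(s) > -1$; it approximates the convolution integral at one time instant and compares it with $H(s)e^{st}$:

```python
import numpy as np

# Check H{e^{st}} = H(s) e^{st} for h(t) = e^{-t} u(t), i.e. H(s) = 1/(s+1)
s = -0.5 + 2.0j                      # assumed test point, Re(s) > -1
tau = np.linspace(0, 60, 600001)
dtau = tau[1] - tau[0]
h = np.exp(-tau)

t0 = 1.3                             # evaluate the output at one time instant
y = np.sum(h * np.exp(s * (t0 - tau))) * dtau   # convolution integral
assert abs(y - np.exp(s * t0) / (s + 1)) < 1e-3
```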
Why Complex Exponentials?
- Exponential functions form a complete basis for analyzing linear differential equations
- In Fourier analysis, we project onto pure oscillations ($s = j\omega$)
- In Laplace analysis, we project onto growing/decaying oscillations ($s = \sigma + j\omega$), which allows us to include initial conditions and study transient behaviors
Example 1: Exponentially Decaying Function
Let
$$ x(t) = e^{-t}u(t) = \begin{cases} e^{-t}, & t > 0 \\ 0, & t < 0 \end{cases} $$
Compute the Laplace transform:
$$ \begin{align*} X(s) &= \int_0^{\infty} e^{-t} \cdot e^{-st}\, dt \\ &= \int_0^{\infty} e^{-(s+1)t}\, dt \\ &= \left[ \frac{e^{-(s+1)t}}{-(s+1)} \right]_0^{\infty} \\ &= \frac{1}{s + 1}, \quad \text{for } \text{Re}(s) > -1 \end{align*} $$
- The condition $\text{Re}(s) > -1$ ensures convergence of the integral.
- This is known as the Region of Convergence (ROC).
Example 2: Linear Combination of Exponentials
Let
$$ x(t) = 3e^{-2t}u(t) - 2e^{-t}u(t) $$
Compute the Laplace transform:
$$ X(s) = \underbrace{\frac{3}{s+2}}_{\text{Re}(s) > -2} - \underbrace{\frac{2}{s+1}}_{\text{Re}(s) > -1} = \frac{3(s+1) - 2(s+2)}{(s+2)(s+1)} = \underbrace{\frac{s-1}{s^2 + 3s + 2}}_{\text{Re}(s)>-1} $$
The ROC is determined by the rightmost pole, in this case at $s = -1$.
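A symbolic check of this example with SymPy (the $u(t)$ factor is implicit, since the transform integrates over $t > 0$):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
x = 3*sp.exp(-2*t) - 2*sp.exp(-t)     # u(t) implicit for t > 0
X = sp.laplace_transform(x, t, s, noconds=True)
assert sp.simplify(X - (s - 1)/(s**2 + 3*s + 2)) == 0
```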
2.3. Unit Impulse Signal¶
The unit-impulse signal acts as a pulse with unit area but zero width:
$$\delta(t) = \lim_{\epsilon\rightarrow0}p_\epsilon(t)$$
where $p_\epsilon(t)$ is a rectangular pulse of width $\epsilon$ and height $1/\epsilon$, so that its area is always one. In plots, the unit-impulse function is represented by an arrow labeled with the number 1, which indicates its area.
Compute the Laplace transform:
$$ \begin{align*} X(s) &= \int_{-\infty}^{\infty} \delta(t)\, e^{-st}\, dt \\ &= e^{-s \cdot 0} = 1 \end{align*} $$
The delta function "picks out" the value of the integrand at $t = 0$.
This is an application of the sifting property:
$$ \int_{-\infty}^{\infty} f(t)\, \delta(t)\, dt = f(0) $$
2.4. Unit Step Signal¶
The unit step function, denoted by $u(t)$, is defined as the integral of the Dirac delta function $\delta(t)$:
$$ u(t) = \int_{-\infty}^t \delta(\tau)\, d\tau = \begin{cases} 1, & t \geq 0 \\ 0, & t < 0 \end{cases} $$
- $u(t)$ is a causal signal: it is zero for $t < 0$ and one for $t \geq 0$.
- It is often used to model systems that are "switched on" at $t = 0$.
To compute the Laplace transform of $u(t)$:
$$ U(s) = \mathcal{L}\{u(t)\} = \int_0^{\infty} u(t)\, e^{-st}\, dt $$
Since $u(t) = 1$ for $t \geq 0$, we have:
$$ \begin{align*} U(s) &= \int_0^{\infty} e^{-st}\, dt \\ &= \left[ \frac{e^{-st}}{-s} \right]_0^{\infty} \\ &= \frac{1}{s}, \qquad \text{for } \text{Re}(s) > 0 \end{align*} $$
The Region of Convergence (ROC) is $\text{Re}(s) > 0$, which ensures convergence of the integral.
Note:
- As its definition shows, the step function "accumulates" the impulse.
- The delta function is the derivative of the step:
$$ \delta(t) = \frac{d}{dt} u(t) $$
Taking Laplace transforms of both sides:
- For the delta function:
$$ \mathcal{L}\{\delta(t)\} = 1 $$
- For the unit step function:
$$ \mathcal{L}\{u(t)\} = \frac{1}{s}, \quad \text{Re}(s) > 0 $$
2.5. Convolution Property of the Laplace Transform¶
Let the Laplace transforms of two time-domain functions be defined as:
$$ f(t) \;\; \mathop{\longleftrightarrow}^{\mathscr{L}} \;\; F(s), \qquad F(s) = \int_{0}^{\infty} f(t)e^{-st}dt $$
$$ g(t) \;\; \mathop{\longleftrightarrow}^{\mathscr{L}} \;\; G(s), \qquad G(s) = \int_{0}^{\infty} g(t)e^{-st}dt $$
Then their convolution in the time domain corresponds to multiplication in the Laplace domain:
$$ (f * g)(t) = \int_0^t f(\tau)g(t - \tau)d\tau \quad \longleftrightarrow \quad F(s)G(s) $$
Proof:
Let
$$ y(t) = (f * g)(t) = \int_0^t f(\tau)g(t - \tau)d\tau $$
Take the Laplace transform:
$$ \mathcal{L}\{y(t)\} = \int_0^\infty \left[ \int_0^t f(\tau)g(t - \tau)d\tau \right] e^{-st}dt $$
Swap the order of integration:
$$ = \int_0^\infty f(\tau) \left[ \int_\tau^\infty g(t - \tau) e^{-st} dt \right] d\tau $$
Let $u = t - \tau \Rightarrow t = u + \tau$:
$$ = \int_0^\infty f(\tau) \left[ \int_0^\infty g(u) e^{-s(u + \tau)} du \right] d\tau $$
$$ = \int_0^\infty f(\tau) e^{-s\tau} \left[ \int_0^\infty g(u)e^{-su} du \right] d\tau = F(s)G(s) $$
Application: LTI System Response
If an LTI system has impulse response $h(t)$ and input $x(t)$, then the output is:
$$ y(t) = h(t) * x(t) \quad \longrightarrow \quad Y(s) = H(s)X(s) $$
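The property is easy to confirm symbolically. The sketch below (SymPy, with the assumed pair $f(t) = e^{-t}$ and $g(t) = e^{-2t}$) computes the convolution directly and compares the transforms:

```python
import sympy as sp

t, s, tau = sp.symbols('t s tau', positive=True)

# (f*g)(t) = ∫_0^t e^{-τ} e^{-2(t-τ)} dτ = e^{-t} - e^{-2t}
conv = sp.integrate(sp.exp(-tau) * sp.exp(-2*(t - tau)), (tau, 0, t))

L_conv = sp.laplace_transform(conv, t, s, noconds=True)
F = sp.laplace_transform(sp.exp(-t), t, s, noconds=True)
G = sp.laplace_transform(sp.exp(-2*t), t, s, noconds=True)
assert sp.simplify(L_conv - F*G) == 0
```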
2.6. Laplace Transform of a Derivative¶
Let
$$ \mathscr{L}\{x(t)\} = X(s), \quad \text{where } X(s) = \int_0^\infty x(t) e^{-st} dt $$
We want to find the Laplace transform of the derivative:
$$ y(t) = \dot{x}(t) $$
Using integration by parts:
$$ \int_0^\infty \dot{x}(t) e^{-st} dt = \Big[ x(t) e^{-st} \Big]_0^\infty - \int_0^\infty x(t)(-s)e^{-st} dt $$
Assuming that $x(t)e^{-st} \to 0$ as $t \to \infty$ (i.e., the Laplace transform converges), the boundary term becomes:
$$ \lim_{t \to \infty} x(t)e^{-st} - x(0) = 0 - x(0) = -x(0) $$
Thus:
$$ \mathscr{L}\{\dot{x}(t)\} = -x(0) + s \int_0^\infty x(t) e^{-st} dt = sX(s) - x(0) $$
Therefore,
$$ \begin{align*} \mathscr{L}\left\{\frac{dx(t)}{dt}\right\} &= sX(s) - x(0)\\ &= sX(s) \qquad \qquad \text{if the system is at rest} \end{align*} $$
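A one-signal SymPy check of the derivative rule, using the assumed example $x(t) = e^{-2t}$ so that $x(0) = 1$:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
x = sp.exp(-2*t)                        # assumed example signal, x(0) = 1

# L{dx/dt} should equal s X(s) - x(0)
lhs = sp.laplace_transform(sp.diff(x, t), t, s, noconds=True)
X = sp.laplace_transform(x, t, s, noconds=True)
assert sp.simplify(lhs - (s*X - x.subs(t, 0))) == 0
```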
In a block diagram, differentiation therefore appears as multiplication by $s$ (with $-x(0)$ entering when initial conditions are nonzero).
Solving Differential Equations with Laplace Transforms
Laplace transforms convert linear differential equations into algebraic equations in the complex frequency domain. After solving the algebraic equation, we apply the inverse Laplace transform to recover the time-domain solution.
Example 1: First-Order ODE
Consider the differential equation:
$$ \dot{y}(t) + y(t) = \delta(t) $$
Applying the Laplace transform to both sides, and using:
$$ \mathscr{L}\{\dot{y}(t)\} = sY(s) - y(0) $$
Assuming zero initial condition $y(0) = 0$:
$$ sY(s) + Y(s) = 1 $$
Factoring:
$$ Y(s)(s + 1) = 1 \quad \Rightarrow \quad Y(s) = \frac{1}{s + 1} $$
Inverse Laplace:
$$ y(t) = e^{-t} u(t) $$
Example 2: Second-Order ODE
$$ \ddot{y}(t) + 3\dot{y}(t) + 2y(t) = \delta(t) $$
Laplace transform (with $y(0) = 0$, $\dot{y}(0) = 0$):
$$ s^2Y(s) + 3sY(s) + 2Y(s) = 1 $$
Factoring:
$$ Y(s)\left(s^2 + 3s + 2\right) = 1 \quad \Rightarrow \quad Y(s) = \frac{1}{(s + 1)(s + 2)} $$
Partial fraction decomposition:
$$ \frac{1}{(s + 1)(s + 2)} = \frac{1}{s + 1} - \frac{1}{s + 2} $$
Inverse Laplace:
$$ y(t) = \left(e^{-t} - e^{-2t} \right) u(t) $$
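The inverse transform in this example can be checked with SymPy (since $t$ is declared positive, the $u(t)$ factor drops out):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
Y = 1 / ((s + 1)*(s + 2))
y = sp.inverse_laplace_transform(Y, s, t)

# Expected result for t > 0: e^{-t} - e^{-2t}
assert sp.simplify(y - (sp.exp(-t) - sp.exp(-2*t))) == 0
```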
2.7. Laplace Transform of an Integral¶
Let us compute the Laplace transform of the integral of a signal:
$$ y(t) = \int_{-\infty}^{t} x(\tau)\, d\tau $$
If $x(t)$ is causal (i.e., $x(t) = 0$ for $t < 0$), this simplifies to:
$$ y(t) = \int_{0}^{t} x(\tau)\, d\tau = (u * x)(t) $$
This is the convolution of $x(t)$ with the unit step function $u(t)$.
Using the convolution property of the Laplace transform:
$$ \mathscr{L}\{u(t) * x(t)\} = \mathscr{L}\{u(t)\} \cdot \mathscr{L}\{x(t)\} $$
Since:
$$ \mathscr{L}\{u(t)\} = \frac{1}{s}, \quad \text{Re}(s) > 0 $$
we have:
$$ \mathscr{L}\left\{\int_0^t x(\tau)\, d\tau\right\} = \frac{1}{s} X(s) $$
This result shows that integration in the time domain corresponds to division by $s$ in the Laplace domain.
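A SymPy check of the integration property, for the assumed signal $x(t) = e^{-t}$:

```python
import sympy as sp

t, s, tau = sp.symbols('t s tau', positive=True)

# y(t) = ∫_0^t e^{-τ} dτ = 1 - e^{-t}
y = sp.integrate(sp.exp(-tau), (tau, 0, t))

Y = sp.laplace_transform(y, t, s, noconds=True)
X = sp.laplace_transform(sp.exp(-t), t, s, noconds=True)
assert sp.simplify(Y - X/s) == 0      # integration ↔ division by s
```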
3. Transfer Function¶
The transfer function of a Linear Time-Invariant (LTI) system is defined as the Laplace transform of its impulse response:
$$ \begin{align*} y(t) &= h(t) * x(t) \\\\ Y(s) &= H(s) \cdot X(s) \\\\ \therefore \quad H(s) &= \frac{Y(s)}{X(s)} \end{align*} $$
- The transfer function $H(s)$ characterizes the input-output behavior of an LTI system in the frequency (complex $s$) domain.
- It encodes all system dynamics assuming zero initial conditions.
3.1. Transfer Function from a Differential Equation¶
For an $N^{\text{th}}$-order linear differential equation:
$$ \sum_{k=0}^{N} a_k \frac{d^k y(t)}{dt^k} = \sum_{k=0}^{M} b_k \frac{d^k x(t)}{dt^k} $$
Taking the Laplace transform (assuming zero initial conditions):
$$ \sum_{k=0}^{N} a_k s^k Y(s) = \sum_{k=0}^{M} b_k s^k X(s) $$
Solving for the transfer function:
$$ H(s) = \frac{Y(s)}{X(s)} = \frac{\sum\limits_{k=0}^{M} b_k s^k}{\sum\limits_{k=0}^{N} a_k s^k} $$
This form is a rational function in $s$.
Example
Given the differential equation:
$$ \ddot{y}(t) + 3\dot{y}(t) + 2y(t) = 2\dot{x}(t) - 3x(t) $$
Taking the Laplace transform:
$$ (s^2 + 3s + 2)Y(s) = (2s - 3)X(s) $$
Thus, the transfer function is:
$$ H(s) = \frac{Y(s)}{X(s)} = \frac{2s - 3}{s^2 + 3s + 2} = \frac{2(s - \frac{3}{2})}{(s + 1)(s + 2)} $$
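With SciPy, the same transfer function can be entered by its polynomial coefficients and its impulse response compared against the partial-fraction result $h(t) = \left(-5e^{-t} + 7e^{-2t}\right)u(t)$ (a sketch assuming `scipy.signal` is available):

```python
import numpy as np
from scipy import signal

# H(s) = (2s - 3) / (s^2 + 3s + 2), entered by numerator/denominator coefficients
H = signal.TransferFunction([2, -3], [1, 3, 2])

# Impulse response h(t); partial fractions give h(t) = -5 e^{-t} + 7 e^{-2t} for t > 0
t, h = signal.impulse(H, T=np.linspace(0, 5, 501))
assert np.allclose(h, -5*np.exp(-t) + 7*np.exp(-2*t), atol=1e-5)
```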
3.2. Poles and Zeros¶
- Zeros: Values of $s$ that make the numerator zero
- Poles: Values of $s$ that make the denominator zero
General form:
$$ H(s) = \tilde{b} \cdot \frac{\prod\limits_{k=1}^{M}(s - c_k)}{\prod\limits_{k=1}^{N}(s - d_k)} $$
- $c_k$: zeros
- $d_k$: poles
Poles and zeros determine system stability, transient behavior, and frequency response.
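For the example above, the poles and zeros can be found numerically from the polynomial coefficients (NumPy sketch):

```python
import numpy as np

# Poles and zeros of H(s) = (2s - 3) / (s^2 + 3s + 2) from Section 3.1
zeros = np.roots([2, -3])        # roots of the numerator
poles = np.roots([1, 3, 2])      # roots of the denominator
assert np.allclose(sorted(zeros), [1.5])
assert np.allclose(sorted(poles), [-2.0, -1.0])
# Both poles lie in the left half-plane, so the system is stable.
```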
3.3. Frequency Response from $H(s)$¶
To analyze how the system responds to sinusoidal inputs, substitute $s = j\omega$:
$$ H(j\omega) = H(s)\big|_{s = j\omega} $$
- Magnitude: $\lvert H(j\omega) \rvert$
- Phase: $\angle H(j\omega)$
This gives the system's frequency response, often visualized via:
- Bode plots
- Nyquist plots
- Magnitude-phase diagrams
These graphical tools allow you to assess gain and phase shift introduced by the system at each frequency.
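A minimal SciPy sketch that evaluates $H(j\omega)$ for the example transfer function of Section 3.1 and checks it against direct substitution of $s = j\omega$:

```python
import numpy as np
from scipy import signal

H = signal.TransferFunction([2, -3], [1, 3, 2])   # example from Section 3.1
w = np.array([0.5, 1.0, 10.0])
w_out, Hjw = signal.freqresp(H, w=w)

# Compare against a direct evaluation of H(jw)
direct = (2*1j*w - 3) / ((1j*w)**2 + 3*1j*w + 2)
assert np.allclose(Hjw, direct)
```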
%%javascript
$.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js')