Introduction to Laplace Transform Analysis

The one-sided Laplace transform of a signal $ x(t)$ is defined by

$\displaystyle X(s) \isdef {\cal L}_s\{x\} \isdef \int_0^\infty x(t) e^{-st}dt
$

where $ t$ is real and $ s=\sigma + j\omega$ is a complex variable. The one-sided Laplace transform is also called the unilateral Laplace transform. There is also a two-sided, or bilateral, Laplace transform obtained by setting the lower integration limit to $ -\infty$ instead of 0. Since we will be analyzing only causal linear systems using the Laplace transform, we can use either. However, it is customary in engineering treatments to use the one-sided definition.

When evaluated along the $ s=j\omega$ axis (i.e., $ \sigma=0$), the Laplace transform reduces to the unilateral Fourier transform:

$\displaystyle X(j\omega) = \int_0^\infty x(t) e^{-j\omega t}dt.
$

The Fourier transform is normally defined bilaterally (with the lower limit of integration $ -\infty$ rather than 0 above), but for causal signals $ x(t)$, there is no difference. We see that the Laplace transform can be viewed as a generalization of the Fourier transform from the real line (a simple frequency axis) to the entire complex plane. We say that the Fourier transform is obtained by evaluating the Laplace transform along the $ j\omega$ axis in the complex $ s$ plane.

An advantage of the Laplace transform is the ability to transform signals which have no Fourier transform. To see this, we can write the Laplace transform as

$\displaystyle X(s) = \int_0^\infty x(t) e^{-(\sigma + j\omega)t} dt
= \int_0^\infty \left[x(t)e^{-\sigma t}\right] e^{-j\omega t} dt .
$

Thus, the Laplace transform can be seen as the Fourier transform of an exponentially windowed input signal. For $ \sigma>0$ (the so-called ``strict right-half plane'' (RHP)), this exponential weighting forces the Fourier-transformed signal toward zero as $ t\to\infty$. As long as the signal $ x(t)$ does not increase faster than $ \exp(Bt)$ for some $ B$, its Laplace transform will exist for all $ \sigma>B$. We make this more precise in the next section.

Existence of the Laplace Transform

A function $ x(t)$ has a Laplace transform whenever it is of exponential order. That is, there must be a real number $ B$ such that

$\displaystyle \lim_{t\to\infty} \left\vert x(t)e^{-Bt}\right\vert=0.
$

As an example, every exponential function $ Ae^{\alpha t}$ has a Laplace transform for all finite values of $ A$ and $ \alpha$. Let's look at this case more closely.

The Laplace transform of a causal, growing exponential function

$\displaystyle x(t) = \left\{\begin{array}{ll}
A e^{\alpha t}, & t\geq 0 \\ [5pt]
0, & t<0 \\
\end{array}\right.,
$

is given by

\begin{eqnarray*}
X(s) &\isdef & \int_0^\infty x(t) e^{-st}dt
= \int_0^\infty A e^{\alpha t} e^{-st}dt
= A\int_0^\infty e^{-(s-\alpha)t}dt\\ [5pt]
&=& \left.-A\,\frac{e^{-(s-\alpha)t}}{s-\alpha}\right\vert _0^\infty
= \left\{\begin{array}{ll}
\frac{A}{s-\alpha}, & \sigma>\alpha \\ [5pt]
\infty, & \sigma<\alpha \\
\end{array} \right.
\end{eqnarray*}

Thus, the Laplace transform of an exponential $ Ae^{\alpha t}$ is $ A/(s-\alpha)$, but this is defined only for re$ \left\{s\right\}>\alpha$.
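This closed-form result is easy to check numerically. The following Python sketch (a minimal illustration; the helper `laplace_numeric` and all parameter values are ours, not from the text) approximates the transform integral by the trapezoidal rule at a point with $ \sigma>\alpha$:

```python
import numpy as np

def laplace_numeric(x, s, T=60.0, n=600_001):
    """Approximate the one-sided Laplace transform of x(t) at the
    complex point s by trapezoidal integration over [0, T]."""
    t = np.linspace(0.0, T, n)
    f = x(t) * np.exp(-s * t)
    dt = t[1] - t[0]
    return dt * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

# Causal growing exponential x(t) = A e^{alpha t} (illustrative values)
A, alpha = 2.0, 0.5
x = lambda t: A * np.exp(alpha * t)

s = 1.0 + 3.0j                  # sigma = 1 > alpha, so the integral converges
X_num = laplace_numeric(x, s)
X_closed = A / (s - alpha)      # closed-form transform derived above
```

Since the integrand decays like $ e^{-(\sigma-\alpha)t}$, truncating the integral at a moderate upper limit already gives several digits of agreement with $ A/(s-\alpha)$.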


Analytic Continuation

It turns out that the domain of definition of the Laplace transform can be extended by means of analytic continuation [14, p. 259]. Analytic continuation is carried out by expanding a function of $ s\in{\bf C}$ about all points in its domain of definition, and extending the domain of definition to all points for which the series expansion converges.

In the case of our exponential example,

$\displaystyle X(s) = \frac{A}{s - \alpha}, \quad ($re$\displaystyle \left\{s\right\}>\alpha) \protect$ (D.1)

the Taylor series expansion of $ X(s)$ about the point $ s=s_0$ in the $ s$ plane is given by

\begin{eqnarray*}
X(s) &=& X(s_0) + (s-s_0) X^\prime (s_0)
+ (s-s_0)^2\frac{X^{\prime\prime}(s_0)}{2!}
+ (s-s_0)^3\frac{X^{\prime\prime\prime}(s_0)}{3!}
+ \cdots\\ [5pt]
&\isdef & \sum_{n=0}^\infty (s-s_0)^n\frac{X^{(n)}(s_0)}{n!}
\end{eqnarray*}

where, writing $ X(s)$ as $ (s-\alpha)^{-1}$ (taking $ A=1$ for simplicity) and differentiating repeatedly with respect to $ s$,

\begin{eqnarray*}
X^\prime (s_0) &\isdef & X^{(1)}(s_0) \isdef \left.\frac{d X(s)}{ds}\right\vert _{s=s_0}
= \left.-(s-\alpha)^{-2}\right\vert _{s=s_0} = \frac{-1}{(s_0-\alpha)^2}\\ [5pt]
X^{(2)}(s_0) &=& \left.2(s-\alpha)^{-3}\right\vert _{s=s_0} = \frac{2}{(s_0-\alpha)^3}\\ [5pt]
X^{(3)}(s_0) &=& \left.-3!\,(s-\alpha)^{-4}\right\vert _{s=s_0} = \frac{-3!}{(s_0-\alpha)^4}
\end{eqnarray*}

and so on. We also used the factorial notation $ n!\isdeftext
n(n-1)(n-2)\cdots 3\cdot 2\cdot 1$, and we defined the special cases $ 0!\isdeftext 1$ and $ X^{(0)}(s_0)\isdeftext X(s_0)$, as is normally done. The series expansion of $ X(s)$ can thus be written

\begin{eqnarray*}
X(s) &=& \frac{1}{s_0-\alpha}
- \frac{s-s_0}{(s_0-\alpha)^2}
+ \frac{(s-s_0)^2}{(s_0-\alpha)^3}
- \cdots\\ [5pt]
&=& \sum_{n=0}^\infty (-1)^n\frac{(s-s_0)^n}{(s_0-\alpha)^{n+1}}.
\end{eqnarray*} \protect (D.2)

We now ask: for what values of $ s$ does the series Eq.$ \,$(D.2) converge? The value $ s=\alpha$ is particularly easy to check, since

$\displaystyle X(\alpha) = \sum_{n=0}^\infty (-1)^n\frac{(\alpha-s_0)^n}{(s_0-\alpha)^{n+1}}
= \sum_{n=0}^\infty \frac{1}{s_0-\alpha}
= \infty\cdot\frac{1}{s_0-\alpha}.
$

Thus, the series clearly does not converge for $ s=\alpha$, no matter what our choice of $ s_0$ might be. We must therefore accept the point at infinity for $ X(\alpha)$. This is eminently reasonable, since the closed-form Laplace transform we derived, $ X(s) = 1/(s - \alpha)$, does ``blow up'' at $ s=\alpha$. The point $ s=\alpha$ is called a pole of $ X(s) = 1/(s - \alpha)$.

More generally, let's apply the ratio test for the convergence of a geometric series. Since the $ n$th term of the series is

$\displaystyle (-1)^n\frac{(s-s_0)^n}{(s_0-\alpha)^{n+1}},
$

the ratio test demands that the ratio of term $ n+1$ over term $ n$ have absolute value less than $ 1$. That is, we require

$\displaystyle 1 > \left\vert
\frac{\displaystyle (-1)^{n+1}\frac{(s-s_0)^{n+1}}{(s_0-\alpha)^{n+2}}}{\displaystyle (-1)^n\frac{(s-s_0)^n}{(s_0-\alpha)^{n+1}}}
\right\vert
= \left\vert\frac{s-s_0}{s_0-\alpha}\right\vert,
$

or,

$\displaystyle \zbox {\left\vert s-s_0\right\vert < \left\vert s_0-\alpha\right\vert.}
$

We see that the region of convergence is the open disk centered at $ s=s_0$ with radius $ \vert s_0-\alpha\vert$. Thus, the disk of convergence extends from $ s_0$ out to, but does not include, the pole at $ s=\alpha$.

The analytic continuation of the domain of Eq.$ \,$(D.1) is now defined as the union of the disks of convergence for all points $ s_0\neq \alpha$. It is easy to see that a sequence of such disks can be chosen so as to define all points in the $ s$ plane except at the pole $ s=\alpha$.
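The convergence behavior is easy to probe numerically. The following Python sketch (the pole location $ \alpha$, expansion point $ s_0$, and test point are all illustrative choices, not values from the text) sums the Taylor expansion of $ 1/(s-\alpha)$ about $ s_0$ at a point inside the disk of convergence and compares with the closed form:

```python
import numpy as np

alpha = 1.0                       # pole location (illustrative)
X = lambda s: 1.0 / (s - alpha)   # closed-form transform, with A = 1

def taylor_partial(s, s0, N):
    """Partial sum of the Taylor expansion of 1/(s - alpha) about s0."""
    n = np.arange(N)
    return np.sum((-1.0) ** n * (s - s0) ** n / (s0 - alpha) ** (n + 1))

s0 = 3.0 + 0.0j       # disk of convergence has radius |s0 - alpha| = 2
s_in = 2.0 + 1.5j     # |s_in - s0| ~= 1.80 < 2: inside the disk
approx = taylor_partial(s_in, s0, 200)
```

Inside the disk the partial sums converge geometrically at rate $ \vert s-s_0\vert/\vert s_0-\alpha\vert$; at the pole $ s=\alpha$ itself, every term equals $ 1/(s_0-\alpha)$ and the series diverges, as found above.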

In summary, the Laplace transform of an exponential $ x(t)=A e^{\alpha
t}$ is

$\displaystyle X(s) = \frac{A}{s-\alpha}
$

and the value is well defined and finite for all $ s\neq \alpha$.

Analytic continuation works for any finite number of poles of finite order, and for an infinite number of distinct poles of finite order. It breaks down only in pathological situations, such as when the Laplace transform is singular everywhere on some closed contour in the complex plane. Such pathologies do not arise in practice, so we need not be concerned about them.


Relation to the z Transform

The Laplace transform is used to analyze continuous-time systems. Its discrete-time counterpart is the $ z$ transform:

$\displaystyle X_d(z) \isdef \sum_{n=0}^\infty x_d(nT) z^{-n}
$

If we define $ z=e^{sT}$, the $ z$ transform becomes proportional to the Laplace transform of a sampled continuous-time signal:

$\displaystyle X_d(e^{sT}) = \sum_{n=0}^\infty x_d(nT) e^{-snT}
$

As the sampling interval $ T$ goes to zero, we have

$\displaystyle \lim_{T\to 0}
X_d(e^{sT})T =
\lim_{\Delta t\to 0}
\sum_{n=0}^\infty x_d(t_n) e^{-st_n} \Delta t
= \int_{0}^\infty x_d(t) e^{-st} dt
\isdef X(s)
$

where $ t_n\isdef nT$ and $ \Delta t \isdef t_{n+1} - t_n = T$.
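This limit can be observed numerically. The sketch below (the sampled signal and evaluation point are illustrative choices) samples a decaying exponential $ x_d(t)=e^{\alpha t}$, evaluates its $ z$ transform at $ z=e^{sT}$, scales by $ T$, and compares against the exact Laplace transform as $ T$ shrinks:

```python
import numpy as np

alpha = -2.0                          # x(t) = e^{alpha t}, a decaying exponential
s = 1.0 + 1.0j
X_exact = 1.0 / (s - alpha)           # its Laplace transform, from above

def ztrans_times_T(T, n_terms=400_000):
    """T * X_d(e^{sT}): the z transform of the sampled signal,
    evaluated at z = e^{sT} and scaled by the sampling interval."""
    n = np.arange(n_terms)
    return T * np.sum(np.exp(alpha * n * T) * np.exp(-s * n * T))

errors = [abs(ztrans_times_T(T) - X_exact) for T in (0.1, 0.01, 0.001)]
```

The error shrinks roughly in proportion to $ T$, consistent with the Riemann-sum interpretation of the scaled $ z$ transform.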

In summary,

$\textstyle \parbox{0.8\textwidth}{the {\it z} transform (times the sampling interval $T$) of a sampled signal $x_d(nT)$ approaches, as $T\to0$, the Laplace transform of the underlying continuous-time signal $x_d(t)$.}$

Note that the $ z$ plane and $ s$ plane are generally related by

$\displaystyle \zbox {z = e^{sT}.}
$

In particular, the discrete-time frequency axis $ \omega_d \in(-\pi/T,\pi/T)$ and continuous-time frequency axis $ \omega_a \in(-\infty,\infty)$ are related by

$\displaystyle \zbox {e^{j\omega_d T} = e^{j\omega_a T}.}
$

For the mapping $ z=e^{sT}$ from the $ s$ plane to the $ z$ plane to be invertible, it is necessary that $ X(j\omega_a )$ be zero for all $ \vert\omega_a \vert\geq \pi/T$. If this is true, we say $ x(t)$ is bandlimited to half the sampling rate. As is well known, this condition is necessary to prevent aliasing when sampling the continuous-time signal $ x(t)$ at the rate $ f_s=1/T$ to produce $ x(nT)$, $ n=0,1,2,\ldots\,$ (see [84, Appendix G]).


Laplace Transform Theorems

Linearity

The Laplace transform is a linear operator. To show this, let $ w(t)$ denote a linear combination of signals $ x(t)$ and $ y(t)$,

$\displaystyle w(t) = \alpha x(t) + \beta y(t),
$

where $ \alpha$ and $ \beta$ are real or complex constants. Then we have

\begin{eqnarray*}
W(s) &\isdef & {\cal L}_s\{w\} \isdef {\cal L}_s\{\alpha x(t) + \beta y(t)\}
\isdef \int_0^\infty \left[\alpha x(t) + \beta y(t)\right] e^{-st} dt\\ [5pt]
&=& \alpha\int_0^\infty x(t) e^{-st} dt + \beta\int_0^\infty y(t) e^{-st} dt\\ [5pt]
&\isdef & \alpha X(s) + \beta Y(s).
\end{eqnarray*}

Thus, linearity of the Laplace transform follows immediately from the linearity of integration.
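A quick numerical confirmation (all signals and constants below are illustrative; `laplace_num` is our own trapezoidal helper, not a library routine) shows that transforming a linear combination agrees with combining the transforms:

```python
import numpy as np

def laplace_num(x, s, T=40.0, n=400_001):
    """Trapezoidal approximation of the one-sided Laplace transform."""
    t = np.linspace(0.0, T, n)
    f = x(t) * np.exp(-s * t)
    dt = t[1] - t[0]
    return dt * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

a, b = 2.0, -3.0                      # arbitrary constants alpha, beta
x = lambda t: np.exp(-t)
y = lambda t: np.cos(2.0 * t)
w = lambda t: a * x(t) + b * y(t)     # w = alpha x + beta y

s = 0.5 + 1.0j
lhs = laplace_num(w, s)                           # L{alpha x + beta y}
rhs = a * laplace_num(x, s) + b * laplace_num(y, s)
```

Because the discretized integral is itself a linear operation, the two sides agree to machine precision, not merely to quadrature accuracy.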


Differentiation

The differentiation theorem for Laplace transforms states that

$\displaystyle {\dot x}(t) \leftrightarrow s X(s) - x(0),
$

where $ {\dot x}(t) \isdef \frac{d}{dt}x(t)$, and $ x(t)$ is any differentiable function that approaches zero as $ t$ goes to infinity. In operator notation,

$\displaystyle \zbox {{\cal L}_{s}\{{\dot x}\} = s X(s) - x(0).}
$


Proof: This follows immediately from integration by parts:

\begin{eqnarray*}
{\cal L}_{s}\{{\dot x}\} &\isdef & \int_{0}^\infty {\dot x}(t) e^{-s t} dt\\ [5pt]
&=& \left[x(t) e^{-s t}\right]_{0}^{\infty} -
\int_{0}^\infty x(t) (-s)e^{-s t} dt\\ [5pt]
&=& s X(s) - x(0)
\end{eqnarray*}

since $ x(\infty)=0$ by assumption.
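The theorem is easy to verify numerically for a concrete signal. Here we take $ x(t)=e^{-t}$, so $ {\dot x}(t)=-e^{-t}$ and $ x(0)=1$ (the choices of signal, evaluation point, and helper below are ours, for illustration):

```python
import numpy as np

def laplace_num(x, s, T=40.0, n=400_001):
    """Trapezoidal approximation of the one-sided Laplace transform."""
    t = np.linspace(0.0, T, n)
    f = x(t) * np.exp(-s * t)
    dt = t[1] - t[0]
    return dt * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

x    = lambda t: np.exp(-t)       # test signal, with x(0) = 1
xdot = lambda t: -np.exp(-t)      # its time derivative

s = 0.8 + 0.6j
lhs = laplace_num(xdot, s)            # L{xdot}
rhs = s * laplace_num(x, s) - 1.0     # s X(s) - x(0)
```

Both sides approximate the exact value $ -1/(s+1)$, since $ X(s)=1/(s+1)$ for this signal.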


Corollary: Integration Theorem

$\displaystyle \zbox {{\cal L}_{s}\left\{\int_0^t x(\tau)d\tau\right\} = \frac{X(s)}{s}}
$

Thus, successive time derivatives correspond to successively higher powers of $ s$, and successive integrals with respect to time correspond to successively higher powers of $ 1/s$.
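The integration corollary admits the same style of numerical check. With $ x(t)=e^{-t}$, the running integral is $ \int_0^t e^{-\tau}d\tau = 1-e^{-t}$, and its transform should equal $ X(s)/s$ (signal and parameters below are illustrative):

```python
import numpy as np

def laplace_num(x, s, T=40.0, n=400_001):
    """Trapezoidal approximation of the one-sided Laplace transform."""
    t = np.linspace(0.0, T, n)
    f = x(t) * np.exp(-s * t)
    dt = t[1] - t[0]
    return dt * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

x    = lambda t: np.exp(-t)           # test signal
xint = lambda t: 1.0 - np.exp(-t)     # its running integral from 0 to t

s = 0.8 + 0.6j
lhs = laplace_num(xint, s)            # L{integral of x}
rhs = laplace_num(x, s) / s           # X(s) / s
```

Both sides approximate the exact value $ 1/[s(s+1)]$.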


Laplace Analysis of Linear Systems

The differentiation theorem can be used to convert differential equations into algebraic equations, which are easier to solve. We will now show this by means of two examples.

Moving Mass

Figure D.1 depicts a free mass driven by an external force along an ideal frictionless surface in one dimension. Figure D.2 shows the electrical equivalent circuit for this scenario in which the external force is represented by a voltage source emitting $ f(t)$ volts, and the mass is modeled by an inductor having the value $ L=m$ Henrys.

Figure D.1: Physical diagram of an external force driving a mass along a frictionless surface.
\begin{figure}\input fig/forcemass.pstex_t
\end{figure}

Figure: Electrical equivalent circuit of the force-driven mass in Fig.D.1.
\begin{figure}\input fig/forcemassec.pstex_t
\end{figure}

From Newton's second law of motion ``$ f=ma$'', we have

$\displaystyle f(t) = m\,a(t) \isdef m\,{\dot v}(t) \isdef m\,{\ddot x}(t).
$

Taking the unilateral Laplace transform and applying the differentiation theorem twice yields

\begin{eqnarray*}
F(s) &=& m\,{\cal L}_s\{{\ddot x}\}\\ [5pt]
&=& m\left[\,s {\cal L}_s\{{\dot x}\} - {\dot x}(0)\right]\\ [5pt]
&=& m\left\{s\left[s X(s) - x(0)\right] - {\dot x}(0)\right\}\\ [5pt]
&=& m\left[s^2\,X(s) - s\,x(0) - {\dot x}(0)\right].
\end{eqnarray*}

Thus, given

  • $ F(s) = $ Laplace transform of the driving force $ f(t)$,
  • $ x(0) = $ initial mass position, and
  • $ {\dot x}(0)\isdeftext v(0) = $ initial mass velocity,
we can solve algebraically for $ X(s)$, the Laplace transform of the mass position for all $ t\ge 0$. This Laplace transform can then be inverted to obtain the mass position $ x(t)$ for all $ t\ge 0$. This is the general outline of how Laplace-transform analysis goes for all linear, time-invariant (LTI) systems. For nonlinear and/or time-varying systems, Laplace-transform analysis cannot, strictly speaking, be used at all.

If the applied external force $ f(t)$ is zero, then, by linearity of the Laplace transform, so is $ F(s)$, and we readily obtain

$\displaystyle X(s)
= \frac{x(0)}{s} + \frac{{\dot x}(0)}{s^2}
= \frac{x(0)}{s} + \frac{v(0)}{s^2}.
$

Since $ 1/s$ is the Laplace transform of the Heaviside unit-step function

$\displaystyle u(t)\isdef \left\{\begin{array}{ll}
0, & t<0 \\ [5pt]
1, & t\ge 0 \\
\end{array}\right.,
$

we find that the position of the mass $ x(t)$ is given for all time by

$\displaystyle x(t) = x(0)\,u(t) + v(0)\,t\,u(t).
$

Thus, for example, a nonzero initial position $ x(0)=x_0$ and zero initial velocity $ v(0)=0$ results in $ x(t)=x_0$ for all $ t\ge 0$; that is, the mass ``just sits there''. Similarly, any initial velocity $ v(0)$ is integrated with respect to time, meaning that the mass moves forever at the initial velocity.
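As a cross-check, the Laplace transform of the predicted trajectory $ x(t)=x(0)+v(0)\,t$, computed numerically, should match $ x(0)/s + v(0)/s^2$ (the initial conditions, evaluation point, and helper below are illustrative choices of ours):

```python
import numpy as np

def laplace_num(x, s, T=80.0, n=800_001):
    """Trapezoidal approximation of the one-sided Laplace transform."""
    t = np.linspace(0.0, T, n)
    f = x(t) * np.exp(-s * t)
    dt = t[1] - t[0]
    return dt * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

x0, v0 = 1.5, 0.25             # illustrative initial position and velocity
x = lambda t: x0 + v0 * t      # free-mass trajectory found above

s = 0.6 + 0.4j                 # any point with sigma > 0
X_num = laplace_num(x, s)
X_pred = x0 / s + v0 / s**2    # x(0)/s + v(0)/s^2
```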

To summarize, this simple example illustrated the use of the Laplace transform to solve for the motion of a simple physical system (an ideal mass) in response to initial conditions (with no external driving forces). The system was described by a differential equation, which the Laplace transform converted to an algebraic equation.


Mass-Spring Oscillator Analysis

Consider now the mass-spring oscillator depicted physically in Fig.D.3, and in equivalent-circuit form in Fig.D.4.

Figure D.3: An ideal mass $ m$ sliding on a frictionless surface, attached via an ideal spring $ k$ to a rigid wall. The spring is at rest when the mass is centered at $ x=0$.
\includegraphics{eps/massspringwall}

Figure D.4: Equivalent circuit for the mass-spring oscillator.
\begin{figure}\input fig/tankec.pstex_t
\end{figure}

By Newton's second law of motion, the force $ f_m(t)$ applied to a mass equals its mass times its acceleration:

$\displaystyle f_m(t)=m{\ddot x}(t).
$

By Hooke's law for ideal springs, the compression force $ f_k(t)$ applied to a spring is equal to the spring constant $ k$ times the displacement $ x(t)$:

$\displaystyle f_k(t)=kx(t).
$

By Newton's third law of motion (``every action produces an equal and opposite reaction''), we have $ f_k = -f_m$. That is, the compression force $ f_k$ applied by the mass to the spring is equal and opposite to the accelerating force $ f_m$ exerted in the negative-$ x$ direction by the spring on the mass. In other words, the forces at the mass-spring contact-point sum to zero:

\begin{eqnarray*}
f_m(t) + f_k(t) &=& 0\\
\Rightarrow\; m {\ddot x}(t) + k x(t) &=& 0
\end{eqnarray*}

We have thus derived a second-order differential equation governing the motion of the mass and spring. (Note that $ x(t)$ in Fig.D.3 is both the position of the mass and compression of the spring at time $ t$.)

Taking the Laplace transform of both sides of this differential equation gives

\begin{eqnarray*}
0 &=& {\cal L}_s\{m{\ddot x}+ k x\} \\ [5pt]
&=& m{\cal L}_s\{{\ddot x}\} + k{\cal L}_s\{x\} \qquad \hbox{(by linearity)} \\ [5pt]
&=& m\left[s^2 X(s) - s\,x(0) - {\dot x}(0)\right] + k X(s) \qquad \hbox{(differentiation theorem twice)} \\ [5pt]
&=& ms^2 X(s) - msx(0) - m{\dot x}(0) + k X(s).
\end{eqnarray*}

To simplify notation, denote the initial position and velocity by $ x(0)=x_0$ and $ {\dot x}(0)={\dot x}_0=v_0$, respectively. Solving for $ X(s)$ gives

\begin{eqnarray*}
X(s) &=& \frac{sx_0 + v_0}{s^2 + \frac{k}{m}}
\;\isdef \; \frac{sx_0 + v_0}{s^2 + \omega_0^2}
= \frac{r}{s+j{\omega_0}} + \frac{\overline{r}}{s-j{\omega_0}},
\end{eqnarray*}

where $ {\omega_0}\isdef \sqrt{k/m}$, and

\begin{eqnarray*}
r &\isdef & \frac{x_0}{2} + j\,\frac{v_0}{2{\omega_0}}
\;=\; \left\vert r\right\vert e^{j\theta_r}, \qquad
\left\vert r\right\vert \;\isdef \; \frac{1}{2}\sqrt{x_0^2 + \frac{v_0^2}{\omega_0^2}},
\qquad
\theta_r \;\isdef \; \tan^{-1}\left(\frac{v_0}{{\omega_0}x_0}\right)
\end{eqnarray*}

denoting the modulus and angle of the pole residue $ r$, respectively. From §D.1, the inverse Laplace transform of $ 1/(s+a)$ is $ e^{-at}u(t)$, where $ u(t)$ is the Heaviside unit step function at time 0. Then by linearity, the solution for the motion of the mass is

\begin{eqnarray*}
x(t) &=& re^{-j{\omega_0}t} + \overline{r}e^{j{\omega_0}t}
= 2\left\vert r\right\vert\cos\left({\omega_0}t - \theta_r\right)\\ [5pt]
&=& \sqrt{x_0^2 + \frac{v_0^2}{\omega_0^2}}\,
\cos\left[{\omega_0}t - \tan^{-1}\left(\frac{v_0}{{\omega_0}x_0}\right)\right].
\end{eqnarray*}

If the initial velocity is zero ($ v_0=0$), the above formula reduces to $ x(t) = x_0\cos({\omega_0}t)$ and the mass simply oscillates sinusoidally at frequency $ {\omega_0}=
\sqrt{k/m}$, starting from its initial position $ x_0$. If instead the initial position is $ x_0=0$, we obtain

\begin{eqnarray*}
x(t) &=& \frac{v_0}{{\omega_0}}\sin({\omega_0}t)\\
\;\Rightarrow\; v(t) &=& v_0\cos({\omega_0}t).
\end{eqnarray*}
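As a sanity check, one can integrate $ m{\ddot x}+ k x = 0$ numerically and compare with the closed-form solution; expanding the cosine form above gives the equivalent expression $ x(t) = x_0\cos({\omega_0}t) + (v_0/{\omega_0})\sin({\omega_0}t)$. The following Python sketch (mass, spring constant, and initial conditions are illustrative values) uses a classical fourth-order Runge-Kutta step:

```python
import numpy as np

m, k = 2.0, 8.0                    # illustrative mass and spring constant
w0 = np.sqrt(k / m)                # omega_0 = sqrt(k/m) = 2 rad/s
x0, v0 = 0.3, 1.0                  # initial position and velocity

def simulate(t_end=5.0, dt=1e-3):
    """Integrate m*xddot + k*x = 0 with classical RK4 on the state (x, v)."""
    def deriv(state):
        x, v = state
        return np.array([v, -(k / m) * x])
    state = np.array([x0, v0])
    for _ in range(int(round(t_end / dt))):
        k1 = deriv(state)
        k2 = deriv(state + 0.5 * dt * k1)
        k3 = deriv(state + 0.5 * dt * k2)
        k4 = deriv(state + dt * k3)
        state = state + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return state[0]

t_end = 5.0
x_sim = simulate(t_end)
x_exact = x0 * np.cos(w0 * t_end) + (v0 / w0) * np.sin(w0 * t_end)
```

The simulated position agrees with the Laplace-transform solution to high accuracy, confirming the partial-fraction inversion above.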

