# Why in Fourier Transform, sigma = 0?

Thread started February 9, 2008
```hi!
i'm working on some mathematical problem in frequency shifting, or
frequency translation, of a signal...

i'm confused about which transformation i should use: is it Fourier or Laplace?

i don't understand why, in the Fourier Transform, sigma has to be 0?

and i found that if i use the Laplace Transform first shift theorem, it will
give me a phase and magnitude shift... is it true?

thanks
```
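Since the question is about frequency translation, here is a minimal numerical sketch of the frequency-shift property (this example, including the pulse shape, grid, and 5 Hz shift, is my own illustration, not from the thread): multiplying x(t) by exp(j*w0*t) shifts its spectrum up by w0/(2*pi).

```python
import numpy as np

# frequency translation: multiplying x(t) by exp(j*w0*t) shifts X(f) up by w0/(2*pi)
N, dt = 4096, 0.01
t = np.arange(N) * dt
x = np.exp(-(t - 20.0) ** 2)          # a smooth test pulse centered at t = 20
w0 = 2 * np.pi * 5.0                  # shift the spectrum up by 5 Hz
xs = x * np.exp(1j * w0 * t)

f = np.fft.fftfreq(N, dt)             # frequency of each FFT bin
peak_x = f[np.argmax(np.abs(np.fft.fft(x)))]    # ~0 Hz for the baseband pulse
peak_xs = f[np.argmax(np.abs(np.fft.fft(xs)))]  # ~5 Hz after the shift
print(peak_x, peak_xs)
```

The same multiply-by-a-complex-exponential trick (or by cos(w0*t), which shifts the spectrum both ways) is what modulation does in communications systems.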
```On Feb 10, 3:20 pm, "c1910" <c_19...@hotmail.com> wrote:
> hi!
> i'm working on some mathematical problem in frequency shifting, or
> frequency translation, of a signal...
>
> i'm confused about which transformation i should use: is it Fourier or Laplace?
>
> i don't understand why, in the Fourier Transform, sigma has to be 0?
>
> and i found that if i use the Laplace Transform first shift theorem, it will
> give me a phase and magnitude shift... is it true?
>
> thanks

Fourier is the one. Sigma is zero since you are looking at steady-
state properties along the jw axis. Laplace takes account of transient
behaviour too, and for many signals (but not all) you can interchange
the two.
```
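To make the steady-state point concrete, here is a small sketch (my own illustration, not from the post) using the closed form of the partial integral of the unit step, integral from 0 to T of exp(-(sigma + j*omega)*t) dt = (1 - exp(-(sigma + j*omega)*T)) / (sigma + j*omega). With sigma > 0 the partial integrals settle down as T grows (the Laplace transform converges), while at sigma = 0 they oscillate forever (the Fourier integral of the step does not converge in the ordinary sense).

```python
import numpy as np

def partial_integral(sigma, omega, T):
    # closed form of  integral_0^T  exp(-(sigma + j*omega)*t) dt
    s = sigma + 1j * omega
    return (1.0 - np.exp(-s * T)) / s

omega = 1.0

# sigma > 0: partial integrals converge to 1/(sigma + j*omega)
vals = [partial_integral(0.5, omega, T) for T in (20.0, 40.0, 80.0)]
spread_damped = max(abs(u - v) for u in vals for v in vals)

# sigma = 0: partial integrals keep oscillating; no limit exists
vals0 = [partial_integral(0.0, omega, T) for T in (20.0, 40.0, 80.0)]
spread_undamped = max(abs(u - v) for u in vals0 for v in vals0)

print(spread_damped, spread_undamped)   # tiny vs. order-1
```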
```On Feb 9, 9:20 pm, "c1910" <c_19...@hotmail.com> wrote:
>
> i'm working on some mathematical problem in frequency shifting, or
> frequency translation, of a signal...
>
> i'm confused about which transformation i should use: is it Fourier or Laplace?
>
> i don't understand why, in the Fourier Transform, sigma has to be 0?
>

let's look at this a little bit pedagogically.  i have some pretty
strong opinions on how these courses commonly called "Signals and
Systems" (after the Oppenheim and Willsky book commonly used in recent
times; in the olden days we called this topic "Linear System Theory",
and it was even *more* haphazardly taught in many undergrad EE
curricula) teach this, and what they should teach first.  this topic of
"Signals and Systems" or "Linear System Theory" is super-important as
it lays the foundations of Control Systems engineering, Communications
Systems (including Statistical Communications), Linear Electric
Circuits (which precedes Electronics), Distributed Networks (a.k.a.
transmission lines), Analog and/or Digital Filters, and DSP.

first, of course, there is the concept of the signal, what we call
"x(t)" for continuous-time contexts and "x[n]" for discrete-time (it's
the sampling and reconstruction theorem that ties these two together,
but we get into that later).   x(t) or x[n] can be vectors of signals,
which is very important in control theory and state-variable systems,
but i'm gonna leave that issue alone here.

then there are these "systems" (mathematicians might call them
"operators" or "mappings", denoted by "T{ }" - i think "T" is for
"transformation" or similar) with an input x(t) (or x[n]) and an output
y(t) (or y[n]).  if these systems are "linear" then the superposition
property applies:

if     y1(t) = T{ x1(t) }   and   y2(t) = T{ x2(t) }

then   y1(t) + y2(t)  =  T{ x1(t) + x2(t) }

and these systems are "time-invariant" if delaying (or advancing) the
input by any amount results in the very same output as the undelayed
case, except that it is delayed (or advanced) by the very same amount.

if     y(t) = T{ x(t) }

then   y(t-tau)  =  T{ x(t-tau)  }

*IF* the above two properties apply, you have what we in the EE
profession call a "linear, time-invariant system" (or LTI for short)
and all sorts of nifty things can be done to describe the behavior of
these LTI systems *in general*.  and we begin with convolution.  now
i'm gonna change the "T" operator symbol with "LTI" to indicate that
we are now only considering linear, time-invariant systems.

if the above is true, we can *always* relate the input x(t) to the
output y(t) by way of the convolution integral:

if  h(t) = LTI{ delta(t) }

then

+inf
y(t)  =  integral{ x(u) h(t-u) du }
-inf

or

+inf
y(t)  =  integral{ h(u) x(t-u) du }
-inf

where h(t) is the "impulse response": whatever the system does in
response to x(t) being set to the Dirac impulse function.  proving
this is not so hard, but i'm not gonna do it here.  so, if we bang the
LTI system with a dirac impulse and measure or calculate the output,
we know how the system will respond for *any* input.  the impulse
response tells us everything we need to know about the input/output
characteristics of an LTI system.

this might be *everything* we need to know, but in the state variable
model (often used in controls systems, but sometimes in
communications, such as the Kalman filter), sometimes the input/output
characteristics aren't enough.  we might have acceptable input/output
behavior but all sorts of hell is happening inside (like it's
unstable), and if the system is "not completely observable" we might
not know it from looking at the output until even our floating-point
numbers overflow and the thing goes non-linear, and then
you're screwed.  but, then
again, state-variable systems with their vector signals are beyond our
scope at the moment.

okay, so now we have a general formula that relates the input x(t) to
the output y(t) in an LTI system.  we might start asking questions
about, knowing h(t), what the system does for certain specific classes
of input.  because exponential functions have an interesting property
that the derivatives of (and integrals of and even delayed copies of)
these exponential functions are themselves exponential functions, with
the same coefficient of "t" in the exponent, it turns out that such
exponential functions are "eigenfunctions" of these LTI systems.  that
is, if an exponential function goes in, an exponential function
(possibly/likely scaled in amplitude) will come out.

if x(t) = exp(s*t)

then

y(t) = LTI{ x(t) } = H(s) * exp(s*t) = H(s) * x(t)

where

+inf
H(s) =  integral{ h(t) * exp(-s*t) dt }
-inf

this constant, H(s), is the "eigenvalue" which simply scales the
input eigenfunction, x(t), to result in the output, y(t), which is the
same species of animal as the input.  but that constant is different
for different LTI systems, so it depends on the descriptor of the LTI
system, which is h(t), and the above integral shows precisely what that
dependence is.  now, this integral might look familiar to you.

whacking an LTI system (or *any* system) with a true real exponential
(from -inf < t < +inf) might be kinda difficult since, unless sigma is
zero, the thing blows up to pretty damn big values somewhere.

now, assuming we do things with complex numbers for generality, a
particularly interesting subset of exponential functions are
sinusoidal functions:

exp(j*omega*t) = cos(omega*t) + j*sin(omega*t)

but since they are exponentials (in form), these are
eigenfunctions too, and the above fact about exponential
eigenfunctions is true:

if x(t) = exp(j*omega*t) = cos(omega*t) + j*sin(omega*t)

then

y(t) = LTI{ x(t) } = H(j*omega) * exp(j*omega*t) = H(j*omega) * x(t)

where

+inf
H(j*omega) =  integral{ h(t) * exp(-j*omega*t) dt }
-inf

now THIS integral oughta look familiar.

get back to it in some number of hours.

> and i found that if i use the Laplace Transform first shift theorem, it will
> give me a phase and magnitude shift... is it true?
>

dunno what you mean.

r b-j
```
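The eigenfunction claim in the post above is easy to check numerically. A sketch (the first-order system h(t) = a*exp(-a*t), with eigenvalue H(s) = a/(s + a), is my own choice of example, not from the post): feed exp(j*omega*t) through the convolution integral and, once the start-up transient dies out, the output is just H(j*omega) times the input.

```python
import numpy as np

# an example LTI system: h(t) = a * exp(-a*t) * u(t), for which H(s) = a/(s + a)
a = 2.0
dt = 1e-3
t = np.arange(0.0, 20.0, dt)
h = a * np.exp(-a * t)

omega = 3.0
x = np.exp(1j * omega * t)              # the eigenfunction exp(j*omega*t)

# y(t) = integral h(u) x(t-u) du, approximated by a discrete convolution
y = np.convolve(h, x)[: len(t)] * dt

H = a / (1j * omega + a)                # the predicted eigenvalue H(j*omega)

# after the start-up transient (here x "switches on" at t = 0) has decayed,
# the output is the input scaled by the complex constant H(j*omega)
late = t > 10.0
err = np.max(np.abs(y[late] - H * x[late]))
print(err)
```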
```On Feb 10, 4:11 pm, robert bristow-johnson <r...@audioimagination.com>
wrote:

it looks like i screwed some things up.  so i'm repeating it,
hopefully with the screwups fixed.

> On Feb 9, 9:20 pm, "c1910" <c_19...@hotmail.com> wrote:
>
>
>
> > i'm working on some mathematical problem in frequency shifting, or
> > frequency translation, of a signal...
>
> > i'm confused about which transformation i should use: is it Fourier or Laplace?
>
> > i don't understand why, in the Fourier Transform, sigma has to be 0?
>

let's look at this a little bit pedagogically.  i have some pretty
strong opinions on how these courses commonly called "Signals and
Systems" (after the Oppenheim and Willsky book commonly used in recent
times; in the olden days we called this topic "Linear System Theory",
and it was even *more* haphazardly taught in many undergrad EE
curricula) teach this, and what they should teach first.  this topic of
"Signals and Systems" or "Linear System Theory" is super-important as
it lays the foundations of Control Systems engineering, Communications
Systems (including Statistical Communications), Linear Electric
Circuits (which precedes Electronics), Distributed Networks (a.k.a.
transmission lines), Analog and/or Digital Filters, and DSP.

first, of course, there is the concept of the signal, what we call
"x(t)" for continuous-time contexts and "x[n]" for discrete-time (we
use square brackets for discrete signals or sequences of numbers
and round parentheses for continuous-time or continuous-frequency
functions).  it's the sampling and reconstruction theorem that ties
these two together, but we get into that later.  x(t) or x[n] can be
vectors of signals, which is very important in control theory and
state-variable systems, but i'm gonna leave that issue alone here.

then there are these "systems" (mathematicians might call them
"operators" or "mappings", denoted by "T{ }" - i think "T" is for
"transformation" or similar) with an input x(t) (or x[n]) and an output
y(t) (or y[n]).  if these systems are "linear" then the superposition
property applies:

if     y1(t) = T{ x1(t) }   and   y2(t) = T{ x2(t) }

then   y1(t) + y2(t)  =  T{ x1(t) + x2(t) }

and these systems are "time-invariant" if delaying (or advancing) the
input by any amount results in the very same output as the undelayed
case, except that it is delayed (or advanced) by the very same amount.

if     y(t) = T{ x(t) }

then   y(t-tau)  =  T{ x(t-tau)  }

*IF* the above two properties apply, you have what we in the EE
profession call a "linear, time-invariant system" (or LTI for short)
and all sorts of nifty things can be done to describe the behavior of
these LTI systems *in general*.  and we begin with convolution.  now
i'm gonna change the "T" operator symbol with "LTI" to indicate that
we are now only considering linear, time-invariant systems.

if the above is true, we can *always* relate the input x(t) to the
output y(t) by way of the convolution integral:

if  h(t) = LTI{ delta(t) }

then

+inf
y(t)  =  integral{ x(u) h(t-u) du }
-inf

or

+inf
y(t)  =  integral{ h(u) x(t-u) du }
-inf

where h(t) is the "impulse response": whatever the system does in
response to x(t) being set to the Dirac impulse function.  proving
this is not so hard, but i'm not gonna do it here.  so, if we bang the
LTI system with a dirac impulse and measure or calculate the output,
we know how the system will respond for *any* input.  the impulse
response tells us everything we need to know about the input/output
characteristics of an LTI system.

this might be *everything* we need to know, but in the state variable
model (often used in controls systems, but sometimes in
communications, such as the Kalman filter), sometimes the input/output
characteristics aren't enough.  we might have acceptable input/output
behavior but all sorts of hell is happening inside (like it's
unstable), and if the system is "not completely observable" we might
not know it from looking at the output until even our floating-point
numbers overflow and the thing goes non-linear, and then
you're screwed.  but, then
again, state-variable systems with their vector signals are beyond our
scope at the moment.

okay, so now we have a general formula that relates the input x(t) to
the output y(t) in an LTI system.  we might start asking questions
about, knowing h(t), what the system does for certain specific classes
of input.  because exponential functions have an interesting property
that the derivatives of (and integrals of and even delayed copies of)
these exponential functions are themselves exponential functions, with
the same coefficient of "t" in the exponent, it turns out that such
exponential functions are "eigenfunctions" of these LTI systems.  that
is, if an exponential function goes in, an exponential function
(possibly/likely scaled in amplitude) will come out.

if x(t) = exp(s*t)

then

y(t) = LTI{ x(t) } = H(s) * exp(s*t) = H(s) * x(t)

where

+inf
H(s) =  integral{ h(t) * exp(-s*t) dt }
-inf

this constant, H(s), is the "eigenvalue" which simply scales the
input eigenfunction, x(t), to result in the output, y(t), which is the
same species of animal as the input.  but that constant is different
for different LTI systems, so it depends on the descriptor of the LTI
system, which is h(t), and the above integral shows precisely what that
dependence is.  now, this integral might look familiar to you.

whacking an LTI system (or *any* system) with a true real exponential
(from -inf < t < +inf) might be kinda difficult since, unless sigma is
zero, the thing blows up to pretty damn big values somewhere.

now, assuming we do things with complex numbers for generality, a
particularly interesting subset of exponential functions are
sinusoidal functions:

exp(j*omega*t) = cos(omega*t) + j*sin(omega*t)

but since they are exponentials (in form), these are
eigenfunctions too, and the above fact about exponential
eigenfunctions is true:

if x(t) = exp(j*omega*t) = cos(omega*t) + j*sin(omega*t)

then

y(t) = LTI{ x(t) } = H(j*omega)*exp(j*omega*t) = H(j*omega)*x(t)

where

+inf
H(j*omega) =  integral{ h(t) * exp(-j*omega*t) dt }
-inf

now THIS integral oughta look familiar.

okay, moving on with the gospel of linear system theory according to
rbj (Eric, you, as the Minister of Algorithms, oughta be preaching
this sermon).

so now we know what an LTI system does to the general input, x(t), but
it's an icky convolution integral that doesn't ostensibly tell us much
about what an LTI system will do to a particular class of signals we
call "exponential functions", of which sinusoids (or the complex
sinusoid, exp(j*omega*t)) are a subset.  we find out that this
specific class of signal, the exponentials and sinusoids, are
eigenfunctions of LTI systems.  *any* LTI system.  the cool thing that
sorta began with Fourier (and maybe Laplace, for all i know) was that
they showed that going from the general x(t) to the specific exp(s*t)
was not losing generality, at least not for any decently continuous
signal or system, which are the ones we electrical engineers care
about.

these guys showed that you could sum up a bunch of sinusoids (if
you're Fourier) or sinusoids with a little bit of an exponential
dampening factor (if you're Laplace) and get *any* general signal
(well, at least decently continuous ones).

Fourier first shows that

if  x(t) is periodic with period T: x(t) = x(t-T) forall t

then

+inf
x(t) =  SUM{ X[k] * exp(j*k*(2*pi/T)*t) }
k=-inf

where

T/2
X[k] = (1/T) integral{x(t)*exp(-j*k*(2*pi/T)*t) dt}
-T/2

so if x(t) is periodic, but otherwise general, we can represent it as
a sum of sinusoids and, using the superposition property that is
axiomatic of LTI systems, run each sinusoid through the system
individually.  the output for each term would be

LTI{X[k]*exp(j*k*(2*pi/T)*t)}
= X[k]*H(j*k*(2*pi/T))*exp(j*k*(2*pi/T)*t)

and the output of the whole thing is

y(t) = LTI{ x(t) }

= LTI{ SUM{X[k]*exp(j*k*(2*pi/T)*t)} }
k

= SUM{ X[k]*H(j*k*(2*pi/T))*exp(j*k*(2*pi/T)*t) }
k

so, the output is also periodic with period T

+inf
y(t) =  SUM{ Y[k] * exp(j*k*(2*pi/T)*t) }
k=-inf

and its Fourier coefficients,

T/2
Y[k] = (1/T) integral{y(t)*exp(-j*k*(2*pi/T)*t) dt}
-T/2

are known from and related to the input coefs as

Y[k] = X[k] * H(j*k*(2*pi/T)) .

then Fourier generalizes further and asks "what if the input *isn't*
periodic?  what if we approximate our non-periodic input as a periodic
function with a period of one day or one year?"  is repeating every
year close enough to non-periodic for you?  (it's not if you're doing
celestial mechanics or seasonal meteorology.)

so, then we let the -T/2 above go to -infinity and the +T/2 above go
to +infinity, do a little Riemann integral magic and what we get is

+inf
x(t) =  integral{ X(f)*exp(+j*2*pi*f*t) df}
-inf

where

+inf
X(f) =  integral{ x(t)*exp(-j*2*pi*f*t) dt}
-inf

These two integrals are what happens to the Fourier series

+inf
x(t) =  SUM{ X[k] * exp(j*k*(2*pi/T)*t) }
k=-inf

where

T/2
X[k] = (1/T) integral{x(t)*exp(-j*k*(2*pi/T)*t) dt}
-T/2

when you force T to go to infinity.  the 1/T becomes "df", and the
X[k]*exp(j*k*(2*pi/T)*t) becomes X(f)*exp(+j*2*pi*f*t) in the limit of
the Riemann summation.

That is the Fourier Integral or Fourier Transform (and inverse).
likewise, for an LTI system,

if   y(t) =  LTI{ x(t) }

and  h(t) =  LTI{ delta(t) }

then

Y(f) = H(f)*X(f)   ("*" means multiplication not convolution)

we can show that just like we did for the Fourier series case.

So the Fourier Transform is there to represent our input, whether it's
periodic or not, as an infinite summation (the integral) of sinusoids
of infinitesimally close frequencies.  these sinusoids are in that
class of exponential eigenfunctions.
but the Fourier Integral has trouble converging for some legitimate
signals, such as the unit step function.  if, instead we represent the
unit step function as a similar infinite summation (integral) of
exponentially-damped sinusoids (these are still in that class of
exponential eigenfunctions), the resulting integral *can* usually be
made to converge, and that is what the Laplace Transform is.

outa time again.  we'll see if i can make another installment on
this.  maybe not.

r b-j

```
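The Fourier-series machinery in the post above can be checked numerically. A sketch (the square wave and the lowpass H(s) = a/(s + a) are my own example choices, not from the post): compute X[k] by the integral over one period, compare with the known square-wave coefficients 2/(j*pi*k) for odd k, then form the output coefficients Y[k] = H(j*k*w0) * X[k].

```python
import numpy as np

T = 2.0                                  # period
w0 = 2 * np.pi / T
N = 200000
t = (np.arange(N) + 0.5) * (T / N)       # midpoint grid over one period
x = np.where(t < 1.0, 1.0, -1.0)         # square wave: +1 then -1

def X(k):
    # X[k] = (1/T) * integral over one period of x(t)*exp(-j*k*w0*t) dt
    return np.sum(x * np.exp(-1j * k * w0 * t)) * (1.0 / N)

# for this square wave the coefficients are 2/(j*pi*k) for odd k, 0 for even k
err_odd = abs(X(1) - 2.0 / (1j * np.pi))
err_even = abs(X(2))
print(err_odd, err_even)                 # both tiny

# running x(t) through an LTI system multiplies each coefficient by H(j*k*w0);
# e.g. for the lowpass H(s) = a/(s + a):
a = 5.0
Y = {k: (a / (1j * k * w0 + a)) * X(k) for k in range(-15, 16)}
y = sum(Y[k] * np.exp(1j * k * w0 * t) for k in Y).real   # a smoothed square wave
```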
```On 10 Feb, 03:20, "c1910" <c_19...@hotmail.com> wrote:
> hi!
> i'm working on some mathematical problem in frequency shifting, or
> frequency translation, of a signal...
>
> i'm confused about which transformation i should use: is it Fourier or Laplace?
>
> i don't understand why, in the Fourier Transform, sigma has to be 0?
>
> and i found that if i use the Laplace Transform first shift theorem, it will
> give me a phase and magnitude shift... is it true?

The sigma = 0 simplification simplifies computations
and calculations. The FT is a simplified version of
the LT. What can be done with the FT can also be done
with the LT. The FT computations are significantly
simpler, though; no equivalent to the FFT exists for
the LT.

Rune
```
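Rune's point that the Fourier side has fast machinery can be illustrated with a quick sketch (the decaying-exponential example and grid sizes are my own, not from the post): the FFT of a sampled, time-limited signal, scaled by the sample spacing, approximates samples of the continuous Fourier transform.

```python
import numpy as np

# x(t) = exp(-t) on [0, L); its Fourier transform (exp(-j*2*pi*f*t) kernel),
# truncated to [0, L], is (1 - exp(-(1 + j*2*pi*f)*L)) / (1 + j*2*pi*f)
L, N = 30.0, 4096
dt = L / N
t = np.arange(N) * dt
x = np.exp(-t)

Xf = np.fft.fft(x) * dt                  # rectangular-rule approximation of the FT
f = np.fft.fftfreq(N, dt)                # the frequency of each FFT bin

analytic = (1.0 - np.exp(-(1.0 + 2j * np.pi * f) * L)) / (1.0 + 2j * np.pi * f)
err = np.max(np.abs(Xf - analytic))
print(err)                               # small (rectangular-rule error, ~dt/2)
```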
```On Feb 11, 2:33 am, Rune Allnor <all...@tele.ntnu.no> wrote:
> The sigma = 0 simplification simplifies computations
> and calculations. The FT is a simplified version of
> the LT. What can be done with the FT can also be done
> with the LT. The FT computations are significantly
> simpler, though; no equivalent to the FFT exists for
> the LT.

While the Fourier and Laplace transforms are certainly related, one
has to be careful about statements that one is a "simplified version"
of the other.  In particular, the types of functions for which the
transforms converge are radically different for the two transforms.
There are plenty of functions for which the LT is well-defined but the
FT is not, and conversely there are functions where the FT is defined
but the (two-sided) LT is not.

Also, the applications are not entirely the same.  In particular, the
one-sided Laplace transform is particularly suited (at least in
analytical calculations) to initial-value problems.  Also, in complex
analysis the Laplace transform has a special role because it is an
analytic function for a semi-infinite portion of the complex plane.

Finally, there *are* fast algorithms to compute the Laplace transform
numerically, and in fact there are algorithms that work in O(n) time.
See:

* V. Rokhlin, "A fast algorithm for the discrete Laplace
transformation," J. Complexity 4, 12-32 (1988).
* J. Strain, "A fast Laplace transform based on Laguerre functions,"
Mathematics of Computation 58, 275-283 (1992).

The real problem is not the computation of the Laplace transform, but
the numerical computation of the *inverse* Laplace transform.  Various
techniques have been proposed for this problem [see e.g. V. Kryzhniy,
Inverse Problems 22, 579-597 (2006)], but getting something
simultaneously accurate, stable, and efficient has been a challenge.

Regards,
Steven G. Johnson
```
```On Feb 11, 12:53 pm, stevenj....@gmail.com wrote:
> While the Fourier and Laplace transforms are certainly related, one
> has to be careful about statements that one is a "simplified version"
> of the other.  In particular, the types of functions for which the
> transforms converge are radically different for the two transforms.
> There are plenty of functions for which the LT is well-defined but the
> FT is not, and conversely there are functions where the FT is defined
> but the (two-sided) LT is not.

(I should have said: but the (two-sided) LT is not, except along the
imaginary axis in the complex plane.  Normally, if one is doing the
LT, one restricts it to functions where the LT is defined at least for
a strip of real values; see e.g. A. H. Zemanian, Generalized Integral
Transformations, Dover 1968.  In any case, the point is that one has
to be careful about switching between the LT and the FT because of the
domain of convergence issues.)
```