# Transforms

Started by manishp, December 8, 2012
```
Sirs,

I would like to know the reason for having different transforms (fourier,
cosine, z transform etc.)

are all these related to conversion from time domain to frequency domain?

Thanks,
```
```
manishp <58525@dsprelated> wrote:

> I would like to know the reason for having different
> transforms (fourier, cosine, z transform etc.)

Sine transform, cosine transform and (exponential) Fourier
transform are related, but with different boundary conditions.

Z-transform and Laplace transform are related, and, for the most
part have different uses from the other transforms.

> are all these related to conversion from time domain to frequency domain?

-- glen
```
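One concrete way to see the boundary-condition point above: the DCT-I of a sequence is the DFT of its even (mirrored) extension, so the cosine transform is the Fourier transform with a symmetric boundary imposed. A minimal numpy check (the sequence and sizes are illustrative, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
x = rng.standard_normal(N)

# DCT-I straight from its definition:
#   y_k = x_0 + (-1)^k x_{N-1} + 2 * sum_{n=1}^{N-2} x_n cos(pi*k*n/(N-1))
n = np.arange(1, N - 1)
dct1 = np.array([x[0] + (-1) ** k * x[-1]
                 + 2 * np.sum(x[1:-1] * np.cos(np.pi * k * n / (N - 1)))
                 for k in range(N)])

# Mirroring x imposes the even-symmetric ("cosine") boundary condition:
# [x0 .. x_{N-1}, x_{N-2} .. x1], length 2N-2, which the DFT then sees.
even_ext = np.concatenate([x, x[-2:0:-1]])
via_dft = np.fft.fft(even_ext).real[:N]

print(np.allclose(dct1, via_dft))  # True
```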
```
On Sun, 09 Dec 2012 04:18:33 +0000, glen herrmannsfeldt wrote:

> manishp <58525@dsprelated> wrote:
>
>> I would like to know the reason for having different transforms
>> (fourier, cosine, z transform etc.)
>
> Sine transform, cosine transform and (exponential) Fourier transform are
> related, but with different boundary conditions.
>
> Z-transform and Laplace transform are related, and, for the most part
> have different uses from the other transforms.
>
>> are all these related to conversion from time domain to frequency
>> domain?

And yes, they are all related to conversion from time domain to frequency
domain.

--
Tim Wescott
Control system and signal processing consulting
www.wescottdesign.com
```
```
On 12/9/12 12:47 AM, Tim Wescott wrote:
> On Sun, 09 Dec 2012 04:18:33 +0000, glen herrmannsfeldt wrote:
>
>> manishp<58525@dsprelated>  wrote:
>>
>>> I would like to know the reason for having different transforms
>>> (fourier, cosine, z transform etc.)
>>
>> Sine transform, cosine transform and (exponential) Fourier transform are
>> related, but with different boundary conditions.
>>
>> Z-transform and Laplace transform are related, and, for the most part
>> have different uses from the other transforms.
>>
>>> are all these related to conversion from time domain to frequency
>>> domain?
>
> And yes, they are all related to conversion from time domain to frequency
> domain.

well, he didn't explicitly mention Hilbert.

but pretty much the others are the same thing.  just applied in
different contexts and combinations:

continuous time       vs.  discrete time
continuous frequency  vs.  discrete frequency

different regions of convergence, but pretty much i treat the Fourier
Transform as the same as the double-sided Laplace with s=jw.  the
one-sided Laplace is the same as the double-sided if you deal with the
unit step function explicitly.  no difference there.

and the DTFT is the same as the Z Transform with z=e^(jw)

and the Z Transform (or the DTFT) is just the Laplace Transform (or
Fourier) with the time-domain sampled with dirac impulses.

--

r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."

```
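The z = e^(jw) substitution above is easy to check numerically for a textbook pair: x[n] = a^n u[n] has Z transform 1/(1 - a z^-1), and summing the DTFT directly gives the same values on the unit circle. A sketch (the value of a and the truncation length are arbitrary):

```python
import numpy as np

a = 0.7                              # |a| < 1 so the DTFT sum converges
w = np.linspace(-np.pi, np.pi, 256)  # digital frequency
z = np.exp(1j * w)                   # the substitution z = e^{jw}

# Closed-form Z transform of x[n] = a^n u[n]:  X(z) = 1 / (1 - a z^-1)
X_closed = 1.0 / (1.0 - a / z)

# Direct DTFT sum, truncated; the tail is O(a^N), negligible here
N = 200
n = np.arange(N)
X_dtft = np.exp(-1j * np.outer(w, n)) @ (a ** n)

print(np.allclose(X_dtft, X_closed, atol=1e-10))  # True
```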
```
Many thanks
```
```
On Sun, 09 Dec 2012 01:06:35 -0500, robert bristow-johnson wrote:

> On 12/9/12 12:47 AM, Tim Wescott wrote:
>> On Sun, 09 Dec 2012 04:18:33 +0000, glen herrmannsfeldt wrote:
>>
>>> manishp<58525@dsprelated>  wrote:
>>>
>>>> I would like to know the reason for having different transforms
>>>> (fourier, cosine, z transform etc.)
>>>
>>> Sine transform, cosine transform and (exponential) Fourier transform
>>> are related, but with different boundary conditions.
>>>
>>> Z-transform and Laplace transform are related, and, for the most part
>>> have different uses from the other transforms.
>>>
>>>> are all these related to conversion from time domain to frequency
>>>> domain?
>>
>> And yes, they are all related to conversion from time domain to
>> frequency domain.
>
> well, he didn't explicitly mention Hilbert.
>
> but pretty much the others are the same thing.  just applied in
> different contexts and combinations:
>
>      continuous time       vs.  discrete time
>      continuous frequency  vs.  discrete frequency
>
> different regions of convergence, but pretty much i treat the Fourier
> Transform as the same as the double-sided Laplace with s=jw.  the
> one-sided Laplace is the same as the double-sided if you deal with the
> unit step function explicitly.  no difference there.
>
> and the DTFT is the same as the Z Transform with z=e^(jw)
>
> and the Z Transform (or the DTFT) is just the Laplace Transform (or
> Fourier) with the time-domain sampled with dirac impulses.

I think there are some mathematical difficulties that make it difficult
to unify them so blithely and remain entirely consistent -- but I pretty
much do what you do, and it works for me.  (And I don't recall what those
details are -- I just recall getting scolded over them by mathematicians).

--
Tim Wescott
Control system and signal processing consulting
www.wescottdesign.com
```
```
On 12/9/12 10:18 AM, Tim Wescott wrote:
> On Sun, 09 Dec 2012 01:06:35 -0500, robert bristow-johnson wrote:
>
>> On 12/9/12 12:47 AM, Tim Wescott wrote:
>>> On Sun, 09 Dec 2012 04:18:33 +0000, glen herrmannsfeldt wrote:
>>>
>>>> manishp<58525@dsprelated>   wrote:
>>>>
>>>>> I would like to know the reason for having different transforms
>>>>> (fourier, cosine, z transform etc.)
>>>>
>>>> Sine transform, cosine transform and (exponential) Fourier transform
>>>> are related, but with different boundary conditions.
>>>>
>>>> Z-transform and Laplace transform are related, and, for the most part
>>>> have different uses from the other transforms.
>>>>
>>>>> are all these related to conversion from time domain to frequency
>>>>> domain?
>>>
>>> And yes, they are all related to conversion from time domain to
>>> frequency domain.
>>
>> well, he didn't explicitly mention Hilbert.

just to be clear to the OP, the Hilbert Transform does not transform
from one domain to the other.  what comes out of the H.T. is in the same
domain as what goes into it.

>>
>> but pretty much the others are the same thing.  just applied in
>> different contexts and combinations:
>>
>>       continuous time       vs.  discrete time
>>       continuous frequency  vs.  discrete frequency
>>
>> different regions of convergence, but pretty much i treat the Fourier
>> Transform as the same as the double-sided Laplace with s=jw.  the
>> one-sided Laplace is the same as the double-sided if you deal with the
>> unit step function explicitly.  no difference there.
>>
>> and the DTFT is the same as the Z Transform with z=e^(jw)
>>
>> and the Z Transform (or the DTFT) is just the Laplace Transform (or
>> Fourier) with the time-domain sampled with dirac impulses.
>
> I think there are some mathematical difficulties that make it difficult
> to unify them so blithely and remain entirely consistent -- but I pretty
> much do what you do, and it works for me.  (And I don't recall what those
> details are -- I just recall getting scolded over them by mathematicians).

i think it's ROC issues and consistent limits on the integral.  for
instance, you need to do some hand-waving to calculate the continuous
Fourier transform of the Heaviside unit step.  but add a little real and
positive sigma to that j*w (w="omega") and it converges just fine (and
you don't have that 1/2 dirac impulse function).

just from a symbolic POV, you know that the F.T. and the one-sided L.T.
*must* agree (with the substitution of s=j*w) when x(t) is 0 for t<0 and
they both converge.  a simple example:

x(t)  =  e^(-alpha*t) u(t)

it *is* clear to me that the DTFT is precisely the F.T. of the sampled
x(t) with time and frequency normalized so that the sampling period is 1
and Nyquist is pi.

it is clear to me that the Z transform is precisely the L.T. of the same
sampled x(t) (without that T or 1/T scaling that i sometimes bitch about
regarding the sampling theorem) with the symbolic substitution of z =
e^(sT).

connecting the concepts the other way, it is clear to me that the DTFT
is precisely the Z transform with z=e^(j*w)  (T=1 here) just like the
F.T. is the L.T. with s=j*w.

we get into fights here at comp.dsp about the precise meaning of the
DFT.  it seems that no one disputes that the DFT is the DTFT or ZT
evaluated at N equally-spaced points on the unit circle z=e^(j*w).  i
guess what we disagree about is exactly what is going into the DTFT or
ZT.  i say it doesn't matter, because sampling the frequency domain
causes periodic extension and time-aliasing in the time domain anyway,
whatever it is that goes into the DTFT or ZT.

but, unless the computer has some form of symbolic representation of
information, the only way you can get a computer to deal with anything
Fourier, is with the DFT.  so the DFT is not quite in the same concept
matrix that the other four are:

(mono-spaced font is needed)

continuous time            discrete time

F.T.            <--->      DTFT

^                           ^
|                           |
|                           |
V                           V

L.T.            <--->       Z transform

the L.T. and ZT are both "double-sided".  bottom limit of the integral
or summation is -inf.  the expressions you commonly see in textbooks
that have the bottom limit 0 (the "single-sided" L.T. or ZT) are readily
dealt with in the double-sided version by use of the unit step function.

to add the DFT to this concept matrix, i think you need to also add
Fourier Series:

                       continuous time          discrete time

discrete
frequency              Fourier series   <--->   DFT

                         ^                       ^
                         |                       |
                         |                       |
                         V                       V
continuous
frequency              F.T.             <--->   DTFT

                         ^                       ^
                         |                       |
                         |                       |
                         V                       V

                       L.T.             <--->   Z transform

the DFT periodicity deniers might have a problem with this because it
implies that the DFT and the DFS are the same thing (which they are).
"discrete frequency" means "periodic in time" just as "discrete time"
means "periodic in frequency".  there is no escaping that fact.

then the last thing to tell the OP, in case he/she was wondering, is
that the FFT is nothing other than the DFT, but a particularly efficient
(or fast) method to calculate it.  all of the theorems regarding the
signals in both domains that apply to the DFT (and the DFS, which is the
same thing) also apply to the FFT.  what is different are the cost
metrics of calculation and, when you really drill down into it, the
cost metrics of rounding or quantization error.  but thems are details.

if some mathematician yells at you for the blithe generalizations above,
please point them in my direction.

--

r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."

```
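Both identifications in this post can be verified in a few lines: a naive O(N^2) DFT, the DTFT of the finite sequence sampled at N equally spaced points on the unit circle, and the FFT all produce the same numbers. A numpy sketch (the signal and its length are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16
x = rng.standard_normal(N)

# naive DFT: X[k] = sum_n x[n] e^{-j 2 pi k n / N}
n = np.arange(N)
k = n[:, None]
dft = np.exp(-2j * np.pi * k * n / N) @ x

# DTFT of the finite sequence, sampled at N equally spaced points
# on the unit circle z = e^{jw}, w_k = 2 pi k / N
w_k = 2 * np.pi * np.arange(N) / N
dtft_sampled = np.array([np.sum(x * np.exp(-1j * wk * n)) for wk in w_k])

# the FFT is just a fast route to the same numbers
fft = np.fft.fft(x)

print(np.allclose(dft, fft), np.allclose(dtft_sampled, fft))  # True True
```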
```
On Saturday, December 8, 2012 10:32:18 PM UTC-5, manishp wrote:
> Sirs,
>
>
>
> I would like to know the reason for having different transforms (fourier,
>
> cosine, z transform etc.)
>
>
>
> are all these related to conversion from time domain to frequency domain?
>
>
>
> Thanks,

Try looking up "Integral Transforms"; there you will find some of the motivations for their uses.

I.e.,

Recall ln(x) = integral[1,x]  (1/t) dt.

And this of course turns multiplication into addition, which really proves useful for exponents and roots.

IHTH,
Clay
```
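Clay's log example in code: under the transform ln, multiplication becomes addition and an n-th root becomes a division, with exp transforming back. A trivial check (the values are arbitrary):

```python
import math

a, b = 12.5, 3.2

# multiplication -> addition under the log "transform"
assert math.isclose(math.log(a * b), math.log(a) + math.log(b))

# an n-th root becomes a division in the log domain;
# exp is the inverse transform back
n = 5
root = math.exp(math.log(a) / n)
assert math.isclose(root, a ** (1 / n))
print(root)
```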
```
On Sat, 08 Dec 2012 21:32:18 -0600, "manishp" <58525@dsprelated>
wrote:

>Sirs,
>
>I would like to know the reason for having different transforms (fourier,
>cosine, z transform etc.)
>
>are all these related to conversion from time domain to frequency domain?
>
>Thanks,

Hello manishp,
transforms allow you to examine a signal, or
a signal processing network, from different
points of view.

For example, think of a simple discrete differentiator
network.  The time-domain difference equation for
the network will tell you exactly what arithmetic
you must perform to compute the network's output
sequence given some input sequence.  Obtaining the
z-transform of the differentiator will enable you
to determine over what frequency range the
differentiator has acceptable (reasonably accurate)
performance.

[-Rick-]

```
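To make Rick's differentiator example concrete: a first difference y[n] = x[n] - x[n-1] has H(z) = 1 - z^-1, so on the unit circle |H(e^jw)| = 2 sin(w/2), which tracks the ideal differentiator magnitude |w| only at low frequencies. A sketch of where the approximation holds (the first-difference network and the frequency thresholds are my illustration, not from Rick's post):

```python
import numpy as np

w = np.linspace(1e-6, np.pi, 1000)   # digital frequency, avoiding w = 0

# first-difference differentiator: H(z) = 1 - z^{-1}, evaluated at z = e^{jw}
H = 1 - np.exp(-1j * w)
mag = np.abs(H)                      # equals 2*sin(w/2)

ideal = w                            # ideal differentiator magnitude |j*w| = w

rel_err = np.abs(mag - ideal) / ideal
print(rel_err[w < 0.2 * np.pi].max())   # a few percent below ~0.2*pi
print(rel_err[w > 0.8 * np.pi].max())   # tens of percent near Nyquist
```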
```
On Dec 8, 7:32 pm, "manishp" <58525@dsprelated> wrote:
> Sirs,
>
> I would like to know the reason for having different transforms (fourier,
> cosine, z transform etc.)
>
> are all these related to conversion from time domain to frequency domain?
>
> Thanks,

Windowed Fourier transforms are basically "coherent state" transforms
adapted to the Heisenberg group (i.e. the translation group for the
time-frequency plane). The affine group (or group adapted to the time-
scale plane) yields coherent states that correspond to wavelets.

Windowed Fourier is relatively simple as are the other transforms in
that family. They suffer a rather large problem: they're adapted to a
linear scale and they use a fixed-size windowing. The reason that's
bad is that the amount of action that takes place at a given
frequency is proportional to the number of cycles that happen at that
frequency. So a time window should be of a size inversely proportional
to the frequency. By the Heisenberg relation that means the frequency
window should be proportional to the frequency -- i.e. the frequency
should be put on a logarithmic scale. In other words: as octaves.

Time-scale transforms (meaning wavelets) fix that problem. But they
also have their own problems: badly shaped windows.

This generalizes to arbitrary symmetry groups (like the Euclidean
group for space or Galilean or Poincare' group for space-time). In
that case, the transforms can be used to extract objects in a manner
that is robust against symmetry transforms. For the Euclidean group
that means, for instance, the ability to extract letters based on a
template, independently of how the letter is sheared, resized,
rotated, flipped or shifted. For moving objects, it means the ability
to accurately gauge (and target) objects (like missiles).

The hybrids of the time-frequency and time-scale transforms are the
S-transforms. They have some rather unusual and extremely useful
properties that even the research literature doesn't (yet) know of.
Their main disadvantage is the difficulty in getting them to run
efficiently on a computer, though
that's no longer a problem like it once was.

All these transforms have inverses.

I'll do a quick run up to that, since I have an interest in it right
now, because of the above-mentioned "unusual and heretofore unknown
properties." Among other things, it recovers the concept (well-known
to physicists) of "instantaneous frequency" and it leads directly to a
*non-linear* transform that removes the problem of spectral leakage.

Here's the run-up. Use the following notation: 1^x = exp(2 pi i x).
Fourier transforms written as:
f#(n) = integral f(t) 1^{-nt} dt, f(t) = integral f#(n) 1^{nt} dn.

The short-time Fourier transforms use a windowing function g(t) and
are defined by:
f_F(q, p) = integral f(t) (g(t - q) 1^{p(t - q)})* dt
q = time-domain location, p = frequency-domain location.
()* = conjugation

The inverse is
f(t) = integral f_F(q, p) g(t - q) 1^{p(t - q)} dq dp
which requires the condition integral |g(t)|^2 dt = 1, which is
usually achieved by suitably rescaling the function g(t).

Normally in the literature, you see the transforms written only as
f_F(q, p) = integral f(t) (g(t - q) 1^{pt})* dt
f(t) = integral f_F(q, p) g(t - q) 1^{pt} dq dp
where the extra factor 1^{-pq} is lost. It's better to keep it in,
since with it in, the phase of monochromatic signals is kept intact.

The points (q, p) make up the time-frequency plane. So, this provides
a time-frequency spectrum for f(t). Normally you only map the
amplitude |f_F(q, p)| or its square -- which is a really bad thing to
do!

It's better to map the amplitude as brightness and the phase as color.
Then you'll end up seeing some rather interesting (and revealing)
patterns. Colorizing the transform shows the first signs of the
emergence of the Holy Grail that I'm leading up to.

The time-scale transforms work in the time-scale plane, with
coordinates (q, s) where now p is replaced by "scale" s. Since
integrals have the measure (dq ds/s^2), it's better to replace s by p
= 1/s, and treat this like the time-frequency plane. In that case, the
transform is
f_W(q, p) = integral f(t) (|p|^{1/2} g(p(t - q)))* dt
f(t) = integral f_W(q, p) |p|^{1/2} g(p(t - q)) dq dp.
The admissibility condition is the one that forces you to make weird
choices for the function g:
integral |g#(n)|^2 dn/|n| = 1.

But the time-windowing is now inversely proportional to p. So it's
working with octaves.

The S-transform fixes the problem with g. In its absolutely most
general form it is defined by
f_S(q, p) = integral f(t) (|p| g(p(t - q)))* dt
and has inverse
f(t) = integral f_S(q, p) 1^{p(t - q)} dq dp
The condition is that the Fourier-transform g#(1) = 1. Consequently,
it is more common to rewrite the windowing function with an extra
factor 1^{p(t - q)} taken out. So the transform then becomes:
f_S(q, p) = integral f(t) |p| (g(p(t - q)) 1^{p(t - q)})* dt.
Under this revision, the sole condition on g is that
1 = g#(0) = integral g(t) dt.
In the literature (as with the short-time Fourier transform), the
1^{p(t - q)} is replaced by 1^{pt}. This simplifies the formula for
the inverse.

This, like the wavelet and Fourier transform, has discrete forms. It's
difficult (but not impossible) to implement S-transforms efficiently.
The best way that comes to mind (and what I'm presently using) is to
do the transform on a p-by-p basis -- in the time domain -- rescaling
f(t) to suit the frequency p and using a simple lookup table for g(t).
Then the integral would be written as
f_S(q, p) = integral f(q + L/p) g(L) 1^{-L} dL.
In particular, the windowing function g(L) = 1 for L between -1/2 and
1/2; g(L) = 0 else has the effect of matching a *single cycle*
centered on t = q to the wave form. This preserves the frequency. For
a monochrome wave f(t) = A 1^{nt + C} it produces the transform f_S(q,
p) = A 1^{nq + C} sinc(n/p - 1).

The same wave form is found when tuning into it at any frequency p in
the same ballpark as n and -- when the phase is color-coded as
mentioned above -- the stripe patterns are clearly seen, and oscillate
at the same frequency, no matter what frequency you tune into them at.
It only needs to be in the same ballpark (because of the sinc factor).

So, under the color-coded spectrograph, it shows up clearly as a
distinct object and you can easily separate it out from whatever other
objects it's overlaid on top of. For color-coded phases, you see
clearly-distinguishable candy-stripe patterns corresponding to those
points where there are sound elements.

The reason this happens with the S-transform more than with the others
is due to an unusual property of the transform that hasn't seen the
light of the literature to date. Recall the Physicists' definition
of ...
Instantaneous Frequency = Rate of Change of Phase.
This applies to any waveform. But it applies especially well when the
waveform has been segregated into different bands. The segregation
need not be exact (because of the above-mentioned "stability in the
same ballpark" property). Even the sloppiest segregation will yield
separation of the component objects.

The actual formula for the instantaneous frequency is one seen in
quantum theory. Consider the complex value z = A 1^B. Its conjugate
is z* = A 1^{-B}. Their differentials are:
dz = (dA/A + 2 pi i dB) z
dz* = (dA/A - 2 pi i dB) z*
Thus, combining, you get:
z* dz - dz* z = 4 pi i A^2 dB = 4 pi i z* z dB.
Consequently, the instantaneous frequency n = dB/dt comes out of the
expression
n = 1/(4 pi i z* z) (z* dz/dt - dz*/dt z)
which is basically the same formula used for defining the matter
current density in quantum theory.

The reason this is relevant for the S-transform is that it's lurking
behind the scenes there -- the S-transform has a Parseval identity and
it directly involves the instantaneous frequency.

Go back to the monochrome wave. Its amplitude was modulated to A
sinc(n/p - 1). The amplitude squared, A^2, can be recovered by
integrating with respect to (n/p - 1).
A^2 = integral (A sinc(n/p - 1))^2 d(n/p - 1)
= integral (A sinc(n/p - 1))^2 n/p^2 dp.
= integral |f_S(q, p)|^2 n/p^2 dp.
This yields the amplitude at a particular time q. Integrating over all
q yields the total "energy" of the wave -- or would, if the wave were
localized (monochrome waves are not). Nonetheless, this generalizes.
The key to the generalization is to replace n by the instantaneous
frequency. The resulting formula is:
integral f(t)* F(t) dt
    = integral 1/(4 pi i) f_S(q, p)* (d/dq - d/dq*) F_S(q, p) dp/p^2 dq
where the d/dq* applies to the left to f_S(q, p)*. For a single wave
form this yields:
integral |f(t)|^2 dt = integral |f_S(q, p)|^2 n(q, p) dp/p^2 dq.

So the instantaneous frequency is built right into the very core of
the S-transform, literally.

Finally, this leads up to the Holy Grail. Since the natural frequency
of this component is n(q, p), then it is just as natural to redraw the
spectrograph by moving this amplitude up from frequency p to frequency
n.

What this does, as a result, is essentially plug up the "spectral
leakage" and refocuses the waveform component back to its natural
frequency. The above integral and Parseval identity ultimately yield
the appropriate formula for making that conversion. The re-defined
spectral density is given by
rho(q, n) = integral |f_S(q, p)|^2 delta(n - n(q, p)) dp
which is a non-linear transform conditioned by the S-transform.

This is what I'm in the process of setting up right now.
```
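The instantaneous-frequency formula in this post can at least be sanity-checked numerically: for z(t) = A 1^{nt + C}, the expression (z* dz/dt - z dz*/dt) / (4 pi i z* z) should return the constant n. A finite-difference sketch (the signal parameters are illustrative, not from the post):

```python
import numpy as np

# test signal z(t) = A * exp(2*pi*i*(n*t + C)), so the phase rate dB/dt = n
A, n_true, C = 2.0, 3.5, 0.25
t = np.linspace(0.0, 1.0, 10001)
dt = t[1] - t[0]
z = A * np.exp(2j * np.pi * (n_true * t + C))

# dz/dt by central differences
dz = np.gradient(z, dt)

# instantaneous frequency n = (z* dz - z dz*) / (4 pi i |z|^2)
n_inst = (np.conj(z) * dz - z * np.conj(dz)) / (4j * np.pi * np.abs(z) ** 2)
n_inst = n_inst.real          # the imaginary part is numerical noise

print(np.allclose(n_inst[1:-1], n_true, atol=1e-3))  # True
```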