```On Tuesday, July 14, 2009 at 11:58:13 PM UTC+12, somanath17 wrote:
> Hi All,
>
> I have studied three different kinds of transforms: the Laplace transform, the
> Z transform, and the Fourier transform. As I understand it, their uses are:
>
> Laplace transforms are used primarily in continuous-time signal studies, more
> so in realizing the analog circuit equivalent, and are widely used in the
> study of the transient behavior of systems.
>
> The Z transform is the discrete-time equivalent of the Laplace transform; it
> is used for steady-state analysis and to realize the digital circuits for
> digital systems.
>
> The Fourier transform is a particular case of the Z transform, i.e. the
> Z transform evaluated on the unit circle. It is also used with digital
> signals, more so in spectrum analysis and in calculating the energy density,
> since Fourier transforms of real signals have even magnitude spectra and are
> used for calculating the energy of the signal.
>
> Is my understanding correct? What other technical differences exist, and
> where do all these differences find their application? It would be really
> helpful if someone could give an explanation of this and provide links
> where I can read up on the same.
>
> Thanks,
> Soma

Thing to remember is, none of them works for nonlinear systems! Then you need Volterra series and the like.
```
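Soma's claim that the (discrete-time) Fourier transform is the Z transform evaluated on the unit circle can be spot-checked numerically: sampling X(z) at z = e^(j2πk/N) for a finite sequence reproduces the DFT bins. A minimal sketch (the sequence values are made up for illustration):

```python
import numpy as np

# A short, made-up causal sequence x[n] (values are arbitrary).
x = np.array([1.0, 2.0, 0.5, -1.0])
N = len(x)
n = np.arange(N)

def z_transform(x, z):
    """Evaluate X(z) = sum_n x[n] * z**(-n) for a finite causal sequence."""
    return np.sum(x * z ** (-n))

# Sampling the unit circle at z = exp(j*2*pi*k/N) gives exactly the DFT bins.
zs = np.exp(2j * np.pi * np.arange(N) / N)
X_on_circle = np.array([z_transform(x, z) for z in zs])

print(np.allclose(X_on_circle, np.fft.fft(x)))  # True
```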
```On 4/26/2018 8:03 AM, ccrs336@gmail.com wrote:
> On Tuesday, July 14, 2009 at 5:28:13 PM UTC+5:30, somanath17 wrote:
>> Hi All,
>>
>> I have studied three different kinds of transforms: the Laplace transform, the
>> Z transform, and the Fourier transform. As I understand it, their uses are:
>>
>> Laplace transforms are used primarily in continuous-time signal studies, more
>> so in realizing the analog circuit equivalent, and are widely used in the
>> study of the transient behavior of systems.
>>
>> The Z transform is the discrete-time equivalent of the Laplace transform; it
>> is used for steady-state analysis and to realize the digital circuits for
>> digital systems.
>>
>> The Fourier transform is a particular case of the Z transform, i.e. the
>> Z transform evaluated on the unit circle. It is also used with digital
>> signals, more so in spectrum analysis and in calculating the energy density,
>> since Fourier transforms of real signals have even magnitude spectra and are
>> used for calculating the energy of the signal.
>>
>> Is my understanding correct? What other technical differences exist, and
>> where do all these differences find their application? It would be really
>> helpful if someone could give an explanation of this and provide links
>> where I can read up on the same.
>>
>> Thanks,
>> Soma
>
>
Well, the Fourier transform is a special case of the Laplace transform.  The
Discrete Fourier Transform (DFT, usually implemented as the Fast Fourier
Transform, FFT) bears the same relation to the Z transform as the
Fourier transform bears to the Laplace transform.

--
Best wishes,
--Phil
pomartel At Comcast(ignore_this) dot net
```
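Phil's first point, that the Fourier transform is the Laplace transform restricted to the imaginary axis s = jω, can be illustrated numerically with a one-sided exponential, whose Laplace transform has the known closed form 1/(s + a). A sketch (the signal and parameter values are arbitrary):

```python
import numpy as np

# One-sided exponential x(t) = e^{-a t} for t >= 0 (a and omega are arbitrary).
a = 1.0
t = np.linspace(0.0, 40.0, 400_001)   # long enough for e^{-a t} to die out
x = np.exp(-a * t)
dt = t[1] - t[0]

def laplace_at(s):
    """Numerical X(s) = integral_0^inf x(t) e^{-s t} dt (trapezoid rule)."""
    f = x * np.exp(-s * t)
    return np.sum((f[:-1] + f[1:]) / 2.0) * dt

omega = 2.0
numeric_fourier = laplace_at(1j * omega)   # Laplace evaluated at s = j*omega
exact_fourier = 1.0 / (a + 1j * omega)     # closed form for e^{-a t}, t >= 0

print(abs(numeric_fourier - exact_fourier) < 1e-4)  # True
```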
```On Jul 17, 10:28 am, "Me" <inva...@invalid.invalid> wrote:
>
> 'Fraid not, old chap.
>
> The sampled signal has an inherent factor of T as explained.
>
> Else, if you model the sampling circuitry that originates the
> signals with which you deal, where do the impulses come from?

i dunno what you think i meant, but the words i forgot to say are: the
Z Transform is ...

> > essentially, the Laplace Transform of the *unscaled* (no leading T)
> > sampled signal

... with the substitution of z for e^(sT).

the Z transform operates on the discrete samples, x[n], and doesn't
have T anywhere in it.  it is as true for signals sampled at 48 kHz as
it is for 48 MHz.  it's only an operator on x[n].  but, if you attach
those x[n] to a bunch of dirac deltas spaced by some given T (but
*not* scaled in amplitude by T like you would for reconstruction to
x(t)), the Laplace transform of that evaluated at some s is the same
value as the Z transform of x[n] evaluated at z = e^(sT).  that's what
i meant by "no leading T".

sometimes i forget to say everything i mean, even when i spiel.

r b-j
```
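robert's claim can be spot-checked numerically: the Laplace transform of the unscaled impulse train, SUM x[n] e^(-snT), equals the Z transform of x[n] at z = e^(sT), for any T — here 48 kHz and 48 MHz, per the post. (The sample values and s are invented for the check.)

```python
import numpy as np

x = np.array([0.5, 1.0, -0.25, 2.0])   # made-up samples x[n]
n = np.arange(len(x))

def z_transform(x, z):
    """X(z) = sum_n x[n] * z**(-n); note there is no T anywhere in it."""
    return np.sum(x * z ** (-n))

def laplace_of_impulse_train(x, s, T):
    """L{ sum_n x[n] delta(t - n*T) }(s) = sum_n x[n] e^{-s n T} ("no leading T")."""
    return np.sum(x * np.exp(-s * n * T))

s = 0.3 + 1.7j
for T in (1.0 / 48_000, 1.0 / 48_000_000):   # 48 kHz and 48 MHz, per the post
    assert np.isclose(laplace_of_impulse_train(x, s, T),
                      z_transform(x, np.exp(s * T)))
print("Laplace of the impulse train == Z transform at z = e^(sT), for both rates")
```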
```"robert bristow-johnson" <rbj@audioimagination.com> wrote in message

> essentially, the Laplace Transform of the *unscaled* (no leading T)
> sampled signal:

'Fraid not, old chap.

The sampled signal has an inherent factor of T as explained.

Else, if you model the sampling circuitry that originates the
signals with which you deal, where do the impulses come from?

```
```On Jul 15, 2:50 am, "Me" <inva...@invalid.invalid> wrote:
> "Tim Wescott" <t...@seemywebsite.com> wrote in message
>
> news:CcKdnbtkBvUWVsHXnZ2dnUVZ_gJi4p2d@web-ster.com...
>
> > Note that by carefully defining the sampling process you can derive the z
> > transform from the Laplace transform in a way that is exact.  The usual
> > derivation defines sampling as multiplication by a series of impulses of
> > infinite height and finite area; this gives everyone gas pains but is
> > generally a nice way of thinking of it.  You can avoid impulses at the
> > expense of convenience (but not rigor, as far as I can tell).
>
> There's no need for those "gas pains" ...
>
> 1. The transform for the Unit Impulse is derived from a limiting process.
>
> 2. A practical limiting process for us is the Nyquist criterion.
>
> 3. According to Nyquist, once the sampling pulses have become sufficiently
> frequent to ensure that the amplitude-modulated sidebands are not aliasing
> with each other, no further advantage (OK, yeah, low-pass filtering becomes
> easier) is gained by increasing the frequency of the sampling pulses and
> thereby reducing their width.
>
> 4. To make life easier for ourselves, we like to use a previously calculated
> transform in our analysis, that of the Unit Impulse.
>
> 5. However, because of the stopping-short in our limiting process, our
> sampling signal is not a train of Unit Impulses, but a train of T*d(t).
>
> Simple to derive ..... a Unit Impulse has area T * (1/T) = 1, but our sampling
> pulses have area T * 1, i.e. T. So to convert Unit Impulses so that they can
> represent our sampling pulses, we must multiply by T.
>
> 6. Most texts omit this crucial factor of T, the sampling interval, and so
> bring about the "gas pains" .... (Cue: Airry r. been (sp?))
>
> 7. Bristow-Johnson has a spiel on the reconstruction process, and finds it
> necessary to introduce this T factor to make things work.
>
> 8. If you introduce it at the sampling stage, then all is sweetness and
> light.
>
> 9. Conclusion. Change all the textbooks to describe sampling as a train of
> T*d(t) and not as a train of d(t), then the gas pains disappear, and peace reigns
> throughout the realm.

essentially, the Laplace Transform of the *unscaled* (no leading T)
sampled signal:

SUM{ x(t) * delta(t - n*T) }   (over all integer n)

which is

SUM{ x[n] * delta(t - n*T) }

(where x[n] is x(n*T) )

is the same as the Z Transform.

the leading T is necessary, in my opinion, only to understand how to
properly do reconstruction, if you are modeling reconstruction to do
sample-rate conversion or interpolation.  that is where the sinc
function (or windowed sinc) comes from.  but, to see where the Z
transform comes from, leave the T factor out (and put it back in when
you do ideal reconstruction; that's what i intended my spiel to say).

r b-j

```
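the ideal-reconstruction formula r b-j alludes to, x(t) = SUM x[n] sinc((t - nT)/T), admits a quick sanity check: at the sample instants t = mT the sinc terms collapse to a Kronecker delta, so the reconstruction passes exactly through the samples. A sketch (sample values are hypothetical):

```python
import numpy as np

T = 1.0 / 48_000                              # sample period (48 kHz)
x = np.array([0.0, 1.0, -0.5, 0.25, 0.0])     # hypothetical samples x[n]
n = np.arange(len(x))

def reconstruct(t):
    """Ideal bandlimited reconstruction x(t) = sum_n x[n] * sinc((t - n*T)/T).

    np.sinc(u) is sin(pi*u)/(pi*u), so the pi is already built in.
    """
    return np.sum(x * np.sinc((t - n * T) / T))

# At t = m*T, sinc(m - n) is 1 for m == n and 0 otherwise, so the
# reconstruction interpolates the samples exactly.
at_samples = np.array([reconstruct(m * T) for m in n])
print(np.allclose(at_samples, x))  # True
```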
```I thought you were proposing that the width of each pulse
should be T.  Maybe I misunderstood you.  I am not really
sure now what you mean by T.d(t) but maybe I should ponder a
bit further.

Regards,
John

Me wrote:
> Sorry, I was misled by your original reply. Thinking further on the
> matter, there is no sin(x)/x anomaly, because what I proposed is
> just a scaling of the amplitude by a factor of T (or 1/T, depending which
> way you look at it), with no change of shape.
>
> I was/am positing a representation of scaled Unit Impulses.
>
> The issue of integration is not relevant.
>
> Even if sin(x)/x were involved, then in the linear systems which we
> discuss, it does not matter what the order of analysis is, whether
> the sin(x)/x is considered at the beginning or at the end.
>
> In my approach, it is considered at the end because it is not introduced
> at the start. Sure, we start off with a sampling pulse of the form
> U(nT) - U((n+1)T), in which sin(x)/x might be relevant, but after the
> simple conversion to T.d(t) it is definitely not relevant.
>
> NO. If you use the representation that I suggest, you do NOT have
> to consider the distortion because it ain't there.
>
>
>
> "John Monro" <johnmonro@optusnet.com.au> wrote in message
> news:4a5edbea$0$4046$afc38c87@news.optusnet.com.au...
>> There would be a disagreement if there were only two choices:
>>     1.  Represent the signal as a series of pulses of duration T sec.
>>     2.  Represent the signal as a series of infinite-amplitude, zero-width
>> pulses.
>>
>> In fact there is a third choice. It seems I failed to make it clear that I
>> think it is useful to represent the sampled signal as a series of pulses
>> of finite amplitude and non-zero duration.  The values of the amplitude
>> and duration are not defined (and don't need to be defined). Only the
>> integral over a very short period of time is defined. (I claim no
>> originality for this thought :=).
>>
>> Regarding the sin(x)/x distortion, simple reconstruction by holding each
>> sample for one full sample-period does indeed give the same sin(x)/x
>> distortion as the scheme you are suggesting, but the distortion occurs at
>> the last step in the reproduction process and we don't have to worry about
>> distortion before then.
>>
>> If we use the representation you suggest, we either have to agree to
>> ignore the distortion that is (conceptually) introduced right at the
>> sampling stage, or we include the distortion, which messes up the
>> representation.  Neither seems very satisfactory to me.
>>
>> Regards,
>> John
>>
>>
>
>
```
Sorry, I was misled by your original reply. Thinking further on the
matter, there is no sin(x)/x anomaly, because what I proposed is
just a scaling of the amplitude by a factor of T (or 1/T, depending which
way you look at it), with no change of shape.

I was/am positing a representation of scaled Unit Impulses.

The issue of integration is not relevant.

Even if sin(x)/x were involved, then in the linear systems which we
discuss, it does not matter what the order of analysis is, whether
the sin(x)/x is considered at the beginning or at the end.

In my approach, it is considered at the end because it is not introduced
at the start. Sure, we start off with a sampling pulse of the form
U(nT) - U((n+1)T), in which sin(x)/x might be relevant, but after the
simple conversion to T.d(t) it is definitely not relevant.

NO. If you use the representation that I suggest, you do NOT have
to consider the distortion because it ain't there.

"John Monro" <johnmonro@optusnet.com.au> wrote in message
news:4a5edbea$0$4046$afc38c87@news.optusnet.com.au...
> There would be a disagreement if there were only two choices:
>     1.  Represent the signal as a series of pulses of duration T sec.
>     2.  Represent the signal as a series of infinite-amplitude, zero-width
> pulses.
>
> In fact there is a third choice. It seems I failed to make it clear that I
> think it is useful to represent the sampled signal as a series of pulses
> of finite amplitude and non-zero duration.  The values of the amplitude
> and duration are not defined (and don't need to be defined). Only the
> integral over a very short period of time is defined. (I claim no
> originality for this thought :=).
>
> Regarding the sin(x)/x distortion, simple reconstruction by holding each
> sample for one full sample-period does indeed give the same sin(x)/x
> distortion as the scheme you are suggesting, but the distortion occurs at
> the last step in the reproduction process and we don't have to worry about
> distortion before then.
>
> If we use the representation you suggest, we either have to agree to
> ignore the distortion that is (conceptually) introduced right at the
> sampling stage, or we include the distortion, which messes up the
> representation.  Neither seems very satisfactory to me.
>
> Regards,
> John
>
>

```
```Me wrote:
> "John Monro" <johnmonro@optusnet.com.au> wrote in message
> news:4a5e9ccb$0$23633$afc38c87@news.optusnet.com.au...
>> The problem with representing the sampled signal as a train of pulses of
>> duration T seconds is that this introduces sin(x)/x distortion of the
>> time-domain response, which is a complication we don't need.
>
>> The only requirement for a delta function is that its integral is exactly
>> unity over any interval that you choose, however short.  If you pick an
>> interval of exactly zero that introduces the infinite-amplitude problem,
>> so why do it?
>
> You seem to be disagreeing with yourself in the two paragraphs
> quoted above.
>
> The sin(x)/x distortion is introduced anyway as part of the standard
> reconstruction process.
>
>
>
There would be a disagreement if there were only two choices:
    1.  Represent the signal as a series of pulses of duration T sec.
    2.  Represent the signal as a series of infinite-amplitude, zero-width
        pulses.

In fact there is a third choice. It seems I failed to make
it clear that I think it is useful to represent the sampled
signal as a series of pulses of finite amplitude and
non-zero duration.  The values of the amplitude and duration
are not defined (and don't need to be defined). Only the
integral over a very short period of time is defined. (I
claim no originality for this thought :=).

Regarding the sin(x)/x distortion, simple reconstruction by
holding each sample for one full sample-period does indeed
give the same sin(x)/x distortion as the scheme you are
suggesting, but the distortion occurs at the last step in
the reproduction process and we don't have to worry about
distortion before then.

If we use the representation you suggest, we either have to
agree to ignore the distortion that is (conceptually)
introduced right at the sampling stage, or we include the
distortion, which messes up the representation.  Neither
seems very satisfactory to me.

Regards,
John

```
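The sin(x)/x distortion John describes for hold-for-a-full-sample-period reconstruction is the zero-order hold's sinc-shaped magnitude response; at the Nyquist frequency the droop is 2/pi, roughly -3.9 dB. A quick check (the sample rate is arbitrary):

```python
import numpy as np

fs = 48_000.0        # hypothetical sample rate
T = 1.0 / fs

def zoh_gain(f):
    """Magnitude response of a zero-order hold: |sin(pi f T)/(pi f T)|."""
    return np.abs(np.sinc(f * T))    # np.sinc(u) = sin(pi u)/(pi u)

# Droop at the Nyquist frequency fs/2: sin(pi/2)/(pi/2) = 2/pi ~= 0.6366,
# i.e. roughly -3.9 dB -- the sin(x)/x distortion discussed above.
droop = zoh_gain(fs / 2.0)
print(round(droop, 4))                   # 0.6366
print(round(20 * np.log10(droop), 2))    # -3.92
```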
```"John Monro" <johnmonro@optusnet.com.au> wrote in message
news:4a5e9ccb$0$23633$afc38c87@news.optusnet.com.au...
> The problem with representing the sampled signal as a train of pulses of
> duration T seconds is that this introduces sin(x)/x distortion of the
> time-domain response, which is a complication we don't need.

> The only requirement for a delta function is that its integral is exactly
> unity over any interval that you choose, however short.  If you pick an
> interval of exactly zero that introduces the infinite-amplitude problem,
> so why do it?

You seem to be disagreeing with yourself in the two paragraphs
quoted above.

The sin(x)/x distortion is introduced anyway as part of the standard
reconstruction process.

```