# FFT length

Started by Marc2050, February 13, 2012
```
Hi.

A simple question.
Given a signal (17 Hz) with noise (from external devices, environment, etc.)
and, say, 10 secs of data, in order to extract the signal, which is
better to do:

a.
take the entire 10 secs of data and do an FFT and see if we can see the 17
Hz sticking out
b.
take a 1 sec FFT of the signal every 0.5 sec (FFT at 0-1, FFT at
0.5-1.5, FFT at 1-2, FFT at 1.5-2.5, etc.) and average the FFT results.

Which is the better approach to detect the 17 Hz?

Many thanks.
```
```
On Feb 13, 3:43 pm, "Marc2050" <maarcc@n_o_s_p_a_m.gmail.com> wrote:
> Hi.
>
> A simple question.
> Given a signal (17Hz) with noise (from external devices, environment, etc),
> and, say, 10 secs of data, in order to extract out the signal, which is
> better to do:
>
> a.
> take the entire 10 secs of data and do a FFT and see if we could see the 17
> Hz sticking out
> b.
> take a 1 sec FFT of the signal for every 0.5 sec (FFT at 0-1, FFT at
> 0.5-1.5, FFT at 1-2, FFT at 1.5-2.5, etc) and average out the FFT results.
>
> Which is a better approach to detect the 17 Hz?
>
> Many thanks.

One big FFT is always better than averaging many small FFTs (look up
coherent vs. incoherent averaging).

I would just compute a single-point DFT at 17 Hz over the whole 10
seconds.
```
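pierson's single-point suggestion amounts to one correlation: multiply the full 10 s record by a complex exponential at 17 Hz and sum. A minimal numpy sketch, assuming a hypothetical 1 kHz sample rate (the thread never states one):

```python
import numpy as np

fs = 1000.0                       # assumed sample rate (Hz); not given in the thread
T = 10.0                          # the full 10 s record
n = np.arange(int(fs * T))
f0 = 17.0                         # 17 Hz is an integer number of cycles in 10 s

rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * f0 * n / fs) + 2.0 * rng.standard_normal(n.size)

# Single-bin DFT: one correlation over the whole record, no FFT needed.
X17 = np.sum(x * np.exp(-2j * np.pi * f0 * n / fs))
amp = 2.0 * np.abs(X17) / n.size  # estimate of the 17 Hz amplitude
print(amp)                        # close to the true amplitude of 1.0
```

Because 17 Hz falls exactly on a bin of the 10 s record, there is no leakage and no window is needed; for an off-bin frequency the estimate would be biased.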
```
pierson <gkicomputers@yahoo.com> wrote:

(snip)
> one big FFT is always better then many average small FFT's (look up
> coherent vs incoherent averaging)

> I would just compute a single point DFT at 17 Hz over the whole 10
> seconds

I suppose so, but much of experimental physics data analysis
is based on incoherent averaging. (Usually called statistical
independence.)

If you can't rely on statistical independence, then you have
to watch carefully for systematic error. Random error you can
eventually average out, but systematic error you can't.

(Note that much of the economic meltdown was due to physicists
getting this wrong. That is, believing that some data had random
error when there were significant non-random terms.)

-- glen
```
```
On 2/13/2012 12:43 PM, Marc2050 wrote:
> (snip)
>
> Which is a better approach to detect the 17 Hz?

10 secs gives you 1/10 Hz resolution.
1 sec gives you 1 Hz resolution.  Averaging the 1 sec FFTs won't improve
the resolution but should improve the SNR.
The latter is much like a RAKE receiver, where coherence isn't guaranteed
over long periods of time - so it's better to split up the time record.
If the signal of interest doesn't vary in frequency or phase then you'd
likely be better off using the full 10 secs of data.

If the signal does vary in frequency or phase then you might be better
off using 1 second and doing multiple transforms.

I don't think you would want to compute a single point output unless you
already know a lot about the signal.  As you get to know more and more
then the question arises "why bother with an FFT or analysis at all?"

You said "extract out the signal".  What's really your objective here?
Once extracted, what are you going to do next?

Fred
```
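Fred's resolution numbers are easy to check numerically. The sketch below (hypothetical 100 Hz sample rate) runs both of the original poster's options: one 10 s FFT with 0.1 Hz bins, and magnitude-averaged 1 s FFTs taken every 0.5 s with 1 Hz bins. Both locate the 17 Hz tone; only the long FFT could separate it from a tone at, say, 17.3 Hz:

```python
import numpy as np

fs = 100                             # assumed sample rate (Hz, hypothetical)
t = np.arange(10 * fs) / fs          # the full 10 s record
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 17 * t) + rng.standard_normal(t.size)

# (a) one long FFT over all 10 s: bins are fs/1000 = 0.1 Hz apart
X = np.abs(np.fft.rfft(x))
bin_a = np.argmax(X)                 # bin 170, i.e. 17.0 Hz

# (b) magnitude-average 1 s FFTs taken every 0.5 s: bins are 1 Hz apart
seg = fs                             # 1 s = 100 samples per segment
starts = range(0, x.size - seg + 1, seg // 2)
avg = np.mean([np.abs(np.fft.rfft(x[s:s + seg])) for s in starts], axis=0)
bin_b = np.argmax(avg)               # bin 17, i.e. 17 Hz

print(bin_a * fs / x.size, bin_b * fs / seg)
```

Option (b) is essentially Welch's method without the window; averaging the 19 segment magnitudes smooths the estimate but cannot recover the 0.1 Hz bin spacing.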
```
On Mon, 13 Feb 2012 13:21:55 -0800 (PST), pierson
<gkicomputers@yahoo.com> wrote:

>On Feb 13, 3:43 pm, "Marc2050" <maarcc@n_o_s_p_a_m.gmail.com> wrote:
>> (snip)
>
>one big FFT is always better then many average small FFT's (look up
>coherent vs incoherent averaging)
>
>I would just compute a single point DFT at 17 Hz over the whole 10
>seconds

Hello pierson,
   averaging the magnitudes of multiple FFTs is
a *VERY* common process in the world of
signal processing.  Such a process reduces the
variance (uncertainty) of your spectral
magnitude results.

What I'd suggest to marc2050 is to use a Goertzel
filter, or perhaps its close cousin the 'sliding DFT',
rather than using an FFT.

[-Rick-]

```
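Rick's Goertzel suggestion computes a single DFT bin with one multiply-add recursion per sample, far cheaper than a full FFT when only 17 Hz matters. A minimal sketch of the standard Goertzel algorithm; the sample rate and block length here are assumed for illustration:

```python
import math

def goertzel_power(x, k):
    """|X[k]|^2 of the len(x)-point DFT of x, via the Goertzel recursion."""
    n = len(x)
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s1 = s2 = 0.0
    for sample in x:
        # two-term recursion: s[m] = x[m] + coeff*s[m-1] - s[m-2]
        s1, s2 = sample + coeff * s1 - s2, s1
    # squared magnitude from the final two state values
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

# Assumed example: fs = 1000 Hz, a 1000-sample (1 s) block, so bin 17 = 17 Hz.
fs, n = 1000, 1000
x = [math.sin(2 * math.pi * 17 * i / fs) for i in range(n)]
power = goertzel_power(x, 17)
print(power)   # ~ (n/2)^2 = 250000 for a unit-amplitude sine on the bin
```

The sliding-DFT cousin Rick mentions updates the same bin sample by sample instead of block by block, which suits continuous monitoring.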
```
On Feb 14, 6:16 am, Rick Lyons <R.Lyons@_BOGUS_ieee.org> wrote:
> On Mon, 13 Feb 2012 13:21:55 -0800 (PST), pierson
> <gkicomput...@yahoo.com> wrote:
>
> (snip)
>
> Hello pierson,
>    averaging the magnitudes of multiple FFTs is
> a *VERY* common process in the world of
> signal processing.  Such a process reduces the
> variance (uncertainty) of your spectral
> magnitude results.

Yes, and so does coherent averaging (as well as increasing your S/N). I
use incoherent averaging all the time too, but only for practical
reasons: I can average ten 4096-point FFTs easily, but cannot,
due to system limitations, perform one big 40960-point FFT. Incoherent
averaging is also complicated by the fact that windowing and overlap are
usually required, reducing your "effective" sample size, increasing
noise, and increasing processing time.
```
```
On Tue, 14 Feb 2012 06:46:07 -0800 (PST), pierson
<gkicomputers@yahoo.com> wrote:

[Snipped by Lyons]
>
>Yes, and so does coherent averaging (as well as increase your S/N). I
>use incoherent averaging all the time too, but only due to practical
>matters because I can average 10 4096 point FFT's easily, but cannot,
>due to system limitations, perform one big 40960 point FFT. Incoherent
>averaging is also complicated by the fact windowing and overlap is
>usually required, reducing your "effective" sample size and increasing
>noise and increasing processing time.

Hi pierson,
just so Marc2050 doesn't misunderstand your
post, it would be nice to have a little more
explanation here.

On what sort of signals can you perform coherent
averaging of multiple FFTs?  I don't know if there
is such a thing as a 40960-point FFT, but how would
averaging ten 4096-point FFTs be related to a single
40960-point FFT?  What would force you to use
windowing when averaging multiple FFTs?  What would
force you to perform multiple FFTs on overlapped
time-domain data?  What 'noise' is increased when
you average multiple FFT magnitudes?  Why would
computing ten 4096-point FFTs take more processing time
than computing a single 40960-point FFT?

[-Rick-]
```
```Rick Lyons <R.Lyons@_bogus_ieee.org> wrote:
> On Tue, 14 Feb 2012 06:46:07 -0800 (PST), pierson wrote:

>   [Snipped by Lyons]

>>Yes, and so does coherent averaging (as well as increase your S/N).
>>I use incoherent averaging all the time too, but only due to practical
>>matters because I can average 10 4096 point FFT's easily, but cannot,
>>due to system limitations, perform one big 40960 point FFT.

(snip)
>   just so Marc2050 doesn't misunderstand your
> post, it would be nice to have a little more
> explanation here.

> On what sort of signals can you perform coherent
> averaging of multiple FFTs?

Well, the easy answer is when the phase is known.
And for incoherent averaging, the phase has to be random.
That is, you have to have reason to believe that there
is not a simple phase relationship even if you don't know it.

There is the rule from optics, for coherent sources, add
the amplitude, for incoherent sources, add the intensity.

(Which ignores the fact, mentioned by Jerry not so long ago,
that there is a coherence length. It isn't always a yes or no,
but sometimes a maybe.)

> I don't know if there is such a thing as a 40960-point
> FFT, but how would averaging ten 4096 FFTs be related
> to a single 40960-point FFT?

Well, considering that longer FFTs are built from
shorter ones, it shouldn't matter. If the phase is known,
then you can add up the 4096 point transforms appropriately.

Following the "add the intensity" rule, in the case of the FFT you
would average the squared magnitudes.

> What would force you to use windowing when averaging
> multiple FFTs?

That is an interesting question. I recently went to a Tektronix
demonstration of their new oscilloscope with built-in digital
spectrum analyzer. It seems that the default (but selectable) is
to use the Kaiser window before doing the FFT. Since you can't
rely on the signal being periodic in the transform length, you
need windowing. Windowing reduces the effect from the periodic
boundary condition, that occurs at the beginning and end
of the transform.

That seems a separate question from averaging, but maybe not.

> What would force you to perform multiple FFTs on overlapped
> time-domain data?

One obvious case is when the data comes in that way, or
when it is buffered that way. (You may fill a buffer, then slowly
write it out. And again, not know the phase relationship.)

Consider, though, the way radio-astronomy is done. Signals
are collected from widely spaced dishes, along with exact timing.
The signals can then be combined with the appropriate phase,
to get the equivalent of a very large antenna. Without the
timing, the signals would have to be considered incoherent.

> What 'noise' is increased when you average multiple FFT
> magnitudes?  Why would computing ten 4096 FFTs take more
> processing time than computing a single 40960-point FFT?

Averaging N incoherent samples decreases the noise (uncertainty)
by sqrt(N). Averaging coherently, if the phase is properly
considered, should reduce it by N. If the phase isn't accounted
for, you can get very far off.

-- glen

```
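glen's sqrt(N)-versus-N distinction can be demonstrated directly: average the complex spectra of phase-aligned blocks (coherent) versus their magnitudes (incoherent). A sketch with a tone whose phase repeats exactly every block, an assumption few real signals satisfy:

```python
import numpy as np

rng = np.random.default_rng(42)
M, n, k = 400, 64, 5                 # 400 blocks, 64 samples, tone in bin 5
t = np.arange(n)
blocks = [np.cos(2 * np.pi * k * t / n) + rng.standard_normal(n)
          for _ in range(M)]
spectra = [np.fft.fft(b) for b in blocks]

coh = np.abs(np.mean(spectra, axis=0))    # coherent: average, then |.|
incoh = np.mean(np.abs(spectra), axis=0)  # incoherent: |.|, then average

def tone_to_floor(spec):
    # average noise floor, skipping DC and the two tone bins
    floor = np.delete(spec, [0, k, n - k]).mean()
    return spec[k] / floor

print(tone_to_floor(coh), tone_to_floor(incoh))
# the coherent floor drops roughly sqrt(M) further than the incoherent one
```

Incoherent averaging leaves the Rayleigh-distributed noise floor in place and only reduces its variance; coherent averaging shrinks the floor itself, because the random-phase noise bins cancel while the aligned tone does not.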
```
On Wed, 15 Feb 2012 00:57:18 +0000 (UTC), glen herrmannsfeldt
<gah@ugcs.caltech.edu> wrote:

>(snip)
>
>Averaging N incoherent samples decreases the noise (uncertainty)
>by sqrt(N). Averaging coherently, if the phase is properly
>considered, should reduce it by N. If the phase isn't accounted
>for, you can get very far off.
>
>-- glen

Hi glen,
Good gosh, you're 'quick on the trigger'.
I was hopin' to hear pierson's answers to my
questions.  In any case, thanks glen.

By the way, my question of "On what sort of signals
can you perform coherent averaging of multiple FFTs?"
was a serious question.  Yes, I know that the
blocks of time samples must be 'phase coherent'
for such coherent analysis.
What I was wondering was *EXACTLY* what sort of
signals there are, which we might want to analyze,
that would be 'phase coherent'.  No real-world,
information-carrying signal that I know of meets
that criterion.  The only such signals I can think of
that could be analyzed, spectrally, with coherent
averaging are the sinusoidal input signals used by
guys who are measuring the performance of A/D
converters using the FFT.

See Ya',
[-Rick-]
```
```
On Feb 14, 5:53 pm, Rick Lyons <R.Lyons@_BOGUS_ieee.org> wrote:

> By the way my question of "On what sort of signals
> can you perform coherent averaging of multiple FFTs?"
> was a serious question.
> (snip)
> See Ya',
> [-Rick-]

Machine vibration analysis is a common application of synchronous
averaging. In practice, either the sampling is performed synchronously
with a shaft rotation rate, or a tachometer signal supplies the
information needed to resample to a synchronous sample frequency that can
follow speed variations. There is better coverage in the IEEE Trans. on
Instrumentation and Measurement than in the Signal Processing
literature. A good search term might be "synchronous sampling". Gas
turbines in jet planes, helicopters and stationary power generation
plants are monitored and maintained with instrumentation that uses
synchronous (re)sampling. Helicopter rotor track and balance
instruments will measure imbalance and tell the maintenance operator
how much weight to put how far out on which blade to balance the
rotor. Gas turbines are balanced in the same manner. Much large
rotating industrial machinery gets such attention.

As to the application of averaging to windowed power data, the
following documents are a good practical discussion:

Windows to FFT Analysis (Part I)
Technical review No. 3 - 1987
DFT (Discrete Fourier Transform)/FFT (Fast Fourier Transform) analysis
and filter analysis (analogue or digital) can be used to better understand
the applications of different weighting functions used in DFT/FFT. The
filter characteristics of the most commonly used weighting functions
(also called windows) are illustrated and discussed with respect to
their use in various practical applications of system and signal
analysis. The mathematical formulations of the analogy as well as
rigorous details of the article will be given in the Appendices in
Part II of this article to be published in Technical Review No.
4-1987.
And p29 Signals and Units

at:
http://bruel.ru/UserFiles/File/Review3_87.pdf
or
www.bksv.com/doc/bv0031.pdf
registration required at www.bksv.com, and worthwhile if you want to

Windows to FFT Analysis (Part II)
Technical review No. 4 - 1987
Use of Weighting Functions in DFT/FFT Analysis (Part II); Acoustic
Calibrator for Intensity Measurement Systems. Part II of the article
"Use of Weighting Functions in DFT/FFT analysis" contains the following
Appendices referred to in Part I of the article:
A: Analogy between filter analysis and DFT/FFT analysis,
B: Windows and figures of merit,
C: Effective Weighting of overlapped spectral averaging
D: Experimental Determination of the BT product for FFT-analysis using
different weighting functions and overlap,
E: Examples of User Defined Windows,
F: Picket Fence Effect

at:
www.bksv.com/doc/bv0032.pdf
registration required at www.bksv.com, and worthwhile if you want to
```
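The time-synchronous averaging described in the post above can be sketched in a few lines: once the record is resampled so that every shaft revolution spans the same number of samples, folding and averaging the revolutions keeps the shaft-locked orders and suppresses everything asynchronous. All of the numbers here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
samples_per_rev, n_revs = 128, 200     # hypothetical resampled record
theta = 2 * np.pi * np.arange(samples_per_rev) / samples_per_rev

# An order-2 vibration locked to the shaft (e.g. a misalignment signature),
# buried in broadband noise more than three times its amplitude.
one_rev = 0.3 * np.sin(2 * theta)
x = np.tile(one_rev, n_revs) + rng.standard_normal(samples_per_rev * n_revs)

# Time-synchronous average: fold the record on the rotation period.
tsa = x.reshape(n_revs, samples_per_rev).mean(axis=0)

err = np.abs(tsa - one_rev).max()
print(err)     # residual noise is down by roughly sqrt(n_revs)
```

This is exactly the coherent averaging case Rick was asking about: the tachometer supplies the phase reference that makes the revolution-to-revolution blocks phase coherent.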