
Large FFT vs Many FFTs

Started by Edison April 5, 2007
Vladimir Vassilevsky wrote:
> Edison wrote:
>> Hi DSP Gurus,
>>
>> I've been asked to research a DSP system to take an FFT to detect
>> harmonic distortion.
>
> If your goal is measuring the harmonic distortion, you don't need the
> FFT. All you need is a bandstop filter and a comb filter.
>
> Vladimir Vassilevsky
Vladimir,

Unfortunately the presence of two widely-separated frequency components
makes the comb filter impractical. It also suggests that Edison may be
interested in looking at intermodulation distortion and cross-modulation
distortion as well, so maybe the FFT is best.

Regards,
John
Rick Lyons wrote:
> On Thu, 05 Apr 2007 05:44:39 -0500, "Edison" <bell561@btinternet.com> wrote:
>
>> Hi DSP Gurus,
>>
>> I've been asked to research a DSP system to take an FFT to detect harmonic
>> distortion.
>>
>> The base frequency will be 110 kHz and I need to detect up to the 5th
>> harmonic. Therefore I will need a sample rate of at least 1.2 MHz.
>>
>> The requirements are to take 1 second's worth of data and FFT it, giving a
>> 1.2 million point FFT. The requirements state that the resulting very
>> narrow bin width will reduce the effect of noise. Is this a sensible thing
>> to do?
>>
>> I have a feeling it would be better to take a number of smaller FFTs (say
>> 1k point), calculate the magnitudes and average them, thus reducing the
>> effect of random noise and giving a more flexible system that requires a
>> lot less memory. Is this the right way, or would the method above be more
>> beneficial?
>>
>> Thanks in advance for your help.
>
> Edison,
> I don't know the details of your application,
> but something just occurred to me. I once worked
> on a "real-time" spectrum analysis system where a signal's
> spectrum was output for display on a CRT screen.
> To reduce the fluctuations of the displayed spectral
> magnitudes, each real-time sequence of FFT bin magnitudes
> was passed through an "exponential averager". That kind
> of averaging required much less memory and fewer
> computations than standard averaging.
> (Just something for you to think about.)
>
> See Ya',
> [-Rick-]
Rick,

A few comments:

1. The 'exponential averager' sounds ideal for that application, but in
this application it has the disadvantage that the most recent FFTs
contribute more to the final 'average' than the least recent FFTs. A
standard average, on the other hand, treats the contribution of each FFT
equally.

2. If you calculate the standard average by implementing a running sum
of the FFTs followed by a multiply by 1/(number of FFTs) when the last
FFT is added, the memory requirement is the same as for exponential
averaging.

3. Assuming that the 'exponential averaging' process you mention is
something like

   FFT_AVERAGE[n] = 0.9*FFT_AVERAGE[n-1] + 0.1*FFT[n]

it would appear that the exponential average actually needs MORE
calculation than a standard 'running sum' average.

Regards,
John
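A minimal numpy sketch of the two averaging schemes discussed above (the
function names are hypothetical, the 0.9/0.1 weights are just the example
values from the post, and the FFT size is arbitrary):

```python
import numpy as np

def running_sum_average(blocks, nfft=1024):
    """Standard average: accumulate magnitude spectra, scale once at the end."""
    acc = np.zeros(nfft // 2 + 1)
    count = 0
    for x in blocks:                         # x: one block of nfft real samples
        acc += np.abs(np.fft.rfft(x, nfft))  # one add per bin per block
        count += 1
    return acc / count                       # single multiply per bin, done once

def exponential_average(blocks, nfft=1024, alpha=0.1):
    """Exponential averager: AVG[n] = (1-alpha)*AVG[n-1] + alpha*|FFT[n]|."""
    avg = None
    for x in blocks:
        mag = np.abs(np.fft.rfft(x, nfft))
        # two multiplies and an add per bin per block
        avg = mag if avg is None else (1.0 - alpha) * avg + alpha * mag
    return avg
```

Both versions keep a single array of bin values in memory, which is point 2;
the extra per-block multiplies in the exponential form are point 3.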
On Apr 5, 8:36 pm, John Monro <johnmo...@optusnet.com.au> wrote:
> Rick Lyons wrote:
>> [...] To reduce the fluctuations of the displayed spectral
>> magnitudes, each real-time sequence of FFT bin magnitudes
>> was passed through an "exponential averager". That kind
>> of averaging required much less memory and fewer
>> computations than standard averaging.
>
> [...]
>
> 2. If you calculate the standard average by implementing a running sum
> of the FFTs followed by a multiply by 1/(number of FFTs) when the last
> FFT is added, the memory requirement is the same as for exponential
> averaging.
>
> [...]
The early real-time spectrum analyzers used the exponential average to
calculate a running estimate of the noise background for waterfall
displays, because it requires only one storage element per bin per
channel, independent of the time constant, and can be updated every data
block. FIR running estimation of the background would have used a
CIC-like structure with memory size proportional to the time constant.

Exponential averaging also works for estimating tones of constant
frequency.

The OP didn't state any requirement for real-time update, so one storage
element per bin can suffice for linear averaging.

Dale B. Dalrymple
http://dbdimages.com
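For contrast with the one-array exponential form sketched earlier, here is a
rough sketch of a true running (boxcar) average over the last K magnitude
spectra, whose memory grows with K, i.e. with the time constant. The class
name, K and the bin count are placeholders:

```python
import numpy as np
from collections import deque

class MovingSpectrumAverage:
    """Average of the last K magnitude spectra; memory grows with K."""
    def __init__(self, k, nbins):
        self.buf = deque(maxlen=k)        # holds up to K full spectra
        self.sum = np.zeros(nbins)

    def update(self, mag):
        if len(self.buf) == self.buf.maxlen:
            self.sum -= self.buf[0]       # drop the spectrum about to be evicted
        self.buf.append(mag.copy())
        self.sum += mag
        return self.sum / len(self.buf)   # current K-block average
```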
> Yes, performing very large FFTs will "pull" your
> desired harmonic spectral components up above the
> background spectral noise.
I would like to understand this better. Intuitively I would think that it
rather pushes the noise down and leaves the harmonic at the same level. Or
is it just a matter of what you take as reference?

Also, what is it that makes the distance between noise and signal become
larger with larger FFTs? Is it because the bin size gets smaller and hence
less noise is accumulated in each bin, whilst the signal will always be in
one bin anyway? Or is it more related to coherent summation of the signal
(power proportional to size^2) versus statistical adding of noise (power
proportional to size)?
> The topic you're exploring has been described in a
> zillion technical papers and there's much information
> available on the Internet.
Is there one paper you could refer me to as a good starting point? Some
math is OK, but not PhD-level math.
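A quick numpy experiment illustrating the second interpretation in the
question above: the tone sums coherently in its bin (power growing as N^2)
while the noise spreads over all N bins (power per bin growing as N), so the
per-bin signal-to-noise ratio grows roughly as N. The sample rate and tone
frequency follow the original post; the noise level and FFT sizes are
arbitrary, and the tone is placed exactly on a bin centre to keep the sketch
simple:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1.2e6                      # sample rate (Hz), as in the original post
f0 = 110e3                      # fundamental (Hz)

for n in (1024, 64 * 1024):     # small FFT vs. a much larger FFT
    t = np.arange(n) / fs
    k = round(f0 * n / fs)                       # nearest bin to the tone
    x = np.sin(2 * np.pi * (k * fs / n) * t)     # tone exactly on bin k
    x += rng.normal(scale=1.0, size=n)           # unit-variance white noise
    p = np.abs(np.fft.rfft(x)) ** 2              # per-bin power
    tone = p[k]
    noise = np.median(np.delete(p, k))           # typical noise-only bin
    print(f"N = {n:6d}: tone-to-noise-floor ratio = "
          f"{10 * np.log10(tone / noise):5.1f} dB per bin")
```

Going from 1k to 64k points raises the per-bin ratio by roughly
10*log10(64), about 18 dB, even though the total noise power in the record
is unchanged.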
On Apr 7, 2:16 am, "NewLine" <umts_remove_this_and_t...@skynet.be> wrote:
>> Yes, performing very large FFTs will "pull" your
>> desired harmonic spectral components up above the
>> background spectral noise.
>
> [...]
>
> Is there one paper you could refer me to as a good starting point? Some
> math is OK, but not PhD-level math.
Some of the best references for your purpose may be old app notes on
"dynamic signal analyzers" or "real-time spectrum analyzers" from companies
like Agilent (HP), Bruel & Kjaer or Scientific-Atlanta (Spectral Dynamics).
Examples are:

AN 243 Fundamentals of Signal Analysis
http://cp.literature.agilent.com/litweb/pdf/5952-8898E.pdf

Trigonometric Transforms -- a Unique Introduction to the FFT, by fred harris
http://ultranalog.com/sd375/trigonometric_transform.pdf

Well, that comes to two, not one, but the price is right.

Dale B. Dalrymple
http://dbdimages.com
On 5 Apr, 12:44, "Edison" <bell...@btinternet.com> wrote:

> The requirements are to take 1 second's worth of data and FFT it, giving a
> 1.2 million point FFT. The requirements state that the resulting very
> narrow bin width will reduce the effect of noise. Is this a sensible thing
> to do?
>
> I have a feeling it would be better to take a number of smaller FFTs (say
> 1k point), calculate the magnitudes and average them, thus reducing the
> effect of random noise and giving a more flexible system that requires a
> lot less memory. Is this the right way, or would the method above be more
> beneficial?
You have been asked to compute the periodogram from the data. I have no
idea whether the periodogram is a good way to measure the harmonic
distortion in this system, but generally speaking, it is better to compute
the average of lots of small periodograms than one big periodogram.

The variance -- i.e. uncertainty -- of one coefficient P[k] in the
periodogram is large:

   Var(P[k]) = P[k]^2

Note that Var(P[k]) does *not* depend on the number of samples N, meaning
that the argument that "a large data sequence reduces noise" is plain
wrong.

The derivations of the variance of the periodogram, as well as alternative
ways of computing it (e.g. Welch's method), are found in texts on
statistical signal processing and are standard material. The books on the
subject I know and am familiar with date from the late '80s/early '90s,
which means they are difficult to find now. I know there were a couple of
books by Kay published around 1998-99, but I don't know if they treat these
particular questions.

Rune
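A small numpy illustration of that point, using white noise (whose true PSD
is flat) so the estimator variance is easy to see. The record length and
segment size follow the numbers in the thread; the rectangular window and
non-overlapping segments are simplifications of Welch's method:

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n_total, nseg = 1.2e6, 1_200_000, 1024   # 1 s of data, 1k-point segments
x = rng.normal(size=n_total)                 # white noise, true PSD is flat

# One big periodogram: very fine bins, but each bin is a high-variance estimate.
big = np.abs(np.fft.rfft(x)) ** 2 / n_total

# Averaged periodogram (Welch-style, rectangular window, no overlap).
segs = x[: n_total - n_total % nseg].reshape(-1, nseg)
avg = np.mean(np.abs(np.fft.rfft(segs, axis=1)) ** 2, axis=0) / nseg

for name, p in (("single 1.2M-point", big[1:-1]), ("averaged 1k-point", avg[1:-1])):
    print(f"{name:>18s}: mean = {p.mean():.3f}, std/mean = {p.std() / p.mean():.3f}")
```

The long periodogram's bins have a relative standard deviation near 1 no
matter how much data goes in, while averaging K (here about 1170) short
periodograms reduces it by roughly 1/sqrt(K), which is Welch's method in its
simplest form.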
"Rune Allnor" <allnor@tele.ntnu.no> writes:
> [...]
> On 5 Apr, 12:44, "Edison" <bell...@btinternet.com> wrote:
>
>> The requirements state that the resulting very narrow bin width
>> will reduce the effect of noise.
> [...]
> Note that Var(P[k]) does *not* depend on the number of
> samples N, meaning that the argument that "a large data
> sequence reduces noise" is plain wrong.
Perhaps he meant that, for a given noise power spectral density, a smaller
bin width reduces the input noise power in that bin, which is true.

The correctness of his statement depends on whether the "noise" he
mentioned is input noise or estimation noise (i.e., the variance you were
speaking of).
--
% Randy Yates                  % "And all that I can do
%% Fuquay-Varina, NC           %  is say I'm sorry,
%%% 919-577-9882               %  that's the way it goes..."
%%%% <yates@ieee.org>          % 'Getting To The Point', *Balance of Power*, ELO
http://home.earthlink.net/~yatescr
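In numbers, the "input noise per bin" reading works out as follows; the
sample rate is from the original post, and the noise density value is an
assumed placeholder:

```python
import numpy as np

fs = 1.2e6        # sample rate from the original post (Hz)
n0 = 1e-12        # assumed one-sided noise PSD (W/Hz) -- arbitrary placeholder

for n in (1024, 1_200_000):        # 1k-point FFT vs. the proposed 1-second FFT
    bin_bw = fs / n                # bin (noise) bandwidth, rectangular window
    p_bin = n0 * bin_bw            # input noise power landing in one bin
    print(f"N = {n:9d}: bin width {bin_bw:9.3f} Hz, "
          f"noise per bin {10 * np.log10(p_bin / 1e-3):6.1f} dBm")
```

The tone's power stays in one bin in both cases, so the 1.2M-point FFT
lowers the noise in each bin by about 10*log10(1.2e6/1024), roughly 31 dB,
relative to the 1k-point FFT; that is the sense in which narrow bins
"reduce the effect of noise", quite apart from the estimation-variance
issue.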
On 8 Apr, 00:01, Randy Yates <y...@ieee.org> wrote:
> "Rune Allnor" <all...@tele.ntnu.no> writes: > > [...] > > On 5 Apr, 12:44, "Edison" <bell...@btinternet.com> wrote: > > >> The requirements state that the resulting very narrow bin width > >> will reduce the effect of noise. > > [...] > > Note that Var(P[k]) does *not* depend on the number of > > samples N, meaning that the argument that "a large data > > sequence reduces noise" is plain wrong. > > Perhaps he meant that, for a given noise power spectral density, a > smaller bin width reduces the input noise power in that bin, which is > true. > > The correctness of his statement depends on whether the "noise" he > mentioned is input noise or estimation noise (i.e., the variance you > were speaking of).
The statement "reduce the effect of noise" implies to me that one expects a lower variance in the estimated periodogram. Whatever was intended, the OP ought to be aware that the periodogram is a notoriosly poor estimator for the PSD. Rune
On Apr 7, 4:00 pm, "Rune Allnor" <all...@tele.ntnu.no> wrote:
> On 8 Apr, 00:01, Randy Yates <y...@ieee.org> wrote:
> [...]
>>> Note that Var(P[k]) does *not* depend on the number of
>>> samples N, meaning that the argument that "a large data
>>> sequence reduces noise" is plain wrong.
> [...]
> Whatever was intended, the OP ought to be aware that the
> periodogram is a notoriously poor estimator for the PSD.
>
> Rune
The expression

   Var(P[k]) = P[k]^2

is true for bins containing Gaussian noise. It is not true for bins
containing a tone with energy considerably greater than the noise energy in
the bin. The OP has the task of measuring a base frequency component and
harmonic components, not measuring noise.

An old, but still available, reference is:
http://www.bksv.com/pdf/bv0031.pdf
See the article on units starting on page 29 for a discussion of the
differences in units between discrete spectra and random signals.

By the way, the internet has made a change in book availability. The 'old
books' are often available used at reasonable cost. I just picked up a copy
of Blackman and Tukey's Dover printing of "The Measurement of Power
Spectra" for under $10. That's a lot more than the $1.85 price in 1959, but
still less than current textbook prices.

A common real problem with the single large transform approach is that the
base frequency is not stable enough to remain at a constant frequency to a
tolerance of less than a bin width. If it is not stable enough, a smaller
transform size can make the bin larger than the frequency variation, or an
algorithm that correctly combines energy over more than one bin can be
used.

Dale B. Dalrymple
http://dbdimages.com
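To put rough numbers on that stability point, using the figures from the
original post (a back-of-the-envelope sketch that ignores windowing and
spectral leakage):

```python
f0 = 110e3          # fundamental from the original post (Hz)
fs = 1.2e6          # sample rate (Hz)

for n in (1_200_000, 1024):          # 1-second FFT vs. a 1k-point FFT
    bin_hz = fs / n                  # bin spacing
    ppm = bin_hz / f0 * 1e6          # drift of f0 (in ppm) that moves it one bin
    print(f"N = {n:9d}: bin = {bin_hz:8.1f} Hz -> fundamental must hold to "
          f"about {ppm:8.1f} ppm to stay in one bin")
```

With 1 Hz bins the 110 kHz source has to hold to roughly 9 ppm over the full
second, and the 5th harmonic moves five times as far in Hz for the same
drift, so its tolerance is tighter still; the 1k-point bins are about three
orders of magnitude more forgiving.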
">
> AN 243 Fundamentals of Signal Analysis > http://cp.literature.agilent.com/litweb/pdf/5952-8898E.pdf > > Trigonometric Transforms--a Unique Introduction to the FFT by fred > harris > http://ultranalog.com/sd375/trigonometric_transform.pdf > > Well, that comes to two not one, but the price is right. > > Dale B. Dalrymple > http://dbdimages.com >
Great material! Thanks!