DSPRelated.com
Forums

need recommendation(s) for FFT algorithm to use

Started by ggk September 4, 2010
On 9/4/2010 1:32 PM, all4dsp wrote:

> > There will be spikes very close to or even at Fs/2 due to the physics
> > involved. It turns out that the real signal often has spikes at or near
> > Fs/2 for various reasons due to real noise phenomena. However, there
> > will be no aliasing in the original signal (from the test environment
> > in acquiring the data).
> >
> > The reason the array is so large is that the acquisition window length
> > is inversely proportional to the lowest frequency that can be measured.
> > That is, if the acquisition window is 1 ms of time, I can view down to
> > 1 kHz in frequencies. Since my application requires looking at 1 kHz or
> > even lower frequencies, I need to work with large arrays. Any filter
> > (e.g. M points < N points) that looks at a subset of this data would
> > remove these low frequencies from the time-domain view, which sort of
> > defeats the purpose of acquiring a large array in the first place.
There are folks here better than I am in the practical application of some of these things. What I hear is that you have a very high bandwidth AND are interested in dealing with very low frequencies, all at the same time. That's pretty unusual.

This somehow suggests to me that it would make sense to split up the data with lowpass/highpass filters or bandpass filters. Then, the lowpassed data can be sampled less frequently (decimated), and the highpassed data can be sampled for less time. In both cases, the number of samples is reduced. You can carry on from there if this notion starts to make sense to you.

Fred
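Fred's band-splitting idea can be sketched numerically. The snippet below is only an illustration, not anyone's actual processing chain: the sample rate, tone frequencies, filter length, cutoff, and decimation factor are all made-up assumptions, and the windowed-sinc lowpass stands in for whatever filter would really be used.

```python
import numpy as np

def lowpass_fir(num_taps, cutoff):
    """Windowed-sinc lowpass FIR; cutoff is a fraction of Fs (0..0.5)."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = 2 * cutoff * np.sinc(2 * cutoff * n)   # ideal lowpass impulse response
    h *= np.hamming(num_taps)                  # taper to control sidelobes
    return h / h.sum()                         # unity gain at DC

fs = 1_000_000.0                       # illustrative sample rate
t = np.arange(8192) / fs
# A low-frequency component plus a high-frequency one, as in the problem:
x = np.sin(2*np.pi*500.0*t) + 0.1*np.sin(2*np.pi*200_000.0*t)

h = lowpass_fir(129, 0.01)             # cutoff at 1% of Fs (10 kHz here)
low = np.convolve(x, h, mode='same')   # lowpass branch (delay-compensated)

decim = 32                             # new Nyquist (~15.6 kHz) clears cutoff
low_dec = low[::decim]                 # decimated: 32x fewer samples

high = x - low                         # crude highpass: complement of lowpass
high_short = high[:1024]               # high band needs less observation time
```

The point is just Fred's: after the split, each branch carries far fewer samples than the original record while still covering its own band.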
On Sep 4, 9:43 am, "all4dsp" <all4dsp@n_o_s_p_a_m.comcast.net> wrote:
> The purpose of zeroing bins and taking the IFFT is to observe the
> time-domain contribution due to the spurs only (and not the random
> noise). For the processing that I'm doing, this is the key piece of
> information.
Note that unless the "spurs" are exactly bin-centered (exactly periodic in the FFT aperture), their contribution will be spread over the entire spectrum, with perhaps non-negligible contributions in nearby bins, which would be removed by naive zeroing. Zeroing bins in the frequency domain (essentially multiplying by a rectangular gating function) is the same as convolving with a sinc function in the time domain, thus possibly causing any transients to "ring". You may end up actually increasing certain types of what you might be calling random noise by this type of filtering. IMHO. YMMV.

--
rhn A.T nicholson d.0.t C-o-M
http://www.nicholson.com/rhn/dsp.html
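The leakage point above is easy to check numerically. This sketch (frequencies chosen arbitrarily for illustration) compares a tone that completes an exact integer number of cycles in the aperture against one offset by half a bin, and measures how much energy lands outside the peak bin and its immediate neighbors:

```python
import numpy as np

N = 1024
n = np.arange(N)

centered = np.cos(2*np.pi*100.0*n/N)   # exactly 100 cycles: bin-centered
offset   = np.cos(2*np.pi*100.5*n/N)   # 100.5 cycles: falls between bins

Xc = np.fft.rfft(centered)
Xo = np.fft.rfft(offset)

def off_peak_energy(X):
    """Fraction of spectral energy outside the peak bin and its neighbors."""
    k = np.argmax(np.abs(X))
    mask = np.ones(X.size, bool)
    mask[max(k - 1, 0):k + 2] = False
    return np.sum(np.abs(X[mask])**2) / np.sum(np.abs(X)**2)
```

For the bin-centered tone essentially all the energy sits in one bin; for the half-bin-offset tone roughly a tenth of the energy is smeared across the rest of the spectrum, which is exactly the part that naive single-bin zeroing would miss (or wrongly remove from neighboring spurs).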
> You might be wanting to ask yourself another question though (and
> perhaps you have):
>
> "Is zeroing samples in the frequency domain likely to do what you
> expect?"
>
> This is an implementation of filtering by frequency-domain
> multiplication (multiplying by zeros and ones). If you conjure up such
> a filter and IFFT just the filter, then you'll see that the temporal
> unit sample response may (or may not) be exactly what you hoped for.
>
> The issue is time-domain aliasing or "folding". It would be more
> "acceptable" or "safer" practice to smooth the transitions in frequency
> in going from passbands to stopbands so that the temporal output
> doesn't "wrap/fold/overlap". Whether this matters is application
> specific. It does have some impact on the compute load, but not much.
>
> More technically, for a data sequence of length N: if you arbitrarily
> apply 1's and 0's in the frequency domain, you could conceptually do
> this:
>
> - Form the sequence of 1's and 0's. And, just so things are at least
>   reasonably correct:
>   - the 1st sample (at f=0) is assumed to repeat at fs.
>   - the 2nd sample = Nth sample
>   - the 3rd sample = (N-1)th sample
>   - ...
>
> At least then the sequence is symmetric about f=0 and the IFFT will be
> real. Without any other consideration, this array will be of length N
> also, right?
>
> - IFFT just this sequence. The result will be the unit sample response
>   of the filter and will have length N.
>
> So, now we have a data sequence of length N and a filter of length N.
> If you were to convolve them in time, the result would have length
> 2N-1 and, by the way, the entire output sequence would be either the
> transient of filling the filter or the transient of emptying the
> filter; there'd be no steady-state output at all!
>
> But, since the plan is to multiply in frequency, you'll be doing a
> *circular* convolution. If you do a circular convolution with only N
> points, the output will be totally overlapped upon itself.
>
> The proper way to do the circular convolution is this:
> Assume the data length of N as above.
> Assume the filter length is M and, I'll suggest, M should be a fair
> bit shorter than N. I would have said M=0.1*N, but let's use M=0.33*N.
> That way, roughly 1/3 of the output will be the initial transient, 1/3
> will be steady state, and 1/3 will be the trailing transient.
>
> A straight convolution of data and filter here would result in M+N-1
> samples in the sequence. So must the circular convolution. For
> simplicity, let's just make them both of length M+N.
> Thus, we will append M zeros to the end of the data sequence, and we
> will append N zeros to the end of the filter sequence.
> Now both sequences are of length M+N, which just happens to be 1.33*N.
> And, the filter sequence will have 3/4 of its samples equal to zero,
> so only 1 in 4 of its samples in frequency can be independently
> specified. This tells you something about the frequency transition
> widths that are possible with this filter.
>
> Summary:
> If you are going to do the original process, the number of samples
> probably ought to be doubled with zero-valued samples before computing
> the FFT. Ditto the conceptual filter that you will construct. But
> perhaps it makes more sense to keep the filter shorter in time, as
> above...
>
> P.S. The number of samples you have suggests huge overkill for doing
> most things. It implies that there must be spikes very close to fs/2.
> Is that right? The resolution implied goes way beyond what's "normal".
> If so, then there are likely any number of tricks to be played to
> reduce the compute load.
>
> Fred
I've been thinking a lot about this discussion (thanks SO much, by the way!). While I can follow the thought process and agree with the textbook-like procedure outlined, I'm still having trouble squaring it with my previous thinking.

Let's say I have a time-domain signal x(t) containing random noise (e.g. white noise and/or 1/f noise) and periodic noise at multiple frequencies and amplitudes. I sample this signal with a data converter to obtain N samples. If I look at the FFT magnitude spectrum, I'll see a background of random noise with spurious noise (spurs) poking out at the frequencies associated with the periodic noise sources. My goal is to view what the time-domain signal x(t) would look like if it contained only those frequency components associated with the periodic noise sources.

More specifically, let's assume the spur associated with any one particular noise source can be represented by ONE frequency bin's phase and magnitude, where the frequency bin is the bin having the largest FFT magnitude associated with that particular noise source. For example, if a particular noise source occupies 5 frequency bins, where the FFT magnitude rises from bin 1 to bin 2, rises from bin 2 to bin 3, then falls from bin 3 to bin 4, and falls from bin 4 to bin 5, then the spur is represented by the middle bin (3), since this bin has the largest magnitude.

As a thought experiment, let's start with ONE periodic noise source, and say it's the largest noise source in terms of FFT magnitude. If we compute the inverse DFT based on the coefficients of this one point, we obtain a sinewave. My understanding is that this sinewave is the frequency component present in x(t) due to the largest periodic noise source. While I can "see" the conceptual filter selecting only this one frequency bin, I'm not sure I can see the filter filling up and emptying; is circular convolution a problem in this case?
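The single-bin thought experiment checks out numerically, at least for a bin-centered spur. In this illustrative sketch (all frequencies, amplitudes, and phases are made up), keeping one bin together with its conjugate mirror bin and inverse-transforming yields exactly a real sinusoid with that bin's magnitude and phase:

```python
import numpy as np

N = 512
n = np.arange(N)
k = 37                                # bin-centered "spur" frequency
rng = np.random.default_rng(1)
x = 0.8*np.cos(2*np.pi*k*n/N + 0.3) + 0.1*rng.standard_normal(N)

X = np.fft.fft(x)
Y = np.zeros_like(X)                  # complex, all zeros
Y[k] = X[k]                           # keep the spur bin...
Y[N - k] = X[N - k]                   # ...and its conjugate mirror
y = np.fft.ifft(Y).real               # a pure real sinusoid

# Equivalent closed form from that one bin's magnitude and phase:
A, phi = 2*np.abs(X[k])/N, np.angle(X[k])
sine = A*np.cos(2*np.pi*k*n/N + phi)
```

Note that the random noise also deposits a little energy in bin k, so A comes out near, but not exactly, the true 0.8; the reconstructed sinewave carries that small contamination along.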
Let's add a second periodic noise source, the second-largest, to the above thought experiment. Now we have two sinewaves adding in the time domain. Isn't this representative of x(t) containing only these two noise sources? Again, I'm not sure how to interpret the effect of circular convolution here (or whether it's present, or whether it needs to be worried about).

If there are a total of a dozen periodic noise sources, and we did the same for each (summing sinewaves in the time domain, each sinewave computed from a single-point DFT), my (simplistic) thinking is that this should be an accurate representation of the periodic noise sources' contribution to x(t). Is that correct, or am I way off here?

Because if I look at it from the point of view outlined in the reply above, while I can "see" the conceptual filter and circular convolution, I have trouble squaring it with the result of summing sinewaves as above. Perhaps the difference is that I'm simply picking off single frequency bins when summing sinewaves, whereas the M-point filter approach includes neighboring bins' effects, which cause the filter to fill/empty and produce steady-state and transient regions.

Thanks in advance for any insight, and sorry if I've worn out my welcome here trying to really understand this.
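For what it's worth, the two views in the question can be reconciled numerically: because the IFFT is linear, summing the per-bin sinusoids is identical to zeroing every other bin and taking one IFFT of the masked spectrum. The circular-convolution worries enter only when the spurs are not bin-centered, so that zeroing also discards leakage that belonged to them. A sketch with two bin-centered tones (all values illustrative):

```python
import numpy as np

N = 1024
n = np.arange(N)
rng = np.random.default_rng(2)
x = (0.7*np.cos(2*np.pi*50*n/N + 0.4)
     + 0.3*np.cos(2*np.pi*123*n/N - 1.1)
     + 0.05*rng.standard_normal(N))

X = np.fft.fft(x)
spur_bins = [50, 123]

# View 1: zero all other bins and take a single IFFT.
mask = np.zeros(N)
for k in spur_bins:
    mask[k] = mask[N - k] = 1.0       # keep each bin and its mirror
via_mask = np.fft.ifft(mask * X).real

# View 2: sum per-bin sinusoids, each from one bin's magnitude and phase.
via_sum = np.zeros(N)
for k in spur_bins:
    via_sum += 2*np.abs(X[k])/N * np.cos(2*np.pi*k*n/N + np.angle(X[k]))
```

The two reconstructions agree to machine precision here, and both track the true two-tone signal up to the small amount of background noise that happens to fall in the kept bins.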