DSPRelated.com
Forums

FFT: decimation in freq vs dec in time?

Started by Jocke P March 25, 2005
Could someone please refer me to a good, non-mathematician-accessible discussion
of decimation-in-time vs decimation-in-frequency FFTs?

Second, more confused, question.
The implementations I find online are decimation-in-time, but I'd like a
decimation-in-frequency implementation if available (I think).
The idea is to build a tree of partial FFTs: while computing a 512-point FFT
I want to save the intermediate sixteen 32-point FFTs, the eight 64-point,
the four 128-point, and the two 256-point FFTs from which it is built.

Third, um, "observation" (misconception?)
This makes it impossible to use windows on either the partial or the
final FFTs, unless I redo each level from scratch, which in turn
would sort of erode the first "F" in "FFT" - maybe that's not a big deal
for my application, though.

Anyway, I don't care about orthogonality, but I have the idea that I could get
both precision in time and precision in frequency (for visualisation purposes)
by, so to speak, oversampling the spectrum.
Downsampling followed by an FFT is an obvious alternative, but I've read that
common downsampling techniques have flaws in the boundary regions.
Better implementation ideas are of course welcome, though.

Grateful for any corrections, (online) discussion/article pointers, or heartless ridicule.

	jp
Jocke P wrote:
> Could someone please refer me to a good, non-mathematician-accessible
> discussion of decimation-in-time vs decimation-in-frequency FFTs?
The difference is pretty trivial, in principle. The most common FFT
algorithm, Cooley-Tukey, breaks up a transform of a composite size
N = N1 * N2 into:

1) N1 transforms of size N2 (of interleaved data)
2) an N1 x N2 transpose and some phase-factor multiplications
3) N2 transforms of size N1 (again, interleaved)

This is repeated recursively (e.g. if N2 is large, it is broken into
smaller transforms, etcetera). It is called decimation in time if N1 is
some small, typically bounded (or even constant) number (called the
"radix"), and decimation in frequency if N2 is the radix. That's all
there is to it. See e.g.

http://en.wikipedia.org/wiki/Cooley-Tukey_FFT_algorithm
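The radix-2 decimation-in-time recursion described above (transform the even- and odd-indexed samples separately, then combine with twiddle factors and size-2 butterflies) can be sketched in Python. This is my own minimal illustration, not code from the thread; `fft_dit` is a name I made up.

```python
import cmath

def fft_dit(x):
    """Radix-2 decimation-in-time FFT. len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft_dit(x[0::2])  # N/2-point FFT of even-indexed (interleaved!) samples
    odd = fft_dit(x[1::2])   # N/2-point FFT of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        # Twiddle factor W_N^k applied to the odd sub-transform,
        # then a size-2 butterfly combines the two halves.
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out
```

Note that the sub-transforms here act on `x[0::2]` and `x[1::2]`, i.e. interleaved samples, which is the point Steven makes below about time precision.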
> Second, more confused, question.
> The implementations I find online are decimation-in-time, but I'd like a
> decimation-in-frequency implementation if available (I think).
> The idea is to build a tree of partial FFTs: while computing a 512-point FFT
> I want to save the intermediate sixteen 32-point FFTs, the eight 64-point,
> the four 128-point, and the two 256-point FFTs from which it is built.
You can save partial FFTs from either decimation in time or frequency.
Whether these FFTs are useful or not is another question.

For example, if you do a size-1024 FFT by radix-2 decimation in time, you
first do size-512 FFTs of the even and odd elements, and then combine them
with phase factors and 512 size-2 transforms. Whether those size-512
sub-transforms are useful depends on your application, I guess (they have
the same frequency resolution as the size-1024 transform but a smaller
Nyquist frequency).

If you do the same size-1024 FFT by radix-2 decimation in frequency, you
first combine the first and second halves of the data with a bunch of
size-2 transforms, followed by phase factors, followed by a couple of
size-512 FFTs. Since the size-512 FFTs come after the size-2 transforms
and phase factors, they may be less useful compared to transforms of the
raw data. Again, it depends on your problem, I guess.
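The decimation-in-frequency ordering described here (size-2 butterflies across the two halves first, then twiddle factors, then the half-size FFTs) can be sketched the same way. Again this is my own illustration; `fft_dif` is an invented name, and the half-size inputs `a` and `b` are exactly the "partial FFT" inputs you could save.

```python
import cmath

def fft_dif(x):
    """Radix-2 decimation-in-frequency FFT. len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    half = n // 2
    # Step 1: size-2 butterflies combining the first and second halves.
    a = [x[i] + x[i + half] for i in range(half)]
    # Step 2: twiddle factors W_N^i on the difference branch.
    b = [(x[i] - x[i + half]) * cmath.exp(-2j * cmath.pi * i / n)
         for i in range(half)]
    # Step 3: the half-size FFTs come *after* the butterflies and twiddles,
    # so they see combined data, not the raw input.
    even = fft_dif(a)  # yields X[0], X[2], X[4], ...
    odd = fft_dif(b)   # yields X[1], X[3], X[5], ...
    out = [0j] * n
    out[0::2] = even
    out[1::2] = odd
    return out
```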
> Anyway, I don't care about orthogonality, but I have the idea that I could get
> both precision in time and precision in frequency (for visualisation purposes)

No, you're not going to get time precision this way, because the first
sub-transforms of an FFT are always interleaved, and are never of
contiguous chunks of the input data.

Cordially,
Steven G. Johnson
Jocke P wrote:
> Could someone please refer me to a good, non-mathematician-accessible
> discussion of decimation-in-time vs decimation-in-frequency FFTs?
http://cnx.rice.edu/content/m12016/latest/
http://cnx.rice.edu/content/m12018/latest/
> Second, more confused, question.
> The implementations I find online are decimation-in-time, but I'd like a
> decimation-in-frequency implementation if available (I think).
> The idea is to build a tree of partial FFTs: while computing a 512-point FFT
> I want to save the intermediate sixteen 32-point FFTs, the eight 64-point,
> the four 128-point, and the two 256-point FFTs from which it is built.
>
> Third, um, "observation" (misconception?)
> This makes it impossible to use windows on either the partial or the
> final FFTs, unless I redo each level from scratch, which in turn
> would sort of erode the first "F" in "FFT" - maybe that's not a big deal
> for my application, though.
>
> Anyway, I don't care about orthogonality, but I have the idea that I could get
> both precision in time and precision in frequency (for visualisation purposes)
> by, so to speak, oversampling the spectrum.
> Downsampling followed by an FFT is an obvious alternative, but I've read that
> common downsampling techniques have flaws in the boundary regions.
> Better implementation ideas are of course welcome, though.
>
> Grateful for any corrections, (online) discussion/article pointers, or
> heartless ridicule.
They don't serve a free lunch. (That's as close to ridicule as my good
nature allows.) You get exactly the same resolution whichever way you
decimate. What's more, windowing is only useful when applied to the entire
data set; otherwise, the butterflies' inherent scrambling messes up the
window shape.

Jerry
--
Engineering is the art of making what you want from things you can get.
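Jerry's point is that the window has to be applied once, to the entire record, before any FFT stage; windowing the sub-blocks separately is not equivalent, because the butterflies scramble which samples each sub-transform actually sees. A minimal sketch, using a Hann window as my own illustrative choice:

```python
import math

def hann(n):
    """Periodic Hann window of length n."""
    return [0.5 - 0.5 * math.cos(2.0 * math.pi * i / n) for i in range(n)]

# Window the *entire* record once, before the FFT. Applying separate
# windows to the partial (sub-)FFTs of a decimated transform would not
# reproduce this, since those sub-transforms see interleaved samples.
samples = [math.sin(2.0 * math.pi * 3.0 * i / 64.0) for i in range(64)]
windowed = [w * s for w, s in zip(hann(len(samples)), samples)]
```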