
windowing and zero-padding

Started by Unknown March 7, 2008
"dbd" <dbd@ieee.org> wrote in message 
news:e6ab77f2-6342-4d4b-afad-02c237dbee37@s8g2000prg.googlegroups.com...
> On Mar 10, 11:32 am, "Fred Marshall" <fmarshallx@remove_the_x.acm.org> wrote:
>
>> ...
>> I was looking at it from a functional / procedural / operational point of view.
>> To me, "zero padding" implies "adding" zeros to an existing function.
>>
>> I have a little trouble saying this:
>> "I'm going to take this function and *zero pad* it by multiplying it by a shorter gate function."
>> I don't think that's what "zero padding" means or is intended to mean.
>>
>> So, I don't see how the *operation* "zero padding" can have a multiply involved.
>> And, thus my comment.
>>
>> Fred
>
> Perhaps the problem stems from thinking that the functional and
> operational viewpoints are the same. They aren't. When a long sequence
> has a shorter window applied, the windowed elements are multiplied by
> a weight. If the result is placed in an array of greater length than
> the window size, the function performed is the same as that of
> weighting a longer sequence with some of the weights zero. The
> operations performed are different. The multiplies by zero can be
> performed more efficiently by simply loading a zero into the location
> and saving the expense of a multiply. We can also be more efficient by
> not generating the samples we would multiply by zero. If the window
> was rectangular, the function can be performed more efficiently by the
> operation of loading the elements into the array without a
> multiplication than by using a multiplication by one, but the function
> is the same as windowing. The operations are different only because of
> special properties of the values of the window coefficients.
>
> Windowing and zero-padding are functions that can be usefully defined
> as processes that use multiplies. They can be efficiently performed by
> operations not performing trivial multiplies.
>
> Dale B. Dalrymple
All fine except you didn't describe how a multiply comes into a zero-padding operation.

You did describe how a multiply by zero can be replaced with simpler operations and, I guess, imply that those simpler operations start to look like zero padding. But that's backing into it. Zero padding does not use multiplies in time, and repeatedly saying that it does doesn't make it any less incorrect. Confusing zero padding with windowing (i.e. multiplying) isn't the end of the world, but there *is* a distinction.

Fred
On Mar 10, 12:40 pm, "Fred Marshall" <fmarshallx@remove_the_x.acm.org>
wrote:
> "dbd" <d...@ieee.org> wrote in message
>
> > Windowing and zero-padding are functions that can be usefully defined
> > as processes that use multiplies. They can be efficiently performed by
> > operations not performing trivial multiplies.
> >
> > Dale B. Dalrymple
>
> All fine except you didn't describe how a multiply comes into a zero-padding operation.
> You did describe how a multiply by zero can be replaced with simpler operations and, I guess, imply that those simpler operations start to look like zero padding. But that's backing into it. Zero padding does not use multiplies in time, and repeatedly saying that it does doesn't make it any less incorrect. Confusing zero padding with windowing (i.e. multiplying) isn't the end of the world, but there *is* a distinction.
>
> Fred
If we start from a continuous process of infinite extent, we 1) sample, 2) window (rectangular) to the transform size, 3) rewindow (rectangular or other) the data to be kept and multiply by zero for the rest, to get to a window shorter than the transform size. Just because you do the operations in a different order and extent for efficiency doesn't mean the functions aren't equivalent. It's good engineering to take the shortcuts. That's no excuse to forget the connections between our theoretical basis and our implementations.

Dale B. Dalrymple
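A small numpy sketch of the equivalence Dale describes (the transform size, window length, window choice, and random data below are purely illustrative): weighting a full-length sequence with a window whose trailing coefficients are zero produces the same vector, and hence the same DFT, as windowing only the kept samples and loading them into a zero array.

    import numpy as np

    M, n = 64, 48                       # transform size and window size (illustrative)
    rng = np.random.default_rng(0)
    x = rng.standard_normal(M)          # stand-in for the sampled data
    w = np.hanning(n)                   # the shorter window

    # Function 1: weight the full-length sequence with a window whose
    # trailing M - n coefficients are zero (trivial multiplies included).
    a = x * np.concatenate([w, np.zeros(M - n)])

    # Function 2 (the cheaper operation): window only the kept samples,
    # then load zeros into the rest -- no multiplies by zero performed.
    b = np.zeros(M)
    b[:n] = x[:n] * w

    print(np.allclose(a, b))                          # True: same function
    print(np.allclose(np.fft.fft(a), np.fft.fft(b)))  # True: same spectrum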
"dbd" <dbd@ieee.org> wrote in message 
news:acdc86a8-85d0-451b-8da7-a59ff018cb2f@s13g2000prd.googlegroups.com...
> On Mar 10, 12:40 pm, "Fred Marshall" <fmarshallx@remove_the_x.acm.org> wrote:
>> "dbd" <d...@ieee.org> wrote in message
>>
>> > Windowing and zero-padding are functions that can be usefully defined
>> > as processes that use multiplies. They can be efficiently performed by
>> > operations not performing trivial multiplies.
>>
>> > Dale B. Dalrymple
>>
>> All fine except you didn't describe how a multiply comes into a zero-padding operation.
>> You did describe how a multiply by zero can be replaced with simpler operations and, I guess, imply that those simpler operations start to look like zero padding. But that's backing into it. Zero padding does not use multiplies in time, and repeatedly saying that it does doesn't make it any less incorrect. Confusing zero padding with windowing (i.e. multiplying) isn't the end of the world, but there *is* a distinction.
>>
>> Fred
>
> If we start from a continuous process of infinite extent, we 1) sample, 2) window (rectangular) to the transform size, 3) rewindow (rectangular or other) the data to be kept and multiply by zero for the rest, to get to a window shorter than the transform size. Just because you do the operations in a different order and extent for efficiency doesn't mean the functions aren't equivalent. It's good engineering to take the shortcuts. That's no excuse to forget the connections between our theoretical basis and our implementations.
>
> Dale B. Dalrymple
I don't know when I've seen a more contrived sequence of steps. Why in the world would one window and then, just for fun, window again? That would border on insanity. It certainly is NOT a shortcut.

And what's been forgotten, and by whom? I'm truly lost.

Fred
On Mar 10, 8:57 am, Eric Jacobsen <eric.jacob...@ieee.org> wrote:
> On Mon, 10 Mar 2008 01:48:17 -0700 (PDT), "Ron N."
> <rhnlo...@yahoo.com> wrote:
>
> >On Mar 9, 9:41 pm, "Fred Marshall" <fmarshallx@remove_the_x.acm.org>
> >wrote:
> >> "Ron N." <rhnlo...@yahoo.com> wrote in message
> >>news:bc1bbe96-e40a-4d0c-8d24-8c9994bbddca@e25g2000prg.googlegroups.com...
> >
> >> > Zero padding is equivalent to applying a rectangular window.
> >
> >> I don't see how..... assuming that the rectangular window has already been
> >> applied then what does zero padding do?
> >
> >Zero padding n data samples before a transform is equivalent
> >to applying a rectangular window of length n inside a
> >transform of length greater than n. The zero padding
> >causes zero-valued points to be represented in the
> >longer transform results, same as a rectangular window.
> >
> >IMHO. YMMV.
>
> I agree completely and this is why I claim that the FFT has "inherent"
> rectangular windowing; if you watch the behavior of the sidelobes as
> you analyse the effects of the rectangular window from say 4:1
> oversampled from zero-padding down to a few samples zero-padded then
> down to no zero-padding, the resampling of the sidelobes follows a
> steady, predictable, analyzable trend.
If you want an interesting experiment, you can even take the number of zero-padded samples below zero. How? As the number of zero-padded samples gets smaller, the length of your non-zeroed data rectangle gets longer. Continue. When you get to zero zero-padding, your rectangular window will be the same length as your FFT. Continue. When your rectangular data window becomes longer than your FFT, wrap the extra data around the FFT aperture circularly. You will find that the sidelobe behavior of sinusoids wrapped in this manner continues to follow your "steady, predictable, analyzable trend".

But I'm not sure where this procedure would be actually useful.

IMHO. YMMV.
--
rhn A.T nicholson d.0.t C-o-M
http://www.nicholson.com/rhn/dsp.html
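A sketch of the whole experiment in one place (the FFT length, tone frequency, and data lengths below are arbitrary choices, not values from the thread): data rectangles shorter than the FFT get zero-padded, rectangles longer than the FFT get wrapped (time-aliased) around the aperture, and the DFT samples near the tone evolve smoothly across the whole range, including the "negative zero-padding" region.

    import numpy as np

    def place_in_aperture(x, n):
        # Zero-pad when len(x) < n; wrap (time-alias) the excess when len(x) > n.
        y = np.zeros(n)
        for k, v in enumerate(x):
            y[k % n] += v
        return y

    n = 64                                     # FFT length
    f = 10.25                                  # cycles per FFT aperture, off a bin center
    for L in (16, 32, 48, 64, 96, 128):        # data rectangle: padded ... exact ... wrapped
        t = np.arange(L)
        x = np.cos(2 * np.pi * f * t / n)
        X = np.fft.rfft(place_in_aperture(x, n))
        mag_db = 20 * np.log10(np.abs(X) + 1e-12)
        print(L, np.round(mag_db[8:14], 1))    # bins around the tone: the trend continues past L = n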
On Mar 10, 10:00 pm, "Ron N." <rhnlo...@yahoo.com> wrote:
> ...
> When your
> rectangular data window becomes longer than your FFT, wrap
> the extra data around the FFT aperture circularly. You will
> find that the sidelobe behavior of sinusoids wrapped in this
> manner continues to follow your "steady, predictable,
> analyzable trend".
>
> But I'm not sure where this procedure would be actually
> useful.
>
> IMHO. YMMV.
> --
> rhn A.T nicholson d.0.t C-o-M
> http://www.nicholson.com/rhn/dsp.html
This is what is done in synchronous sampling. When the sample frequency can be controlled to be a multiple of a fundamental frequency, say by phase locking to a 1-per-revolution tach signal, samples can be wrapped many times to give very sharp responses at the bin centers where the harmonics lie and to reject other signals. harris uses the term 'folding' instead of 'wrapping', perhaps because of the use of 'unwrapping' in a different DSP context.

Dale B. Dalrymple
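A toy illustration of the synchronous-sampling idea (the fundamental, harmonics, interferer frequency, and number of wraps are all invented): harmonics that are locked to the sample rate repeat identically in every aperture and add coherently when wrapped, while a tone that is not locked adds with rotating phase and is largely rejected.

    import numpy as np

    N, K = 64, 8                               # FFT length, number of wrapped apertures
    t = np.arange(K * N)
    # Harmonics of a fundamental at 4 cycles per aperture (sampling is synchronous)...
    sig = sum(np.cos(2 * np.pi * 4 * h * t / N) for h in (1, 2, 3))
    # ...plus an asynchronous tone not locked to the sample rate.
    sig = sig + np.cos(2 * np.pi * 9.37 * t / N)

    wrapped = sig.reshape(K, N).sum(axis=0) / K    # wrap K apertures onto one
    mag = np.abs(np.fft.rfft(wrapped))
    print(np.round(mag[:16], 1))               # sharp peaks at bins 4, 8, 12; the 9.37 tone is suppressed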
On Tue, 11 Mar 2008 00:17:41 -0700 (PDT), dbd <dbd@ieee.org> wrote:

> On Mar 10, 10:00 pm, "Ron N." <rhnlo...@yahoo.com> wrote:
>> ...
>> When your
>> rectangular data window becomes longer than your FFT, wrap
>> the extra data around the FFT aperture circularly. You will
>> find that the sidelobe behavior of sinusoids wrapped in this
>> manner continues to follow your "steady, predictable,
>> analyzable trend".
>>
>> But I'm not sure where this procedure would be actually
>> useful.
>>
>> IMHO. YMMV.
>> --
>> rhn A.T nicholson d.0.t C-o-M
>> http://www.nicholson.com/rhn/dsp.html
That's pretty cool, I hadn't realized that, but it does make sense.
> This is what is done in synchronous sampling. When the sample
> frequency can be controlled to be a multiple of a fundamental
> frequency, say by phase locking to a 1-per-revolution tach signal,
> samples can be wrapped many times to give very sharp responses at the
> bin centers where the harmonics lie and to reject other signals. harris
> uses the term 'folding' instead of 'wrapping', perhaps because of the
> use of 'unwrapping' in a different DSP context.
>
> Dale B. Dalrymple
Yes, and it's not too hard to see, from Ron's description above, that once you've fully 'folded' an aperture, i.e., you're lapping 2N samples into a length-N transform, you're just adding multiple instances of input windows together coherently wrt the transform basis functions.

For example, if there's a sinusoid with M cycles/aperture, i.e., M cycles over N samples, breaking a length-2N vector into two halves and adding them together will coherently add the sinusoids. So the FFT output will still have a happy, sharp spike at bin M. This extends cyclically, so that whenever the total data length reaches an integer multiple of the FFT length (2N, 3N, ... KN), the same thing happens.

It's not hard to see that 'reverse zero-padding' with non-integer "folds" is an extension of interpreting the effects of zero padding, as Ron described. Pretty cool stuff, I think, and I think it could definitely be useful if there's only time/resources to compute a length-N FFT but you need to consider more than N points in the input.

Eric Jacobsen
Minister of Algorithms
Abineau Communications
http://www.ericjacobsen.org
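A quick sketch of the fully folded (integer) case (N, K, and M below are arbitrary choices): lapping K*N samples onto one N-point aperture is just a reshape-and-sum, and a tone with M cycles per aperture adds coherently, leaving the spike at bin M.

    import numpy as np

    N, K, M = 64, 3, 10                         # FFT length, number of folds, cycles per aperture
    t = np.arange(K * N)
    x = np.cos(2 * np.pi * M * t / N)           # M cycles per N samples, K*N samples in all

    folded = x.reshape(K, N).sum(axis=0)        # lap the K apertures onto one another

    X_folded = np.fft.rfft(folded)
    X_single = np.fft.rfft(x[:N])

    print(np.argmax(np.abs(X_folded)))          # bin M, same as a single aperture
    print(np.allclose(X_folded, K * X_single))  # True: the folds add coherently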
On Mar 10, 11:17 pm, dbd <d...@ieee.org> wrote:
> This is what is done in synchronous sampling. When the sample
> frequency can be controlled to be a multiple of a fundamental
> frequency, say by phase locking to a 1-per-revolution tach signal,
> samples can be wrapped many times to give very sharp responses at the
> bin centers where the harmonics lie and to reject other signals. harris
> uses the term 'folding' instead of 'wrapping', perhaps because of the
> use of 'unwrapping' in a different DSP context.
I have a slight preference for the term "wrapping" for continuing the data with the same phase at the opposite end of the window. The term "folding" implies more the reflection of the data at the same edge, thus reversing the phase.

And unwrapping time-domain transients after an IFFT is a similar problem to unwrapping phase in the frequency domain: in order to make a transient pulse or phase curve continuous, one moves a chunk of the result outside the range [-pi..pi] or [0..(n-1)].

IMHO. YMMV.
--
rhn A.T nicholson d.0.t C-o-M
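A tiny illustration of that time-domain analogy (the pulse shape and delay are chosen arbitrarily): a delay applied in the frequency domain makes the IFFT of a short pulse wrap around the end of the array, and "unwrapping" it is just moving the wrapped chunk by a whole period, much as phase unwrapping moves values by 2*pi.

    import numpy as np

    n, d = 64, 60                              # array length and a delay that pushes the pulse past the end
    h = np.zeros(n)
    h[:5] = [1, 2, 3, 2, 1]                    # a short pulse starting at index 0

    # Delay by d samples in the frequency domain, then come back with an IFFT;
    # the pulse now straddles the wrap point, split between the end and the start.
    H = np.fft.fft(h) * np.exp(-2j * np.pi * np.arange(n) * d / n)
    g = np.fft.ifft(H).real

    # "Unwrap" the transient by moving the wrapped chunk back a full period.
    print(np.allclose(np.roll(g, -d), h))      # True: the pulse is contiguous again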