
Zero padding fftw

Started by simwes December 8, 2010
On Dec 13, 8:20 pm, eric.jacob...@ieee.org (Eric Jacobsen) wrote:
> You don't have to go that far at all. Many times it is desirable to extract the value of a signal at a point in between existing samples. Finding the symbol centers of a communication signal is a reasonable example. There are many, many examples of simple interpolation of the anti-aliased signal being a useful or necessary process. Are you really arguing otherwise?
No one is claiming that interpolation is not useful. However, in case (1), where no information is added, it is useless outside of specific algorithms, and in any case it is trivial. This is because you already *know* the anti-aliased signal: the samples specify it uniquely. Now there are questions about how to implement these things as efficiently as possible, etc., but that is not the issue here.
> Sounds like you're going to lengths to adjust or narrow the interpretation to suit you.
Not at all. I made a statement about the need for a model that is true in both cases: (1) no information added and (2) information added. This had nothing to do with Dale: it was to clarify another post.

However, Dale believes that only case (1) is of relevance to DSP: "In DSP we don't interpolate the signal as it was before anti-aliasing and sampling so modeling the signal as it was before anti-aliasing and sampling is irrelevant" and "You are indeed fortunate to find 2) trivial since it is the only sensible use case for DSP". So he chimed in and made a statement that, suitably interpreted, is true in case (1), but is false in general. I told Dale it was false in general, and he took umbrage. That's it.

I am not sure why you have chosen to pick up his false statement as a cause célèbre, or why you are trying to widen the discussion. Very strange.

illywhacker;
On Dec 13, 4:06 pm, Jerry Avins <j...@ieee.org> wrote:
> On Dec 13, 2:44 pm, Clay <c...@claysturner.com> wrote:
> > On Dec 13, 12:49 pm, Jerry Avins <j...@ieee.org> wrote:
> > > On Dec 13, 11:23 am, Clay <c...@claysturner.com> wrote:
> > > > (snip: Eric Jacobsen and glen herrmannsfeldt on optical and radar resolution, quoted in full later in the thread)
> > > >
> > > > Certainly in astro one can refer to Rayleigh's or Dawes limits, but there are tricks for seeing a little more than what those limits allow. In the astronomical case of stars, which for almost all practical purposes (there are exceptions) are mathematical point sources, the common trick for resolving close stars is to use an apodization filter placed over the telescope's aperture. This filter will push each star's diffraction rings around at the expense of increasing the size of the central Airy discs. But this can be useful for locating companion stars.
> > > >
> > > > So in an analogous way one can think of making a window function have a null at a key location to reveal an item of interest, but at the expense of losing something else.
> > >
> > > One simple apodizing mask is a square. It increases the effective resolution along the diagonals of the square at the expense of resolution along the other symmetry axes, 45 degrees away. (This is great for spectroscopes, where resolution perpendicular to the slit is all that matters.) Like all apodizing masks, it has to be applied at the time of data collection. As far as I know, you can't improve an already existing image that way.
> > >
> > > Jerry
> >
> > Yes, the mask is a "run time" thing. In fact, if you are using a square mask, you can play with rotating it to place the desired companion object along the diagonal of the square.
> >
> > Some people use round masks made out of several layers of window screen. I haven't tried this yet. Here is an example:
> >
> > http://home.digitalexp.com/~suiterhr/TM/MakingApod.pdf
> >
> > This article's author has written the "bible" for star testing.
>
> Albert Ingalls's "Amateur Telescope Making" (Book 2, p. 357) shows a very clever opaque mask for eliminating the visible effect of spider diffraction. Only 4% of the original light is blocked. Of course, curved spider vanes work also.
>
> Jerry
I use a Schmidt Newtonian, so instead of a spider, my diagonal mirror is suspended by the corrector plate. It seems to work pretty well and I get a rather large (over a degree) low-coma field.

Clay
On 12/13/2010 12:35 PM, glen herrmannsfeldt wrote:

> Well, first of all it seems to me that zero padding is only applicable in the case that the source signal is actually zero outside the range of the transform, or that it should be assumed to be zero.
>
> -- glen
Glen,

While it has some appeal, I think this gets you on a slippery slope.

If that property were true then the inverse transform would be infinite. I don't think that's usually what we deal with unless we're back to continuous infinite extent Fourier Transforms.

While some argue on this point, I still find it useful to consider DFT pairs to represent periodic sequences on one period each. So, if you start with a sequence and zero-pad it, you increase the apparent period length. Then, the DFT of the result has a shorter sample interval and retains its original periodicity.

Example:
Sample interval in time: T (and we'll accept that this is a single period because....)
Sample sequence: N samples, representing a period of duration N*T (leaving the next sample to the next period).
DFT sample sequence: N samples.
Sample frequency (period) = 1/T
Frequency sample interval = 1/NT
Frequency period = N*(1/NT) = 1/T, as above.

OK, now zero-pad in frequency by a factor of 2. The *proper* way to do this is as follows:

*Envision* replicating the spectrum up to 2/T. Note that there's a hump around 1/T as before. Actually, "replicating the spectrum up to 2/T" did nothing at all except change our point of reference as to which frequency *is* fs; well, now it's 2/T, is all. If we inverse transform the "new" length-2N sequence this can be represented with, we get a zero-filled time sequence whose non-zero samples are exactly the ones we started with. (We note that if we lowpass filter this "new" time sequence, it will be interpolated, thus filling in those alternating zero-valued samples.)

But, instead of doing that inverse transform, let's do the lowpass filtering by multiplying in frequency. This means suppressing the hump around 1/T. After doing that, it looks almost as if we'd padded N zeros, around 1/2T, onto the original sequence of length N ... which is the shorthand way of doing it. To zero-pad in this way is to multiply the length-2N sequence we envisioned above by a perfect brick-wall LPF. Now if we inverse transform, we have an interpolated time sequence of length 2N.

There's nothing assumed about the original content of the frequency period needing to be zero around 1/T - and, in fact, it wasn't. But, as we can see, in order for the interpolation to work it *has* to be *zeroed* around 1/T, by virtue of an operation and not an assumption.

Well, not the most rigorous presentation I'm sure, but I think it makes sense well enough.

Fred
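A minimal numpy sketch of the interpolation Fred describes (not FFTW; the function name, the even-N restriction, the factor-of-two scaling and the Nyquist-bin split are implementation choices added here, not taken from the post): zero-pad the spectrum around fs/2, inverse transform, and the original samples reappear interleaved with the interpolated ones.

import numpy as np

def interp2x(x):
    """2x band-limited interpolation by zero-padding the DFT (N assumed even)."""
    N = len(x)
    X = np.fft.fft(x)
    Xp = np.zeros(2 * N, dtype=complex)
    Xp[:N // 2] = X[:N // 2]                 # positive frequencies
    Xp[-(N // 2) + 1:] = X[-(N // 2) + 1:]   # negative frequencies (excluding Nyquist)
    # split the original Nyquist bin between +fs/2 and -fs/2 to keep the result real
    Xp[N // 2] = 0.5 * X[N // 2]
    Xp[2 * N - N // 2] = 0.5 * X[N // 2]
    return 2.0 * np.fft.ifft(Xp).real        # factor 2 preserves sample amplitudes

x = np.cos(2 * np.pi * 3 * np.arange(16) / 16)
y = interp2x(x)
print(np.allclose(y[::2], x))                # True: the original samples are untouched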
dbd <dbd@ieee.org> wrote:
(snip on resolution and zero-padding)

> is only true in symbolic manipulations and homework problems. In fact, we may use the anti-aliased and sampled (and therefore faulty) data to determine if we have a signal with components that represent some model even when we know that signals that fit the model cannot be accurately represented by the anti-aliased sampled data sets.
This reminds me of the FTIR spectrometer, and some of my thoughts on first learning about it.

A traditional spectrometer passes (approximately) monochromatic light through a sample and measures the intensity coming out (usually as a function of time, with a time-varying input frequency). FTIR passes a time-varying, but not monochromatic, beam through the sample, such that the result is (approximately) the Fourier transform of the absorption spectrum. The input beam is the output of a Michelson interferometer, such that it is pretty much the Fourier transform of a specific time (or length, in spectrometry units). That is, it contains all the wavenumbers (inverse wavelengths) that are multiples of the fundamental wavenumber (inverse length) of the spectrometer.

The resolution of the FTIR spectrometer depends on the maximum retardation (there is a good description in wikipedia, redirected from FTIR), similar to the way that Fourier transform frequency resolution depends on the length (in time) of the input data.

My first thought when hearing about FTIR was wondering if you could get the absorption spectrum in regions past that of the light source. The Fourier transform will go down to zero wavenumber, even though the light source doesn't go that far. Is it obvious that that is related to zero padding?

Anyway, while one could imagine a continuous scanning system, it is usually done with discrete positions, such as a stepper-motor-driven mirror movement system. With discrete positions, the resulting signal is discrete in time (retardation length), and so the FFT is appropriate for the transform.
> To do this for some models we may need to 'resolve' the positions of things like peaks and zero crossings to a finer 'resolution' than allowed by the original sample positions. The OP has given us no context to justify restricting our consideration to any more restrictive model than "anti-aliased and sampled". There are many games we can play if we know more, but you have failed to show any models as required limitations on the application of this group's usage of "interpolation" in response to the OP.
-- glen
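A rough numpy sketch of the FTIR analogy above (every number here is invented for illustration): an interferogram sampled at discrete retardation steps transforms into the spectrum, and the wavenumber bin spacing is the inverse of the maximum retardation scanned, just as DFT frequency resolution is the inverse of the record length.

import numpy as np

d_step = 0.5e-4                      # retardation step in cm (assumed)
N = 1024                             # number of mirror positions (assumed)
delta = np.arange(N) * d_step        # retardation axis; maximum ~0.05 cm
sigma1, sigma2 = 1000.0, 1100.0      # two absorption lines, wavenumbers in 1/cm (assumed)

# idealized two-line interferogram, no apodization or instrument effects
ifg = np.cos(2 * np.pi * sigma1 * delta) + np.cos(2 * np.pi * sigma2 * delta)

spectrum = np.abs(np.fft.rfft(ifg))
wavenumber = np.fft.rfftfreq(N, d=d_step)    # axis in 1/cm

# bin spacing equals 1 / (maximum retardation scanned)
print(wavenumber[1], 1.0 / (N * d_step))     # both ~19.5 1/cm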
Fred Marshall <fmarshall_xremove_the_xs@xacm.org> wrote:
(snip, I wrote)

>> Well, first of all it seems to me that zero padding is only applicable in the case that the source signal is actually zero outside the range of the transform, or that it should be assumed to be zero.
(snip)
> While it has some appeal, I think this gets you on a slippery slope.
> If that property were true then the inverse transform would be infinite. I don't think that's usually what we deal with unless we're back to continuous infinite extent Fourier Transforms.
> While some argue on this point, I still find it useful to consider DFT pairs to represent periodic sequences on one period each.
Yes, I like the periodic interpretation. In that case, there is nothing to pad. Only discrete frequencies exist.
> So, if you start with a sequence and zero-pad it, you increase the apparent period length. Then, the DFT of the result has a shorter sample interval and retains its original periodicity.
Note that most car CD players will go back to track one at the end, thus generating a periodic (in time) result.

Now, consider a CD recording in a concert hall. There might be another concert in the same hall following the recording session, such that the microphone input at later times is not zero. Zero padding gives the answer under the assumption that there is no following concert.

Considering the question of frequency bin resolution, one has to wonder if it makes sense to ask about resolution finer than the inverse length of the recording. If, for example, the signal goes right up to the end of the recording, then the spectrum is very sensitive to the ending point. In another case, though, it might work.

Consider a Gaussian-envelope wave packet, that is, exp(-A*t**2)*cos(omega*t) for appropriate A and omega, with A much smaller than omega**2. Now the signal goes to zero in such a way that it should be assumed to be zero past the recorded data points. For this case, given sufficient signal/noise, I can see that one could determine omega using an interpolation method. Still, there is some uncertainty in the frequency, and one might get a different result for a Lorentzian envelope, for example.

(snip)

-- glen
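The Gaussian wave-packet case is easy to try numerically. In this sketch (the sample rate, A and omega are all assumed values, not from the post), the envelope dies out well inside the record, and zero-padding the FFT before picking the peak locates the frequency far more finely than the raw bin spacing of fs/N.

import numpy as np

fs = 1000.0                            # sample rate in Hz (assumed)
N = 256
t = (np.arange(N) - N / 2) / fs        # center the packet in the record
A, f0 = 1.0e4, 123.4                   # envelope decay and true frequency (assumed)
x = np.exp(-A * t**2) * np.cos(2 * np.pi * f0 * t)   # A << (2*pi*f0)**2

M = 32 * N                             # heavy zero padding
X = np.abs(np.fft.rfft(x, M))
f = np.fft.rfftfreq(M, d=1.0 / fs)

# raw bin spacing is fs/N ~ 3.9 Hz; the padded peak lands within ~0.1 Hz of f0
print(f[np.argmax(X)])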
On Dec 13, 2:45 pm, Fred Marshall <fmarshall_xremove_the...@xacm.org> wrote:
> On 12/13/2010 12:35 PM, glen herrmannsfeldt wrote:
> > Well, first of all it seems to me that zero padding is only applicable in the case that the source signal is actually zero outside the range of the transform, or that it should be assumed to be zero.
> >
> > -- glen
>
> Glen,
>
> While it has some appeal, I think this gets you on a slippery slope.
>
> If that property were true then the inverse transform would be infinite. I don't think that's usually what we deal with unless we're back to continuous infinite extent Fourier Transforms.
>
> While some argue on this point, I still find it useful to consider DFT pairs to represent periodic sequences on one period each.
> ...
>
> Fred
There you are, coming around with your old circular argument again, Fred :).

If you guys want to kick off another periodic-extension/aperiodic discussion, go ahead, but don't blame it on the fft of a zero-extended windowed sequence. No part of that calculation makes use of any information outside of the original window on the original samples. No output of that calculation can tell you anything about what lies outside of the window on the original samples. That's the bad news. The good news is that the fft of the zero-filled windowed sequence can still be interpreted either way, as appropriate to your assumption in subsequent processing.

Have fun!

Dale B. Dalrymple
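Dale's point that no information from outside the window enters the calculation can be checked directly (the lengths and the random test signal here are arbitrary choices for illustration): the FFT of the zero-extended sequence is exactly the DTFT of the original N samples evaluated on a denser frequency grid.

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(32)            # any windowed sequence will do
N, M = len(x), 4 * len(x)              # zero-extend by a factor of 4

X_padded = np.fft.fft(x, M)            # fft zero-pads x out to length M

# DTFT of the original N samples, sampled at the same M frequencies
n = np.arange(N)
w = 2 * np.pi * np.arange(M) / M
X_dtft = np.array([np.sum(x * np.exp(-1j * wk * n)) for wk in w])

print(np.allclose(X_padded, X_dtft))   # True: only the N original samples matter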
On Mon, 13 Dec 2010 17:53:28 -0800 (PST), dbd <dbd@ieee.org> wrote:

>(snip: Fred's post, quoted in full above)
>
>If you guys want to kick off another periodic-extension/aperiodic discussion, go ahead, but don't blame it on the fft of a zero-extended windowed sequence. No part of that calculation makes use of any information outside of the original window on the original samples. No output of that calculation can tell you anything about what lies outside of the window on the original samples. That's the bad news. The good news is that the fft of the zero-filled windowed sequence can still be interpreted either way, as appropriate to your assumption in subsequent processing.
>
>Dale B. Dalrymple
FWIW, that reminds me a lot of the discussion from eons ago about whether the FFT has an "inherent" rectangular window. I argued that it does, and the zero-padding behavior tends, I think, to lend credence to that. Zero padding can be thought of as applying a rectangular window that's shorter than the length of the longest period of the basis functions, and the behavior wrt the output sinx/x bin response is consistent as the window adjusts from a short window up to the full length of the basis functions.

As you're suggesting, this also has implications for the interpretation of the periodic/aperiodic nature of the transform.

So, yeah, this can get a lot of discussion going, and I think it's interesting that there are multiple points of view that fully explain the behavior. It's easy to think that one or another point of view might be better than another, but in the end it seems like whichever you want to use will get you there.

Eric Jacobsen
Minister of Algorithms
Abineau Communications
http://www.abineau.com
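The "inherent rectangular window" reading is easy to see numerically. In this small sketch (N, M and the phase bookkeeping are choices made here, not from the post), zero-padding an all-ones window of length N out to M points traces the Dirichlet kernel, the aliased sinx/x bin response being discussed.

import numpy as np

N, M = 16, 256                         # window length and padded transform length
w = np.ones(N)                         # the implicit rectangular window
W = np.fft.fft(w, M)                   # zero-padded transform of the window

k = np.arange(1, M)                    # skip k = 0 to avoid 0/0 in the formula
dirichlet = np.sin(np.pi * k * N / M) / np.sin(np.pi * k / M)
# linear phase term because the window starts at n = 0 rather than being centered
expected = dirichlet * np.exp(-1j * np.pi * k * (N - 1) / M)

print(np.allclose(W[1:], expected), W[0].real)   # True 16.0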
On Dec 13, 9:02 pm, dbd <d...@ieee.org> wrote:
> On Dec 13, 10:36 am, illywhacker <illywac...@gmail.com> wrote:
> > ...
> > Consider the possibility that you might simply be understanding badly. Dale told me 'The only "model" that is relevant to the situation is whether the signal of interest survived the anti-aliasing filtering required before the original sampling'. This is false. (Well, actually, it is imprecise ...
>
> Yes it is imprecise, and intentionally so. In real world data analysis, we expect that our samples contain components from unknown models and even from models that cannot be accurately represented by a finite set of samples.
<snip>
> we may use the anti-aliased and sampled (and therefore faulty) data to determine if we have a signal with components that represent some model even when we know that signals that fit the model cannot be accurately represented by the anti-aliased sampled data sets.
Of course. I have been saying this throughout. Superresolution is an example.
> To do this for some models we may need to 'resolve' the positions of things like peaks and zero crossings to a finer 'resolution' than allowed by the original sample positions.
It is very simple. Either the original sample positions uniquely encode the signal of interest or they do not. If they do, then computing further samples is trivial. If they do not (i.e. more than one signal of interest could have produced the given samples), then further information is needed to distinguish between these possibilities. This further information is a model. Which part of this do you not agree with?

illywhacker;
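A minimal numpy sketch of the "computing further samples is trivial" case (the function name and the assumption of an empty Nyquist bin are illustrative choices): the DFT coefficients define a trigonometric polynomial that can be evaluated at any fractional sample index, which is the band-limited periodic interpolant the samples already pin down.

import numpy as np

def sample_between(x, tau):
    """Evaluate the band-limited periodic interpolant of x at fractional index tau.

    Assumes no energy in the Nyquist bin when len(x) is even.
    """
    N = len(x)
    X = np.fft.fft(x) / N
    k = np.fft.fftfreq(N, d=1.0 / N)           # signed bin indices
    return np.real(np.sum(X * np.exp(2j * np.pi * k * tau / N)))

x = np.cos(2 * np.pi * 3 * np.arange(16) / 16)
print(sample_between(x, 4.5), np.cos(2 * np.pi * 3 * 4.5 / 16))   # the two agree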
On Dec 13, 4:06 pm, eric.jacob...@ieee.org (Eric Jacobsen) wrote:
> On Mon, 13 Dec 2010 05:53:36 +0000 (UTC), glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
> > dbd <d...@ieee.org> wrote:
> > (snip)
> > > The OP asked for help in correctly performing an interpolation by fft, zero extending in the transformed domain and ifft.
> > (snip)
> > > Interpolation is a common tactic used to more accurately resolve the location of features such as a peak or a zero crossing in a sampled signal. In English, the ability "to more accurately resolve" is correctly referred to as "increased resolution". This definition of resolution is consistent with the OP's question, algorithm and vocabulary and with 1) and 2).
> >
> > In optics, as I understand it, resolution comes from the ability to distinguish between two objects in an image, in place of one larger object. The wikipedia page Optical_resolution has much of the explanation.
> >
> > If you are looking through a telescope, you would like to know if you see two stars that are close together, or just one. In most cases, interpolation doesn't help. Note that the actual image is, as seen from earth, pretty much a point source. Diffraction broadens that into an Airy pattern, which is pretty much the circular version of the sinc.
> >
> > High quality lenses are diffraction limited, such that diffraction effects limit the possible resolution. For lower quality lenses, the lens itself limits the resolution.
> >
> > Interpolation doesn't increase the resolution. It may allow one to see what the shape of the expected sinc or Airy disk is, but that is all.
> >
> > Linear deconvolution also doesn't do much to improve resolution.
> >
> > Non-linear deconvolution uses other properties of the system to improve resolution, but that takes more than interpolation. For example, absorption spectra can't go below zero or above one. Taking those constraints into account, and accurately knowing the transfer function of the system, allows one to do better than one might otherwise expect.
> >
> > -- glen
>
> Similar definitions of "resolution" exist in signal processing. When I worked on radar processing systems, the ability to resolve two point targets, similar to your optical example of resolving two stars, was the pertinent example. The idea there, which I've seen described fairly consistently for other applications, was that two sinx/x point responses had to be able to be separated to about their 3dB points in order to detect and "resolve" them separately.
>
> So that's one definition of "resolution" that seems to be fairly consistently applied. In this context, interpolating additional points does nothing to improve the resolution, which I think is why that's usually the case that's made.
>
> But there are other things that can be "resolved" from a signal, and as Darrell alluded the locations of peaks (e.g., accurately estimating frequency in a DFT output) or zero crossings can often be more accurately located using interpolation.
>
> So in that sense increasing the "resolution" by interpolation is certainly possible, but it's not the usual way the term is used. So, as usual, it is important to know what is meant when "resolution" is used in a discussion.
>
> Eric Jacobsen
> Minister of Algorithms
> Abineau Communications
> http://www.abineau.com
Fascinating discussion. Better if we can keep mutual irritation out of it.

I think the fascinating aspect is that we seem to have two differing definitions of 'information' here. On one side we have a valid statement that if we have only a set of samples that uniquely determine a signal, then no amount of interpolation, reconstruction or change of basis can add new information. And on the other side, an equally valid statement that interpolation, reconstruction or change of basis do add new information. However, the 'information' is different in each case: in the former, it is information as in 'new knowledge', while in the latter it is information as in "I can see it better" (or "my algorithm can see it better").

This is similar to a coded message with the decoding key provided - the message is there and no new information is added by decoding it, but in another sense a lot of (a different kind of) information is added by decoding (because then we can read the message). Whereas if the decoding key were not provided along with the message, but was a priori knowledge, then the key is the added information that allows us to go back to the unencoded message. (Actually, that is a bad analogy..)

Isn't one valid key point being made here, that if we have a set of samples that uniquely determine the signal, then no further information is added by any interpolation? The purpose of 'reconstruction' (ie changing the basis, or if you like, calculating another set of samples that also uniquely determines the signal) is 'only' to yield a set of samples that are convenient as input to any given further view or processing (or 'algorithm' if you prefer).

And isn't a second valid key point that in the case where the signal has been 'anti-aliased' to ensure that the first set of samples do uniquely determine it, then if we have additional knowledge of the signal prior to anti-aliasing (such as, that it is isolated point sources), then that a priori knowledge is added information and may be used to guide further analysis or processing?

There is a clear distinction between additional information (eg "I know it is point sources") and 'interpolation' or 'reconstruction', which do not add information (though they may aid visualization or further processing).

Chris

=============================
Chris Bore
BORES Signal Processing
www.bores.com
On 12/14/2010 05:13 AM, Chris Bore wrote:
> [...]
> Isn't one valid key point being made here, that if we have a set of samples that uniquely determine the signal, then no further information is added by any interpolation? The purpose of 'reconstruction' (ie changing the basis, or if you like, calculating another set of samples that also uniquely determines the signal) is 'only' to yield a set of samples that are convenient as input to any given further view or processing (or 'algorithm' if you prefer).
>
> And isn't a second valid key point that in the case where the signal has been 'anti-aliased' to ensure that the first set of samples do uniquely determine it, then if we have additional knowledge of the signal prior to anti-aliasing (such as, that it is isolated point sources), then that a priori knowledge is added information and may be used to guide further analysis or processing?
>
> There is a clear distinction between additional information (eg "I know it is point sources") and 'interpolation' or 'reconstruction', which do not add information (though they may aid visualization or further processing).
An excellent summary, IMO. Thank you, Chris!

--
Randy Yates                       % "My Shangri-la has gone away, fading like
Digital Signal Labs               %  the Beatles on 'Hey Jude'"
mailto://yates@ieee.org           %
http://www.digitalsignallabs.com  % 'Shangri-La', *A New World Record*, ELO