
Windowing in the Frequency Domain

Started by OldUncleSilas April 4, 2009
On 16 Apr, 17:12, c...@claysturner.com wrote:
> > Ouch... I am not aware of a single application where
> > windows are applied in frequency domain...
>
> Hello Rune,
>
> Sometimes one defines a signal via a windowed frequency domain
> function.

Sure. That's an obvious corollary to the idea that time-domain windowing for PSD estimation is an efficient way to implement frequency-domain weighted averages. Both Kay and Papoulis, two of the 'heavy' writers on statistical DSP, explain that windowing in the time domain is a means to reduce the influence of the outer coefficients of the autocorrelation sequence, which have high variance.

The difference might seem merely one of semantics, but the 'standard' texts on statistical DSP present lobe widths and side lobe levels as very big problems. They are, if you restrict attention only to the 'classical' window functions. But as you point out, one can just as well design the desired frequency response and compute the corresponding time-domain coefficients.

But that's *design*. I still haven't seen any reason to do the *computations* in the frequency domain.

Rune
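[For concreteness, the equivalence behind this exchange can be shown in a few lines of numpy; the signal, length, and library choice are illustrative assumptions, not from the thread. Multiplying by the period-N Hann window in the time domain is exactly a circular convolution of the DFT with the 3-point kernel [-1/4, 1/2, -1/4]:]

    import numpy as np

    N = 64
    rng = np.random.default_rng(0)
    x = rng.standard_normal(N)

    # Period-N ("periodic") Hann window: w[n] = 0.5 - 0.5*cos(2*pi*n/N)
    n = np.arange(N)
    w = 0.5 - 0.5 * np.cos(2 * np.pi * n / N)

    # Route 1: window in the time domain, then transform
    Xw_time = np.fft.fft(x * w)

    # Route 2: transform first, then circularly convolve the spectrum
    # with the 3-point kernel [-1/4, 1/2, -1/4]
    X = np.fft.fft(x)
    Xw_freq = 0.5 * X - 0.25 * np.roll(X, 1) - 0.25 * np.roll(X, -1)

    print(np.allclose(Xw_time, Xw_freq))  # True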
On Apr 16, 10:04 am, Rune Allnor <all...@tele.ntnu.no> wrote:
> > ...
> > But that's *design*. I still haven't seen any reason to do
> > the *computations* in frequency domain.
> >
> > Rune

In this thread on April 4, Eric Jacobsen suggested an example. In the last paragraph of my post on that day I suggested three. Why haven't you commented on these? Those who merely read this thread have seen reasons posted.

Dale B. Dalrymple
On 17 Apr, 05:02, dbd <d...@ieee.org> wrote:
> On Apr 16, 10:04 am, Rune Allnor <all...@tele.ntnu.no> wrote:
> > ...
> > But that's *design*. I still haven't seen any reason to do
> > the *computations* in frequency domain.
> >
> > Rune
>
> In this thread on April 4, Eric Jacobsen suggested an example. In the
> last paragraph of my post on that day I suggested three. Why haven't
> you commented on these? Those who merely read this thread have seen
> reasons posted.

Well, I'm probably screwed up by having made a living from *applied* DSP (as opposed to academics or research) for too long.

OK, convolving with a 3-pt kernel saves some computations compared to convolving with an N-pt kernel. My problem is what purpose is served by convolving with the 3-pt kernel at all. 'Save computations' is an obvious answer, but then the follow-up questions become:

- What was the objective of using the N-pt kernel?
- Are these objectives still met after the simplification to use the 3-pt kernel?

I made the effort to explain what uses I know of for the N-pt kernel. I also argued why the 3-pt kernel is at least pedagogically problematic in the context of FIR filter design, and does not serve a useful purpose in the context of PSD estimation.

Again, it's a psychological side effect of my utilitarian approach to DSP that I don't see why a computational trick is nifty if I don't see what useful purpose the computations achieve in the first place. If saving computations is an issue and no one can say what a particular sequence of computations aims to achieve, why do the computations at all? If convolving with the N-pt kernel is too expensive, and the results of using the 3-pt kernel are acceptable, why use the 3-pt kernel at all? Even more computations are saved by just skipping them altogether and using the raw spectrum, right?

The *only* argument against this line of reasoning is that 'using the 3-pt kernel serves some purpose other than just saving computations compared to the N-pt kernel'. It is this purpose I am trying to understand.

Rune
Rune Allnor wrote:

> - What was the objective of using the N-pt kernel?

Since the OP is working with an incoming signal's spectrum, we can presume it is to improve the spectrum estimation. No need to presume in Richard Dobson's case: he told you that is his objective too, and that he's achieving it.
> - Are these objectives still met after the simplification
>   to use the 3-pt kernel?
>
> I made the effort to explain what uses I know of for the
> N-pt kernel. I also argued why the 3-pt kernel is at least
> pedagogically problematic in the context of FIR filter
> design,

Where no one was proposing its use to begin with. Perhaps you got the impression that harris did -- in fact he gives both symmetric and periodic forms of each window, although his presentation puts the distinction in index ranges rather than algebraic expressions.

> and does not serve a useful purpose in the
> context of PSD estimation.

You did not demonstrate that. Much of your effort arguing would have been better spent on finding out that the period-(N-1) Hann window's transform bins outside the three central points contain less than 0.5% of the total energy at N=8, and that this fraction decreases as O(1/N^2). How significant are those values' contributions to a spectral weighted average going to be for transform sizes that yield useful resolution?

Martin

--
Quidquid latine scriptum est, altum videtur.
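[Martin's figure is straightforward to check numerically. A short numpy sketch, taking bins {0, 1, N-1} as the 'three central points'; the function name and the set of test sizes are illustrative:]

    import numpy as np

    def hann_leakage(N):
        # Fraction of the symmetric (period N-1) Hann window's
        # spectral energy that falls outside the three central DFT bins
        n = np.arange(N)
        w = 0.5 - 0.5 * np.cos(2 * np.pi * n / (N - 1))
        W = np.fft.fft(w)
        total = np.sum(np.abs(W) ** 2)
        central = np.abs(W[0])**2 + np.abs(W[1])**2 + np.abs(W[-1])**2
        return 1.0 - central / total

    for N in (8, 16, 32, 64, 128):
        print(N, hann_leakage(N))   # about 0.47% at N=8, falling with N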

Rune Allnor wrote:
> why use the 3-pt kernel at all? Even more computations are saved
> by just skipping them altogether and using the raw
> spectrum, right?

So you are again saying that one should never use a Hann window? Maybe the OP wants to find out whether your advice is sound or not? Seems like that would be a good reason to use the frequency-domain 3-pt kernel.

You continue to ignore the fact that, because the OP is using a sliding DFT, it is computationally more efficient and easier to modify his algorithm to include a Hann window in the frequency domain than to do it in the time domain. Whether he needs a window at all is something the OP can ponder when he reaches the point where he can make a comparison.

-jim

> The *only* argument against this line of reasoning is
> that 'using the 3-pt kernel serves some purpose other
> than just saving computations compared to the N-pt kernel'.
>
> It is this purpose I am trying to understand.
>
> Rune
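[The sliding-DFT point jim raises can be sketched concretely. A toy numpy example; only the standard update recurrence and the 3-point weighting come from the discussion -- the frame length, signal, and function names are illustrative. Since the per-sample update never revisits the time-domain frame, the cheapest place to apply the Hann window is on the bins themselves:]

    import numpy as np

    def sliding_dft(x, N):
        # Yield the length-N DFT of each sliding frame of x, updated
        # one sample at a time:
        #   X_k <- exp(j*2*pi*k/N) * (X_k + x[m] - x[m-N])
        twiddle = np.exp(2j * np.pi * np.arange(N) / N)
        X = np.fft.fft(x[:N])
        yield X.copy()
        for m in range(N, len(x)):
            X = twiddle * (X + x[m] - x[m - N])
            yield X.copy()

    def hann_bins(X):
        # Hann window applied in the frequency domain: a circular
        # 3-point convolution, i.e. three multiply-adds per bin
        return 0.5 * X - 0.25 * np.roll(X, 1) - 0.25 * np.roll(X, -1)

    x = np.random.default_rng(0).standard_normal(256)
    for X in sliding_dft(x, N=32):
        Xw = hann_bins(X)   # windowed spectrum, no time-domain window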
On 17 Apr, 13:39, Martin Eisenberg <martin.eisenb...@udo.edu> wrote:
> Rune Allnor wrote:
> > - What was the objective of using the N-pt kernel?
> > ...
> > I also argued why the 3-pt kernel is at least
> > pedagogically problematic in the context of FIR filter
> > design,
>
> Where no one was proposing its use to begin with.
Do you agree with me that the N-period forms are obsolete for FIR filter design? That whatever advantage they might offer over the N-1-period forms is taken care of at least as well by e.g. Parks-McClellan designs? I assume your answer is 'yes', so that we can focus on PSD estimation below.
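[For reference, a Parks-McClellan design is a one-liner in scipy; the filter length and band edges below are illustrative assumptions, not from the thread:]

    from scipy.signal import remez

    # Equiripple lowpass: passband 0-0.2, stopband 0.25-0.5
    # (frequencies normalized to a sample rate of 1.0)
    taps = remez(65, [0.0, 0.2, 0.25, 0.5], [1.0, 0.0], fs=1.0)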
> > and does not serve a useful purpose in the
> > context of PSD estimation.
>
> You did not demonstrate that. Much of your effort arguing would have
> been better spent on finding out that the period-(N-1) Hann window's
> transform bins outside the three central points contain less than
> 0.5% of the total energy at N=8, and that this fraction decreases as
> O(1/N^2). How significant are those values' contributions to a
> spectral weighted average going to be for transform sizes that yield
> useful resolution?

Actually, I have no idea. I just observe that every textbook author - except one - who writes on the subject uses the N-1-period forms of the windows. The one exception is Papoulis, who states the N-period form with the proviso 'If N is large' (Papoulis: "Probability, Random Variables, and Stochastic Processes", 3rd ed, 1991, p. 456). Presumably, since he explicitly mentions large N, he would use some other form for small N.

Having said that - both your comment about relative significance and Papoulis' comment about large N make sense if one is talking about numerical accuracy, e.g. in fixed-point arithmetic. In those cases the difference between the N-period window and the N-1-period window would be lost in arithmetic inaccuracies. That's an argument for the simplification which is straightforward and easy to understand. However, such arguments do not quite require Nobel laureate levels of skill to come up with. Since no one has mentioned these very basic arguments so far in the discussion, it makes me believe they are not what supports selecting the N-period forms. Dale mentioned he wouldn't create confusion by inaccurately paraphrasing arguments, but the arguments above are so simple that they cannot possibly be what he had in mind.

So what is it Harris has seen that no one else has? The paper has been out there for more than 30 years, but it is at best mentioned in passing by authors on spectrum estimation like Kay, and Bendat and Piersol, and not at all by Papoulis. If the paper is as important - except for purely historical reasons - as some seem to think, why aren't the results mentioned in the textbooks?

Rune
On Fri, 17 Apr 2009 00:34:07 -0700 (PDT), Rune Allnor
<allnor@tele.ntnu.no> wrote:

> On 17 Apr, 05:02, dbd <d...@ieee.org> wrote:
> > On Apr 16, 10:04 am, Rune Allnor <all...@tele.ntnu.no> wrote:
> > > ...
> > > But that's *design*. I still haven't seen any reason to do
> > > the *computations* in frequency domain.
> > >
> > > Rune
> >
> > In this thread on April 4, Eric Jacobsen suggested an example. In the
> > last paragraph of my post on that day I suggested three. Why haven't
> > you commented on these? Those who merely read this thread have seen
> > reasons posted.
>
> Well, I'm probably screwed up by having made a living
> from *applied* DSP (as opposed to academics or research)
> for too long.

The example I gave was practical. I learned it doing fast correlations for synthetic aperture radar processing. I haven't followed the entire argument, so I don't know how much the context has shifted, but if you're convolving/correlating a fixed reference function against a frequency-domain vector (i.e., using fast correlation), it's trivial to apply the FD weighting function to the pre-stored reference function. It then costs NO additional computation and you get the weighting function essentially for free.

In the SAR context the weighting function was critical in controlling the target sidelobes and resolution in the output image, which was in the domain after the next transform.

Eric Jacobsen
Minister of Algorithms
Abineau Communications
http://www.ericjacobsen.org
Blog: http://www.dsprelated.com/blogs-1/hf/Eric_Jacobsen.php
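[A minimal numpy sketch of the trick Eric describes, for a single block; the Hann taper and the sizes are illustrative stand-ins -- a real SAR reference function and weighting would be application-specific:]

    import numpy as np

    N = 1024
    rng = np.random.default_rng(1)
    ref = rng.standard_normal(N)   # stand-in for the stored reference
    x = rng.standard_normal(N)     # incoming time-domain block

    # Frequency-domain taper that shapes sidelobes in the correlation
    # output: a Hann taper across the bins, rotated so its peak sits
    # at DC (purely for illustration)
    W = np.fft.fftshift(0.5 - 0.5 * np.cos(2 * np.pi * np.arange(N) / N))

    # One-time setup: fold the taper into the stored reference spectrum
    ref_fd = W * np.conj(np.fft.fft(ref))

    # Per block: the usual fast-correlation multiply and inverse FFT --
    # the weighting adds zero operations at run time
    corr = np.fft.ifft(np.fft.fft(x) * ref_fd)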
On 17 Apr, 18:30, Eric Jacobsen <eric.jacob...@ieee.org> wrote:
> On Fri, 17 Apr 2009 00:34:07 -0700 (PDT), Rune Allnor
> <all...@tele.ntnu.no> wrote:
> > On 17 Apr, 05:02, dbd <d...@ieee.org> wrote:
> > > On Apr 16, 10:04 am, Rune Allnor <all...@tele.ntnu.no> wrote:
> > > > ...
> > > > But that's *design*. I still haven't seen any reason to do
> > > > the *computations* in frequency domain.
> > > >
> > > > Rune
> > >
> > > In this thread on April 4, Eric Jacobsen suggested an example. In the
> > > last paragraph of my post on that day I suggested three. Why haven't
> > > you commented on these? Those who merely read this thread have seen
> > > reasons posted.
> >
> > Well, I'm probably screwed up by having made a living
> > from *applied* DSP (as opposed to academics or research)
> > for too long.
>
> The example I gave was practical. I learned it doing fast
> correlations for synthetic aperture radar processing.
Where did you mention it? I can only find one other post of yours through Google groups: http://groups.google.no/group/comp.dsp/msg/d0280ff1d4149ecd?hl=no
> I haven't followed the entire argument, so I don't know how much the
> context has shifted,

Well, a quick recap of my questions is found here:

http://groups.google.no/group/comp.dsp/msg/98f74d795425172a?hl=no

Virtually all the textbook treatments of window functions that I have found use windows of length N with cosine terms of period N-1. This means that all the coefficients of the DFT of the window function are non-zero. First of all, I showed that there are problems with this approach for certain standard problems involving window functions:

http://groups.google.no/group/comp.dsp/msg/1426fa4aff6a60cd?hl=no

I then proceeded to ask what the purpose is of using the 3-pt frequency-domain form of the kernel, that is, cosine terms with period N, given those problems.
> but if you're convolving/correlating a fixed
> reference function against a frequency-domain vector (i.e., using fast
> correlation), it's trivial to apply the FD weighting function to the
> pre-stored reference function. It then costs NO additional
> computation and you get the weighting function essentially for free.
Everything is pre-computed and stored in, what, spatial domain?
> In the SAR context the weighting function was critical in controlling
> the target sidelobes and resolution in the output image, which was in
> the domain after the next transform.

I am sure it was. What was the decider? The type of window (Hamming, Hann, Blackman, ...) or the size of the FD kernel for a given type of window? In other words, did you compare the results from the 3-pt kernel with those from no kernel and from the N-pt kernel, for a given type of window?

Rune