DSPRelated.com
Forums

what is lag-domain processing?

Started by stereo April 21, 2006
Hello everyone,

reading one of the classic papers about windowing, Nuttall's "Some
Windows with Very Good Sidelobe Behaviour", I struggle with the
sentence on page 86: "When the weighting is applied instead in the lag
domain,...,rather than in the time domain...".

Could anyone tell me what is meant by the term "lag domain", since I
have no clue and can't find any enlightening explanation.

stereo

stereo skrev:
> Hello everyone,
>
> reading one of the classic papers about windowing, Nuttall's "Some
> Windows with Very Good Sidelobe Behaviour", I struggle with the
> sentence on page 86: "When the weighting is applied instead in the lag
> domain,...,rather than in the time domain...".
>
> Could anyone tell me what is meant by the term "lag domain", since I
> have no clue and can't find any enlightening explanation.
You left out the crucial clue to what he means: "...as in
Blackman-Tukey spectral analysis,..."

For the spectral analysis, the "proper" way of doing things is:

1) Compute the autocorrelation sequence
2) Apply a window
3) DFT to the spectrum domain

Very often people do things in a slightly different order:

1) Apply a window to the time-domain sequence
2) DFT to the spectrum domain
3) Find the squared magnitude of the spectrum

Note the difference between computing the autocorrelation sequence
first (converting from the t domain to the 'tau' domain) and *then*
applying the window, and applying the window directly in the time
domain before doing anything else.

Rune
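To make the two orderings concrete, here is a minimal NumPy sketch of both estimators. The function names and the nfft parameter are my own choices for illustration, and the Hamming window is just an example; this is not code from Nuttall's paper.

import numpy as np

def blackman_tukey_psd(x, nfft=None):
    """Window applied in the lag domain (the Blackman-Tukey ordering)."""
    N = len(x)
    nfft = nfft or 2 * N - 1
    # 1) biased autocorrelation estimate, lags -(N-1)..(N-1)
    r = np.correlate(x, x, mode='full') / N
    # 2) lag window of length 2N-1 applied to the correlation sequence
    w_lag = np.hamming(2 * N - 1)
    # 3) DFT to the spectrum domain
    return np.abs(np.fft.fft(r * w_lag, nfft))

def windowed_periodogram(x, nfft=None):
    """Window applied directly in the time domain (modified periodogram)."""
    N = len(x)
    nfft = nfft or 2 * N - 1
    xw = x * np.hamming(N)          # 1) taper the data record
    X = np.fft.fft(xw, nfft)        # 2) DFT to the spectrum domain
    return np.abs(X) ** 2 / N       # 3) squared magnitude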
On 2006-04-21 08:00:27 -0300, "Rune Allnor" <allnor@tele.ntnu.no> said:

>
> stereo skrev:
>> Hello everyone,
>>
>> reading one of the classic papers about windowing, Nuttall's "Some
>> Windows with Very Good Sidelobe Behaviour", I struggle with the
>> sentence on page 86: "When the weighting is applied instead in the lag
>> domain,...,rather than in the time domain...".
>>
>> Could anyone tell me what is meant by the term "lag domain", since I
>> have no clue and can't find any enlightening explanation.
>
> You left out the crucial clue to what he means: "...as in
> Blackman-Tukey spectral analysis,..."
>
> For the spectral analysis, the "proper" way of doing things is:
>
> 1) Compute the autocorrelation sequence
> 2) Apply a window
> 3) DFT to the spectrum domain
>
> Very often people do things in a slightly different order:
>
> 1) Apply a window to the time-domain sequence
> 2) DFT to the spectrum domain
> 3) Find the squared magnitude of the spectrum
>
> Note the difference between computing the autocorrelation sequence
> first (converting from the t domain to the 'tau' domain) and *then*
> applying the window, and applying the window directly in the time
> domain before doing anything else.
>
> Rune
And then some folks also average across the spectrum, or multiply the
autocovariance by weights, since the spectrum you are implicitly
suggesting is a reduced-leakage periodogram whose estimates still have
only two degrees of freedom. They are rather variable. The whole point
of the estimators introduced by Tukey, and independently by Bartlett,
is to gain stability of the spectrum estimates. That stability comes
at the price of bias, which is then addressed by the choice of
spectral window. The slightly different order is a major change in
terms of estimator stability.
I think it means log domain.

-- 
Jon Harris
SPAM blocker in place:
Remove 99 (but leave 7) to reply

"stereo" <leben.in.stereo@googlemail.com> wrote in message 
news:1145612938.652392.284120@g10g2000cwb.googlegroups.com...
> Hello everyone,
>
> reading one of the classic papers about windowing, Nuttall's "Some
> Windows with Very Good Sidelobe Behaviour", I struggle with the
> sentence on page 86: "When the weighting is applied instead in the lag
> domain,...,rather than in the time domain...".
>
> Could anyone tell me what is meant by the term "lag domain", since I
> have no clue and can't find any enlightening explanation.
>
> stereo
stereo wrote:
>
> Hello everyone,
>
> reading one of the classic papers about windowing, Nuttall's "Some
> Windows with Very Good Sidelobe Behaviour", I struggle with the
> sentence on page 86: "When the weighting is applied instead in the lag
> domain,...,rather than in the time domain...".
>
> Could anyone tell me what is meant by the term "lag domain", since I
> have no clue and can't find any enlightening explanation.
I'd never heard of the term, but plugging "lag domain" into Google
results in a number of hits, including this:

http://www.drao-ofr.hia-iha.nrc-cnrc.gc.ca/science/vlbi/principles/principles.shtml#lcci

Erik
-- 
+-----------------------------------------------------------+
  Erik de Castro Lopo
+-----------------------------------------------------------+
"We are shut up in schools and college recitation rooms for ten or
fifteen years, and come out at last with a belly full of words and do
not know a thing." -- Ralph Waldo Emerson
Hello again to everyone,

thank you all for the time you have taken, and for your answers to my
question. Combining especially Rune's and Erik's answers, I would
conclude that "lag domain" is a term for the "correlation domain", in
my case for the autocorrelation of one signal, but also used for
cross-correlations like the one in the example on the webpage cited by
Erik.

On the other hand, what I don't understand is the difference Rune
mentioned in his posting: "Note the difference between computing the
autocorrelation sequence first (converting from the t domain to the
'tau' domain) and *then* applying the window, and applying the window
directly in the time domain before doing anything else." I always
thought the two ways you provided would be interchangeable, giving the
same results...?
And what is wrong with averaging power spectra (Gordon: "And then some
folks also average across the spectrum, or multiply the autocovariance
by weights..."), isn't that also what Welch's method does? Or do I mix
things up? Well, it seems so, but maybe one of you might enlighten a
still-DSP-beginner...?

Thanks again and in advance...

stereo

stereo skrev:

> On the other hand, what I don't understand is the difference Rune
> mentioned in his posting: "Note the difference between computing the
> autocorrelation sequence first (converting from the t domain to the
> 'tau' domain) and *then* applying the window, and applying the window
> directly in the time domain before doing anything else." I always
> thought the two ways you provided would be interchangeable, giving
> the same results...?
No, they don't. If you start with a sequence x of length N in the time
domain and apply, say, a Hamming window first, you apply a Hamming
window of length N. When you compute the autocorrelation of this
windowed sequence, say x', the Hamming window appears twice:

Rx'x'(k) = E[x'(n)x'(n+k)] = E[x(n)w(n)x(n+k)w(n+k)]
Sx'x'    = FT{ E[x(n)w(n)x(n+k)w(n+k)] }

whereas the "proper" way is

Rxx(k) = E[x(n)x(n+k)]
Sxx    = FT{ E[x(n)x(n+k)] W(k) }

where W(k) is a Hamming window of length 2N-1, applied over the lags k.

There is a big difference. The lobes of the length-N window w(n) are
practically twice as wide as those of the length-(2N-1) window W(k),
and in the spectrum domain the transform of w(n) enters the
convolution twice, smudging spectral features even more.

Rune
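One way to see this numerically: in expectation, the lag window that the time-domain-windowed route effectively applies is the autocorrelation of w(n), whose transform is |W(f)|^2. A small NumPy check (my own sketch, not from the thread; N and the zero-padding length are arbitrary):

import numpy as np

N = 64
w = np.hamming(N)

# effective lag window of the time-domain-windowed periodogram:
# the (deterministic) autocorrelation of w, length 2N-1
w_eff = np.correlate(w, w, mode='full')

# lag window used by the Blackman-Tukey route, also length 2N-1
w_lag = np.hamming(2 * N - 1)

W_eff = np.abs(np.fft.fft(w_eff, 4096))   # equals |W(f)|^2 of the length-N window
W_lag = np.abs(np.fft.fft(w_lag, 4096))
# The lobes of W_eff are roughly twice as wide as those of W_lag,
# which is the extra "smudging" of spectral features Rune describes.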
On 2006-04-24 06:49:50 -0300, "stereo" <leben.in.stereo@googlemail.com> said:

> Hello again to everyone,
>
> thank you all for the time you have taken, and for your answers to my
> question. Combining especially Rune's and Erik's answers, I would
> conclude that "lag domain" is a term for the "correlation domain", in
> my case for the autocorrelation of one signal, but also used for
> cross-correlations like the one in the example on the webpage cited by
> Erik.
>
> On the other hand, what I don't understand is the difference Rune
> mentioned in his posting: "Note the difference between computing the
> autocorrelation sequence first (converting from the t domain to the
> 'tau' domain) and *then* applying the window, and applying the window
> directly in the time domain before doing anything else." I always
> thought the two ways you provided would be interchangeable, giving the
> same results...?
> And what is wrong with averaging power spectra (Gordon: "And then some
> folks also average across the spectrum, or multiply the autocovariance
> by weights..."), isn't that also what Welch's method does?
If it is the same Welch I knew from IBM Watson, then the answer is
that the averaging of the short-segment periodograms tosses
statistical efficiency for the purpose of keeping the required memory
size down. That was done at a time when 32K was a big memory for a
mainframe. Now 32M is a small memory for a video board.
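For reference, the segment-averaging idea can be sketched in a few lines of NumPy. The segment length, 50% overlap and Hamming taper below are arbitrary example choices, not a prescription; scipy.signal.welch implements the same idea with proper scaling options.

import numpy as np

def welch_psd(x, seg_len=256, nfft=None):
    """Average the windowed periodograms of overlapping segments."""
    nfft = nfft or seg_len
    w = np.hamming(seg_len)
    step = seg_len // 2                      # 50% overlap
    segs = [x[i:i + seg_len] for i in range(0, len(x) - seg_len + 1, step)]
    # each single periodogram is highly variable; averaging many of
    # them trades frequency resolution for a stable estimate
    P = [np.abs(np.fft.fft(s * w, nfft)) ** 2 / np.sum(w ** 2) for s in segs]
    return np.mean(P, axis=0)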
> Or do I mix things up? Well, it seems so, but maybe one of you might
> enlighten a still-DSP-beginner...?
If you do not average across the spectrum you will have highly
variable estimates. The computation scheme in which you just apply
some window to the original time sequence is a minor variation on the
Schuster periodogram. That is what was suggested as being equivalent
to the "indirect" method of Bartlett and Tukey. It is not! To get the
spectral averaging you need either to explicitly average across the
spectrum, which is usually done with a convolution of some sort, or to
multiply the autocovariance by a window in the lag domain. The two are
equivalent, and it is a matter of convenience and timing.

The autocovariance is a function of "time", but it is not the same
"time" as the original sequence. The time argument of the
autocovariance is often called the lag, so the autocovariance lives in
the lag domain. For consistency you would have to call the time domain
the sequence domain if you wanted to use "covariance domain". Name
them after the functions or name them after the function arguments,
but do not mix them, would be a good rule.

Historically the Schuster periodogram led to all sorts of problems in
the search for hidden periodicities, as it is unstable. It is an
interesting thing statistically, as it is not a consistent estimator.
Consistent estimators settle down and estimate the parameters ever
more closely as the sample size goes up. The Schuster periodogram
stays variable. It was only when Bartlett and Tukey, independently,
realized that there had to be spectral averaging that this got cured.
Initially the spectral averaging was done by truncating the
autocovariance. That was not a good spectral averaging scheme, so
better lag windows were soon in use. With the advent of the FFT the
full autocovariance became cheap, and then folks decided to lower the
cost even more by reverting to the periodogram. Bad statistical
reasoning. Some of it might have been justified by not being in the
same estimation situation as the original problem statement, but a lot
was not.
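The equivalence of the two routes (weight the autocovariance in the lag domain, or smooth the raw spectrum by convolving it with the window's transform) is just the convolution theorem, and is easy to check numerically. A sketch, using white noise and a Hamming lag window purely as examples:

import numpy as np

rng = np.random.default_rng(0)
N = 128
x = rng.standard_normal(N)

r = np.correlate(x, x, mode='full') / N   # autocovariance, lags -(N-1)..(N-1)
w_lag = np.hamming(2 * N - 1)             # lag window
M = 2 * N - 1

# Route 1: weight in the lag domain, then transform
S1 = np.fft.fft(r * w_lag, M)

# Route 2: transform first, then smooth across the spectrum by
# circularly convolving with the window's transform (scaled by 1/M)
R = np.fft.fft(r, M)
W = np.fft.fft(w_lag, M)
S2 = np.fft.ifft(np.fft.fft(R) * np.fft.fft(W)) / M

assert np.allclose(S1, S2)   # the two routes agree to machine precision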
> Thanks again and in advance...
>
> stereo
On 2006-04-24 07:14:16 -0300, "Rune Allnor" <allnor@tele.ntnu.no> said:

>
> stereo skrev:
>
>> On the other hand, what I don't understand is the difference Rune
>> mentioned in his posting: "Note the difference between computing the
>> autocorrelation sequence first (converting from the t domain to the
>> 'tau' domain) and *then* applying the window, and applying the window
>> directly in the time domain before doing anything else." I always
>> thought the two ways you provided would be interchangeable, giving
>> the same results...?
>
> No, they don't. If you start with a sequence x of length N in the time
> domain and apply, say, a Hamming window first, you apply a Hamming
> window of length N. When you compute the autocorrelation of this
> windowed sequence, say x', the Hamming window appears twice:
>
> Rx'x'(k) = E[x'(n)x'(n+k)] = E[x(n)w(n)x(n+k)w(n+k)]
> Sx'x'    = FT{ E[x(n)w(n)x(n+k)w(n+k)] }
>
> whereas the "proper" way is
>
> Rxx(k) = E[x(n)x(n+k)]
> Sxx    = FT{ E[x(n)x(n+k)] W(k) }
>
> where W(k) is a Hamming window of length 2N-1, applied over the lags k.
>
> There is a big difference. The lobes of the length-N window w(n) are
> practically twice as wide as those of the length-(2N-1) window W(k),
> and in the spectrum domain the transform of w(n) enters the
> convolution twice, smudging spectral features even more.
>
> Rune
And as N -> large you still do not get any lowering of the
variability, so it is not a consistent estimator. Somewhere you need
to have a broader spectral averaging window. The original tapering of
the time sequence helps lower the leakage. Often it would be a cosine
taper applied to the first and last 10%, or so, of the record. The
spectral window will typically correspond to truncating and weighting
the autocovariance.
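The end taper described here is what SciPy calls a Tukey (tapered-cosine) window; its alpha parameter is the total fraction of the record that is tapered, so alpha = 0.2 corresponds to a cosine ramp over roughly 10% at each end. A sketch (the record length is just an example):

import numpy as np
from scipy.signal import windows

N = 1000
taper = windows.tukey(N, alpha=0.2)   # flat in the middle, cosine ramps at each end
# x_tapered = x * taper               # applied to the raw record before the
#                                     # autocovariance / DFT step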