```On 1 Aug, 13:22, Rune Allnor <all...@tele.ntnu.no> wrote:

> If you want to use windows, it depends a bit on how
> you interpret the purpose and function of the window
> functions. Some people - me not among them - claim
> that it suffices to do a simple weighted average of
> two or five spectrum coefficients to obtain the
> desired (non-descript) effects from windows.
>
> My view is that the purpose of the windows is to
> smooth the spectrum. To achieve the smoothing, one
> needs to average more than three or five spectrum
> coefficients, so the procedure goes something like
>
> X = fft(x,2*length(x)-1);
> Rxx = fft(w.*ifft(X.*conj(X)));
>
> where w is the window of choice, of length 2*length(x)-1.

You can actually compute the weighted autocorrelation
sequences with 2 DFTs and 3 IDFTs.

Instead of using 2N-1 -length DFTs (N = length(x)),
use 2N-length DFTs and compute the 3pt or 5pt averages
in the spectrum domain. Since the 2N-1 -length Rxx is symmetric,
the spectrum is real-valued. The converse is also true:
If the spectrum is real-valued the time-domain signal
is symmetric.

So the 2N-length spectrum formed as S = X.*conj(X) is
real-valued, and its IDFT is symmetric. But since
s = IDFT(S) is of length 2N, an even number, at
least one time-domain coefficient is forced to be 0.

The net effect of the algorithm

1) X = fft(x,2*length(x));
2) average 3 spectrum coefficients
3) IDFT result

is exactly the same as my algorithm above except that
the resulting time-domain signal is zero-padded by one
sample.

I wouldn't be the least surprised if this is what is
suggested in the infamous Harris paper that was debated
here a few months ago.

Rune
```
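The symmetry argument above is easy to check numerically. Below is a minimal pure-Python sketch; the naive O(N^2) `dft`/`idft` helpers stand in for `fft`/`ifft`, and the test signal `x` is an arbitrary example, not anything from the thread:

```python
import cmath

def dft(x):
    """Naive O(N^2) DFT, standing in for fft()."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Naive O(N^2) inverse DFT, standing in for ifft()."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

x = [1.0, 2.0, -1.0, 0.5]            # arbitrary real test signal, N = 4
N = len(x)
X = dft(x + [0.0] * N)               # zero-pad to 2N, as in fft(x, 2*length(x))
S = [Xk * Xk.conjugate() for Xk in X]    # S = X.*conj(X)

# S is real-valued (imaginary parts are only rounding noise) ...
assert all(abs(Sk.imag) < 1e-9 for Sk in S)

s = idft(S)                          # 2N-point circular autocorrelation
# ... so s is symmetric: s[k] == s[2N-k] for k = 1..N-1 ...
assert all(abs(s[k].real - s[2 * N - k].real) < 1e-9 for k in range(1, N))
# ... and since 2N is even, the coefficient at lag N is forced to 0.
assert abs(s[N]) < 1e-9
```

The last assertion is the "zero-padded by one sample" effect described above: the 2N-point circular autocorrelation of the N-point signal has no data pair at lag N, so that coefficient is identically zero.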
```On Aug 1, 4:22 am, Rune Allnor <all...@tele.ntnu.no> wrote:
> On 1 Aug, 13:09, HardySpicer <gyansor...@gmail.com> wrote:
>
>
>
> > On Aug 1, 4:00 am, Rune Allnor <all...@tele.ntnu.no> wrote:
>
> > > On 1 Aug, 08:32, HardySpicer <gyansor...@gmail.com> wrote:
>
> > > > How do I find the autocorrelation of a multivariable channel with 2
> > > > outputs? The result will be a matrix of correlation functions which
> > > > are a function of lag. Suppose the signals are a vector S={s1 s2}
>
> > > > I can do
>
> > > > R(k+1,lag)=beta*R(k,lag)+(1-beta)*S(k+1)S^T(k+1+lag) where T is
> > > > the transpose of the S vector
>
> > > > and beta is a forgetting factor less than 1. Can this be done using
> > > > FFTs though?
>
> > > If we forget about the beta for now, let's have a look
> > > at the starting point:
>
> > > Your signal x is a vector x(n) =[x1(n) x2(n)].
>
> > > The between-channel correlation is
>
> > > C12(k) = E[x(n)x(n-k)'] = E[ [x1(n) x2(n)]'[x1(n-k) x2(n-k)]]
>
> > > which is (view with fixed-width font)
>
> > >            |  Rx1x1(k)   Cx1x2(k) |
> > > C12(k) =   |                      |
> > >            |  Cx2x1(k)   Rx2x2(k) |
>
> > > Here Rx1x1 is the autocorrelation of x1, and Cx1x2
> > > is the cross correlation between x1 and x2.
>
> > > Since both autocorrelations and cross correlations
> > > can be computed by means of DFTs, you can in
> > > principle get the results you want by computing two
> > > FFTs and three IFFTs.
>
> > > But you mentioned a beta factor.
>
> > > Usually, the beta factor is used in recursive
> > > calculations, along the lines of
>
> > > Rxx_m(k) = beta*x(n)*x(n-k) + (1-beta)*Rxx_(m-1)(k)
>
> > > In this case you have an old estimate for the quantity,
> > > which will be updated by the new piece of information.
> > > The forgetting factor is a simple, cheap - albeit
> > > somewhat crude - way of letting the new samples
> > > have some influence on the estimate as time goes by.
>
> > > In other words, one emphasizes the tendency of the
> > > last samples over the 'academically correct' estimates
> > > from the whole signal.
>
> > > The recursive estimate is also simple to compute in
> > > an online application. However, it might not be simple
> > > to compute that correlation estimate by means of DFTs.
>
> > > Rune
>
> > Thanks for that. Back to your earlier statement. 2 FFTs and 3 IFFTs.
> > Do you mean 2 FFTs for autospectra and 1 for cross-spectrum and
> > therefore 3 to get back to the "time-domain" (since the matrices must
> > be symmetric).
>
> In the quick'n dirty method, you do something like
>
> X = fft(x);
> Y = fft(y);
> Rxx = ifft(X.*conj(X));
> Rxy = ifft(X.*conj(Y));
> Ryy = ifft(Y.*conj(Y));
>
> where you need to pay attention to FFT lengths to
> avoid wrap-around effects.
>
> If you want to use windows, it depends a bit on how
> you interpret the purpose and function of the window
> functions. Some people - me not among them - claim
> that it suffices to do a simple weighted average of
> two or five spectrum coefficients to obtain the
> desired (non-descript) effects from windows.
>
> My view is that the purpose of the windows is to
> smooth the spectrum. To achieve the smoothing, one
> needs to average more than three or five spectrum
> coefficients, so the procedure goes something like
>
> X = fft(x,2*length(x)-1);
> Rxx = fft(w.*ifft(X.*conj(X)));
>
> where w is the window of choice, of length 2*length(x)-1.
>
> Rune

Thanks. That's great.

Hardy
```
```On 1 Aug, 13:09, HardySpicer <gyansor...@gmail.com> wrote:
> On Aug 1, 4:00 am, Rune Allnor <all...@tele.ntnu.no> wrote:
>
>
>
> > On 1 Aug, 08:32, HardySpicer <gyansor...@gmail.com> wrote:
>
> > > How do I find the autocorrelation of a multivariable channel with 2
> > > outputs? The result will be a matrix of correlation functions which
> > > are a function of lag. Suppose the signals are a vector S={s1 s2}
>
> > > I can do
>
> > > R(k+1,lag)=beta*R(k,lag)+(1-beta)*S(k+1)S^T(k+1+lag) where T is
> > > the transpose of the S vector
>
> > > and beta is a forgetting factor less than 1. Can this be done using
> > > FFTs though?
>
> > If we forget about the beta for now, let's have a look
> > at the starting point:
>
> > Your signal x is a vector x(n) =[x1(n) x2(n)].
>
> > The between-channel correlation is
>
> > C12(k) = E[x(n)x(n-k)'] = E[ [x1(n) x2(n)]'[x1(n-k) x2(n-k)]]
>
> > which is (view with fixed-width font)
>
> >            |  Rx1x1(k)   Cx1x2(k) |
> > C12(k) =   |                      |
> >            |  Cx2x1(k)   Rx2x2(k) |
>
> > Here Rx1x1 is the autocorrelation of x1, and Cx1x2
> > is the cross correlation between x1 and x2.
>
> > Since both autocorrelations and cross correlations
> > can be computed by means of DFTs, you can in
> > principle get the results you want by computing two
> > FFTs and three IFFTs.
>
> > But you mentioned a beta factor.
>
> > Usually, the beta factor is used in recursive
> > calculations, along the lines of
>
> > Rxx_m(k) = beta*x(n)*x(n-k) + (1-beta)*Rxx_(m-1)(k)
>
> > In this case you have an old estimate for the quantity,
> > which will be updated by the new piece of information.
> > The forgetting factor is a simple, cheap - albeit
> > somewhat crude - way of letting the new samples
> > have some influence on the estimate as time goes by.
>
> > In other words, one emphasizes the tendency of the
> > last samples over the 'academically correct' estimates
> > from the whole signal.
>
> > The recursive estimate is also simple to compute in
> > an online application. However, it might not be simple
> > to compute that correlation estimate by means of DFTs.
>
> > Rune
>
> Thanks for that. Back to your earlier statement. 2 FFTs and 3 IFFTs.
> Do you mean 2 FFTs for autospectra and 1 for cross-spectrum and
> therefore 3 to get back to the "time-domain" (since the matrices must
> be symmetric).

In the quick'n dirty method, you do something like

X = fft(x);
Y = fft(y);
Rxx = ifft(X.*conj(X));
Rxy = ifft(X.*conj(Y));
Ryy = ifft(Y.*conj(Y));

where you need to pay attention to FFT lengths to
avoid wrap-around effects.

If you want to use windows, it depends a bit on how
you interpret the purpose and function of the window
functions. Some people - me not among them - claim
that it suffices to do a simple weighted average of
two or five spectrum coefficients to obtain the
desired (non-descript) effects from windows.

My view is that the purpose of the windows is to
smooth the spectrum. To achieve the smoothing, one
needs to average more than three or five spectrum
coefficients, so the procedure goes something like

X = fft(x,2*length(x)-1);
Rxx = fft(w.*ifft(X.*conj(X)));

where w is the window of choice, of length 2*length(x)-1.

Rune
```
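The quick'n'dirty recipe above can be sanity-checked against a direct lag-sum. This is a pure-Python sketch: the naive `dft`/`idft` helpers stand in for `fft`/`ifft`, the signals `x` and `y` are arbitrary examples, and both are zero-padded to length 2N-1 to avoid the wrap-around effects mentioned above:

```python
import cmath

def dft(x):
    """Naive O(N^2) DFT, standing in for fft()."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Naive O(N^2) inverse DFT, standing in for ifft()."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

x = [1.0, -2.0, 3.0]                 # arbitrary example signals
y = [0.5, 1.0, -1.0]
N = len(x)
L = 2 * N - 1                        # pad to >= 2N-1 to avoid wrap-around

X = dft(x + [0.0] * (L - N))         # X = fft(x, 2*N-1)
Y = dft(y + [0.0] * (L - N))
Rxy = idft([Xk * Yk.conjugate() for Xk, Yk in zip(X, Y)])

def direct(k):
    """Direct cross-correlation sum at non-negative lag k."""
    return sum(x[n + k] * y[n] for n in range(N - k))

# Rxy[k] holds the lag-k cross-correlation for k = 0..N-1
# (negative lags wrap to the top of the vector).
assert all(abs(Rxy[k].real - direct(k)) < 1e-9 for k in range(N))
```

The same pattern with `X.*conj(X)` and `Y.*conj(Y)` gives Rxx and Ryy, which is the "2 FFTs and 3 IFFTs" count: transform x and y once each, then invert the three spectral products.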
```On Aug 1, 4:00 am, Rune Allnor <all...@tele.ntnu.no> wrote:
> On 1 Aug, 08:32, HardySpicer <gyansor...@gmail.com> wrote:
>
> > How do I find the autocorrelation of a multivariable channel with 2
> > outputs? The result will be a matrix of correlation functions which
> > are a function of lag. Suppose the signals are a vector S={s1 s2}
>
> > I can do
>
> > R(k+1,lag)=beta*R(k,lag)+(1-beta)*S(k+1)S^T(k+1+lag) where T is
> > the transpose of the S vector
>
> > and beta is a forgetting factor less than 1. Can this be done using
> > FFTs though?
>
> If we forget about the beta for now, let's have a look
> at the starting point:
>
> Your signal x is a vector x(n) =[x1(n) x2(n)].
>
> The between-channel correlation is
>
> C12(k) = E[x(n)x(n-k)'] = E[ [x1(n) x2(n)]'[x1(n-k) x2(n-k)]]
>
> which is (view with fixed-width font)
>
>            |  Rx1x1(k)   Cx1x2(k) |
> C12(k) =   |                      |
>            |  Cx2x1(k)   Rx2x2(k) |
>
> Here Rx1x1 is the autocorrelation of x1, and Cx1x2
> is the cross correlation between x1 and x2.
>
> Since both autocorrelations and cross correlations
> can be computed by means of DFTs, you can in
> principle get the results you want by computing two
> FFTs and three IFFTs.
>
> But you mentioned a beta factor.
>
> Usually, the beta factor is used in recursive
> calculations, along the lines of
>
> Rxx_m(k) = beta*x(n)*x(n-k) + (1-beta)*Rxx_(m-1)(k)
>
> In this case you have an old estimate for the quantity,
> which will be updated by the new piece of information.
> The forgetting factor is a simple, cheap - albeit
> somewhat crude - way of letting the new samples
> have some influence on the estimate as time goes by.
>
> In other words, one emphasizes the tendency of the
> last samples over the 'academically correct' estimates
> from the whole signal.
>
> The recursive estimate is also simple to compute in
> an online application. However, it might not be simple
> to compute that correlation estimate by means of DFTs.
>
> Rune
Thanks for that. Back to your earlier statement. 2 FFTs and 3 IFFTs.
Do you mean 2 FFTs for autospectra and 1 for cross-spectrum and
therefore 3 to get back to the "time-domain" (since the matrices must
be symmetric).

Hardy
```
```On 1 Aug, 08:32, HardySpicer <gyansor...@gmail.com> wrote:
> How do I find the autocorrelation of a multivariable channel with 2
> outputs? The result will be a matrix of correlation functions which
> are a function of lag. Suppose the signals are a vector S={s1 s2}
>
> I can do
>
> R(k+1,lag)=beta*R(k,lag)+(1-beta)*S(k+1)S^T(k+1+lag) where T is
> the transpose of the S vector
>
> and beta is a forgetting factor less than 1. Can this be done using
> FFTs though?

If we forget about the beta for now, let's have a look
at the starting point:

Your signal x is a vector x(n) =[x1(n) x2(n)].

The between-channel correlation is

C12(k) = E[x(n)x(n-k)'] = E[ [x1(n) x2(n)]'[x1(n-k) x2(n-k)]]

which is (view with fixed-width font)

           |  Rx1x1(k)   Cx1x2(k) |
C12(k) =   |                      |
           |  Cx2x1(k)   Rx2x2(k) |

Here Rx1x1 is the autocorrelation of x1, and Cx1x2
is the cross correlation between x1 and x2.

Since both autocorrelations and cross correlations
can be computed by means of DFTs, you can in
principle get the results you want by computing two
FFTs and three IFFTs.

But you mentioned a beta factor.

Usually, the beta factor is used in recursive
calculations, along the lines of

Rxx_m(k) = beta*x(n)*x(n-k) + (1-beta)*Rxx_(m-1)(k)

In this case you have an old estimate for the quantity,
which will be updated by the new piece of information.
The forgetting factor is a simple, cheap - albeit
somewhat crude - way of letting the new samples
have some influence on the estimate as time goes by.

In other words, one emphasizes the tendency of the
last samples over the 'academically correct' estimates
from the whole signal.

The recursive estimate is also simple to compute in
an online application. However, it might not be simple
to compute that correlation estimate by means of DFTs.

Rune
```
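The forgetting-factor recursion from the original question is straightforward to run directly, even if it does not map onto DFTs. A pure-Python sketch for the 2-channel case; the choice of beta, the lag, and the seeded white-noise test signal are arbitrary illustrations:

```python
import random

def update(R, s_new, s_lagged, beta):
    """One step of R <- beta*R + (1-beta) * s_new * s_lagged^T
    for 2-element signal vectors, as in the recursion quoted above."""
    return [[beta * R[i][j] + (1 - beta) * s_new[i] * s_lagged[j]
             for j in range(2)] for i in range(2)]

random.seed(0)
# Two channels of zero-mean white noise as a test signal.
s = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(2000)]

beta = 0.99                          # forgetting factor, close to 1
lag = 1
R = [[0.0, 0.0], [0.0, 0.0]]
for k in range(len(s) - lag):
    R = update(R, s[k], s[k + lag], beta)

# R is the running estimate of the 2x2 lag-1 correlation matrix:
# diagonal entries estimate Rx1x1(1) and Rx2x2(1), off-diagonals
# the cross terms Cx1x2(1) and Cx2x1(1). For white noise all four
# should hover near zero.
assert all(abs(R[i][j]) < 0.5 for i in range(2) for j in range(2))
```

Note how the recursion only ever touches the current sample pair and the previous estimate, which is why it suits online use but offers no obvious batch structure for an FFT to exploit, matching the caveat in the reply above.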
```How do I find the autocorrelation  of a multivariable channel with 2
outputs? The result will be a matrix of correlation functions which
are a function of lag. Suppose the signals are a vector S={s1 s2}

I can do

R(k+1,lag)=beta*R(k,lag)+(1-beta)*S(k+1)S^T(k+1+lag) where T is
the transpose of the S vector

and beta is a forgetting factor less than 1. Can this be done using
FFTs though?

Hardy

```