
Basics of Spectral Density and Autocorrelation Function

Started by lebann July 27, 2005
Hi, I am trying to grasp the basics of autocorrelation and PSD.  I don't
have a strong background in DSP, so any help will be appreciated.  Here
are my questions:

(1) Why is the PSD, which is given by

PSD(f) = 4 * Integral from -inf to +inf of cos(2*pi*f*t) * R(t) dt    (R(t) = autocorrelation function),

valid ONLY for frequencies ABOVE 1/tau (where tau is the autocorrelation time,
i.e. the time extent of the autocorrelation function)?  For example, my
autocorrelation function has a finite duration of only ~2 picoseconds; then
(from what I was told) my PSD plot at frequencies below 1/(2e-12 s) = 5e11 Hz
does not make sense, since it is the convolution of my "weak PSD" with the
sinc(2*pi*f*tau)^2 of the window?
Is this some kind of uncertainty principle?  I mean, I can understand that
if my autocorrelation function were infinite then every frequency above zero
would make sense, but I am not sure where that rule comes from.  Also, where
does the sinc(2*pi*f*tau)^2 come from?  Should that be sinc(2*pi*f*tau),
since the Fourier transform of a square window is a sinc?  I would have
thought it would instead be the convolution of my PSD with a sinc function.

My second and last question is:

(2) How do the autocorrelation and the PSD relate?  And what exactly is the
pseudo-period of an autocorrelation function?  From what I was told, the
pseudo-period of the autocorrelation function will correspond to a hump in
the PSD plot.

Thanks so much in advance for your assistance.

sincerely,
Lebann



		

lebann wrote:
> Hi, I am trying to grasp the basics of autocorrelation and PSD.  I don't
> have a strong background in DSP, so any help will be appreciated.  Here
> are my questions:
>
> (1) Why is the PSD, which is given by
>
> PSD(f) = 4 * Integral from -inf to +inf of cos(2*pi*f*t) * R(t) dt    (R(t) = autocorrelation function),
>
> valid ONLY for frequencies ABOVE 1/tau (where tau is the autocorrelation
> time, i.e. the time extent of the autocorrelation function)?  For example,
> my autocorrelation function has a finite duration of only ~2 picoseconds;
> then (from what I was told) my PSD plot at frequencies below
> 1/(2e-12 s) = 5e11 Hz does not make sense, since it is the convolution of
> my "weak PSD" with the sinc(2*pi*f*tau)^2 of the window?  Is this some
> kind of uncertainty principle?
First, I am not sure how you use the terminology.  The term "autocorrelation
time" could mean the duration of your data sequence or of the autocorrelation
sequence (which are two different things), or it could mean the time lag of
some peak in the autocorrelation sequence.  If we agree that the duration of
your autocorrelation sequence is 2 ps, then the explanation for "frequencies
below 1/(2 ps) make no sense" is simply that the data don't contain
information about anything that lasts longer than 2 ps.  If, for instance,
there is some feature present at 1 MHz, you will not find it, since it would
require a recording with a duration on the order of 1 us for the 1 MHz
feature to be detected.
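A quick numerical illustration of that point, in Python/NumPy (the sample
rate below is an assumed example value; only the 2 ps record length comes
from the question): a 2 ps record gives an FFT bin spacing of about 0.5 THz,
so nothing as slow as 1 MHz can be told apart from DC.

import numpy as np

# A record of duration T gives an FFT bin spacing of 1/T, so nothing varying
# more slowly than ~1/T can be distinguished from a constant offset.
fs = 1e15                     # assumed sample rate: one sample per femtosecond (hypothetical)
T = 2e-12                     # record duration: 2 ps, as in the question
N = int(T * fs)               # 2000 samples

freqs = np.fft.rfftfreq(N, d=1.0 / fs)
print("bin spacing:", freqs[1], "Hz")    # ~5e11 Hz = 0.5 THz
# A 1 MHz feature would need a record of roughly 1/1e6 s = 1 us,
# i.e. about 500 000 times longer than 2 ps.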
> I mean, I can understand that if my autocorrelation function were infinite
> then every frequency above zero would make sense, but I am not sure where
> that rule comes from.
The problem is, your autocorrelation function isn't infinite. When you work with data, they are of finite duration.
> Also, where does the sinc(2*pi*f*tau)^2 come from?  Should that be
> sinc(2*pi*f*tau), since the Fourier transform of a square window is a sinc?
> I would have thought it would instead be the convolution of my PSD with a
> sinc function.
The standard argument with finite-duration time series is that the available
data are the result of element-wise multiplication between an infinitely long
data sequence and a rectangular window function.  Transforming the
(theoretically infinite) sequence to the frequency domain introduces the sinc
terms.  Since the autocorrelation is computed from two copies of the
finite-duration sequence, there are two sinc terms in the expression for the
PSD.  In the time domain, the rectangular windows are convolved with each
other (the result is the triangular bias of the "usual" time-domain estimator
for the autocorrelation sequence), which transforms to a sinc^2 term, not a
sinc term, in the frequency domain.
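A small Python/NumPy sketch of that windowing argument (the window length and
FFT size are arbitrary illustration values): a rectangular window convolved
with itself gives the triangular window behind the biased ACF estimate, and
its transform is the square of the rectangle's sinc, i.e. a sinc^2 shape.

import numpy as np

# Rectangular window convolved with itself -> triangular (Bartlett) window.
# By the convolution theorem its transform is the rectangle's sinc, squared.
N = 64
rect = np.ones(N)
tri = np.convolve(rect, rect)            # triangular window, length 2N-1

NFFT = 4096                              # zero-pad for a smooth picture
W_rect = np.fft.fft(rect, NFFT)
W_tri = np.fft.fft(tri, NFFT)

# |FT of triangle| equals |FT of rectangle|^2
print(np.allclose(np.abs(W_tri), np.abs(W_rect) ** 2))   # True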
> My second and last question is:
>
> (2) How do the autocorrelation and the PSD relate?
The PSD is the Fourier transform of the autocorrelation sequence.
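For a finite record this can be checked numerically; here is a minimal
Python/NumPy sketch (the white-noise test signal and lengths are made up):
the periodogram |X(f)|^2 / N is exactly the Fourier transform of the biased
sample autocorrelation.

import numpy as np

# Finite-data Wiener-Khinchin check: the FT of the biased sample
# autocorrelation equals the periodogram |X(f)|^2 / N.
rng = np.random.default_rng(0)
N = 256
x = rng.standard_normal(N)               # arbitrary test signal

r = np.correlate(x, x, mode="full") / N  # biased ACF estimate, lags -(N-1)..(N-1)

NFFT = 2 * N - 1
psd_from_acf = np.real(np.fft.fft(np.fft.ifftshift(r), NFFT))
periodogram = np.abs(np.fft.fft(x, NFFT)) ** 2 / N

print(np.allclose(psd_from_acf, periodogram))   # True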
> And what exactly is the pseudo-period of an autocorrelation function?  From
> what I was told, the pseudo-period of the autocorrelation function will
> correspond to a hump in the PSD plot.
Right. It follows directly from the FT relation between the PSD and the autocorrelation sequence. Rune
Thanks for the reply Rune.  

> If we agree that the duration of your autocorrelation sequence is 2 ps,
> then the explanation for "frequencies below 1/(2 ps) make no sense" is
> simply that the data don't contain information about anything that lasts
> longer than 2 ps.  If, for instance, there is some feature present at
> 1 MHz, you will not find it, since it would require a recording with a
> duration on the order of 1 us for the 1 MHz feature to be detected.
However, I am still unclear about WHY that is true.  Why would finding
features at some frequency, say 1 MHz, require the duration of the
autocorrelation function to be on the order of 1/1e6 s?  Is this some kind
of uncertainty in signals, like the Heisenberg uncertainty in physics?
> Right.  It follows directly from the FT relation between the PSD and
> the autocorrelation sequence.
I understand that the PSD is the Fourier transform of the autocorrelation
function; however, I am not clear on:

(1) What exactly is the pseudo-period of the ACF?  The ACF that I am working
with has a downswing and then some aperiodic oscillation, so there isn't a
uniform period.  Is the "pseudo-period" the duration of the initial downswing
(neglecting the subsequent oscillations, which have their own periods)?

(2) Why would the "FT relation between the PSD and the autocorrelation
sequence" result in a hump in the PSD plot at the frequency corresponding to
the pseudo-period of the ACF?  What about the other minor periods (such as
the ones of a sinc function)?  Would those periods result in other humps in
the PSD plot at the corresponding frequencies?
"lebann" <Lebann@gmail.com> wrote in message
news:P5-dnVVQdq_MpHXfRVn-1A@giganews.com...
> > (2) How do the autocorrelation and the PSD relate?
Via the Wiener-Khinchin theorem... the Fourier transform of the autocorrelation is the PSD. Shytot
lebann wrote:
> Thanks for the reply Rune.
>
> > If we agree that the duration of your autocorrelation sequence is 2 ps,
> > then the explanation for "frequencies below 1/(2 ps) make no sense" is
> > simply that the data don't contain information about anything that lasts
> > longer than 2 ps.  If, for instance, there is some feature present at
> > 1 MHz, you will not find it, since it would require a recording with a
> > duration on the order of 1 us for the 1 MHz feature to be detected.
>
> However, I am still unclear about WHY that is true.  Why would finding
> features at some frequency, say 1 MHz, require the duration of the
> autocorrelation function to be on the order of 1/1e6 s?  Is this some kind
> of uncertainty in signals, like the Heisenberg uncertainty in physics?
You are making this more complicated than it is.  If you want to map the
variation of, say, grocery prices over a year, you will need to observe the
prices for one or more years.  You can't walk around in the grocery market
for a couple of hours, or even a couple of days, and expect to be able to say
much about seasonal variations.
> > Right.  It follows directly from the FT relation between the PSD and
> > the autocorrelation sequence.
>
> I understand that the PSD is the Fourier transform of the autocorrelation
> function; however, I am not clear on:
>
> (1) What exactly is the pseudo-period of the ACF?  The ACF that I am
> working with has a downswing and then some aperiodic oscillation, so there
> isn't a uniform period.  Is the "pseudo-period" the duration of the initial
> downswing (neglecting the subsequent oscillations, which have their own
> periods)?
The "pseudo period" is the typical duration between peaks in your pseudo periodic signal.
> (2) Why would the "FT relation between the PSD and the autocorrelation
> sequence" result in a hump in the PSD plot at the frequency corresponding
> to the pseudo-period of the ACF?  What about the other minor periods (such
> as the ones of a sinc function)?  Would those periods result in other humps
> in the PSD plot at the corresponding frequencies?
These are basic properties of the FT.  I suggest you play a bit with the FT
of various signals, using matlab or some other maths program.  Having said
that, discriminating between "real" and "spurious" humps in the power
spectrum is not easy. Rune
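One such experiment, sketched here in Python/NumPy (all numbers are made up
for illustration): an ACF-like damped cosine with pseudo-period T0 transforms
to a spectrum with a broad hump near 1/T0, which is the hump discussed above.

import numpy as np

# An "ACF-like" damped cosine with pseudo-period T0: its transform has a
# broad hump near f0 = 1/T0 (the faster the decay, the broader the hump).
fs = 1000.0                        # assumed sample rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)    # 2 s worth of lags
T0 = 0.05                          # pseudo-period of 50 ms -> hump near 20 Hz
acf = np.exp(-t / 0.3) * np.cos(2 * np.pi * t / T0)

spec = np.abs(np.fft.rfft(acf))
freqs = np.fft.rfftfreq(len(acf), d=1.0 / fs)
print("hump at about", freqs[np.argmax(spec)], "Hz")   # ~20 Hz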
lebann <Lebann@gmail.com> wrote:

> However, I am still unclear about WHY that is true.  Why would finding
> features at some frequency, say 1 MHz, require the duration of the
> autocorrelation function to be on the order of 1/1e6 s?  Is this some kind
> of uncertainty in signals, like the Heisenberg uncertainty in physics?
While I have actually heard engineers call this effect the "Heisenberg
Principle", that's a misnomer.  I will say instead it's the Gibbs phenomenon.
In any event, to get a given frequency resolution F in a spectral estimator,
you need a window of autocorrelation data of at least length 1/F.  What isn't
true is that, with a window of exactly this length, you need to select only
frequencies that are equally spaced multiples of F.  You can use any
frequency. Steve
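A small Python/NumPy sketch of the "any frequency" point (the tone frequency,
record length and sample rate are invented): the DTFT of a finite record can
be evaluated at arbitrary frequencies, even though the record length still
sets the ~1/T resolution.

import numpy as np

# Evaluate the DTFT of a short record at arbitrary frequencies,
# not just at the FFT bin spacing fs/N.
fs = 100.0
N = 128                                  # record length -> bin spacing fs/N ~ 0.78 Hz
t = np.arange(N) / fs
x = np.cos(2 * np.pi * 13.37 * t)        # a tone that is not on an FFT bin

def dtft(sig, f, fs):
    """DTFT of sig at a single frequency f (Hz)."""
    n = np.arange(len(sig))
    return np.sum(sig * np.exp(-2j * np.pi * f * n / fs))

for f in (13.0, 13.37, 14.0):            # any frequencies we like
    print(f, abs(dtft(x, f, fs)))
# Largest magnitude at 13.37 Hz; the ~0.78 Hz figure limits resolution,
# not which frequencies may be evaluated.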
Steve Pope wrote:
> lebann <Lebann@gmail.com> wrote:
>> However, I am still unclear about WHY that is true.  Why would finding
>> features at some frequency, say 1 MHz, require the duration of the
>> autocorrelation function to be on the order of 1/1e6 s?  Is this some kind
>> of uncertainty in signals, like the Heisenberg uncertainty in physics?
> While I have actually heard engineers call this effect the "Heisenberg
> Principle", that's a misnomer.  I will say instead it's the Gibbs
> phenomenon.  In any event, to get a given frequency resolution F in a
> spectral estimator, you need a window of autocorrelation data of at least
> length 1/F.
However, the quantum mechanics explanation of the uncertainty principle does come from this phenomenon. One class I had explained it in terms of a Fourier transform. If you look at the width of the absolute value of a function with unit area, and the width of the absolute value of its Fourier transform, you pretty much find the uncertainty principle. Other than a factor of hbar, the position and momentum wavefunctions are related by a Fourier transform, as are time and energy. Especially if you consider E=\hbar \omega. -- glen
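A rough numerical check of that width argument, in Python/NumPy (the Gaussian
pulse and all constants are assumed for illustration): the RMS widths of
|g(t)|^2 and of |G(f)|^2 multiply to about 1/(4*pi), the Gaussian being the
minimum-uncertainty case.

import numpy as np

# Time/frequency RMS widths of a Gaussian pulse and its transform:
# their product comes out at about 1/(4*pi), the uncertainty-limit value.
fs = 1000.0
t = np.arange(-5.0, 5.0, 1.0 / fs)
sigma_t = 0.1
g = np.exp(-t**2 / (2 * sigma_t**2))

G = np.fft.fftshift(np.fft.fft(g))
f = np.fft.fftshift(np.fft.fftfreq(len(t), d=1.0 / fs))

def rms_width(axis, density):
    density = density / density.sum()            # uniform grid, spacing cancels
    mean = (axis * density).sum()
    return np.sqrt(((axis - mean) ** 2 * density).sum())

dt = rms_width(t, np.abs(g) ** 2)
df = rms_width(f, np.abs(G) ** 2)
print(dt * df, 1 / (4 * np.pi))                  # both ~0.0796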
Steve Pope wrote:
> lebann <Lebann@gmail.com> wrote:
>
> > However, I am still unclear about WHY that is true.  Why would finding
> > features at some frequency, say 1 MHz, require the duration of the
> > autocorrelation function to be on the order of 1/1e6 s?  Is this some
> > kind of uncertainty in signals, like the Heisenberg uncertainty in
> > physics?
>
> While I have actually heard engineers call this effect the "Heisenberg
> Principle", that's a misnomer.
There are a couple of tricks that can be used to find the exact location of a
spectrum line, independent of the observation length of the data.  If there
is exactly one (1) sinusoid in the data, you can locate it with arbitrary
precision by mere zero-padding.  The Heisenberg principle, as I know it from
signal analysis, addresses the problem of recognizing that a signal contains
two closely spaced sinusoids, as opposed to one "fuzzy" one.  In this case
the derivation is straightforward, since one needs to be able to fit one main
lobe of the rectangular window's transform between the two sinusoids to be
guaranteed to have no overlap.  Finding the lobe width of the window is
trivial; it amounts to finding the zeros closest to 0 on either side, which
depend on the length of the window. Rune
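A Python/NumPy sketch of that zero-padding trick (the tone frequency, record
length and sample rate are invented): with a single sinusoid, heavy
zero-padding interpolates the spectrum finely enough to read the peak far
more precisely than the nominal 1/T spacing, while doing nothing to help
separate two tones closer together than about 1/T.

import numpy as np

# One sinusoid in a short record: zero-padding interpolates the spectrum,
# so the peak can be located much more finely than the 1/T bin spacing.
fs = 100.0
N = 64                                   # short record: T = 0.64 s, 1/T ~ 1.6 Hz
t = np.arange(N) / fs
x = np.cos(2 * np.pi * 23.4 * t)         # a single tone, off the FFT grid

NFFT = 2 ** 16                           # heavy zero-padding
spec = np.abs(np.fft.rfft(x, NFFT))
freqs = np.fft.rfftfreq(NFFT, d=1.0 / fs)
print("peak located at about", freqs[np.argmax(spec)], "Hz")   # close to 23.4 Hz
# The same trick does NOT create resolution: two tones less than ~1/T apart
# still merge into a single lobe no matter how much you zero-pad.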