DSPRelated.com
Forums

Statistics of WSS processes passed through LTI systems

Started by lajka May 28, 2012
Hello,

I am pretty new at this, so I apologize in advance if I have missed the
right place to ask. I'd be grateful if you could forward me to the right
place, if that is the case.

My questions are simple, but Google didn't help, so maybe someone here can
point me in the right direction:

1) "If the input to an LTI system is a Gaussian random process, the output
is a Gaussian random process" <- How do we know this?
All I see on the internet is people showing how to compute stochastic
parameters (e.g. correlations and PSDs), but I have no idea how I would
manage to show or derive the statistics of the output process.

2) In most cases, people interchange the expectation operator and the
time integral (usually when convolution occurs) without saying anything. I
was wondering when exactly that is allowed (and when it is not), and what
conditions need to be met for the interchange to be valid.

Thanks in advance.


On 5/28/12 8:08 AM, lajka wrote:
> Hello,
>
> I am pretty new at this, so I apologize in advance if I have missed the
> right place to ask. I'd be grateful if you could forward me to the right
> place, if that is the case.
>
> My questions are simple, but Google didn't help, so maybe someone here can
> point me in the right direction:
>
> 1) "If the input to an LTI system is a Gaussian random process, the output
> is a Gaussian random process" <- How do we know this?
we know that because of the fact that an LTI system adds together a bunch
of scaled copies of the input samples, and then there is the Central Limit
Theorem that says that if you add together a whole bunch of random
numbers, the p.d.f. of the sum will approach Gaussian as more and more
numbers are added.

and even without the CLT, you can show pretty easily that the sum of two
(or more) Gaussian random variables is, itself, Gaussian.
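[Editor's note: the CLT argument above is easy to sketch numerically. This is an illustrative sketch only -- the uniform input and the sample counts are arbitrary choices, not anything from the thread.]

```python
# Empirical sketch of the CLT: sums of uniform random numbers (flat p.d.f.,
# excess kurtosis -1.2) develop Gaussian-shaped statistics as terms are added.
import numpy as np

def clt_moments(n_terms, n_trials=200_000, seed=0):
    """Skewness and excess kurtosis of sums of n_terms iid uniforms."""
    rng = np.random.default_rng(seed)
    s = rng.uniform(-1.0, 1.0, size=(n_trials, n_terms)).sum(axis=1)
    s = (s - s.mean()) / s.std()          # standardize
    skew = np.mean(s**3)                  # 0 for a Gaussian
    ex_kurt = np.mean(s**4) - 3.0         # 0 for a Gaussian
    return skew, ex_kurt

skew1, kurt1 = clt_moments(1)     # a single uniform: far from Gaussian
skew30, kurt30 = clt_moments(30)  # sum of 30: excess kurtosis ~ -1.2/30
```

The sum of n iid uniforms has excess kurtosis exactly -1.2/n, so the estimate shrinks toward the Gaussian value of zero as terms are added.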
> All I see on the internet is people showing how to compute stochastic
> parameters (e.g. correlations and PSDs), but I have no idea how I would
> manage to show or derive the statistics of the output process.
most books or primers on Probability, Random Variables, and Random
Processes ("stochastic process" is an uppity way of saying "random
process") will show how you can determine the autocorrelation of the
output of an LTI system when you know the impulse response and the
autocorrelation of the input.

essentially, the Fourier Transform of the autocorrelation of a random
process is the power spectrum, and an LTI system deals with the power
spectrum of a random process just like it deals with the power spectrum of
a deterministic signal.
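[Editor's note: the relation being described is S_yy(f) = |H(f)|^2 S_xx(f), and it can be checked empirically. A sketch using SciPy; the lowpass filter and record length are arbitrary choices. With unit-variance white Gaussian input and fs = 1, the one-sided input PSD is flat at 2.]

```python
# Verify S_yy(f) = |H(f)|^2 * S_xx(f) for white Gaussian noise through an FIR.
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
h = signal.firwin(65, 0.25)          # an arbitrary example lowpass FIR
x = rng.standard_normal(2**18)       # white Gaussian input: Sxx(f) = 2 (one-sided, fs=1)
y = signal.lfilter(h, 1.0, x)        # pass the process through the LTI system

f, Syy = signal.welch(y, nperseg=1024)        # estimated output PSD
_, H = signal.freqz(h, worN=f, fs=1.0)        # filter response on the same grid
predicted = 2.0 * np.abs(H)**2                # |H(f)|^2 * Sxx(f)
```

In the passband the Welch estimate of Syy tracks 2|H(f)|^2 to within the estimator's variance (the lowest bins are excluded because Welch's per-segment detrending suppresses DC).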
> 2) In most cases, people often interchange the expectation operator and
> the time integral (usually when convolution occurs) without saying
> anything. I was wondering when exactly that is allowed (and when it is
> not), and what conditions need to be met for that to happen.
an additional assumption must be made that the process is "ergodic". ergodicity is the property that all time-averages may be replaced with probabilistic averages.
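[Editor's note: a small numerical illustration of ergodicity in the mean. The process below -- filtered white Gaussian noise plus a constant -- is an assumed example, and all parameters are arbitrary: the time average over one long record agrees with the ensemble average over many records at a fixed time.]

```python
# Time average of one realization vs. ensemble average at a fixed time,
# for an ergodic WSS process (moving-average-filtered Gaussian noise).
import numpy as np
from scipy import signal

rng = np.random.default_rng(2)
h = np.ones(8) / 8.0        # moving-average filter
offset = 3.0                # the true ensemble mean of the process

# time average over a single long realization
time_avg = (offset + signal.lfilter(h, 1.0, rng.standard_normal(200_000))).mean()

# ensemble average: the sample at one fixed time, across many realizations
runs = offset + signal.lfilter(h, 1.0, rng.standard_normal((20_000, 64)), axis=1)
ensemble_avg = runs[:, -1].mean()
```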
> Thanks in advance.
FWIW.

--
r b-j
rbj@audioimagination.com

"Imagination is more important than knowledge."
On Mon, 28 May 2012 07:08:52 -0500, lajka wrote:

> Hello,
>
> I am pretty new at this, so I apologize in advance if I have missed the
> right place to ask. I'd be grateful if you could forward me to the right
> place, if that is the case.
>
> My questions are simple, but Google didn't help, so maybe someone here
> can point me in the right direction:
>
> 1) "If the input to an LTI system is a Gaussian random process, the
> output is a Gaussian random process" <- How do we know this? All I see
> on the internet is people showing how to compute stochastic parameters
> (e.g. correlations and PSDs), but I have no idea how I would manage to
> show or derive the statistics of the output process.
We know this because the output of an LTI system is a weighted integral of
the input, and because someone, somewhere, has proved that the weighted
integral of a Gaussian random process is a Gaussian random process. (Life
is harder with integrals, but it's a fairly direct corollary of the fact
that the sum of two Gaussian-distributed random numbers is also Gaussian.)
> 2) In most cases, people often interchange the expectation operator and
> the time integral (usually when convolution occurs) without saying
> anything. I was wondering when exactly that is allowed (and when it is
> not), and what conditions need to be met for that to happen.
Because you can. You need to get a book on random processes and study it
in depth.

Basically, you will find that the expectation operator is itself just an
integral of the PDF over the variable's range; because the process is
stationary, it is independent of time, which means that the order of the
expectation integral and the time integral can be swapped just as with any
two independent integrals.

--
My liberal friends think I'm a conservative kook.
My conservative friends think I'm a liberal kook.
Why am I not happy that they have found common ground?

Tim Wescott, Communications, Control, Circuits & Software
http://www.wescottdesign.com
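[Editor's note: the discrete-time version of that interchange is easy to check numerically. The FIR taps and input mean below are arbitrary illustrative values.]

```python
# E[ sum_k h[k] x[n-k] ] = sum_k h[k] E[x[n-k]] = E[x] * sum_k h[k]
# for a stationary input: the expectation slides inside the convolution sum.
import numpy as np

rng = np.random.default_rng(3)
h = np.array([0.5, 0.3, 0.2, -0.1])   # arbitrary FIR taps
mean_x = 2.0                          # E[x], the same at every time (stationary)

x = mean_x + rng.standard_normal((50_000, len(h)))  # many input snippets
y = x @ h[::-1]                       # y[n] = sum_k h[k] x[n-k], one output sample each
lhs = y.mean()                        # expectation of the convolution sum
rhs = mean_x * h.sum()                # expectation moved inside the sum
```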
"lajka" <lelemarko@n_o_s_p_a_m.gmail.com> writes:

> 1) "If the input to an LTI system is a Gaussian random process, the output
> is a Gaussian random process" <- How do we know this?
While he doesn't show the proof, Viniotis states in [viniotis, Theorem
9.2, p. 486] that

  "... the random variable Y(t) ... is equal to the random variable
  \int_{-\infty}^{+\infty} h(s) X(t-s) ds in the mean square sense."

where h(t) is the impulse response of the LTI system and {X(t)} is the
input random process.

@BOOK{viniotis,
  title     = "{Probability and Random Processes for Electrical Engineers}",
  author    = "{Yannis~Viniotis}",
  publisher = "WCB McGraw-Hill",
  year      = "1998"}

--Randy

--
Randy Yates
Digital Signal Labs
http://www.digitalsignallabs.com
Randy Yates <yates@digitalsignallabs.com> writes:

> "lajka" <lelemarko@n_o_s_p_a_m.gmail.com> writes:
>
>> 1) "If the input to an LTI system is a Gaussian random process, the output
>> is a Gaussian random process" <- How do we know this?
>
> While he doesn't show the proof, Viniotis states in [viniotis, Theorem
> 9.2, p. 486] that
>
> "... the random variable Y(t) ... is equal to the random variable
> \int_{-\infty}^{+\infty} h(s) X(t-s) ds in the mean square sense."
>
> where h(t) is the impulse response of the LTI system and {X(t)} is the
> input random process.
>
> (snip)
lajka,

In case it helps, here's my "cheatsheet" for my old Random Processes
class. For one thing, it defines "mean square sense".

http://www.digitalsignallabs.com/cheatsheet.pdf

--
Randy Yates
Digital Signal Labs
http://www.digitalsignallabs.com
On May 28, 8:08 am, "lajka" <lelemarko@n_o_s_p_a_m.gmail.com> wrote:
> Hello,
>
> I am pretty new at this, so I apologize in advance if I have missed the
> right place to ask. I'd be grateful if you could forward me to the right
> place, if that is the case.
>
> My questions are simple, but Google didn't help, so maybe someone here can
> point me in the right direction:
>
> 1) "If the input to an LTI system is a Gaussian random process, the output
> is a Gaussian random process" <- How do we know this?
> All I see on the internet is people showing how to compute stochastic
> parameters (e.g. correlations and PSDs), but I have no idea how I would
> manage to show or derive the statistics of the output process.
>
> 2) In most cases, people often interchange the expectation operator and
> the time integral (usually when convolution occurs) without saying
> anything. I was wondering when exactly that is allowed (and when it is
> not), and what conditions need to be met for that to happen.
>
> Thanks in advance.
Several people have made claims in their responses that the sum of two
Gaussian random variables is itself a Gaussian random variable. This is
not, strictly speaking, completely true. What needs to be said is that
the Gaussian random variables must be JOINTLY Gaussian random variables:
the sum of Gaussian but not jointly Gaussian random variables need not be
a Gaussian random variable.

Fortunately, the needed assertion of joint Gaussianity does happen to hold
in _your_ particular application, because the random variables comprising
a Gaussian random process are, by definition, jointly Gaussian.

Dilip Sarwate
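[Editor's note: a concrete numerical version of this caveat, using a standard textbook counterexample (not from the thread): Y below has a standard normal marginal -- flipping the sign of a symmetric variable on a symmetric set preserves its distribution -- but (X, Y) is not jointly Gaussian, and X + Y is visibly non-Gaussian because it carries a point mass at zero.]

```python
# X and Y are each N(0,1), but NOT jointly Gaussian; their sum equals
# 2X when |X| <= c and exactly 0 otherwise, so it has an atom at zero.
import numpy as np

rng = np.random.default_rng(4)
c = 1.0                                  # arbitrary threshold
x = rng.standard_normal(200_000)         # X ~ N(0,1)
y = np.where(np.abs(x) <= c, x, -x)      # marginally N(0,1), dependent on X

s = x + y
frac_zero = np.mean(s == 0.0)   # ~ P(|X| > 1) ~ 0.317; a Gaussian sum would give 0
```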
dvsarwate <dvsarwate@yahoo.com> wrote:
> On May 28, 8:08 am, "lajka" <lelemarko@n_o_s_p_a_m.gmail.com> wrote:
>> 1) "If the input to an LTI system is a Gaussian random process, the output
>> is a Gaussian random process" <- How do we know this?
>> All I see on the internet is people showing how to compute stochastic
>> parameters (e.g. correlations and PSDs), but I have no idea how I would
>> manage to show or derive the statistics of the output process.
(snip)
> Several people have made claims in their responses that the sum of
> two Gaussian random variables is itself a Gaussian random variable.
> This is not, strictly speaking, completely true. What needs to be said
> is that the Gaussian random variables must be JOINTLY Gaussian random
> variables: the sum of Gaussian but not jointly Gaussian random
> variables need not be a Gaussian random variable.
http://en.wikipedia.org/wiki/Multivariate_normal_distribution#Normally_distributed_and_independent

As it says in the above article, independent Gaussian random variables are
jointly Gaussian. It is probably true that often enough people know that
some variables are independent and don't give it a second thought. (And
often, for variables that aren't Gaussian, the statistics of Gaussian
variables are close enough.)

It is also true that too often variables thought to be independent
actually aren't. Many of the Wall Street problems in recent years are the
result of assuming some variables are independent, using statistics that
only work under that assumption, and then finding out (too late) that they
weren't. Your decision to buy a few shares of stock might reasonably be
assumed to be independent of your neighbor's decision to buy some stock.
Deciding to buy millions of shares of stock might not be so independent,
though.

--
glen
On Mon, 28 May 2012 13:34:37 -0400, robert bristow-johnson wrote:

> On 5/28/12 8:08 AM, lajka wrote:
>
> (snip)
>
>> 1) "If the input to an LTI system is a Gaussian random process, the
>> output is a Gaussian random process" <- How do we know this?
>
> we know that because of the fact that an LTI system adds together a
> bunch of scaled copies of the input samples, and then there is the
> Central Limit Theorem that says that if you add together a whole bunch
> of random numbers, the p.d.f. of the sum will approach Gaussian as more
> and more numbers are added.
Just a nerdy, anal-retentive caveat:

The Central Limit Theorem says that when you add together a whole bunch of
random numbers WITH FINITE VARIANCES, the pdf of the sum will approach
Gaussian. If you happen to be dealing with a process that has infinite
variance (they're out there) then the central limit theorem does not hold
AT ALL.

Moreover, if you happen to be dealing with a "long tailed" distribution
that has a finite variance but a nasty shape factor (I'm not sure if
that's the right term -- at any rate, a "really long tailed" distribution)
then the number of samples you need before your distribution gets close
enough to Gaussian may be far more than you can practically collect.

Many, many physical random processes are long-tailed to one extent or
another, and I've worked on at least one project (my master's thesis)
where the variance of the noise was effectively infinite* and it simply
could not be dealt with effectively by assuming that it was Gaussian.

So, the Gaussian assumption is often a good one to make. Even when you're
dealing with a long-tailed distribution, there are times when you can make
the Gaussian assumption and just accept the occasional glitch or
sub-optimal behavior. But you cannot _always_ make the Gaussian
assumption.

* electrostatic discharge in the atmosphere, which is generally small but
can easily and credibly range up to a lightning strike.

--
My liberal friends think I'm a conservative kook.
My conservative friends think I'm a liberal kook.
Why am I not happy that they have found common ground?

Tim Wescott, Communications, Control, Circuits & Software
http://www.wescottdesign.com
On 5/28/12 5:41 PM, Tim Wescott wrote:
> On Mon, 28 May 2012 13:34:37 -0400, robert bristow-johnson wrote:
>
>> On 5/28/12 8:08 AM, lajka wrote:
...
>>> 1) "If the input to an LTI system is a Gaussian random process, the
>>> output is a Gaussian random process" <- How do we know this?
>>
>> we know that because of the fact that an LTI system adds together a
>> bunch of scaled copies of the input samples, and then there is the
>> Central Limit Theorem that says that if you add together a whole bunch
>> of random numbers, the p.d.f. of the sum will approach Gaussian as more
>> and more numbers are added.
>
> Just a nerdy, anal-retentive caveat:
>
> The Central Limit Theorem says that when you add together a whole bunch
> of random numbers WITH FINITE VARIANCES, the pdf of the sum will approach
> Gaussian. If you happen to be dealing with a process that has infinite
> variance (they're out there) then the central limit theorem does not hold
> AT ALL.
yeah, i had thought about tossing in this caveat, but i didn't want to
since we were going to be adding up a bunch of Gaussian RVs anyway.

a good example of what you're bringing up, Tim, is the "Cauchy" RV. it
has a p.d.f. that looks like

   p(x) = 1/pi * 1/(1 + (x-u)^2)

it's a legit p.d.f. (it has integral of 1), but the variance is infinite,
and you can't even come up with a mean for it unless you do some
hand-waving. "u" is the "principal value", which sorta can be thought of
as a mean.

you can add up these Cauchy RVs until the cows come home and it doesn't
converge to anything other than a Cauchy.

--
r b-j
rbj@audioimagination.com

"Imagination is more important than knowledge."
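[Editor's note: that non-convergence is easy to see numerically. The mean of n standard Cauchy variates is itself standard Cauchy, so its interquartile range stays near 2 (the IQR of a standard Cauchy) no matter how large n gets. Sample counts below are arbitrary.]

```python
# Averaging Cauchy samples never narrows the distribution of the mean.
import numpy as np

rng = np.random.default_rng(5)

def iqr_of_means(n, n_trials=5_000):
    """Interquartile range of sample means of size-n standard Cauchy samples."""
    means = rng.standard_cauchy((n_trials, n)).mean(axis=1)
    q1, q3 = np.percentile(means, [25, 75])
    return q3 - q1

iqr_1 = iqr_of_means(1)        # no averaging
iqr_1000 = iqr_of_means(1000)  # heavy averaging -- no narrower at all
```

Compare with any finite-variance input, where the IQR of the means would shrink like 1/sqrt(n).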
On Mon, 28 May 2012 17:56:29 -0400, robert bristow-johnson wrote:

> On 5/28/12 5:41 PM, Tim Wescott wrote:
>
> (snip)
>
> a good example of what you're bringing up, Tim, is the "Cauchy" RV. it
> has a p.d.f. that looks like
>
>    p(x) = 1/pi * 1/(1 + (x-u)^2)
>
> it's a legit p.d.f. (it has integral of 1), but the variance is infinite,
> and you can't even come up with a mean for it unless you do some
> hand-waving. "u" is the "principal value", which sorta can be thought of
> as a mean.
>
> you can add up these Cauchy RVs until the cows come home and it doesn't
> converge to anything other than a Cauchy.
I was going to post that as an example, but I keep remembering it as the
Cauer distribution. Thanks for remembering it for me.

I just looked up Cauchy on Wikipedia -- wow. I think I'm off for an
extensive Wiki-Walk in Mathemagic land.

--
Tim Wescott
Control system and signal processing consulting
www.wescottdesign.com