Randy Yates <yates@ieee.org> wrote in message news:<_B_ob.1666$9M3.58@newsread2.news.atl.earthlink.net>...

> It is indeed a contradiction to me, and I cannot resolve
> the contradiction. It is one I have had for years.

Randy: We have crossed swords on this point before, but what the heck, once more into the breach...

Let us first talk of energy and power for a deterministic signal. A signal x(t) delivers (say, during the interval [0, T], into a 1 ohm resistor) an amount of energy that can be expressed as

  En(T) = *integral* of x^2(t) from 0 to T.

The *average power* during this interval is (1/T).En(T), while the *instantaneous power* P(t) at any time t during this interval is the ratio of the energy delivered during a small interval of duration dt to the length of the interval, i.e.

  P(t) = [En(t + dt) - En(t)]/dt,

or, more properly, the limiting value of this ratio as dt becomes very small. We can extend all these notions to the whole real line appropriately, and for many signals we will have that En(T) increases without bound as T approaches infinity while (1/T).En(T) approaches a constant value (called the average power of the signal), but let's just stick to a finite-length interval for a while. Now, underlying all this mathematical malarkey is the notion that x^2(t) is an *integrable* function of t.

Next, let us consider the random process that you and Carlos are thinking about, viz. a random process for which each X(t) is uniformly distributed on [-1, 1] and for which each X(t) is independent of every other X(t'). If we think about the *entire set* of realizations or sample functions of this process, then we see that these realizations are nothing more and nothing less than the set of *all* functions x(t) such that |x(t)| is at most 1 over the interval under consideration. We can define the energy delivered by all these signals as above. Or can we? Are *all* (or almost all) bounded functions also integrable functions? No.
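(For the deterministic case the definitions above are easy to check numerically. A minimal sketch, using a test signal x(t) = cos(t) of my own choosing, not anything from the thread, with the integral approximated by a Riemann sum:)

```python
# Sketch of En(T) and average power for a deterministic signal.
# The test signal cos(t) is an arbitrary choice for illustration.
import math

def energy(x, T, n=100_000):
    """En(T) = integral of x^2(t) dt from 0 to T (left Riemann sum)."""
    dt = T / n
    return sum(x(k * dt) ** 2 for k in range(n)) * dt

def average_power(x, T, n=100_000):
    """(1/T) . En(T) -- the average power over [0, T]."""
    return energy(x, T, n) / T

T = 2 * math.pi
print(energy(math.cos, T))         # analytically pi over one full period
print(average_power(math.cos, T))  # analytically 1/2
```

(Both numbers come out as expected precisely because cos^2(t) is an integrable function of t -- which is the hypothesis at issue below.)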
In fact, almost all of these sample functions of the random process are *not integrable*. Thus it is not clear how we can define the energy delivered by the sample functions of the process that you are considering.

But, you say, a typical sample function is, after all, a signal that we can produce in the lab, and therefore we can measure its energy and figure out how to define the appropriate formula for energy. Well, as I pointed out in a different subthread, a typical sample function is discontinuous almost everywhere, and requires infinite currents to flow to charge and discharge capacitors instantaneously. Can we get a pretty good approximation by creating a sampled version? Well, you have then "filtered" the process in some sense when you sample it, and the discrepancy between infinite power and finite power disappears.

In summary, I suggest that we all stop trying to look for Wiener's or Khinchine's original definitions and/or proofs in the hopes of resolving the apparent contradictions. The answer is not there but rather in basic calculus, which we have forgotten in our rush to judgment... The Wiener-Khinchine formulation does not apply to the so-called white noise process with finite variance.

--Dilip Sarwate
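(A numerical footnote to the sampling remark above. Once sampled, each realization is just an iid sequence of Uniform[-1, 1] values, and its time-averaged power is perfectly finite, settling to E[X^2] = 1/3. A minimal sketch -- the function name and sample count are my own, purely for illustration:)

```python
# Time-averaged power of a *sampled* realization of the uniform white
# noise process: each sample is iid Uniform[-1, 1], so the average of
# x[k]^2 converges to E[X^2] = 1/3 -- finite, with no contradiction.
import random

def sampled_average_power(n, seed=0):
    """Average of x[k]^2 over n iid Uniform[-1, 1] samples."""
    rng = random.Random(seed)
    return sum(rng.uniform(-1.0, 1.0) ** 2 for _ in range(n)) / n

print(sampled_average_power(1_000_000))  # close to 1/3
```

(The continuous-time integrability problem never arises here, which is exactly the sense in which sampling has already "filtered" the process.)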