DSPRelated.com
Forums

Signal energy explanation

Started by SysSpider January 21, 2007
Hey everyone,

First of all, let me say I'm only an amateur, and my experience with
signal processing is limited. There is one concept that has been
bothering me, not the idea in itself but its formulation. A real
signal's energy is defined as the integral of the squared
function with respect to time. If I understand correctly, we square the
signal in order to make the negative ordinates positive, so that the
sign is not taken into account. But why do we square instead of simply
taking the magnitude of the value?

One definition I found of signal energy is "Energy dissipated when
voltage f(t) is applied to 1 ohm resistor". I can derive the squaring
from this, but since I don't understand why this definition would
correspond to signal energy it doesn't make me any more enlightened.

I would really appreciate your help on this, since I cannot stand using
results without understanding their foundation. Greets,

SysSpider

> [original post quoted in full; snipped]
Hi, squaring emphasizes larger-amplitude parts of a signal and puts less emphasis on smaller-amplitude parts. This may be useful in certain circumstances or areas, so it depends. I am not an expert; just sharing my thoughts.
"SysSpider" <SysSpider@gmail.com> writes:

> [original post quoted in full; snipped]
Hi,

Try analyzing it from a units perspective. Let the signal x(t) be in [volts]. It can be shown that [volts] = [joules / coulomb]. We define "e" as

    e = (1 / R) \int_{0}^{T} x^2(t) dt,

where R is an implicit 1-ohm resistor. It can be shown that [ohms] = [joule-second / coulomb^2]. Therefore "e" has units of

    [coulomb^2 / joule-second] * [joule^2 / coulomb^2] * [second] = [joule],

where the final [second] unit comes from the "dt" in the integral. Thus we see that e defined this way yields energy [joules].

--
Randy Yates
Fuquay-Varina, NC
<yates@ieee.org>
http://home.earthlink.net/~yatescr
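Randy's definition can also be checked numerically. Below is a minimal sketch (a sampled sine in volts across the implicit 1-ohm resistor; the amplitude and frequency are made-up illustrative values) that approximates e = (1/R) \int x^2(t) dt with a Riemann sum and compares it against the closed-form energy of a sine over one period, A^2 * T / (2R):

```python
import math

# Sampled sine wave x(t) = A*sin(2*pi*f*t), A in volts (illustrative values).
A = 2.0          # amplitude [V]
f = 50.0         # frequency [Hz]
T = 1.0 / f      # one period [s]
n = 10000        # samples per period
dt = T / n       # sample spacing [s]

x = [A * math.sin(2 * math.pi * f * i * dt) for i in range(n)]

# e = (1/R) * integral of x^2(t) dt, approximated by a Riemann sum, R = 1 ohm.
R = 1.0
e = sum(v * v for v in x) * dt / R

# For a sine over one full period the closed form is A^2 * T / (2 * R).
print(e, A * A * T / (2 * R))  # the two values agree closely
```

The sum of squares carries the [volt^2 * second] factor; dividing by the 1-ohm resistor turns it into joules, exactly as in the units argument above.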
On Sun, 21 Jan 2007 16:08:13 -0000, SysSpider <SysSpider@gmail.com> wrote:

> [original post quoted in full; snipped]
In many DSP applications, the signals involved ultimately end up converted to a physical form via some sort of transducer, be it a resistor, motor, loudspeaker or antenna. In such cases, the signal often directly represents the voltage (or current) fed to these devices; therefore, it is natural to talk about the energy involved. In the linear case, the power/energy is indeed proportional to the integral of the voltage or current over time.

Many intuitive mathematical results can be obtained by thinking of the signal in this way (Parseval's theorem is a good example of this).

In many other cases, the signals represent more abstract concepts (e.g. stock prices, population size, etc.), where of course the concept of "energy" doesn't really apply. Nonetheless, all the mathematical approaches still apply, and therefore a lot of the terminology holds as well.

--
Oli
SysSpider wrote:
> [original post quoted in full; snipped]
Way back in the dark ages, a signal's energy (or power) was the amount of energy in Joules, or power in Watts, required to transmit the signal. Signal processing theory came out of electronics, and in electronics power = voltage*current. In electronics most transmission media appear as a linear resistive load, so the actual signal power is proportional to the signal's voltage squared divided by resistance.

In signal processing we often don't care much at all about absolute power (or energy) -- indeed, in DSP a signal's "power" has little physical significance. What we _do_ care deeply about is signal power (or energy) compared to noise power (or energy).

But as Oli Charlesworth explained, the math has been worked out using x^2 as a measure of "energy", x^2 can often be traced back to energy, and x^2 often makes the math easy (particularly if you assume that the noise is Gaussian, which is often warranted).

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
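Tim's point that DSP cares about signal power relative to noise power can be illustrated with a short sketch. The sine amplitude and noise level below are arbitrary made-up values; average power uses the R = 1 convention, i.e. the mean of x^2:

```python
import math
import random

random.seed(0)  # reproducible "noise"

# Hypothetical example: a unit-amplitude sine "signal" plus Gaussian "noise".
n = 100000
signal = [math.sin(2 * math.pi * 5 * i / n) for i in range(n)]
noise = [random.gauss(0.0, 0.1) for _ in range(n)]

# Average power = mean of x^2 (the R = 1 convention).
p_signal = sum(s * s for s in signal) / n   # ~0.5 for a unit sine
p_noise = sum(v * v for v in noise) / n     # ~0.01 for std-dev 0.1

snr_db = 10 * math.log10(p_signal / p_noise)
print(f"SNR = {snr_db:.1f} dB")  # roughly 0.5 / 0.01, about 17 dB
```

Note that the absolute scale of either signal cancels out of the ratio, which is exactly why absolute power rarely matters in DSP while the ratio does.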
On Sun, 21 Jan 2007 17:36:03 -0000, Oli Charlesworth  
<catch@olifilth.co.uk> wrote:
> In the linear case, the power/energy is indeed proportional to the
> integral of the voltage or current over time.
Oops, there should be a "squared" in that sentence. -- Oli
Tim Wescott wrote:
> [quoted text snipped]
Hello Tim and others,

Apart from actual physics applications using real energy, using magnitude squared makes mathematical sense in that Parseval's theorem (or the more commonly used Bessel's theorem, which is a special case of Parseval's) relates the sums of squares between the time and frequency domains. So if one wishes to use tools like Fourier and Laplace analysis, then using magnitude squared opens up a lot more relationships than simply using magnitudes.

IHTH,
Clay
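Clay's point about Parseval's theorem can be seen directly with a toy DFT: the sum of |x[n]|^2 over time equals (1/N) times the sum of |X[k]|^2 over frequency. A quick sketch (the sample values are arbitrary):

```python
import cmath

def dft(x):
    """Naive DFT: X[k] = sum_n x[n] * exp(-2j*pi*k*n/N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

x = [1.0, 2.0, -1.0, 0.5, 0.0, -2.0, 3.0, 1.0]  # arbitrary real samples
X = dft(x)

time_energy = sum(abs(v) ** 2 for v in x)
freq_energy = sum(abs(V) ** 2 for V in X) / len(x)  # note the 1/N factor

print(time_energy, freq_energy)  # equal, by Parseval's theorem
```

No such identity holds for the sum of absolute values, which is one concrete way the squared measure "opens up more relationships".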
Tim Wescott wrote:
> [quoted text snipped]
You don't hear too many DSP engineers talking about the active, reactive and apparent power in their signals, do you? :-) Steve
SysSpider wrote:
> [original post quoted in full; snipped]
... There's a misconception in your formulation. The reason for squaring is not to make the numbers positive. That it works out that way could be thought of as one of the beautiful marvels of nature.

You recognized that energy is the integral of power over time. Ignoring the duration and concentrating on the power can be instructive. The power in an electric circuit is the product of voltage and current: p = v*i. In a linear resistive circuit, current is proportional to voltage and voltage to current: i = v/R and v = i*R. Making a substitution, p = i^2*R and p = v^2/R.

When signals are represented by numbers, the notions of power and energy are merely a sort of mental placeholder without real significance. That allows us the simplifying convention that R = 1. It also endows our math with all the results that apply to the kind of live circuits that can zap you if you grab the wrong wire.

Jerry
--
Engineering is the art of making what you want from things you can get.
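Jerry's substitution can be spelled out with numbers. The resistance and voltage below are arbitrary illustrative values:

```python
# Ohm's-law sketch of the substitution: p = v*i and i = v/R imply p = v^2/R,
# and likewise v = i*R implies p = i^2*R.
R = 50.0   # ohms (hypothetical load)
v = 10.0   # volts

i = v / R              # current from Ohm's law: 0.2 A
p_from_vi = v * i      # p = v*i
p_from_v2 = v * v / R  # p = v^2 / R
p_from_i2 = i * i * R  # p = i^2 * R

print(p_from_vi, p_from_v2, p_from_i2)  # all three agree (up to rounding): 2 W
```

With the R = 1 convention the current drops out entirely and "power" becomes just v^2, which is the quantity the energy integral accumulates.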
Jerry,

Jerry Avins wrote:

> When signals are represented by numbers, the notions of power and energy
> are merely a sort of mental placeholder without real significance. That
> allows us the simplifying convention that R = 1. [snip]
My understanding is that in most modern cases the input/output resistance is usually high enough, and the currents low enough, precisely to save overall power consumption and keep voltage levels as "pure information", minimizing electrical "side effects". Therefore I do not think it is intuitive to justify using "power" (squaring) by resorting to this physics alone. The topic reminded me of the trouble I once had talking to a "non-math-type" colleague -- the issue was precisely "why would one want to use squaring instead of just taking the absolute value".
From what I understand, the L2 norm is "better" for many analytical purposes (again, Parseval's theorem, continuity/differentiability, etc.), and as a result it is better suited to many practical methods and algorithms, even when the algorithm runs entirely inside a CPU, without any "current" or "heat" at all. This is especially true once one starts to consider analytic signals, complex numbers, etc.

But frankly, I have yet to hear an "intuitive explanation" for a "non-math-type" person (one who believes his voltmeter, which does measure real voltages, but does not see any "analytic signals") of why he should square the voltage of the signal he perceives through some high-ohm ADC input in order to characterize the signal... (and for many purposes he shouldn't, the amplitude being perfectly adequate). Actually, I would be delighted to see such an "intuitive" explanation...

Regards,
Dmitry.
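One partial answer to Dmitry's request: the squared measure is the one with clean calculus behind it. The constant minimizing a sum of squared deviations has a closed form (the mean, found by setting a derivative to zero), while the sum of absolute deviations has a kink at every data point, so no derivative argument works and its minimizer is the median instead. A small sketch, with arbitrary made-up data:

```python
# d/dc sum (x - c)^2 = -2 * sum (x - c) = 0  =>  c = mean (closed form).
# sum |x - c| is not differentiable at the data points; its minimizer is the median.
data = [1.0, 2.0, 2.0, 3.0, 10.0]

mean = sum(data) / len(data)            # 3.6
median = sorted(data)[len(data) // 2]   # 2.0

def sq_loss(c):
    return sum((x - c) ** 2 for x in data)

def abs_loss(c):
    return sum(abs(x - c) for x in data)

# Brute-force scan of candidate centers on a fine grid:
candidates = [i / 100 for i in range(0, 1101)]
best_sq = min(candidates, key=sq_loss)
best_abs = min(candidates, key=abs_loss)

print(best_sq, mean)     # the squared loss is minimized at the mean
print(best_abs, median)  # the absolute loss is minimized at the median
```

That the two criteria pick different "centers" (and only one of them is reachable by simple differentiation) is one concrete, voltmeter-free reason the squared quantity dominates analytical work.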