# Complex versus real numbers

Thread started August 25, 2009
```Chris Bore <chris.bore@gmail.com> wrote:
< I repeatedly come across objections to using complex numbers for DSP.
< I wonder if there is a good way that I can explain without struggling?

< The current case concerns ultrasound, where the measurement is of
< pressure (on a transducer). My processing uses complex numbers, which
< naturally arise when I demodulate the signal. The question I am asked
< is, why are complex numbers necessary when the signal itself seems to
< be real-valued?

It took me a while to understand, or at least believe that I do...

< My answer (one of them) is that the signal is modelled using complex
< exponentials (Fourier analysis) and so this is the natural way to
< handle those quantities. But often, a measurement of
< 'amplitude' (instantaneous, real-valued, pressure) is desired, and
< this seems to be a real-valued quantity. My argument here is that this
< is modelled as the sum of two complex exponentials, contra-rotating
< (one with +ve and one with -ve frequency), to produce a resultant that
< happens to have zero imaginary part.

It is the counter-rotating part that is important.  More on that below.

< So far so clear. But we can also derive Fourier transforms that are
< based on sums of sine and cosine functions - each of which seems to be
< real-valued quantities, and so this bypasses the complex
< implementation.

Wrong argument.  The Fourier sine and cosine transforms are
fundamentally different.  If you want a pure real transform,
there is the Hartley transform.  The fundamental difference
is in the boundary conditions used.

< My argument here is that this is the same as complex
< numbers, only in a different form where the complex arithmetic is done
< explicitly by the way the sine and cosine terms are added and
< multiplied, and where amplitude/phase replace real/imaginary in a
< different arithmetic. But my current disputee convincingly suggests
< that the complex representation is therefore unnecessary.

< Two questions:

< 1) am I correct in saying that the sine+cosine transform simply
< duplicates complex arithmetic by inventing a sort of phase/amplitude
< arithmetic?

The sine and cosine transforms are different.  The important thing
is that an exponential has a phase/amplitude and direction.
(An analogy to standing waves will be important here.)

< 2) is there a simple and convincing argument to explain this if
< it is the case?

When working with waves, there are always two important quantities
but often we measure only one.  Current and voltage in electronics,
pressure and velocity in acoustics.  The complex value conveniently
keeps two quantities together.  If you measure the voltage as a
function of time on a wire, or pressure in an air column, you may
find it sinusoidal, but you don't know which way (or both ways) the
wave is moving.  For that you need to know the current or air
velocity.

If you want to describe propagating waves without complex numbers,
you need to keep two unrelated quantities.  With the assumption
of uniform media and discontinuous boundaries it is easy to keep
one complex value for each wave.  (Two for waves going in
both directions, possibly with different amplitude.)

For non-uniform media, it might be that the best way is to
keep both quantities and not use complex numbers.

OK, now for a story that this reminds me of.  In a lecture
demonstration in my undergrad physics class, the lecturer was
showing the analogy between waves in a transmission line and
acoustic waves in an air column.  At the end, the answer came
out wrong.  It seems that an open end cable is analogous to a
closed end air column, but that wasn't good enough.  For the
next lecture, the same demonstration equipment was out, but this
time with a current probe on the oscilloscope.  It seems that it
is easier to measure voltage than current, and easier to measure
air pressure than air velocity.  The wave equations are symmetric
between voltage/current and pressure/velocity but people are not.

< One further issue. If I am interested in 'amplitude', then for example
< if I average two numbers of equal amplitude and opposite phase, then
< if I use a complex representation the 'average' amplitude is zero,
< whereas if I average only their amplitudes then the average is equal
< to either of the original amplitudes. Clearly the second case is not
< the same measure as the first, but is there an easy explanation as to
< why and what it measures?

You might look at the explanations of SWR, and standing waves,
in radio transmission.  That might be the most common case where
two waves of different amplitude going in opposite directions
are being measured.  I believe, though, that it is an important
part of ultrasound, too.

The Fourier sine and cosine transforms are for the case where
you have equal amplitude waves going in each direction.  That
is, the boundary conditions are amplitude is zero at the end
(sine) or derivative is zero at the end (cosine).  The Fourier
exponential transform, along with the Hartley transform, have
periodic boundary conditions.  That is, the signal and its
derivative have the same value at each end.

-- glen
```
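Chris's counter-rotating picture, and glen's standing-wave analogy, are easy to check numerically. Here is a minimal NumPy sketch (an illustrative aside, not part of the original posts):

```python
import numpy as np

# A real cosine is the resultant of two counter-rotating phasors:
#   cos(w*t) = (exp(+j*w*t) + exp(-j*w*t)) / 2
w = 2 * np.pi * 5.0                 # angular frequency of a 5 Hz tone
t = np.linspace(0.0, 1.0, 1000)

pos = 0.5 * np.exp(+1j * w * t)     # positive-frequency exponential
neg = 0.5 * np.exp(-1j * w * t)     # negative-frequency exponential
resultant = pos + neg

# The imaginary parts cancel exactly; the real parts add up to cos(w*t).
assert np.allclose(resultant.imag, 0.0)
assert np.allclose(resultant.real, np.cos(w * t))

# Two equal-amplitude waves travelling in opposite directions superpose
# into a standing wave: cos(k*x - w*t) + cos(k*x + w*t) = 2*cos(k*x)*cos(w*t)
k = 2 * np.pi
x = np.linspace(0.0, 1.0, 500)
t0 = 0.123
standing = np.cos(k * x - w * t0) + np.cos(k * x + w * t0)
assert np.allclose(standing, 2 * np.cos(k * x) * np.cos(w * t0))
```

The last assertion is the standing-wave case glen describes: equal amplitudes in both directions, nodes fixed in space.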
```glen herrmannsfeldt wrote:
> Chris Bore <chris.bore@gmail.com> wrote:

...

> < So far so clear. But we can also derive Fourier transforms that are
> < based on sums of sine and cosine functions - each of which seems to be
> < real-valued quantities, and so this bypasses the complex
> < implementation.
>
> Wrong argument.  The Fourier sine and cosine transforms are
> fundamentally different.  If you want a pure real transform,
> there is the Hartley transform.  The fundamental difference
> is in the boundary conditions used.

I think this is a misapprehension on your part. I believe that Chris
meant a single transform given in terms of sines and cosines
(rectangular coordinates) rather than complex exponentials (polar
coordinates).

...

Jerry
--
Engineering is the art of making what you want from things you can get.
```
```Jerry Avins <jya@ieee.org> wrote:
< glen herrmannsfeldt wrote:
<> Chris Bore <chris.bore@gmail.com> wrote:

<   ...

<> < So far so clear. But we can also derive Fourier transforms that are
<> < based on sums of sine and cosine functions - each of which seems to be
<> < real-valued quantities, and so this bypasses the complex
<> < implementation.

<> Wrong argument.  The Fourier sine and cosine transforms are
<> fundamentally different.  If you want a pure real transform,
<> there is the Hartley transform.  The fundamental difference
<> is in the boundary conditions used.

< I think this is a misapprehension on your part. I believe that Chris
< meant a single transform given in terms of sines and cosines
< (rectangular coordinates) rather than complex exponentials (polar
< coordinates).

Maybe.  It took me a while (until the explanation in Numerical
Recipes) to understand the difference.  Mentioning 'sine' and
'transform' in the same sentence seems too close to a Fourier
sine transform for me, though.

Hopefully the rest of the explanation helps.

-- glen
```
```"Jerry Avins" <jya@ieee.org> wrote in message
>
> ..... To calculate sqrt(a^2+b^2) on a slide rule, experienced users
> calculate a/sin(atan(a/b)).

Wow!

No matter how much maths ("math" to the Yanks) you have under
your belt, there's always yet another fascinating insight lurking just
around the corner!

```
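The slide-rule trick works because with theta = atan(a/b) we have sin(theta) = a/sqrt(a^2 + b^2), so a/sin(atan(a/b)) = sqrt(a^2 + b^2). A quick numeric check (an illustrative aside, assuming nonzero a and b):

```python
import math

def hypot_sliderule(a, b):
    """Slide-rule hypotenuse: sqrt(a^2 + b^2) computed as a / sin(atan(a / b)).

    Assumes a != 0 and b != 0; on a slide rule the degenerate
    cases would be handled by inspection.
    """
    return a / math.sin(math.atan(a / b))

# Pythagorean triples make the check easy to eyeball.
assert math.isclose(hypot_sliderule(3.0, 4.0), 5.0)
assert math.isclose(hypot_sliderule(5.0, 12.0), 13.0)
assert math.isclose(hypot_sliderule(8.0, 15.0), 17.0)
```

On a slide rule this needed only the S and T scales plus one division, which is why experienced users preferred it to squaring, adding, and taking a square root.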
```On Tue, 25 Aug 2009 05:28:00 -0700 (PDT), Chris Bore
<chris.bore@gmail.com> wrote:

[Snipped by Lyons]

Hi Chris,
If I find myself trying to explain why complex
numbers are so often used in DSP, I generally say
something like:

* "Out of convenience.  Euler's equations allow
us to represent a real signal as the sum of
positive- and negative-frequency complex exponentials
if it suits our purpose (if it's useful for some reason
to do so).  And when thinking about a complex exponential,
Euler allows us to view it as a combination
(orthogonal, that is) of two real functions (a special
combination of a real sine and a real cosine.)
At that point I usually mumble something about Euler's
equations being a kind of "Rosetta Stone" that allows
us to translate back and forth between real and complex
representations--whichever suits our fancy.

* Complex (quadrature) signal representation is wildly
useful in accurately measuring the instantaneous
amplitude, phase, or frequency of a signal. (For amplitude,
phase, and frequency demodulation.)

* When we perform spectrum analysis in DSP, our
spectral results are in the form of complex numbers.
That's because the DFT computes both spectral magnitudes
***and*** the relative phase between spectral components.

* For pencil & paper analysis, complex (quadrature) signal
representation is often more convenient.  If you ask me
what is the form of the product of two sinusoids, I always
have to scramble around to find a trig identity from some
math book.  It's easy for me to determine the product
of two complex exponentials (it's merely the sum of
exponents).

* I usually end my explanation with, "We use complex
numbers because that's the way God wants it to be."

>One further issue. If I am interested in 'amplitude', then for example
>if I average two numbers of equal amplitude and opposite phase, then
>if I use a complex representation the 'average' amplitude is zero,
>whereas if I average only their amplitudes then the average is equal
>to either of the original amplitudes. Clearly the second case is not
>the same measure as the first, but is there an easy explanation as to
>why and what it measures?

Your last topic points out an issue that may cause us
problems in our discussion--semantics.  (Jerry Avins
alluded to this.)

I've always thought that "amplitude" meant the difference
between a real number and zero.  And an "amplitude" value
can be a positive or a negative real-only quantity.
And I've always thought that the word "magnitude"
meant a positive value only.  For example, a single
complex number can be described by a real "magnitude"
value and a real phase value.

Your phrase "two numbers of equal amplitude and opposite
phase" is thought provoking.  If the numbers each have an amplitude
and a phase, then the two numbers *must* be complex. And it
seems to me that averaging two complex numbers only has
meaning if we average the numbers' real and imaginary
parts separately.  This topic seems to be closely related
to the material in a blog, called "The Nature of Circles",
by Peter Kootsookos (our own Dr. K) at:

http://www.dsprelated.com/showarticle/57.php

Chris, you might take a look at that blog.

See Ya,
[-Rick-]
```
On Aug 25, 8:28 am, Chris Bore <chris.b...@gmail.com> wrote:
> I repeatedly come across objections to using complex numbers for DSP.
> I wonder if there is a good way that I can explain without struggling?
>
> The current case concerns ultrasound, where the measurement is of
> pressure (on a transducer). My processing uses complex numbers, which
> naturally arise when I demodulate the signal. The question I am asked
> is, why are complex numbers necessary when the signal itself seems to
> be real-valued?
>
> My answer (one of them) is that the signal is modelled using complex
> exponentials (Fourier analysis) and so this is the natural way to
> handle those quantities. But often, a measurement of
> 'amplitude' (instantaneous, real-valued, pressure) is desired, and
> this seems to be a real-valued quantity. My argument here is that this
> is modelled as the sum of two complex exponentials, contra-rotating
> (one with +ve and one with -ve frequency), to produce a resultant that
> happens to have zero imaginary part.
>
> So far so clear. But we can also derive Fourier transforms that are
> based on sums of sine and cosine functions - each of which seems to be
> real-valued quantities, and so this bypasses the complex
> implementation. My argument here is that this is the same as complex
> numbers, only in a different form where the complex arithmetic is done
> explicitly by the way the sine and cosine terms are added and
> multiplied, and where amplitude/phase replace real/imaginary in a
> different arithmetic. But my current disputee convincingly suggests
> that the complex representation is therefore unnecessary.
>
> Two questions:
>
> 1) am I correct in saying that the sine+cosine transform simply
> duplicates complex arithmetic by inventing a sort of phase/amplitude
> arithmetic?
>
> 2) is there a simple and convincing argument to explain this if it is
> the case?
>
> One further issue. If I am interested in 'amplitude', then for example
> if I average two numbers of equal amplitude and opposite phase, then
> if I use a complex representation the 'average' amplitude is zero,
> whereas if I average only their amplitudes then the average is equal
> to either of the original amplitudes. Clearly the second case is not
> the same measure as the first, but is there an easy explanation as to
> why and what it measures?
>
> Thanks,
>
> Chris
> =====================
> Chris Bore
> BORES Signal Processing, www.bores.com

Chris,

The blunt and honest answer for your client (although probably one you
can't give him in person) is that just because he doesn't understand
something, doesn't mean he should fear it. This is possibly a
psychological argument, not a technical one.  Will he be upset to
discover that the electricity he uses to toast his bread in the
morning came out of a nuclear power plant, for which the theoretical
physicist used complex values to model the reactions?  This is the
same kind of reaction I had from an old boss for whom the digital
electronics concept of "state machine" was the devil's work... too
complex for him to understand, and he got worried every time I
described a solution in terms of one. "Couldn't you just use gates and
flip-flops instead?", he suggested...  I kid you not.

I look at your sine+cosine versus complex number problem through a
more abstract lens.  If you get enough *raw information* about a
problem, and there's a solution, you can get the solution from your
information. But once you fold, merge, lose, destroy enough of your
raw information, you can't get the right solution any more.  In very
general terms, it's like having enough dimensions, or degrees of
freedom, or sampling rate.  Below a certain point, you can't solve the
problem anymore.  It's like thinking you have a very large random
matrix (lots of entropy, lots of information to solve your problem)
but then discovering it really only has rank 2.

In universal algebra, this notion is captured by the idea of a
universal arrow (function) through which you can factor any other
arrow. The kernel of your universal arrow function must refine the
kernel of your original function in order for you to be able to factor
through it. This kernel refinement captures the level of "raw
information" needed. Not the best description but it's how I think of
it.

Using sine and cos transforms separately, you're adding the same
number of degrees of freedom as you would generate using the
complex-valued transform.  And because they're orthogonal, you are sure they
capture all the information in the complex output. If they didn't, it
would be like a matrix of lower rank (as an analogy). You can think of
these two representations as isomorphic in the sense that, by defining
enough rules about recombining the separated results, you can still
get your original solution. But they're certainly not *equal* - as has
been pointed out, the root of x^2+1=0 is certainly not any real
combination of sines and cosines - it's a very unique mathematical
object which *is*, in its heart and soul,  a complex number.

You're not really "duplicating" complex arithmetic so much as coming
up with a "socially acceptable" form which can be translated to and
from the complex numbers that you *really* use to solve your problem.
But because it's "socially acceptable", it prevents the peasants from
rising up and burning you at the stake.

Just my 2 cents.

- Kenn

```
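Kenn's degrees-of-freedom argument has a concrete face: the (cosine, sine) coefficient pair at each frequency is exactly the complex DFT bin written in rectangular form, so neither representation loses information. A minimal check (illustrative only, not part of the original post):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
x = rng.standard_normal(N)     # an arbitrary real-valued test signal
n = np.arange(N)

X = np.fft.fft(x)              # complex route: one complex number per bin

# Real route: explicit cosine and sine correlations at each frequency.
for k in range(N // 2 + 1):
    c = np.sum(x * np.cos(2 * np.pi * k * n / N))   # cosine coefficient
    s = np.sum(x * np.sin(2 * np.pi * k * n / N))   # sine coefficient
    # The (cosine, sine) pair is the complex bin in rectangular form:
    #   X[k] = c - j*s
    assert np.isclose(c, X[k].real)
    assert np.isclose(-s, X[k].imag)
```

The two routes are isomorphic in exactly Kenn's sense: given the recombination rule `X[k] = c - j*s`, either representation can be recovered from the other.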
```On 26 Aug, 14:59, sleeman <kennheinr...@sympatico.ca> wrote:

> The blunt and honest answer for your client (although probably one you
> can't give him in person) is that just because he doesn't understand
> something, doesn't mean he should fear it. This is possibly a
> psychological argument, not a technical one.
...
> ... got worried every time I
> described a solution in terms of one. "Couldn't you just use gates and
> flip-flops instead?", he suggested...  I kid you not.

This was the exact reason why my improved passive sonar
never caught on: The people who needed to know were too
hung up in familiar but irrelevant semantics and terminology.

Everybody who had tried to solve the same problem before
me had been limited by the (artificial) limitations by the
semantics they used. So when I disregarded the 'tradition'
I found the solution very quickly (as I recall, in a couple
of afternoons, in between other work), but since everybody
else was unable to communicate the subject without resorting
to traditional semantics, no one was able to understand it.

Of course, my solution could not be expressed in the 'old'
semantics that had been used before me.

> You're not really "duplicating" complex arithmetic so much as coming
> up with a "socially acceptable" form which can be translated to and
> from the complex numbers that you *really* use to solve your problem.
> But because it's "socially acceptable", it prevents the peasants from
> rising up and burning you at the stake.

This is an important point: People who are unaware of the
role of semantics will instinctively react to the effect that
"so you think I am stupid?!" if you point out the details to
them - as has been amply demonstrated in a different thread
here over the past few days.

Do not underestimate the effects of the psychological blow
it is to find out that one needs to learn very basic stuff.
In my experience this happens especially with people who are
in way over their heads, and who somehow sense - but do not
understand! - that they are. People who understand that they
are in deep, ask for *help*, not for extra manhours to get the
job done, and appreciate the help they get. Such people also
tend to be keen on learning from you along the way.

Just be very cautious and play your cards carefully. When it
comes to people, always expect the worst.

Rune
```
```On Aug 26, 2:59 pm, sleeman <kennheinr...@sympatico.ca> wrote:
On Aug 25, 8:28 am, Chris Bore <chris.b...@gmail.com> wrote:

> Using sine and cos transforms separately, you're adding the
> same number of degrees of freedom as you would generate
> using the complex-valued transform. And because they're
> orthogonal, you are sure they capture all the information
> in the complex output. If they didn't, it would be like a
> matrix of lower rank (as an analogy). You can think of
> these two representations as isomorphic in the sense that,
> by defining enough rules about recombining the separated
> results, you can still get your original solution.

Right, but the sine/cosine basis is not just some arbitrary
basis related to the complex exponential basis by an
arbitrary unitary transformation. The sine and cosine
functions of a given frequency span the same 2d subspace
(with real coefficients) as the complex exponential of that
frequency and its negative (with conjugate coefficients).
These subspaces are the smallest subspaces preserved by
translations and reflections (time reversal).

illywhacker;
```
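illywhacker's subspace claim is directly checkable: sine and cosine of a given frequency are specific linear combinations of exp(+jwt) and exp(-jwt), the two pairs span the same 2-d subspace, and time reversal maps the subspace to itself. A sketch (illustrative aside, not from the post):

```python
import numpy as np

w = 2 * np.pi * 3.0
t = np.linspace(0.0, 1.0, 200, endpoint=False)

# cos and sin are specific linear combinations of exp(+jwt) and exp(-jwt):
assert np.allclose(np.cos(w * t),
                   0.5 * np.exp(1j * w * t) + 0.5 * np.exp(-1j * w * t))
assert np.allclose(np.sin(w * t),
                   (np.exp(1j * w * t) - np.exp(-1j * w * t)) / 2j)

# Conversely, each exponential is a combination of cos and sin (Euler),
# so the two pairs span the same 2-d subspace of functions:
assert np.allclose(np.exp(1j * w * t), np.cos(w * t) + 1j * np.sin(w * t))

# Time reversal maps exp(+jwt) to exp(-jwt): the subspace is preserved.
assert np.allclose(np.exp(1j * w * (-t)), np.exp(-1j * w * t))
```

What the check cannot show, and what makes sine and cosine special within that subspace, is illywhacker's point: they are the real-valued basis, so real signals get real coefficients.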
On Aug 25, 2:28 pm, Chris Bore <chris.b...@gmail.com> wrote:
> Two questions:
>
> 1) am I correct in saying that the sine+cosine transform simply
> duplicates complex arithmetic by inventing a sort of phase/amplitude
> arithmetic?

Sort of. Sine and cosine functions are linear combinations of a
complex exponential of the same frequency and its complex conjugate.
They therefore span the same set of functions. Two things need to be
explained: what is so special about complex exponentials paired with
their complex conjugates, and why are these particular linear
combinations important? See below.

The real importance of complex exponentials is that they are
preserved, up to a factor, by translations. Indeed, they are the only
such functions that are also bounded. So if you want to have  a
representation of a function/signal that behaves simply under
translations, then you are pretty much forced to use Fourier
transforms.

If you now also allow reflections (i.e. time reversal in the 1d case),
then starting from a complex exponential, you generate its complex
conjugate. These two functions, as frequency changes, give a set of 2d
subspaces that are the smallest subspaces preserved by translations
and reflections. But for each frequency, any two linearly independent
linear combinations of these two will do just as well in spanning the
subspace. What makes sine and cosine special is that they are real.
Thus, for real signals, one can ensure real coefficients.

> 2) is there a simple and convincing argument to explain this if it is
> the case?

No. It is highly unlikely that you can explain in a simple way (i.e.
without giving a lecture course) why complex numbers are useful, to
someone who does not realize why complex numbers are useful.

On the other hand, I suppose that if they understand calculus, you
could show them what I just stated. Take a translation by a distance
t; which functions are preserved by this for all t?

Start with a function f. Translate it by t, i.e. f(x + t) is the
translated function evaluated at x. Demand that

f(x + t) = a(t) f(x)    (*)

for some function a(t), i.e. the function is preserved by
translations. Now it may be obvious that f has to be an exponential at
this point, but if not, differentiate with respect to t and set t = 0:

f'(x) = a'(0) f(x) .

This is a differential equation whose solution is:

f(x) = B exp(a'(0) x)

for any B.  (The original equation (*) is stronger than the
differential equation; in particular, it rules out adding any
constant offset.)  For f to be bounded, a'(0) must be imaginary,
giving you complex exponentials.  Normalization fixes B.

illywhacker;

```
```Rick Lyons <R.Lyons@_bogus_ieee.org> wrote:
(snipped even more)

< * "Out of convenience.  Euler's equations allow
< us to represent a real signal as the sum of
< positive- and negative-frequency complex exponentials
< if it suits our purpose (if it's useful for some reason
< to do so).  And when thinking about a complex exponential,
< Euler allows us to view it as a combination
< (orthogonal, that is) of two real functions (a special
< combination of a real sine and a real cosine.)
< At that point I usually mumble something about Euler's
< equations being a kind of "Rosetta Stone" that allows
< us to translate back and forth between real and complex
< representations--whichever suits our fancy.

I wouldn't mind if you read and commented on my previous post.

< * Complex (quadrature) signal representation is wildly
< useful in accurately measuring the instantaneous
< amplitude, phase, or frequency of a signal. (For amplitude,
< phase, and frequency demodulation.)

< * When we perform spectrum analysis in DSP, our
< spectral results are in the form of complex numbers.
< That's because the DFT computes both spectral magnitudes
< ***and*** the relative phase between spectral components.

Yes.  Maybe even more emphasis on this one.  Where waves
are concerned, there are usually two physical quantities
involved, such as voltage and current.  We usually only
measure one, but both are important, as is the relative
phase of the two.

< * For pencil & paper analysis, complex (quadrature) signal
< representation is often more convenient.  If you ask me
< what is the form of the product of two sinusoids, I always
< have to scramble around to find a trig identity from some
< math book.  It's easy for me to determine the product
< of two complex exponentials (it's merely the sum of
< exponents).

That might be enough, but I believe in wave problems
there is more.

< * I usually end my explanation with, "We use complex
< numbers because that's the way God wants it to be."

This comes out especially in quantum mechanics.  Is the
wave function really complex, or is it just easier that way?
In EE, one usually wants the voltage to be real.  That isn't
so obvious in QM.

(snip)

< I've always thought that "amplitude" meant the difference
< between a real number and zero.  And an "amplitude" value
< can be a positive or a negative real-only quantity.
< And I've always thought that the word "magnitude"
< meant a positive value only.  For example, a single
< complex number can be described by a real "magnitude"
< value and a real phase value.

I believe that, at least as used in physics, amplitude
includes phase.  That may be for lack of enough words.
There are descriptions like:

"For coherent signals, add the amplitude, for incoherent

Leaving out the question of partial coherence, phase must
be considered in the "add amplitude" statement.

< Your phrase "two numbers of equal amplitude and opposite
< phase" is thought provoking.  If the numbers each have an amplitude
< and a phase, then the two numbers *must* be complex. And it

I would say that they also could be sine or cosine with
a phase term, but it is much easier if complex.

< seems to me that averaging two complex numbers only has
< meaning if we average the numbers' real and imaginary
< parts separately.  This topic seems to be closely related
< to the material in a blog, called "The Nature of Circles",
< by Peter Kootsookos (our own Dr. K) at:

-- glen
```
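Chris's averaging puzzle, and the coherent/incoherent distinction glen and Rick circle around, fits in a few lines: averaging the complex values lets opposite phases cancel, while averaging magnitudes (or powers) discards phase and so cannot cancel. A closing sketch (illustrative only, not part of the thread):

```python
import numpy as np

# Two signals with equal magnitude (2.0) and opposite phase:
z1 = 2.0 * np.exp(1j * 0.4)
z2 = 2.0 * np.exp(1j * (0.4 + np.pi))

# Coherent average: average the complex values (real and imaginary parts).
coherent = (z1 + z2) / 2
assert np.isclose(abs(coherent), 0.0)          # complete cancellation

# Incoherent average: average the magnitudes, discarding phase.
incoherent = (abs(z1) + abs(z2)) / 2
assert np.isclose(incoherent, 2.0)             # each magnitude survives

# Averaging powers |z|^2 (intensities), as for incoherent signals:
power_avg = (abs(z1) ** 2 + abs(z2) ** 2) / 2
assert np.isclose(power_avg, 4.0)
```

The two averages are simply different measurements: the coherent one is the amplitude of the superposed field (zero here, as at a standing-wave node), while the incoherent one estimates the magnitude (or power) of each component regardless of how they interfere.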