
Calibrating FFT results, amplitude in to magnitude out

Started by Brian Willoughby March 25, 2011
On Monday, March 28, 2011 3:08:26 AM UTC-4, Brian Willoughby wrote:

  ...

> That said, if I make an attempt to follow what you wrote above, I find
> myself asking why you translate a sinusoid into both e^(jwn) and
> e^(-jwn) ... wouldn't that put double the signal in? Do you really need
> both the negative and positive frequencies for a real-valued sinusoid
> input? Why wouldn't a pure real sinusoid simply translate to the
> positive frequency only?
Brian,

Welcome to the club. Everybody will jump all over me if I write that imaginary numbers don't exist in the real world and that we use them only for convenience when calculating. More than convenient, they reduce the amount of ink needed to express a complicated thing to a comprehensible brevity. Aside: There are no negative frequencies either, but that's another issue. You may choose to see e^(-jwt) as involving a negative frequency, but it could just as well involve negative time or negative j. Save your sanity and think of it as e^-(jwt).

To answer your question, you need the -(jwt) to balance the (jwt) if you want it to represent a sinusoid, just as you have to sew fingers on a mitten if you want to use it as a glove. e^jwt is, after all, not a real sinusoid, but a complex exponential. Needing to use complex exponentials in complementary pairs is the price we pay for avoiding trig functions inside our calculations. Euler showed us that e^jwt = cos(wt) + j*sin(wt). Even before that, we knew that cos(-wt) = cos(wt) and that sin(-wt) = -sin(wt). Put those ideas together, and it's easy to see that the sum or difference of e^jwt terms can represent a real sinusoid, but an e^jwt can't stand alone. Those who call e^jwt a sinusoid befuddle our minds (or are themselves befuddled, or both).
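A quick numerical check of that pairing, sketched in Python/NumPy (the frequency and sample points here are arbitrary, illustrative choices): a real sin(wt) falls out only when the e^(jwt) and e^(-jwt) terms are combined.

    import numpy as np

    w = 2*np.pi*3.0                          # arbitrary angular frequency, for illustration
    t = np.linspace(0.0, 1.0, 16, endpoint=False)

    # Euler: the complementary pair (e^jwt - e^-jwt)/(2j) is a real sinusoid
    pair = (np.exp(1j*w*t) - np.exp(-1j*w*t)) / 2j
    print(np.allclose(pair.imag, 0.0))            # True: the pair is purely real
    print(np.allclose(pair.real, np.sin(w*t)))    # True: and it is sin(wt)

    # e^jwt alone is not a sinusoid; its imaginary part is not identically zero
    print(np.allclose(np.exp(1j*w*t).imag, 0.0))  # False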
> I cracked open Steiglitz' DSP Primer and the
> first example I came across translated sin(kwt) into e^(jkwt), without
> the negative frequency term. To be precise, he translates sin(kπt/T)
> with period of 2T into e^(jk2πt/T) with period of T, which is a little
> confusing in itself because of the change in period, but there you have it.
When was that written? It was common in the 50s to ignore the e^-jwt and to fix things later with RE{} and IM{}. That puts a factor of 2 into the magnitude, but period??? Are you sure? Maybe a typo?

... Jerry

--
Engineering is the art of making what you want from things you can get.
On Mar 28, 3:08 am, Brian Willoughby
<Sound_Consulting-...@Sounds.wa.com> wrote:
> On 2011/03/27 20:21, robert bristow-johnson wrote:
>
> > the way i dealt with this when i was in college and using the computer
> > to do FFTs and such on data that (in simulation) came from a known
> > source, was to compare the Discrete Fourier Transform (however it's
> > defined, whether it has a 1, a 1/N, or a sqrt(1/N) in front) to a
> > Riemann Summation of the Fourier Integral.
>
> > the other way i can think of is to simply define a unit sinusoid, bust
> > it into a positive frequency e^(jwn) and a negative frequency
> > component, e^(-jwn), and stuff that into the DFT and see what comes
> > out. if w = 2*pi*k/N for integer k, it will come out real clean and
> > you will know how large the number in the FFT bin is for a unit-
> > amplitude sine.
>
> Thanks for the response.
>
> I hope I don't seem lazy, but I was hoping to just get the answer. ;-)
> I don't mind also knowing how to independently derive said answer.
>
> In college, I remember (thinking) that I understood all of those
> formulae with 'e' and found them to be a beautiful expression of the
> symmetry of nature. These days, I have forgotten quite a bit of that,
> assuming I really ever had it right. Now I tend to only understand the
> direct and practical application rather than the theoretical proofs.
> Actually, I've never been able to just look at a formula like e^(j2πf/T)
> and 'know' what its integral would be.
Don't worry, nobody can, not even Euler or Gauss. You can derive it, but not intuitively do it. (Or one can pretend by talking in axiomatic circles, I suppose.)

Anyway, given a sine wave with a peak value of 1 volt, a 4-point DFT of the sine wave at bin 1 is:

  sin(0)*sin(0) + sin(pi/2)*sin(pi/2) + sin(pi)*sin(pi) + sin(3pi/2)*sin(3pi/2)
    = 0 + 1 + 0 + 1 = 2

  sin(0)*cos(0) + sin(pi/2)*cos(pi/2) + sin(pi)*cos(pi) + sin(3pi/2)*cos(3pi/2)
    = 0 + 0 + 0 + 0 = 0

  bin magnitude = sqrt(2^2 + 0^2) = 2

So you get 2 for the bin peak voltage result, which is an N/2 = 4/2 = 2 gain.
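The same arithmetic transcribed into a small NumPy sketch, so the N/2 gain can be checked against a library FFT (NumPy's forward transform uses the unscaled convention):

    import numpy as np

    N = 4
    n = np.arange(N)
    x = np.sin(2*np.pi*n/N)               # one cycle of a 1 V peak sine: [0, 1, 0, -1]

    # correlate against the bin-1 sine and cosine, as in the sums above
    re = np.sum(x*np.sin(2*np.pi*n/N))    # 0 + 1 + 0 + 1 = 2
    im = np.sum(x*np.cos(2*np.pi*n/N))    # 0 + 0 + 0 + 0 = 0
    print(np.hypot(re, im))               # 2.0, i.e. an N/2 gain on the 1 V peak

    # an unscaled FFT agrees
    print(abs(np.fft.fft(x)[1]))          # 2.0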
Jerry Avins <jya@ieee.org> wrote:
> On Monday, March 28, 2011 3:08:26 AM UTC-4, Brian Willoughby wrote:
(snip)
>> Why wouldn't a pure real sinusoid simply translate to the
>> positive frequency only?
> Brian, Welcome to the club. Everybody will jump all over me if
> I write that imaginary numbers don't exist in the real world and
> that we use them only for convenience when calculating.
Except for quantities that naturally occur in an exponent, index of refraction for one. Ellipsometry includes measuring the real and imaginary parts of the index of refraction (and has a nice wikipedia entry describing it.)
> More than convenient, they reduce the amount of ink needed to
> express a complicated thing to a comprehensible brevity.

> Aside: There are no negative frequencies either, but that's
> another issue. You may choose to see e^(-jwt) as involving a
> negative frequency, but it could just as well involve negative
> time or negative j. Save your sanity and think of it as e^-(jwt).
With sampled data, sometimes negative frequencies seem to make more sense, especially in aliased signals. The most common case is rotating wheels in time sampled video, which appear to be going backwards. When the aliased frequency is small and negative, that works better than describing it as large and positive, especially when you see the result. Otherwise, I agree.
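That wagon-wheel effect in one dimension, as a NumPy sketch (the sample rate, length, and tone are made-up values chosen so the alias lands exactly on a bin): a phasor just below the sample rate shows up as a small negative frequency.

    import numpy as np

    fs, N = 10.0, 40                      # made-up sample rate and record length
    t = np.arange(N)/fs
    x = np.exp(2j*np.pi*9.0*t)            # a 9 Hz complex phasor sampled at 10 Hz

    X = np.fft.fftshift(np.fft.fft(x))
    f = np.fft.fftshift(np.fft.fftfreq(N, d=1/fs))
    print(f[np.argmax(abs(X))])           # -1.0: "small and negative" beats "9 Hz"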
>> I cracked open Steiglitz' DSP Primer and the
>> first example I came across translated sin(kwt) into e^(jkwt), without
>> the negative frequency term. To be precise, he translates sin(kπt/T)
>> with period of 2T into e^(jk2πt/T) with period of T, which is a little
>> confusing in itself because of the change in period, but there you have it.
> When was that written? It was common in the 50s to ignore the e^-jwt
> and to fix things later with RE{} and IM{}. That puts a factor of 2
> into the magnitude, but period??? Are you sure? Maybe a typo?
I don't see how to do that, either.

-- glen
On Monday, March 28, 2011 3:39:02 PM UTC-4, glen herrmannsfeldt wrote:
> Jerry Avins <j...@ieee.org> wrote:
> > On Monday, March 28, 2011 3:08:26 AM UTC-4, Brian Willoughby wrote:
> (snip)
> >> Why wouldn't a pure real sinusoid simply translate to the
> >> positive frequency only?
>
> > Brian, Welcome to the club. Everybody will jump all over me if
> > I write that imaginary numbers don't exist in the real world and
> > that we use them only for convenience when calculating.
>
> Except for quantities that naturally occur in an exponent,
> index of refraction for one. Ellipsometry includes measuring
> the real and imaginary parts of the index of refraction (and
> has a nice wikipedia entry describing it.)
Complex numbers marvelously encapsulate what is otherwise too complex (there's another of R.B-J.'s peeves) to express compactly, even for refraction indices. We accept without flinching that the real part of a propagation constant represents attenuation. The math is beautiful; that doesn't make it real (there we go again!).
> > More than convenient, they reduce the amount of ink needed to
> > express a complicated thing to a comprehensible brevity.
>
> > Aside: There are no negative frequencies either, but that's
> > another issue. You may choose to see e^(-jwt) as involving a
> > negative frequency, but it could just as well involve negative
> > time or negative j. Save your sanity and think of it as e^-(jwt).
>
> With sampled data, sometimes negative frequencies seem to make
> more sense, especially in aliased signals.
It makes for pretty cartoons; I don't propose anything different.
> The most common case
> is rotating wheels in time sampled video, which appear to be
> going backwards. When the aliased frequency is small and negative,
> that works better than describing it as large and positive,
> especially when you see the result. Otherwise, I agree.
If I walk around and look at the wheel from the other side, does it become a negative wheel? :-) While we're here, is the left bank of a river on my left side when I face upstream, or down? Inquiring minds want to know.

... Jerry

--
Engineering is the art of making what you want from things you can get.
On Mar 25, 9:50 am, Brian Willoughby
<Sound_Consulting-...@Sounds.wa.com> wrote:
> Hello all,
>
> There is one factor that seems to be missing from texts which describe
> the DFT/FFT (or perhaps I have missed it). That is: The correlation
> between time domain signal amplitude and frequency domain bin magnitudes.
>
> On the one hand, many DSP libraries are very meticulous about
> documenting the differences between their FFT or IFFT implementation
> results versus Matlab. For example, one implementation of an N-point
> FFT might require that the results be divided by N to correlate with the
> Matlab results, while another implementation would produce the exact
> results without further calculations. Even though I don't use Matlab,
> these sorts of issues make sense to me as an expected side effect of
> certain code optimizations.
>
> What I haven't found in the general texts is mention of the values I
> should expect if I plug sin() at a constant frequency into a time domain
> array and calculate the FFT. Should I expect one frequency magnitude
> bin to have a value of 1.0 because the sin() function varies between
> -1.0 and +1.0? ... or would the magnitude bin be expected to hold a
> value of 2.0 because the peak-to-peak amplitude of sin() is actually 2.0
> (+1.0 - -1.0)? I am asking specifically about sin() frequencies which
> fall precisely into a frequency bin based on the FFT size. The general
> case would obviously be more complicated due to potential frequency
> smearing.
>
> The reason I ask is that Apple's Accelerate/vecLib Framework seems to be
> returning 2.0 when I carefully construct time domain test signals with
> 'unity' amplitude, and for some reason I was expecting 1.0 ... so what's
> the 'right' answer?
>
> By the way, my test code is taking the additional step of converting the
> complex FFT results from their real and imaginary components into
> magnitude (and phase - which should be optional).
>
> I also have a slightly related question: I assume that if the real and
> imaginary components never exceed +/-1.0, then the magnitude could not
> possibly be greater than 1.4142, but I also have a feeling that the
> nature of the FFT is that the results should probably never produce a
> magnitude greater than 1.0, because the real and imaginary components
> are not completely random - i.e. a given pair of real and imaginary
> components would never both be +1.0 or -1.0 unless something terribly
> unnatural had occurred. Again, assuming that the time-domain signal
> does not exceed +/-1.0 (and I realize that is not always the case with
> uncontrolled inputs).
>
> Brian Willoughby
> Sound Consulting
>
> P.S. I'm sure that I'll happen across one of my texts that does spell
> this out, now that I've gone to the trouble of writing the question.
> But I do note that I've gone looking for a reference on more than one
> occasion and came up with nothing.
There are 3 widely used DFT normalizations (1/N, 1, 1/sqrt(N)). I prefer the 1st because cos(2*pi*f0*t) will yield 2 spikes at +/-f0 of height 0.5. This is the test I use when encountering a new DFT/FFT program. Hope this helps.

Greg
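Both that test and the original question, in one NumPy sketch (NumPy's forward FFT is the unscaled convention, so the 1/N case is applied by hand; N and f0 are arbitrary bin-exact choices):

    import numpy as np

    N, f0 = 32, 4
    n = np.arange(N)

    # Greg's test: with the 1/N convention, a unit cosine gives 0.5 at +/-f0
    X = np.fft.fft(np.cos(2*np.pi*f0*n/N)) / N
    print(abs(X[f0]), abs(X[N-f0]))       # 0.5 0.5

    # Brian's case: unscaled, a unit-amplitude sine gives N/2 in the bin,
    # so scaling by 2/N recovers the peak amplitude
    Y = np.fft.fft(np.sin(2*np.pi*f0*n/N))
    print(abs(Y[f0]))                     # 16.0 == N/2
    print(2*abs(Y[f0])/N)                 # 1.0, the sine's peak amplitude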
On Mon, 28 Mar 2011 14:30:29 -0700 (PDT), Jerry Avins <jya@ieee.org>
wrote:

>If I walk around and look at the wheel from the other side, does it become
>a negative wheel? :-) While we're here, is the left bank of a river on my
>left side when I face upstream, or down? Inquiring minds want to know.
>
> ...
>
>Jerry
Many such conventions are arbitrarily defined, like zero on a time scale, and all that is really required for mathematical success is that one is consistent. Even the sense of rotation of the basis functions in the forward and inverse DFT is arbitrary. As long as they're opposite from each other, all is well. As long as one is consistent about the use, there's no loss of function or generality. Just about any coordinate or reference system can be transformed or translated.

For most purposes, negative quantities are very useful, but opportunities for confusion can't be eliminated. Many use concepts, like negative g's, that could be argued away by the obstinate but are still genuinely useful. How can acceleration be negative? Well, in one sense it can't, but in another, things are a lot simpler if one allows it to be so. Which is reality? Well, that's probably relative, too... ;)

Eric Jacobsen
http://www.ericjacobsen.org
http://www.dsprelated.com/blogs-1//Eric_Jacobsen.php
On 03/28/2011 05:30 PM, Jerry Avins wrote:
> [...]
> The math is beautiful; that doesn't make it real (there we go
> again!).
I will oppose your viewpoint until eternity, Jerry. Given what I've learned in my modern algebra classes, there is something unique about the complex number field that doesn't exist in the real number field, re: the Fundamental Theorem of Algebra.

I think the main reason we've been arguing all these years is that neither of us has sufficiently understood the point of the other's argument. I do acknowledge that, computationally, the operations in the complex field can be performed using real operations, and I think this is the gist of your argument. What I don't think you see, Jerry, is that the complex numbers, as an arithmetic system, give us properties that the reals don't have.

--
Randy Yates
Digital Signal Labs
919-577-9882
http://www.digitalsignallabs.com
yates@digitalsignallabs.com
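A one-line illustration of that algebraic difference, sketched with NumPy's polynomial root finder: x^2 + 1 has no real root, but the Fundamental Theorem of Algebra guarantees roots in C.

    import numpy as np

    # coefficients of x^2 + 0x + 1, highest degree first
    print(np.roots([1, 0, 1]))            # [ 0.+1.j  0.-1.j ]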
I can provide examples to confirm what you say about negative time or frequency. 

Maybe the most illustrative are the sidebands with AM. If a carrier at wc is modulated with a sinusoid of wm, the carrier remains unchanged and sidebands at wc+wm and wc-wm are generated. Relative to the carrier, the second sideband's frequency is negative. If the carrier is shifted to zero, ....
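A sketch of those sidebands in NumPy (the sample rate, record length, carrier, and modulating tone are made-up values chosen to land on exact bins):

    import numpy as np

    fs, N = 1000.0, 1000                  # made-up sample rate and record length
    t = np.arange(N)/fs
    fc, fm = 100.0, 10.0                  # made-up carrier and modulating frequencies

    x = (1 + 0.5*np.cos(2*np.pi*fm*t)) * np.cos(2*np.pi*fc*t)
    f = np.fft.rfftfreq(N, d=1/fs)
    mag = abs(np.fft.rfft(x))/N
    print(f[mag > 0.01])                  # [ 90. 100. 110.]: carrier plus wc +/- wm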

A projectile is launched from the top of a tower at a specified angle. When will it be at the level ground below? There are two solutions for the time, and one of them is negative. The negative solution can be given real meaning.

Jerry
-- 
Engineering is the art of making what you want from things you can get.
Complex numbers marvelously simplify the math needed to describe real things. The same is true for matrix algebra and vector analysis. Complex numbers are the culmination of a progression of number systems that makes all operations close. Addition is closed for positive integers -- the sum of two positive integers is a positive integer -- but subtraction is not. Introducing negative numbers closes subtraction. Multiplication is closed for integers -- the product of two integers is an integer -- but division is not. Introducing fractions closes division. And so on, until complex numbers are defined. Then _all_ arithmetic operations close.

Where I think we differ is in what the mathematical operations we use to describe things can tell us about their fundamental nature.

Jerry
-- 
Engineering is the art of making what you want from things you can get.
Jerry Avins <jya@ieee.org> wrote:
> Complex numbers marvelously simplify the math needed to describe
> real things. The same is true for matrix algebra and vector
> analysis. Complex numbers are the culmination of a progression
> of number systems that makes all operations close.
OK, regarding quantities in exponents...

First, I 100% agree that many real quantities that should be written as cos(wt) are described as exp(iwt), such that they aren't actually complex.

Now, consider the impedance of a copper inductor. There is wire resistance (R) and inductive impedance (iwL). The wire resistance comes from interaction between the electrons and copper, the inductance from interaction between the electrons and the magnetic field. Two separate physical phenomena, not the real and imaginary parts of one.

Next consider a superconducting inductor around a lossy ferrite core. The current generates a magnetic field, which magnetizes the ferrite. There is a phase difference between them, and so the electrons interact with a phase-shifted field. Now the real and imaginary parts of impedance are due to the in-phase and quadrature components of the field as seen by the current. As far as the current is concerned, it is the same physics but with a phase shift. Why should it not be seen as the real and imaginary parts of impedance (or of inductance)? Well, for one, it only moves the problem above from the inductor to the ferrite. Is it one or two phenomena?

Now for index of refraction. The EM field affects the electrons in their orbits, and that, combined with the electron energy levels, determines the new field generated by the atoms. For small fields and a good dielectric the index of refraction is pretty close to real, but as above, the important physics is in the phase shift between the incoming field and the outgoing (reradiated) field. It is complicated by the resonance frequencies of the atomic electrons, but there is only one physical phenomenon. The phase determines the real and imaginary parts of the index of refraction, as a function of frequency.

-- glen
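For concreteness, the copper-inductor case as one complex number, sketched in NumPy (R, L, and the frequency are made-up values for illustration):

    import numpy as np

    R, L = 2.0, 1e-3                      # made-up wire resistance (ohms) and inductance (H)
    w = 2*np.pi*1e3                       # 1 kHz angular frequency, also made up

    Z = R + 1j*w*L                        # two phenomena packed into one complex impedance
    print(abs(Z))                         # ~6.59 ohms magnitude
    print(np.degrees(np.angle(Z)))        # ~72.3 degrees of phase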