
Ways of implementing a Hilbert Transform

Started by Michel Rouzic January 11, 2006
Michel Rouzic wrote:
> Andor wrote:
>> Rick Lyons wrote:
>>
>> ...
>>
>>> One way to generate an analytic signal is to perform the FFT of your original real signal. Next, zero out all of the FFT result's negative-frequency components. (This will give you a spectrum having only positive-frequency components.) Then perform the inverse FFT of that new spectrum. The result of the inverse FFT will be your desired analytic signal, and the real part of the analytic signal will be the original real signal samples that you started with.
>>
>> Rick, I tried out this procedure. It turned out that, in addition to what you describe, I had to scale the DC and the Nyquist term of the spectrum (both real only due to Hermitian symmetry) by a factor of 1/2 to get the desired result.
>>
>> I've never used the Hilbert transform before. I understand that it can be used to determine "instantaneous" amplitude (magnitude of the Hilbert transform pair) and frequency (rate of change of the phase of the Hilbert transform pair) of a time signal. Zeno would have a feast with that :-) !
>>
>> In what type of processing are we interested in that information?
>
> I'm trying to do frequency shifting. I could do that by just moving the content of the FFT of the signal I'm trying to shift, except that the shift is not flat; it varies over time (because I'm trying to turn a frequency sweep into something flat).
First, you're going about your task the hard way. Don't first record a chirp and later take it out. Bring the chirp frequency to DC as you record, as I described earlier. (I.e. record the product of your stepwise chirp and the returned signal for later low-pass filtering.)

Second, you seem to be making some wrong assumptions that are keeping you from seeing how to do what you want. Get a book. Rick's if you can, and in any event Smith's at http://www.dspguide.com/. You can look up anything you know you don't know. What you think you know but is wrong really hurts.

Jerry
--
Engineering is the art of making what you want from things you can get.
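Here is a minimal numpy sketch of the procedure Rick describes and Andor corrects in the quote above. The function name, the even-length assumption, and the random-signal check are illustrative only, not from the thread; scipy.signal.hilbert does the same job.

    import numpy as np

    def analytic_signal(x):
        """Analytic signal of a real, even-length sequence via the FFT method
        discussed above: zero the negative-frequency bins, double the strictly
        positive bins, and leave DC and Nyquist alone (equivalently, double
        everything non-negative and then halve DC and Nyquist, which is the
        scaling Andor mentions)."""
        x = np.asarray(x, dtype=float)
        N = len(x)
        X = np.fft.fft(x)
        H = np.zeros(N)
        H[0] = 1.0            # DC stays as-is
        H[1:N // 2] = 2.0     # positive frequencies doubled
        H[N // 2] = 1.0       # Nyquist stays as-is (N assumed even)
        # bins N//2+1 .. N-1 (the negative frequencies) stay zero
        return np.fft.ifft(X * H)   # real part == x, imaginary part == Hilbert transform of x

    # quick check of the claim in the thread
    x = np.random.randn(1024)
    z = analytic_signal(x)
    print(np.allclose(z.real, x))   # True: the real part is the original signal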
Jerry Avins wrote:
> Michel Rouzic wrote:
>> Jerry Avins wrote:
>>
>> ...
>>
>>> For continuous processing, you would need to string the IFFTs together with overlap. A Hilbert transform is done with convolution, just like most FIRs. Any of those can be accomplished with FFT-IFFT. Look up "fast convolution".
>>
>> Um, I guess that by continuous processing you mean real-time. Well no, I don't need that, I'm dealing with a sound file. Anyways, I already implemented FFT convolution (just not with overlap-add; I used to, but for some mysterious reason it was broken).
>
> No, I don't. I mean processing a long input in shorter pieces. If your FFT is large enough to handle the whole file at once, fine. Otherwise you have to process the file in pieces and string the pieces together.
>
> Every filter, including FFT-modify-IFFT and analog, mangles the ends of its inputs. When you process the file in pieces, there are lots of mangled ends in the middle. Ouch! Overlap methods allow the ending garbage of one chunk to be corrected by the beginning garbage of the next, giving proper output (except for the actual ends).
>
>>>> I also read in a book that the analytic signal's real part is equal to the original real input signal. I've got a little problem with that: it seems to me that it says the analytic signal's real part is the input signal's real part, as if we discarded the input signal's imaginary part. I probably got it wrong, so if someone could enlighten me...
>>>
>>> You understand that to make an analytic signal, you perform a Hilbert transformation on it in order to generate the imaginary part. You now have two parts, real and imaginary, which together are the analytic (complex) signal. If the imaginary part is discarded, you are back to the signal you started with.
>>
>> Well that's what confuses me. If you can get to the signal you started with by discarding its imaginary part, then it means that the signal you started with held it all in the real part?
>
> You started with only the real part, then constructed the imaginary part from it. Why does it confuse you that you can do it a second time?
But what do you do with the imaginary part of your input signal? you discard it, right?
>> In other words, you only perform the Hilbert transform on the I part of the signal? Cuz the signal I have to deal with has both real and imaginary parts. If you perform the Hilbert transform on the I part of the signal, what do you do with the Q part of the original signal?
>
> If the signal is already analytic, you don't need to make it so. Non-analytic complex signals rarely arise at baseband. Did I miss an explanation of what you're up to?
I'm just trying to implement frequency shifting, in particular by following a schema I'll post in my other post. If you know of a more efficient one..
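For reference, the "fast convolution" with overlap that Jerry mentions in the quoted exchange above can be sketched as overlap-add block filtering. This is only an illustrative numpy sketch; the function name and the block size are arbitrary choices, not anything from the thread.

    import numpy as np

    def fft_convolve_overlap_add(x, h, block=4096):
        """Filter a long real signal x with FIR kernel h using overlap-add.

        Each block is convolved by FFT/IFFT; the tail that spills past the
        block boundary is added into the next block's output, so the pieces
        join up cleanly (except at the very ends of the whole signal)."""
        L = len(h)
        n_fft = 1
        while n_fft < block + L - 1:   # FFT long enough for linear convolution
            n_fft *= 2
        H = np.fft.rfft(h, n_fft)
        y = np.zeros(len(x) + L - 1)
        for start in range(0, len(x), block):
            seg = x[start:start + block]
            conv = np.fft.irfft(np.fft.rfft(seg, n_fft) * H, n_fft)
            y[start:start + len(seg) + L - 1] += conv[:len(seg) + L - 1]
        return y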
Jerry Avins wrote:
> Michel Rouzic wrote:
>> Andor wrote:
>> ...
>>
>> I'm trying to do frequency shifting. I could do that by just moving the content of the FFT of the signal I'm trying to shift, except that the shift is not flat; it varies over time (because I'm trying to turn a frequency sweep into something flat).
>
> First, you're going about your task the hard way. Don't first record a chirp and later take it out. Bring the chirp frequency to DC as you record, as I described earlier. (I.e. record the product of your stepwise chirp and the returned signal for later low-pass filtering.)
Bring the chirp frequency to DC? I really don't understand what you mean. Besides, I don't find it that hard; I just need to bring the chirp to a fixed frequency... frequency shifting is all I'm lacking to achieve that.
> Second, you seem to be making some wrong assumptions that are keeping you from seeing how to do what you want. Get a book. Rick's if you can, and in any event Smith's at http://www.dspguide.com/. You can look up anything you know you don't know. What you think you know but is wrong really hurts.
I got Rick's book. Very good; however, in the Hilbert transform part of it I read many times about negative frequencies, and I still can hardly understand it. I understand that thing about shifting the phase by +j or -j depending on whether it's in the positive or negative frequencies, but I have yet to find out what negative frequencies really consist of. Also, in his book I didn't find anything about frequency shifting. However, in some old post I found this:

    input signal --+--> hilbert ---> multiplier -----------+
                   |                      |                  \
                   |             cos_osc(shift_freq)          adder ---> out
                   |                                         /
                   +-----------------> multiplier ----------+
                                            |
                                    sin_osc(shift_freq)

Basically I understand how it works; the only problem is with the Hilbert thing. I understand that its job is to shift the phase by 90°, but how is it gonna do so if the imaginary part of the input signal is discarded? (So far that's what I understood from anything I heard about the Hilbert transform, which doesn't make sense to me, because how could you possibly get the signal with its phase shifted by 90° if you don't use both the real and imaginary parts of the signal?) So that's what I don't understand, this and what negative frequencies really are, mostly because you guys seem to say that with stuff like MATLAB you get negative frequencies out of your FFTs.
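The schema above is the classic phasing (single-sideband) frequency shifter: mix the signal and its Hilbert transform with quadrature oscillators and combine. Here is a rough numpy sketch using scipy.signal.hilbert for the analytic signal; the function name is illustrative, and which oscillator feeds which branch only changes the output phase and the direction of the shift.

    import numpy as np
    from scipy.signal import hilbert   # returns the analytic signal x + j*H{x}

    def frequency_shift(x, shift_hz, fs):
        """Shift every (positive) frequency of the real signal x by shift_hz.

        Re( analytic(x) * exp(j*2*pi*f*n/fs) )
          = x*cos(2*pi*f*n/fs) - H{x}*sin(2*pi*f*n/fs),
        i.e. the two multipliers and the adder in the schema above."""
        n = np.arange(len(x))
        z = hilbert(x)                  # analytic signal of x
        return np.real(z * np.exp(2j * np.pi * shift_hz * n / fs))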
Michel Rouzic wrote:

   ...

> I'm just trying to implement frequency shifting, in particular by following a schema I'll post in my other post. If you know of a more efficient one..
Make that "one that works." Efficiency is the least of your concerns right now. http://dspdimension.com/

Jerry
--
Engineering is the art of making what you want from things you can get.
Michel Rouzic wrote:

   ...

> input signal --+--> hilbert ---> multiplier -----------+
>                |                      |                  \
>                |             cos_osc(shift_freq)          adder ---> out
>                |                                         /
>                +-----------------> multiplier ----------+
>                                         |
>                                 sin_osc(shift_freq)
You put the adder at the end because of a misconception. It's true that you can write "a + jb" but no physical adder can do that and put the result on one wire.
> Basically I understand how it works; the only problem is with the Hilbert thing. I understand that its job is to shift the phase by 90°, but how is it gonna do so if the imaginary part of the input signal is discarded?
If your signal comes through a microphone, there is no imaginary part. One wire, one part. Two wires are needed for two parts. When you have a physical two-part signal, it remains up to you to decide which to label 'imaginary'.
> (So far that's what I understood from anything I heard about the Hilbert transform, which doesn't make sense to me, because how could you possibly get the signal with its phase shifted by 90° if you don't use both the real and imaginary parts of the signal?) So that's what I don't understand, this and what negative frequencies really are, mostly because you guys seem to say that with stuff like MATLAB you get negative frequencies out of your FFTs.
Signals on wires don't have imaginary parts. If you think you understand differently, you're confusing yourself.

Jerry
--
Engineering is the art of making what you want from things you can get.
Jerry Avins wrote:

> Signals on wires don't have imaginary parts. If you think you understand differently, you're confusing yourself.
I think this post exemplifies how you're mutually confusing each other. Michel has shown a dataflow graph. He has not said anything about physical adders, microphones, or wires -- and that's exactly the problem.

Michel, I suggest you start a new thread with a title like "Ways of handling a chirp" or whatever you find appropriate and describe in detail the problem, not what you think is the solution. Specifically, how is your signal generated, what happens to it, how do you record it, and what do you want to come out of the processor? Hint: the word "Hilbert" will not occur, unless it's a building block of the signal source.

Regarding negative frequency, let's first restate the intuition behind positive frequency using a particular signal: given a sinusoid x(t) with x(0) = 0, we can move away from the origin and eventually reach a point where the waveform is indistinguishable from that at the starting point. The distance between these two points is the waveform's period, and the reciprocal of the period is the frequency. Nothing new so far.

Near the origin, we might see the sinusoid x(t) go either negative or positive. This is easily expressed as x(t) = a*sin(v*t) where either a = -1 or a = 1, and v is the (positive) frequency. Now consider another particular signal, a complex exponential y(t) with y(0) = 1. Near the origin, we might see Im(y(t)) go either negative or positive, but this time a simple amplitude factor like a above cannot capture the possibilities!

Recalling some things about the complex plane, we see that they are captured by a phase factor so that y(t) = exp(p*j*v*t) where either p = -1 or p = 1. On a purely symbolic level, we now choose to pair up the quantities p and v and write y(t) = exp(j*w*t). Here w = p*v can have either sign, but we keep calling it a frequency for convenience.

To sum up, signed frequency tells us which way a complex exponential encircles the time axis. And that's all there is to it! In particular, there's no physical correlate.

As usual, sampling adds a twist, but it's easy to visualize with complex phasors. Suppose that 0 < d < pi, so d+pi is a frequency somewhat above the Nyquist limit. Then, in complete analogy to eq. (2-3) -- (2-5) in Rick's book (1st ed.), we find that

    exp(j * (d+pi) * n)
      = exp(j * (d+pi) * n - j*2*pi*n)
      = exp(j * (d-pi) * n);

in words, the positive frequency range pi -> 2*pi aliases with the negative frequency range -pi -> 0.

Now I can also tell you what FFTW does: it is a common convention for FFT codes to order bins by positive frequency in the range [0, 2*pi], so the upper half of the output array holds the components at negative frequencies of decreasing absolute value. The FFTW manual says so in section 4.7.1.

Martin
--
The drowning girl will remark how pretty the coral. --Sara Swanson, Malignant
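Martin's aliasing identity and the bin-ordering convention are easy to check numerically. Here is a small illustration with numpy, whose FFT uses the same ordering convention as FFTW; the numbers are arbitrary.

    import numpy as np

    n = np.arange(16)
    d = 0.3                                   # any 0 < d < pi
    above = np.exp(1j * (d + np.pi) * n)      # frequency just above the Nyquist limit
    below = np.exp(1j * (d - np.pi) * n)      # the corresponding negative frequency
    print(np.allclose(above, below))          # True: the sampled phasors are identical

    # bin ordering: negative frequencies sit in the upper half of the output array
    print(np.fft.fftfreq(8))
    # [ 0.     0.125  0.25   0.375 -0.5   -0.375 -0.25  -0.125]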
Jerry Avins wrote:
> Michel Rouzic wrote:
>
> ...
>
>> I'm just trying to implement frequency shifting, in particular by following a schema I'll post in my other post. If you know of a more efficient one..
>
> Make that "one that works." Efficiency is the least of your concerns right now. http://dspdimension.com/
Oh yeah, of course, I didn't mean efficiency that way; I meant efficiency as in algorithmically simple.
Martin Eisenberg wrote:
> Jerry Avins wrote:
>
>> Signals on wires don't have imaginary parts. If you think you understand differently, you're confusing yourself.
>
> I think this post exemplifies how you're mutually confusing each other. Michel has shown a dataflow graph. He has not said anything about physical adders, microphones, or wires -- and that's exactly the problem.
>
> Michel, I suggest you start a new thread with a title like "Ways of handling a chirp" or whatever you find appropriate and describe in detail the problem, not what you think is the solution. Specifically, how is your signal generated, what happens to it, how do you record it, and what do you want to come out of the processor? Hint: the word "Hilbert" will not occur, unless it's a building block of the signal source.
I already did that actually; it was mainly about discussing how to make a bandpass filter that varies over time, because back then I had absolutely no idea how to do it. Basically two main ideas for doing this came out: performing some time-domain convolution with a bandpass FIR kernel that would change for every sample to adapt its frequencies, and the other one was to shift the frequency of the chirp in order to make it flat. And I wanted to implement that following the schema shown above, but I didn't know how to deal with that Hilbert transform thing, and most of all I wanted to understand it (along with negative frequencies, so I could understand analytic signals as well), so that's why I started this topic about all that.
> ...
>
> Now I can also tell you what FFTW does: it is a common convention for FFT codes to order bins by positive frequency in the range [0, 2*pi], so the upper half of the output array holds the components at negative frequencies of decreasing absolute value. The FFTW manual says so in section 4.7.1.

"this means that the positive frequencies are stored in the first half of the output and the negative frequencies are stored in backwards order in the second half of the output."

Wow... wtf. I thought that the first half of the output was the real part, and the second half was the imaginary part backwards. All my certainties are collapsing. Now I'm in the most complete confusion. Does it mean that the imaginary part == the negative frequencies? The cosines are in positive frequencies and the sines are in negative frequencies? If the Nyquist frequency is 0.5, does a cosine @ 0.85 become by aliasing a sine @ 0.15?
Martin Eisenberg wrote:
> Jerry Avins wrote:
>
>> Signals on wires don't have imaginary parts. If you think you understand differently, you're confusing yourself.
>
> I think this post exemplifies how you're mutually confusing each other. Michel has shown a dataflow graph. He has not said anything about physical adders, microphones, or wires -- and that's exactly the problem.
Thank you, Martin. I too had been making assumptions, and starting over is the right way from here. I was also beginning to think Michel and I were alone in this; company is nice. ...
> Martin
>
> --
> The drowning girl will remark how pretty the coral. --Sara Swanson, Malignant

When I was young I was so skinny and bony that I didn't float in seawater, but I didn't know that. (I could go skinny dipping then. Now the best I can do is chunky dunking.) Adults always assured me that, relaxed in the water, I would float. I believed them. One afternoon, I had been swimming a lot around a raft anchored off shore and decided to go home; that meant swimming to the beach. I got about halfway there and gave out. It was clear that my face would get wet whether I liked it or not; I had to float. My thought as I saw the surface receding above me was, "Damn! The grownups lied again!"

Jerry
--
Engineering is the art of making what you want from things you can get.
Michel Rouzic wrote:

   ...

> I already did that actually; it was mainly about discussing how to make a bandpass filter that varies over time, because back then I had absolutely no idea how to do it. Basically two main ideas for doing this came out: performing some time-domain convolution with a bandpass FIR kernel that would change for every sample to adapt its frequencies, and the other one was to shift the frequency of the chirp in order to make it flat. And I wanted to implement that following the schema shown above, but I didn't know how to deal with that Hilbert transform thing, and most of all I wanted to understand it (along with negative frequencies, so I could understand analytic signals as well), so that's why I started this topic about all that.
Instead of a bandpass filter that varies with time, you can record the results of the chirp already shifted to DC and environs, so that a fixed lowpass filter does the same job. It's much easier that way. ...
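A minimal sketch of what Jerry is suggesting, assuming a locally generated copy of the chirp is available: multiply it with the recorded return and low-pass filter the product, so the swept component lands near DC and a fixed filter suffices. The function name, filter order, and cutoff below are placeholders, not values from the thread.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def dechirp(received, chirp_reference, fs, cutoff_hz=100.0):
        """Mix the received signal with the known chirp, then low-pass filter.

        The product contains a difference-frequency term near DC and a
        sum-frequency term up around twice the chirp frequency; the fixed
        low-pass filter keeps only the term near DC."""
        mixed = received * chirp_reference            # the product Jerry describes
        b, a = butter(4, cutoff_hz / (fs / 2.0))      # placeholder Butterworth low-pass
        return filtfilt(b, a, mixed)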
>> Now I can also tell you what FFTW does: it is a common convention for FFT codes to order bins by positive frequency in the range [0, 2*pi], so the upper half of the output array holds the components at negative frequencies of decreasing absolute value. The FFTW manual says so in section 4.7.1.
>
> "this means that the positive frequencies are stored in the first half of the output and the negative frequencies are stored in backwards order in the second half of the output."
>
> Wow... wtf. I thought that the first half of the output was the real part, and the second half was the imaginary part backwards. All my certainties are collapsing. Now I'm in the most complete confusion. Does it mean that the imaginary part == the negative frequencies? The cosines are in positive frequencies and the sines are in negative frequencies? If the Nyquist frequency is 0.5, does a cosine @ 0.85 become by aliasing a sine @ 0.15?
There are various ways to program FFTs. You need to read the specs of whichever you use to know if it's real in / complex out or complex in / complex out, where zero frequency is in the arrays, and the storage order in general. Decimation in time and decimation in frequency lead to different arrays with the same results. The one constant is that negative frequencies and imaginary parts are not interchangeable. Basically, you need to understand the tools you use. If you have the books I mentioned before, read them in detail and work out a few examples.

Jerry
--
Engineering is the art of making what you want from things you can get.
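As a footnote to the specific question quoted above: a real cosine above the Nyquist rate aliases to a cosine below it, not a sine, and a real sinusoid occupies a matching pair of positive- and negative-frequency bins. A small numpy check, for illustration only:

    import numpy as np

    n = np.arange(64)
    above = np.cos(2 * np.pi * 0.85 * n)    # cosine above the Nyquist rate of 0.5
    below = np.cos(2 * np.pi * 0.15 * n)    # cosine at the aliased frequency
    print(np.allclose(above, below))        # True: a cosine aliases to a cosine

    # a real cosine at 10/64 cycles/sample shows up in bin 10 and in bin 64-10 = 54;
    # the negative-frequency bin mirrors the positive one (conjugate symmetry),
    # it is not "the imaginary part" of anything
    X = np.fft.fft(np.cos(2 * np.pi * 10 / 64 * n))
    print(np.round(np.abs(X)).nonzero()[0])   # [10 54]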