
FIR roots and frequency response

Started by Bob Cain February 13, 2004
"robert bristow-johnson" <rbj@surfglobal.net> wrote in message
news:BC5F01FA.8CFF%rbj@surfglobal.net...
> In article f3WZb.25811$Cd6.1037211@news20.bellglobal.com, Matt Timmermans at
> mt0000@sympatico.nospam-remove.ca wrote on 02/21/2004 23:18:
>
> ...
>
> > What, exactly, a minimum phase filter minimizes is the average group delay
> > (discrete case) or integral of the group delay (continuous) over the entire
> > spectrum -- they're zero for minimum phase filters.
>
> i think that Minimum Phase Filters (whether they be FIR or IIR matters not)
> simply minimize the phase shift (or the negative of it).  the minimizing of
> phase delay or group delay happens as a consequence of that.
That's pretty much the same thing, but average or integral of group delay is more specific. If someone asked you which phase shift, and how you go about comparing the magnitude of those angular measurements, I think you would answer in terms of group delay.
> > Any minimum phase filter without zeros right on the frequency axis can be
> > reconstructed from its log-magnitude spectrum, by using the Hilbert
> > transform to derive its phase response.  Exponentiate log-magnitude +
> > j*phase to get the frequency response, and transform that back into the
> > time domain to get the impulse response.  The impulse response will be
> > causal, because log-magnitude + j*phase is an analytic signal, and
> > exponentiation preserves this property.
>
> i think to prove the association of MPF to the Hilbert Transform property of
> log-magnitude to phase in radians takes more than that.  just because a
> complex function happens to be analytic (not to be confused with the use of
> the term "analytic" in "analytic signal") [...]
Ah, but I said "analytic signal".  The complex log of the frequency response is an analytic signal, so the frequency response is an analytic signal, so the impulse response is one-sided.  Showing that you end up with zero average group delay is easy, too -- the Hilbert transform doesn't pass DC.

Proving that you can't get less than zero average group delay from a causal filter, or proving that all of this corresponds to some other definition of "minimum phase filter", may be more difficult.  I'm just happy to have a nice mental model of "minimum phase" that doesn't depend on poles and zeros, and lets me construct cool filters like this one, with group delay (in samples) = ln(mag) = -cos(w) (approximately, due to truncation):

h[n] = {  1.0000000000E+00  -1.0000000000E+00   5.0000000000E-01  -1.6666666667E-01
          4.1666666667E-02  -8.3333333333E-03   1.3888888889E-03  -1.9841269841E-04
          2.4801587302E-05  -2.7557319224E-06   2.7557319223E-07  -2.5052108382E-08
          2.0876756982E-09  -1.6059044022E-10   1.1470750522E-11  -7.6471633382E-13 }

In general, for discrete filters, when ln(mag) = a*cos(kw), then group_delay = a*k*cos(kw).  That provides an interesting way to design novel IIRs, ones that don't have rational polynomial transfer functions, using remez().
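For concreteness, a small numpy/scipy sketch of that reconstruction (the function name and the dense magnitude grid are mine, purely for illustration): Hilbert-transform the log-magnitude to get the phase, exponentiate, and go back to the time domain.  Fed |H(e^jw)| = exp(-cos(w)), it reproduces the coefficient list above.

  import numpy as np
  from scipy.signal import hilbert

  def minphase_from_magnitude(mag, n_taps):
      # mag: samples of |H(e^jw)| on a dense uniform grid over [0, 2*pi)
      log_mag = np.log(mag)
      phase = -np.imag(hilbert(log_mag))   # (periodic) Hilbert transform of the log-magnitude
      H = np.exp(log_mag + 1j * phase)     # exponentiate log-magnitude + j*phase
      h = np.fft.ifft(H).real              # back to the time domain; comes out (numerically) causal
      return h[:n_taps]

  w = 2 * np.pi * np.arange(4096) / 4096
  h = minphase_from_magnitude(np.exp(-np.cos(w)), 16)
  print(h[:4])                             # ~ [1.0, -1.0, 0.5, -0.1667]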
In article ESz_b.4178$ee3.228505@news20.bellglobal.com, Matt Timmermans at
mt0000@sympatico.nospam-remove.ca wrote on 02/23/2004 22:52:

> > "robert bristow-johnson" <rbj@surfglobal.net> wrote in message > news:BC5F01FA.8CFF%rbj@surfglobal.net... >> In article f3WZb.25811$Cd6.1037211@news20.bellglobal.com, Matt Timmermans at >> mt0000@sympatico.nospam-remove.ca wrote on 02/21/2004 23:18:
...
>>> Any minimum phase filter without zeros right on the frequency axis can be
>>> reconstructed from its log-magnitude spectrum, by using the Hilbert
>>> transform to derive its phase response.  Exponentiate log-magnitude +
>>> j*phase to get the frequency response, and transform that back into the
>>> time domain to get the impulse response.  The impulse response will be
>>> causal, because log-magnitude + j*phase is an analytic signal, and
>>> exponentiation preserves this property.
>>
>> i think to prove the association of MPF to the Hilbert Transform property of
>> log-magnitude to phase in radians takes more than that.  just because a
>> complex function happens to be analytic (not to be confused with the use of
>> the term "analytic" in "analytic signal") [...]
>
> Ah, but I said "analytic signal".  The complex log of the frequency response
> is an analytic signal, so the frequency response is an analytic signal,
but that does not necessarily follow. you could have a non-minimum phase frequency response of a causal filter (which is an "analytical signal") but the complex log of it would not be.
> so the impulse response is one-sided.
which is always the case for the inverse F.T. of a frequency response that is an "analytical signal" (real part and imag part are a Hilbert pair).  it is not obvious that if you define the hypothetical impulse response of log(H(e^jw)), that it is causal or one-sided.

again, how do you show, that for a filter that has all zeros inside the unit circle (or in the left half s-plane for continuous-time) that such a filter's log-magnitude and phase (in radians) are a Hilbert pair?  and why is it not true if any of the zeros are reflected to outside the unit circle (or to the right half s-plane)?  showing this equivalence is not easy even if it is a commonly referenced fact.

r b-j
robert bristow-johnson wrote:

> again, how do you show, that for a filter that has all zeros inside the unit
> circle (or in the left half s-plane for continuous-time) that such a filter's
> log-magnitude and phase (in radians) are a Hilbert pair?  and why is it not
> true if any of the zeros are reflected to outside the unit circle (or to the
> right half s-plane)?  showing this equivalence is not easy even if it is a
> commonly referenced fact.
But if anybody could do it, I'll bet it's you. :-)

FWIW, a nice diagram of this process is at:

http://www.nauticom.net/www/jdtaft/minphase.htm

What I most found of interest here is that he states it as a requirement that the input be padded with zeros to twice its size to be "discrete causal" and then truncated back at the end.  I empirically found this to be necessary using the Matlab rceps() function to avoid weirdness in the second half of the result, but someone here reported that the modification left it non-minimum.  I just dunno what to believe. :-)

Bob
--
"Things should be described as simply as possible, but no simpler."
                                                         A. Einstein
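For reference, a numpy sketch of the rceps-style route under discussion (the names and the pad_factor knob are mine, not from the linked page): zero-pad, take the real cepstrum of the log-magnitude, fold it, exponentiate, and truncate back at the end.  Whether that final truncation leaves the result exactly minimum phase is precisely the point in dispute here.

  import numpy as np

  def minphase_via_cepstrum(h, pad_factor=2):
      n = len(h)
      N = pad_factor * n                               # zero-pad to reduce cepstral aliasing
      H = np.fft.fft(h, N)
      log_mag = np.log(np.maximum(np.abs(H), 1e-12))   # crude guard against zeros on the circle
      c = np.fft.ifft(log_mag).real                    # real cepstrum
      fold = np.zeros(N)                               # keep c[0] and c[N/2], double 1..N/2-1,
      fold[0] = 1.0                                    # zero the anti-causal half
      fold[1:N // 2] = 2.0
      fold[N // 2] = 1.0
      h_min = np.fft.ifft(np.exp(np.fft.fft(c * fold))).real
      return h_min[:n]                                 # truncate back to the original length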
Bob Cain wrote:

> robert bristow-johnson wrote:
>
>> again, how do you show, that for a filter that has all zeros inside
>> the unit circle (or in the left half s-plane for continuous-time) that
>> such a filter's log-magnitude and phase (in radians) are a Hilbert
>> pair?  and why is it not true if any of the zeros are reflected to
>> outside the unit circle (or to the right half s-plane)?  showing this
>> equivalence is not easy even if it is a commonly referenced fact.
>
> But if anybody could do it, I'll bet it's you. :-)
>
> FWIW, a nice diagram of this process is at:
>
> http://www.nauticom.net/www/jdtaft/minphase.htm
>
> What I most found of interest here is that he states it as a requirement
> that the input be padded with zeros to twice its size to be "discrete
> causal" and then truncated back at the end.  I empirically found this to
> be necessary using the Matlab rceps() function to avoid weirdness in
> the second half of the result, but someone here reported that the
> modification left it non-minimum.  I just dunno what to believe. :-)
>
> Bob
That's a whole new ball of string.  I'm under the impression that till now, the discussion was limited to transversal FIRs.  The link above is about IIRs.  I don't see where the coefficients come from yet, but FIRs don't have denominators and don't need stabilizing.  I'm pretty sure O&S has a rigorous development of the FIR case.

Jerry
--
Engineering is the art of making what you want from things you can get.
Jerry Avins wrote:


> That's a whole new ball of string.  I'm under the impression that till
> now, the discussion was limited to transversal FIRs.  The link above is
> about IIRs.
Not really.  The process accepts a finite length signal as input.  He is just pointing out that it can also be used to bring poles that are outside the unit circle to the inside in IIR situations where that arises, by application of it to the denominator.

Bob
--
"Things should be described as simply as possible, but no simpler."
                                                         A. Einstein
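A pole-zero sketch of the same stabilization idea (this is the root-reflection view, not the cepstral procedure from the linked page; the helper name and example polynomial are made up): reflect any denominator roots outside the unit circle to their conjugate reciprocals, with a gain factor chosen so that |A(e^jw)| is unchanged.

  import numpy as np

  def reflect_poles_inside(a):
      # reflect roots of a(z) lying outside the unit circle to 1/conj(root);
      # the gain factor compensates |z - r| -> |z - 1/conj(r)| = |z - r| / |r|
      r = np.roots(a)
      outside = np.abs(r) > 1.0
      gain = a[0] * np.prod(np.abs(r[outside]))
      r[outside] = 1.0 / np.conj(r[outside])
      return np.real(gain * np.poly(r))

  a = np.array([1.0, -2.5, 1.0])            # roots at 2.0 (unstable as a denominator) and 0.5
  print(np.roots(reflect_poles_inside(a)))  # both roots now at 0.5; |A(e^jw)| is unchanged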
Bob Cain wrote:

> Jerry Avins wrote:
>
>> That's a whole new ball of string.  I'm under the impression that till
>> now, the discussion was limited to transversal FIRs.  The link above is
>> about IIRs.
>
> Not really.  The process accepts a finite length signal as input.  He is
> just pointing out that it can also be used to bring poles that are
> outside the unit circle to the inside in IIR situations where that
> arises, by application of it to the denominator.
>
> Bob
It's not a surprising extension, but it doesn't lend itself to creating maximum-phase filters by reversing the order of the coefficients. :-)

Jerry
--
Engineering is the art of making what you want from things you can get.
"robert bristow-johnson" <rbj@surfglobal.net> wrote in message
news:BC61A194.8DF1%rbj@surfglobal.net...
> In article ESz_b.4178$ee3.228505@news20.bellglobal.com, Matt Timmermans at
> mt0000@sympatico.nospam-remove.ca wrote on 02/23/2004 22:52:
> [...]
> > Ah, but I said "analytic signal".  The complex log of the frequency response
> > is an analytic signal, so the frequency response is an analytic signal,
>
> but that does not necessarily follow.  you could have a non-minimum phase
> frequency response of a causal filter (which is an "analytical signal") but
> the complex log of it would not be.
That's correct, but it *does* work in the other direction -- if f(w) is analytic (in the sense of "analytic signal", that I'll be using henceforth ;-), then exp(f(w)) is also analytic.  This follows directly from the Taylor series expansion of exp(x), with which you can build exp(f(w)) from f(w) through a convergent sequence of point-wise sums and products.  Both of those operations preserve the analytic property.

You can't do that in the other direction, because a Taylor series for ln(x) needs constants in the numerators that mess it up.  I'm not even sure that the Taylor series for ln(x) is valid for complex arguments.  Either way, an analytic frequency response needn't have an analytic complex log, and doesn't for non-minimum phase filters.
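A quick numerical illustration of that claim (a sanity check, not a proof; the test signal is made up): build an analytic signal whose spectrum sits on a few low positive-frequency bins with modest amplitude, exponentiate it point-wise, and check that essentially no energy lands on the negative-frequency bins.

  import numpy as np

  N = 4096
  n = np.arange(N)
  # analytic signal: only positive-frequency components (bins 3 and 7), small amplitudes
  f = 0.5 * np.exp(2j * np.pi * 3 * n / N) + 0.2 * np.exp(2j * np.pi * 7 * n / N)

  G = np.fft.fft(np.exp(f)) / N              # spectrum of the point-wise exponential
  neg = np.sum(np.abs(G[N // 2 + 1:]) ** 2)  # energy in the "negative frequency" bins
  pos = np.sum(np.abs(G[:N // 2]) ** 2)
  print(neg / pos)                           # ~ 1e-30: exp(f) is still (numerically) one-sided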
> again, how do you show, that for a filter that has all zeros inside the unit
> circle (or in the left half s-plane for continuous-time) that such a filter's
> log-magnitude and phase (in radians) are a Hilbert pair?
I don't. Everyone seems to know that, and I had no reason to doubt them, so I just took their word after verifying a few cases. The other side of the coin is important to me, however...
> and why is it not
> true if any of the zeros are reflected to outside the unit circle (or to the
> right half s-plane)?  showing this equivalence is not easy even if it is a
> commonly referenced fact.
If a discrete-time filter doesn't have the same number of poles as zeros inside the unit circle, or if a continuous-time filter has (left poles + right zeros) != (right poles + left zeros), then its phase response, i.e., the integral of the group delay, has an antisymmetric divergent characteristic.  It does not approach zero as w -> +inf and -inf.  Instead, the positive side grows without bound in one direction while the negative side grows without bound in the other (discrete case), or the positive side and the negative side have offset asymptotes (continuous case).  You only have to graph a few of these phase responses to see it.

Either behaviour renders the Hilbert transform undefined -- if you try to evaluate it, it goes infinite everywhere.  Since such phase responses don't even have Hilbert transforms, they certainly don't form Hilbert transform pairs with the log-magnitude responses of those filters.
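One way to see the discrete-time behaviour described above (a sketch with made-up example zeros): track the unwrapped phase of a single zero inside versus outside the unit circle over a few trips around the circle -- the minimum-phase one returns to where it started, while the reflected one accumulates -2*pi per trip.

  import numpy as np

  w = np.linspace(0, 6 * np.pi, 6000)        # three trips around the unit circle

  def net_phase(h0, h1):
      H = h0 + h1 * np.exp(-1j * w)          # single-zero FIR, H(z) = h0 + h1*z^-1
      p = np.unwrap(np.angle(H))
      return p[-1] - p[0]

  print(net_phase(1.0, -0.5))                # zero at z = 0.5 (inside):  ~ 0
  print(net_phase(1.0, -2.0))                # zero at z = 2.0 (outside): ~ -6*pi, and growing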
In article %bd%b.16222$253.1014068@news20.bellglobal.com, Matt Timmermans at
mt0000@sympatico.nospam-remove.ca wrote on 02/25/2004 21:54:

> > "robert bristow-johnson" <rbj@surfglobal.net> wrote in message > news:BC61A194.8DF1%rbj@surfglobal.net... >> In article ESz_b.4178$ee3.228505@news20.bellglobal.com, Matt Timmermans at >> mt0000@sympatico.nospam-remove.ca wrote on 02/23/2004 22:52: >> [...] >>> Ah, but I said "analytic signal". The complex log of the frequency response >>> is an analytic signal, so the frequency response is an analytic signal, >> >> but that does not necessarily follow. you could have a non-minimum phase >> frequency response of a causal filter (which is an "analytical signal") but >> the complex log of it would not be. > > That's correct, but it *does* work in the other direction -- if f(w) is > analytic (in the sense of "analytic signal", that I'll be using henceforth > ;-), then exp(f(w)) is also analytic. This follows directly from the Taylor > series expansion of exp(x), with which you can build exp(f(w)) from f(w) > through a convergent sequence of point-wise sums and products. Both of > those operations preserve the analytic property.
it may be true, but doesn't appear obvious to me.  if Re{f(w)} and Im{f(w)} are a Hilbert pair, why are Re{exp(f(w))} and Im{exp(f(w))} also a Hilbert pair?
> You can't do that in the other direction, because a Taylor series for ln(x)
> needs constants in the numerators that mess it up.  I'm not even sure that
> the Taylor series for ln(x) is valid for complex arguments.  Either way, an
> analytic frequency response needn't have an analytic complex log, and
> doesn't for non-minimum phase filters.
i'm not sure that you aren't tossing about the term "analytic" here since there are two different usages of it, *both* possibly legitimately referenced when one is discussing minimum phase and Hilbert Transform.

Analytic Signal:  some complex f(t) (real t) such that Im{f(t)} = -Hilbert{ Re{f(t)} }.  the Fourier Transform F(jw) will be one-sided.  EE communications texts like to define this one.

Analytic Function:  some complex F(s) (complex s) such that Re{ (d/du)F(u+j*v) } = Im{ (d/dv)F(u+j*v) } and Im{ (d/du)F(u+j*v) } = -Re{ (d/dv)F(u+j*v) } (the Cauchy-Riemann conditions) in some region of interest.  complex math texts like to do this one.  if you do a line integral around a closed curve where the function is analytic everywhere inside, that line integral will be zero.
>> again, how do you show, that for a filter that has all zeros inside the unit
>> circle (or in the left half s-plane for continuous-time) that such a filter's
>> log-magnitude and phase (in radians) are a Hilbert pair?
>
> I don't.  Everyone seems to know that, and I had no reason to doubt them,
yeah, if only people hadn't done that regarding WMD and the evil Bush. sometimes the curious in us must demand proof of stuff that everyone seems to know.
> so
> I just took their word after verifying a few cases.  The other side of the
> coin is important to me, however...
what's the other side?
>> and why is it not
>> true if any of the zeros are reflected to outside the unit circle (or to the
>> right half s-plane)?  showing this equivalence is not easy even if it is a
>> commonly referenced fact.
>
> If a discrete-time filter doesn't have the same number of poles as zeros
> inside the unit circle, or if a continuous-time filter has (left poles +
> right zeros) != (right poles + left zeros), then its phase response, i.e.,
> the integral of the group delay, has an antisymmetric divergent
> characteristic.  It does not approach zero as w -> +inf and -inf.
> Instead, the positive side grows without bound in one direction while the
> negative side grows without bound in the other (discrete case), or the
> positive side and the negative side have offset asymptotes (continuous
> case).  You only have to graph a few of these phase responses to see it.
>
> Either behaviour renders the Hilbert transform undefined -- if you try to
> evaluate it, it goes infinite everywhere.  Since such phase responses don't
> even have Hilbert transforms, they certainly don't form Hilbert transform
> pairs with the log-magnitude responses of those filters.
that's a bit unsatisfying to me since, normally, the application of the min-phase and HT property is that given a magnitude response, *if* you assume that the magnitude response is that of a minimum phase filter, then you can always determine what that phase response would be (it would be the negative of the HT of the natural log of the magnitude).  any phase response different from that would have to be that of a non-minimum phase filter and, as the name implies, it would have to be *more* than the min-phase.

if you set up the limits of the integrals well, you can compute the HT of a lot of functions despite the problems of integrating 1/t:

  ~x(t) = Hilbert{ x(t) }

        =   lim   [ integral from -a to -1/a of x(t-u)/(pi*u) du
          a->+inf
                    + integral from +1/a to +a of x(t-u)/(pi*u) du ]

that will work for an awful lot of odd symmetry functions (which the phase guys are) because whatever nasty thing that happens around +a will get virtually cancelled by whatever nasty thing that happens around -a as a->inf.
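A numeric check of that symmetric-limit trick (the grid and the example are mine): for x(t) = sin(t), the two half-integrals combine into a convergent sin(u)/u-type integrand, and the result lands on -cos(t), the Hilbert transform of sin.

  import numpy as np

  a = 500.0                                  # the outer limit 'a' above
  u = np.linspace(1.0 / a, a, 1_000_000)     # positive u only; (-1/a, +1/a) is excluded
  du = u[1] - u[0]

  def ht(x, t):
      # the -a..-1/a piece, after u -> -u, folds onto the +1/a..+a piece:
      return np.sum((x(t - u) - x(t + u)) / (np.pi * u)) * du

  print(ht(np.sin, 0.7), -np.cos(0.7))       # ~ -0.7647 vs -0.7648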
In article c1iqi205el@enews3.newsguy.com, Bob Cain at
arcane@arcanemethods.com wrote on 02/25/2004 13:46:

> robert bristow-johnson wrote:
>
>> again, how do you show, that for a filter that has all zeros inside the unit
>> circle (or in the left half s-plane for continuous-time) that such a filter's
>> log-magnitude and phase (in radians) are a Hilbert pair?  and why is it not
>> true if any of the zeros are reflected to outside the unit circle (or to the
>> right half s-plane)?  showing this equivalence is not easy even if it is a
>> commonly referenced fact.
>
> But if anybody could do it, I'll bet it's you. :-)
it's too painful.  in the s-plane, people who've taken a complex variables course and like to do contour integration and residues can come up with the "formulae of Poisson, Laplace, and Hilbert" which shows it as long as the complex function log(H(s)) is "analytic" in the right half plane (zeros there will blow it when you log()).

one way i did it for myself (s-plane) was to show that the Hilbert Transform of 1/(1+w^2) is w/(1+w^2), and that those are the derivatives of arctan(w) and log(sqrt(1+w^2)), which are the phase and log magnitude of a single zero in the left half-plane.  so if the derivatives are a Hilbert pair, so are the functions.  then use linearity (both phases and log-magnitudes add) to extend to multiple poles and zeros in the left half-plane.

using a Bilinear Transform argument didn't work for extending this to the z-plane, and the only proof i understood was that in O&S 1989 (something to do with homomorphic processing).  it's all a female dog.

r b-j
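That single-zero example is easy to check numerically -- approximately, since the FFT-based transform assumes a periodic signal, so only the middle of a wide window is trustworthy:

  import numpy as np
  from scipy.signal import hilbert

  w = np.linspace(-400.0, 400.0, 1 << 18)
  x = 1.0 / (1.0 + w ** 2)
  xh = np.imag(hilbert(x))                   # FFT-based Hilbert transform of the samples

  mid = np.abs(w) < 5.0                      # compare well away from the window edges
  print(np.max(np.abs(xh[mid] - w[mid] / (1.0 + w[mid] ** 2))))   # small, order 1e-3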
In article <403a23f0$0$3095$61fed72c@news.rcn.com>,
Jerry Avins  <jya@ieee.org> wrote:
>Bob Cain wrote:
>> Matt Timmermans wrote:
>>> You actually have to put the poles outside the unit circle, as well as
>>> the zeros, to get that effect.  I believe the term "maximum phase",
>>> though it isn't used much, means with poles inside and zeros outside.
>>
>> Curious how to do that if they are all at the origin.
>
>They would all go to infinity, the only question being the angle.  But it
>doesn't happen.  I meant an FIR structure; how else could the tap
>coefficients be swapped end for end?  So the poles stay at the origin.
The poles do go to infinity if you truly swap the "entire" FIR structure end for end.  That's because you'd have to swap the entire positive real axis to cover all possible FIR structures.  All the FIR coeff's end up at positive infinity (in reverse order), so the impulse response in any finite interval around the origin becomes zero, corresponding to the denominator of the transfer function being infinite, corresponding to a pole at infinity.

If you reflect the FIR structure around the origin you end up with no poles, but also what's usually called a non-causal filter.  If instead you reflect the FIR coeffs around some finite tap number (even an infinite extent of coeffs), a number of poles corresponding to the delay of that reflection point stay put or get added at zero.

(Disclaimer: This post is probably due to line noise, no doubt caused by too large an evening glass of red wine placed on the modem cable.)  IMHO.  YMMV.

--
Ron Nicholson   rhn AT nicholson DOT com   http://www.nicholson.com/rhn/
#include <canonical.disclaimer>   // only my own opinions, etc.
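For the finite-length case, the tap-swapping bookkeeping is easy to watch directly (toy coefficients, chosen only for illustration): reversing the taps sends every zero z0 to 1/z0 while the magnitude response stays put, and the poles stay parked at the origin, accounting only for delay.

  import numpy as np

  h = np.array([1.0, -1.1, 0.28])            # zeros at 0.7 and 0.4: minimum phase
  print(np.roots(h))                         # [0.7, 0.4]
  print(np.roots(h[::-1]))                   # [2.5, 1.4286] = 1/0.4, 1/0.7: maximum phase

  w = np.linspace(0.0, np.pi, 512)
  z = np.exp(1j * w)
  print(np.max(np.abs(np.abs(np.polyval(h, z)) -
                      np.abs(np.polyval(h[::-1], z)))))   # ~ 0: same magnitude response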