Reply by Jerry Avins February 26, 2004
Ronald H. Nicholson Jr. wrote:

> In article <403e0f1d$0$3094$61fed72c@news.rcn.com>,
> Jerry Avins <jya@ieee.org> wrote:
>
>>You lost me. I hold this truth to be, if not self evident, at least well
>>established: every transversal (tapped delay line) filter has as many
>>poles as zeros, and they are all at the origin on the z plane.
>
> Not true, IMHO, unless you add more conditions. Dividing the transfer
> function by z (or more canonically multiplying by z^-1) is equivalent
> to adding a one-sample delay. You can have fewer poles than zeros in a
> non-causal FIR filter (fewer by the number of samples by which the output
> precedes its first input). This is equivalent to shifting all the tap
> coeffs to the left. You can have more poles than zeros simply by adding
> a delay line to the input or output. This is equivalent to shifting all
> the tap coeffs to the right.
>
>>Assuming
>>that's correct, how can scrambling the values of the tap coefficients
>>in any way put poles at infinity?
>
> How else could you get a transfer function (with a non-zero impulse
> response) to go to zero over any finite domain, which would happen
> if all the usual coeffs got swapped end for end with those at
> +infinity?
>
> (Don't forget, this would not have been posted except for line
> noise caused by a glass of red wine pinching the modem cable. :)
> IMHO. YMMV.
I'm still lost. Why does swapping the tap coefficients of a delay line end for end, or shuffling them in any other way, make the transfer function go to zero over a finite domain?

Jerry
-- 
Engineering is the art of making what you want from things you can get.
Reply by Ronald H. Nicholson Jr. February 26, 2004
In article <403e0f1d$0$3094$61fed72c@news.rcn.com>,
Jerry Avins  <jya@ieee.org> wrote:
>You lost me. I hold this truth to be, if not self evident, at least well
>established: every transversal (tapped delay line) filter has as many
>poles as zeros, and they are all at the origin on the z plane.
Not true, IMHO, unless you add more conditions. Dividing the transfer function by z (or more canonically multiplying by z^-1) is equivalent to adding a one-sample delay. You can have fewer poles than zeros in a non-causal FIR filter (fewer by the number of samples by which the output precedes its first input). This is equivalent to shifting all the tap coeffs to the left. You can have more poles than zeros simply by adding a delay line to the input or output. This is equivalent to shifting all the tap coeffs to the right.
>Assuming
>that's correct, how can scrambling the values of the tap coefficients
>in any way put poles at infinity?
How else could you get a transfer function (with a non-zero impulse response) to go to zero over any finite domain, which would happen if all the usual coeffs got swapped end for end with those at +infinity?

(Don't forget, this would not have been posted except for line noise caused by a glass of red wine pinching the modem cable. :)  IMHO. YMMV.
-- 
Ron Nicholson    rhn AT nicholson DOT com     http://www.nicholson.com/rhn/
#include <canonical.disclaimer>        // only my own opinions, etc.
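A quick numerical check of the pole/zero bookkeeping above -- a sketch only, in Python, assuming NumPy and SciPy are available; the tap values are arbitrary. Written as a ratio of polynomials in z, a causal (N+1)-tap FIR carries N poles at the origin, and cascading an extra one-sample delay (multiplying by z^-1) deepens the denominator by one, giving more poles than zeros:

    import numpy as np
    from scipy import signal

    # A causal 4-tap FIR:  H(z) = h0 + h1 z^-1 + h2 z^-2 + h3 z^-3
    #                           = (h0 z^3 + h1 z^2 + h2 z + h3) / z^3
    h = np.array([1.0, -0.5, 0.25, 0.1])        # arbitrary example taps
    N = len(h) - 1

    z1, p1, _ = signal.tf2zpk(h, np.r_[1.0, np.zeros(N)])
    print("zeros:", z1)                         # roots of the tap polynomial
    print("poles:", p1)                         # N of them, all at z = 0

    # Cascading one extra sample of delay (multiplying by z^-1) leaves the
    # zeros alone but deepens the denominator to z^(N+1):
    # more poles at the origin than zeros.
    z2, p2, _ = signal.tf2zpk(h, np.r_[1.0, np.zeros(N + 1)])
    print("zeros with extra delay:", z2)
    print("poles with extra delay:", p2)

Shifting the taps the other way (a non-causal FIR) trims the denominator instead, which is the fewer-poles-than-zeros case mentioned above.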
Reply by Jerry Avins February 26, 2004
Ronald H. Nicholson Jr. wrote:

> In article <403a23f0$0$3095$61fed72c@news.rcn.com>,
> Jerry Avins <jya@ieee.org> wrote:
>
>>Bob Cain wrote:
>>
>>>Matt Timmermans wrote:
>>>
>>>>You actually have to put the poles outside the unit circle, as well as
>>>>the zeros, to get that effect. I believe the term "maximum phase",
>>>>though it isn't used much, means with poles inside and zeros outside.
>>>
>>>Curious how to do that if they are all at the origin.
>>
>>They would all go to infinity, the only question being the angle. But it
>>doesn't happen. I meant an FIR structure; how else could the tap
>>coefficients be swapped end for end? So the poles stay at the origin.
>
> The poles do go to infinity if you truly swap the "entire" FIR structure
> end for end. That's because you'd have to swap the entire positive real
> axis to cover all possible FIR structures. All the FIR coeffs end up
> at positive infinity (in reverse order), so the impulse response in any
> finite interval around the origin becomes zero, corresponding to the
> denominator of the transfer function being infinite, corresponding to
> a pole at infinity.
>
> If you reflect the FIR structure around the origin you end up with no
> poles, but also what's usually called a non-causal filter.
>
> If instead you reflect the FIR coeffs around some finite tap number
> (even with an infinite extent of coeffs), a number of poles corresponding to
> the delay of that reflection point stay put or get added at zero.
>
> (Disclaimer: This post is probably due to line noise, no doubt caused
> by too large an evening glass of red wine placed on the modem cable.)
> IMHO. YMMV.
You lost me. I hold this truth to be, if not self evident, at least well established: every transversal (tapped delay line) filter has as many poles as zeros, and they are all at the origin on the z plane. Assuming that's correct, how can scrambling the values of the tap coefficients in any way put poles at infinity?

Jerry
-- 
Engineering is the art of making what you want from things you can get.
Reply by Matt Timmermans February 26, 2004
"robert bristow-johnson" <rbj@surfglobal.net> wrote in message
news:BC6313C3.8E9C%rbj@surfglobal.net...
> it may be true, but doesn't appear obvious to me.  if Re{f(w)} and Im{f(w)}
> are a Hilbert pair, why are Re{exp(f(w))} and Im{exp(f(w))} also a Hilbert pair?
Because pointwise sums and products preserve that property, and are sufficient to construct exp(f(w)) from f(w), using the Taylor series for exp(x).
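A discrete stand-in for this argument, assuming NumPy (the band limit and the 0.3 amplitude are arbitrary choices, made so that the Taylor terms of exp() that would alias past the Nyquist bin are negligible): build a signal whose spectrum lives only on positive frequencies, exponentiate it pointwise, and check that the negative-frequency energy stays at noise level.

    import numpy as np

    n = 4096
    rng = np.random.default_rng(1)

    # Construct a one-sided-spectrum ("analytic") signal directly in the
    # frequency domain: nonzero only on a narrow band of positive bins.
    F = np.zeros(n, dtype=complex)
    band = np.arange(1, n // 64)
    F[band] = rng.standard_normal(band.size) + 1j * rng.standard_normal(band.size)
    f = np.fft.ifft(F)
    f *= 0.3 / np.max(np.abs(f))    # small amplitude: exp()'s Taylor tail is tiny

    g = np.exp(f)                   # pointwise exponential of the signal

    def neg_freq_energy(sig):
        S = np.fft.fft(sig)
        return np.sum(np.abs(S[n // 2 + 1:]) ** 2) / np.sum(np.abs(S) ** 2)

    print("negative-frequency energy, f:      ", neg_freq_energy(f))
    print("negative-frequency energy, exp(f): ", neg_freq_energy(g))
    # Both stay at floating-point noise level: sums and pointwise products
    # (and hence the exp series) keep the spectrum one-sided.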
> i'm not sure that you aren't tossing about the term "analytic" here since
> there are two different usages of it, *both* possibly legitimately
> referenced when one is discussing minimum phase and Hilbert Transform.
I was careful to tell you which I meant -- "has one-sided Fourier transform".
> [...] any phase response different from that would have to be that of a
> non-minimum phase filter and, as the name implies, it would have to
> be *more* than the min-phase.
Either that or non-causal, yes.
> if you set
> up the limits of the integrals well, you can compute the HT of a lot of
> functions despite the problems of integrating 1/t. [...] that will work for
> an awful lot of odd symmetry functions (which the phase
> guys are) because whatever nasty thing that happens around +a will get
> virtually cancelled by whatever nasty thing that happens around -a as
> a->inf.
No, that does not work for odd symmetry functions. The HT kernel has odd symmetry too, so the +a and -a sides add constructively.
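A crude numerical illustration of that point, assuming NumPy, with x(u) = u standing in for a linearly growing, odd, non-decaying phase curve: in the symmetric-limit integral quoted above, the odd kernel against the odd integrand makes both halves come out with the same sign, so the truncated value grows with the cutoff a instead of settling down.

    import numpy as np

    # Truncated, symmetric-limit Hilbert integral at t = 0 for x(u) = u:
    #   (1/pi) * [ integral from -a to -1/a  +  integral from +1/a to +a ]
    #            of  x(0 - u) / u  du
    # With x(u) = u the integrand equals -1/pi on both halves, so the two
    # sides reinforce rather than cancel.
    def truncated_ht_at_zero(a, n=200_000):
        u = np.linspace(1.0 / a, a, n)        # the positive half of the grid
        du = u[1] - u[0]
        pos = (0.0 - u) / (np.pi * u)         # integrand on [+1/a, +a]
        neg = (0.0 + u) / (np.pi * (-u))      # integrand on [-a, -1/a], via u -> -u
        return np.sum(pos + neg) * du

    for a in (10.0, 100.0, 1000.0):
        print(a, truncated_ht_at_zero(a), -2.0 * (a - 1.0 / a) / np.pi)
    # The value tracks -2(a - 1/a)/pi and diverges as a grows.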
Reply by Ronald H. Nicholson Jr. February 26, 2004
In article <403a23f0$0$3095$61fed72c@news.rcn.com>,
Jerry Avins  <jya@ieee.org> wrote:
>Bob Cain wrote:
>> Matt Timmermans wrote:
>>> You actually have to put the poles outside the unit circle, as well as
>>> the zeros, to get that effect. I believe the term "maximum phase",
>>> though it isn't used much, means with poles inside and zeros outside.
>>
>> Curious how to do that if they are all at the origin.
>
>They would all go to infinity, the only question being the angle. But it
>doesn't happen. I meant an FIR structure; how else could the tap
>coefficients be swapped end for end? So the poles stay at the origin.
The poles do go to infinity if you truly swap the "entire" FIR structure end for end. That's because you'd have to swap the entire positive real axis to cover all possible FIR structures. All the FIR coeffs end up at positive infinity (in reverse order), so the impulse response in any finite interval around the origin becomes zero, corresponding to the denominator of the transfer function being infinite, corresponding to a pole at infinity.

If you reflect the FIR structure around the origin you end up with no poles, but also what's usually called a non-causal filter.

If instead you reflect the FIR coeffs around some finite tap number (even with an infinite extent of coeffs), a number of poles corresponding to the delay of that reflection point stay put or get added at zero.

(Disclaimer: This post is probably due to line noise, no doubt caused by too large an evening glass of red wine placed on the modem cable.)  IMHO. YMMV.
-- 
Ron Nicholson    rhn AT nicholson DOT com     http://www.nicholson.com/rhn/
#include <canonical.disclaimer>        // only my own opinions, etc.
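A small check of the finite-reflection case, assuming NumPy (the zero locations below are arbitrary): reversing a finite set of taps end for end sends each zero z0 to 1/z0, reflecting it through the unit circle, while the poles contributed by the filter's delay stay at the origin rather than running off to infinity.

    import numpy as np

    # Build a causal FIR with known zeros inside the unit circle.
    zeros_in = np.array([0.5, 0.3 + 0.4j, 0.3 - 0.4j])
    h = np.poly(zeros_in).real              # tap coefficients h[0..3]

    print("zeros of h:          ", np.roots(h))
    print("zeros of reversed h: ", np.roots(h[::-1]))
    print("reciprocals of above:", 1.0 / zeros_in)
    # Reversing the taps maps each zero z0 to 1/z0; the len(h)-1 poles of
    # either version sit at z = 0, so reflection about a finite tap index
    # never pushes poles to infinity.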
Reply by robert bristow-johnson February 26, 2004
In article c1iqi205el@enews3.newsguy.com, Bob Cain at
arcane@arcanemethods.com wrote on 02/25/2004 13:46:

> robert bristow-johnson wrote:
>
>> again, how do you show, that for a filter that has all zeros inside the unit
>> circle (or in the left half s-plane for continuous-time) that such a filter's
>> log-magnitude and phase (in radians) are a Hilbert pair?  and why is it not
>> true if any of the zeros are reflected to outside the unit circle (or to the
>> right half s-plane)?  showing this equivalence is not easy even if it is a
>> commonly referenced fact.
>
> But if anybody could do it, I'll bet it's you. :-)
it's too painful.  in the s-plane, people who've taken a complex variables course and like to do contour integration and residues can come up with the "formulae of Poisson, Laplace, and Hilbert", which show it as long as the complex function log(H(s)) is "analytic" in the right half plane (zeros there will blow it up when you log()).

one way i did it for myself (s-plane) was to show that the Hilbert Transform of 1/(1+w^2) is w/(1+w^2), and that those are the derivatives of arctan(w) and log(sqrt(1+w^2)), which are the phase and log magnitude of a single zero in the left half-plane.  so if the derivatives are a Hilbert pair, so are the functions.  then use linearity (both phases and log-magnitudes add) to extend to multiple poles and zeros in the left half-plane.

using a Bilinear Transform argument didn't work for extending this to the z-plane, and the only proof i understood was the one in O&S 1989 (something to do with homomorphic processing).  it's all a female dog.

r b-j
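For what it's worth, the 1/(1+w^2) <-> w/(1+w^2) pair is easy to spot-check numerically with SciPy's FFT-based analytic-signal routine. This is only a sketch: the grid width, the number of samples, and the comparison window are arbitrary choices, and accuracy is limited by the periodic FFT and the truncated tails.

    import numpy as np
    from scipy.signal import hilbert

    # Sample 1/(1+w^2) on a wide, dense grid; the imaginary part of the
    # analytic signal approximates its Hilbert transform away from the edges.
    n = 2 ** 16
    w = np.linspace(-200.0, 200.0, n)
    x = 1.0 / (1.0 + w ** 2)

    ht = np.imag(hilbert(x))                # numerical Hilbert transform of x
    expected = w / (1.0 + w ** 2)

    mid = slice(n // 4, 3 * n // 4)         # ignore the grid edges
    print("max |error| on the central half:",
          np.max(np.abs(ht[mid] - expected[mid])))
    # The error should be small compared with the peak value 0.5 of w/(1+w^2).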
Reply by robert bristow-johnson February 26, 2004
In article %bd%b.16222$253.1014068@news20.bellglobal.com, Matt Timmermans at
mt0000@sympatico.nospam-remove.ca wrote on 02/25/2004 21:54:

> > "robert bristow-johnson" <rbj@surfglobal.net> wrote in message > news:BC61A194.8DF1%rbj@surfglobal.net... >> In article ESz_b.4178$ee3.228505@news20.bellglobal.com, Matt Timmermans at >> mt0000@sympatico.nospam-remove.ca wrote on 02/23/2004 22:52: >> [...] >>> Ah, but I said "analytic signal". The complex log of the frequency response >>> is an analytic signal, so the frequency response is an analytic signal, >> >> but that does not necessarily follow. you could have a non-minimum phase >> frequency response of a causal filter (which is an "analytical signal") but >> the complex log of it would not be. > > That's correct, but it *does* work in the other direction -- if f(w) is > analytic (in the sense of "analytic signal", that I'll be using henceforth > ;-), then exp(f(w)) is also analytic. This follows directly from the Taylor > series expansion of exp(x), with which you can build exp(f(w)) from f(w) > through a convergent sequence of point-wise sums and products. Both of > those operations preserve the analytic property.
it may be true, but doesn't appear obvious to me.  if Re{f(w)} and Im{f(w)} are a Hilbert pair, why are Re{exp(f(w))} and Im{exp(f(w))} also a Hilbert pair?
> You can't do that in the other direction, because a Taylor series for ln(x)
> needs constants in the numerators that mess it up.  I'm not even sure that
> the Taylor series for ln(x) is valid for complex arguments.  Either way, an
> analytic frequency response needn't have an analytic complex log, and
> doesn't for non-minimum phase filters.
i'm not sure that you aren't tossing about the term "analytic" here, since there are two different usages of it, *both* possibly legitimately referenced when one is discussing minimum phase and Hilbert Transform.

Analytic Signal: some complex f(t) (real t) such that Im{f(t)} = -Hilbert{ Re{f(t)} }.  the Fourier Transform F(jw) will be one-sided.  EE communications texts like to define this one.

Analytic Function: some complex F(s) (complex s) such that Im{ (d/du)F(u+j*v) } = -Re{ (d/dv)F(u+j*v) } (the Cauchy-Riemann condition) in some region of interest.  complex math texts like to do this one.  if you do a line integral around a closed curve where the function is analytic everywhere inside, that line integral will be zero.
>> again, how do you show, that for a filter that has all zeros inside the unit
>> circle (or in the left half s-plane for continuous-time) that such a filter's
>> log-magnitude and phase (in radians) are a Hilbert pair?
>
> I don't.  Everyone seems to know that, and I had no reason to doubt them,
yeah, if only people hadn't done that regarding WMD and the evil Bush. sometimes the curious in us must demand proof of stuff that everyone seems to know.
> so
> I just took their word after verifying a few cases.  The other side of the
> coin is important to me, however...
what's the other side?
>> and why is it not
>> true if any of the zeros are reflected to outside the unit circle (or to the
>> right half s-plane)?  showing this equivalence is not easy even if it is a
>> commonly referenced fact.
>
> If a discrete-time filter doesn't have the same number of poles as zeros
> inside the unit circle, or if a continuous-time filter has (left poles +
> right zeros) != (right poles + left zeros), then its phase response, i.e.,
> the integral of the group delay, has an antisymmetric divergent
> characteristic.  It does not approach zero as w -> +\inf and -\inf.
> Instead, the positive side grows without bound in one direction while the
> negative side grows without bound in the other (discrete case), or the
> positive side and the negative side have offset asymptotes (continuous
> case).  You only have to graph a few of these phase responses to see it.
>
> Either behaviour renders the Hilbert transform undefined -- if you try to
> evaluate it, it goes infinite everywhere.  Since such phase responses don't
> even have Hilbert transforms, they certainly don't form Hilbert transform
> pairs with the log-magnitude responses of those filters.
that's a bit unsatisfying to me since, normally, the application of the min-phase and HT property is that given a magnitude response, *if* you assume that the magnitude response is that of a minimum phase filter, then you can always determine what that phase response would be (it would be the negative of the HT of the natural log of the magnitude).  any phase response different from that would have to be that of a non-minimum phase filter and, as the name implies, it would have to be *more* than the min-phase.

if you set up the limits of the integrals well, you can compute the HT of a lot of functions despite the problems of integrating 1/t:

  ~x(t) = Hilbert{ x(t) }

        =  lim    [  integral from  -a  to -1/a  of  x(t-u)/(pi*u) du
         a->+inf   + integral from +1/a to  +a   of  x(t-u)/(pi*u) du ]

that will work for an awful lot of odd symmetry functions (which the phase guys are) because whatever nasty thing that happens around +a will get virtually cancelled by whatever nasty thing that happens around -a as a->inf.
Reply by Matt Timmermans February 25, 2004
"robert bristow-johnson" <rbj@surfglobal.net> wrote in message
news:BC61A194.8DF1%rbj@surfglobal.net...
> In article ESz_b.4178$ee3.228505@news20.bellglobal.com, Matt Timmermans at
> mt0000@sympatico.nospam-remove.ca wrote on 02/23/2004 22:52:
> [...]
> > Ah, but I said "analytic signal".  The complex log of the frequency response
> > is an analytic signal, so the frequency response is an analytic signal,
>
> but that does not necessarily follow.  you could have a non-minimum phase
> frequency response of a causal filter (which is an "analytical signal") but
> the complex log of it would not be.
That's correct, but it *does* work in the other direction -- if f(w) is analytic (in the sense of "analytic signal", that I'll be using henceforth ;-), then exp(f(w)) is also analytic.  This follows directly from the Taylor series expansion of exp(x), with which you can build exp(f(w)) from f(w) through a convergent sequence of point-wise sums and products.  Both of those operations preserve the analytic property.

You can't do that in the other direction, because a Taylor series for ln(x) needs constants in the numerators that mess it up.  I'm not even sure that the Taylor series for ln(x) is valid for complex arguments.  Either way, an analytic frequency response needn't have an analytic complex log, and doesn't for non-minimum phase filters.
> again, how do you show, that for a filter that has all zeros inside the unit
> circle (or in the left half s-plane for continuous-time) that such a filter's
> log-magnitude and phase (in radians) are a Hilbert pair?
I don't. Everyone seems to know that, and I had no reason to doubt them, so I just took their word after verifying a few cases. The other side of the coin is important to me, however...
> and why is it not
> true if any of the zeros are reflected to outside the unit circle (or to the
> right half s-plane)?  showing this equivalence is not easy even if it is a
> commonly referenced fact.
If a discrete-time filter doesn't have the same number of poles as zeros inside the unit circle, or if a continuous-time filter has (left poles + right zeros) != (right poles + left zeros), then its phase response, i.e., the integral of the group delay, has an antisymmetric divergent characteristic.  It does not approach zero as w -> +\inf and -\inf.  Instead, the positive side grows without bound in one direction while the negative side grows without bound in the other (discrete case), or the positive side and the negative side have offset asymptotes (continuous case).  You only have to graph a few of these phase responses to see it.

Either behaviour renders the Hilbert transform undefined -- if you try to evaluate it, it goes infinite everywhere.  Since such phase responses don't even have Hilbert transforms, they certainly don't form Hilbert transform pairs with the log-magnitude responses of those filters.
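One way to see that divergent phase behaviour, as a rough sketch assuming NumPy and SciPy (the zero locations are arbitrary): take an FIR with all three zeros inside the unit circle, reverse the taps to reflect the zeros outside, and compare the unwrapped phase accumulated from w = 0 to w = pi.

    import numpy as np
    from scipy import signal

    h_min = np.poly([0.5, 0.3 + 0.4j, 0.3 - 0.4j]).real   # zeros inside |z| = 1
    h_max = h_min[::-1]                                    # same magnitude, zeros outside

    w, H_min = signal.freqz(h_min, worN=1024)
    _, H_max = signal.freqz(h_max, worN=1024)

    phi_min = np.unwrap(np.angle(H_min))
    phi_max = np.unwrap(np.angle(H_max))

    print("net phase change 0..pi, zeros inside: ", phi_min[-1] - phi_min[0])
    print("net phase change 0..pi, zeros outside:", phi_max[-1] - phi_max[0])
    # The all-inside version ends up back near zero, while the reflected
    # version accumulates roughly -3*pi (about -pi per reflected zero) --
    # the kind of unbounded phase growth described above.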
Reply by Jerry Avins February 25, 2004
Bob Cain wrote:

> Jerry Avins wrote:
>
>> That's a whole new ball of string. I'm under the impression that till
>> now, the discussion was limited to transversal FIRs. The link above is
>> about IIRs.
>
> Not really.  The process accepts a finite length signal as input.  He is
> just pointing out that it can also be used to bring poles that are
> outside the unit circle to the inside in IIR situations where that
> arises, by application of it to the denominator.
>
> Bob
It's not a surprising extension, but it doesn't lend itself to creating maximum-phase filters by reversing the order of the coefficients. :-)

Jerry
-- 
Engineering is the art of making what you want from things you can get.
Reply by Bob Cain February 25, 2004
Jerry Avins wrote:


> That's a whole new ball of string. I'm under the impression that till
> now, the discussion was limited to transversal FIRs. The link above is
> about IIRs.
Not really.  The process accepts a finite length signal as input.  He is just pointing out that it can also be used to bring poles that are outside the unit circle to the inside in IIR situations where that arises, by application of it to the denominator.

Bob
-- 
"Things should be described as simply as possible, but no simpler."  A. Einstein