
Identification of Non-Minimum Phase zeros

Started by HelpmaBoab March 17, 2006
If we have a non-minimum phase FIR system (1 - 2z^-1), then it is quite easy
to identify this system (assuming the input is known white driving noise)
using LMS or recursive least squares.

However, if we add uncorrelated white noise at the output, then the
equivalent system appears to be an innovations model where the zero is found
from spectral factorisation, i.e. the LMS algorithm identifies the spectral
factor directly, and this is minimum phase. Is it possible to identify the
underlying non-minimum-phase zero?


Tam
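
[Editorial aside, not part of the thread: a minimal sketch of the noise-free
identification the question takes as given, assuming Python with NumPy; the
record length, step size, and seed are arbitrary choices.]

import numpy as np

# Identify the non-minimum-phase FIR system 1 - 2 z^-1 by LMS,
# using the (known) white driving noise as the reference input.
rng = np.random.default_rng(0)
N = 20000
u = rng.standard_normal(N)            # known white driving noise
d = np.convolve(u, [1.0, -2.0])[:N]   # output of the true system 1 - 2 z^-1

w = np.zeros(2)                       # two-tap adaptive filter
mu = 0.01                             # LMS step size
for n in range(1, N):
    x = np.array([u[n], u[n - 1]])    # current input vector [u(n), u(n-1)]
    e = d[n] - w @ x                  # a-priori error
    w += mu * e * x                   # LMS update

print(w)   # converges to roughly [1, -2]; the zero at z = 2 is found directly

[In this sketch, adding uncorrelated noise to d while u is still used as the
regressor leaves the converged solution unchanged in the mean; the difficulty
raised in the question arises when the model must be fitted to the noisy
output alone.]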


HelpmaBoab wrote:
> If we have a non-minimum phase FIR system (1 - 2z^-1), then it is quite
> easy to identify this system (assuming the input is known white driving
> noise) using LMS or recursive least squares.
>
> -snip-
>
> Is it possible to identify the underlying non-minimum-phase zero?
Your assertion that adding white noise to the output will change the
essential nature of the answer is incorrect. Any ARMA model extraction
procedure should yield an answer that converges on the correct one with
enough sample points -- it's just that the more noise you have, the more
points you will need to get a good answer.

On what do you base the assertion?

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
Tim Wescott wrote:
> -snip-
>
> Your assertion that adding white noise to the output will change the
> essential nature of the answer is incorrect. Any ARMA model extraction
> procedure should yield an answer that converges on the correct one with
> enough sample points -- it's just that the more noise you have, the more
> points you will need to get a good answer.
>
> On what do you base the assertion?
This example is out of Van Trees vol IV (page 408 or so).

Take an AR process f(n) with power spectrum sigma_f^2 / |A(w)|^2.

Add white noise w(n) with variance sigma_w^2 and let x(n) = f(n) + w(n).

The power spectrum of x is

  Pxx(w) = sigma_w^2 + sigma_f^2 / |A(w)|^2

         = sigma_f^2 ( 1 + (sigma_w^2/sigma_f^2) |A(w)|^2 ) / |A(w)|^2

Additive noise changes an AR process to an ARMA process.
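
[Editorial aside: a small numeric check of the identity above, assuming
NumPy, with A(z) = 1 - 0.9 z^-1, sigma_f^2 = 1 and sigma_w^2 = 0.5 as
arbitrary example values.]

import numpy as np

w = np.linspace(0, np.pi, 512)
A = 1 - 0.9 * np.exp(-1j * w)          # A(e^{jw}) for an AR(1) example
sig_f2, sig_w2 = 1.0, 0.5

Pxx_sum  = sig_w2 + sig_f2 / np.abs(A) ** 2
Pxx_arma = sig_f2 * (1 + (sig_w2 / sig_f2) * np.abs(A) ** 2) / np.abs(A) ** 2
print(np.allclose(Pxx_sum, Pxx_arma))  # True: an AR spectrum plus white noise
                                       # has a pole-zero (ARMA) spectrum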
Stan Pawlukiewicz wrote:

> -snip-
>
> Additive noise changes an AR process to an ARMA process.
Just as clothing makes me appear to be civilized, additive noise makes an AR
process appear to be an ARMA process. But if there's an AR process there
it's still AR.

And the locations of the zeros don't magically reflect around the unit
circle when you add noise. If you have the input vector in hand you can
find said zero locations in a manner which is only ambiguous because of
noise, not because of teleporting zeros.

If you need to take the zeros to be minimum-phase it is likely because you
are setting out to construct a Wiener filter or one of its relatives, and
having an unstable filter is far worse than having a mismatch between the
pole and zero locations. The operation of reflecting the zeros around the
unit circle isn't done because the noise does magic; it's done to get sane
answers.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
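
[Editorial aside, assuming NumPy and SciPy: to make the reflection point
concrete, the non-minimum-phase 1 - 2 z^-1 and its reflected, minimum-phase
counterpart 2 - z^-1 have identical magnitude responses, so nothing built
purely from second-order statistics can tell them apart.]

import numpy as np
from scipy.signal import freqz

w, H_nmp = freqz([1.0, -2.0], worN=512)    # zero at z = 2, outside the unit circle
_, H_mp  = freqz([2.0, -1.0], worN=512)    # zero reflected to z = 0.5

print(np.allclose(np.abs(H_nmp), np.abs(H_mp)))   # True: same magnitude response
print(np.angle(H_nmp[1]), np.angle(H_mp[1]))      # but different phase responses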
Tim Wescott wrote:
> -snip-
>
> Just as clothing makes me appear to be civilized, additive noise makes an
> AR process appear to be an ARMA process.  But if there's an AR process
> there it's still AR.
How would you know the difference a priori, particularly if you didn't know
the noise variance? If the noise were colored, I don't think you would have
a prayer.
> And the locations of the zeros don't magically reflect around the unit
> circle when you add noise.  If you have the input vector in hand you can
> find said zero locations in a manner which is only ambiguous because of
> noise, not because of teleporting zeros.
If you don't pick the orders correctly, they do shift. That's true with or
without additive noise. I didn't say reflect. I don't believe that
parametric estimators are equivalent to unbiased, consistent ML estimators.
> If you need to take the zeros to be minimum-phase it is likely because you
> are setting out to construct a Wiener filter or one of its relatives, and
> having an unstable filter is far worse than having a mismatch between the
> pole and zero locations.  The operation of reflecting the zeros around the
> unit circle isn't done because the noise does magic; it's done to get sane
> answers.
The only beef I have is the claim that ARMA estimation is consistent and
unbiased.
"Tim Wescott" <tim@seemywebsite.com> wrote in message
news:BL-dnaQrfcWwrIbZnZ2dnUVZ_t-dnZ2d@web-ster.com...
> -snip-
>
> If you need to take the zeros to be minimum-phase it is likely because
> you are setting out to construct a Wiener filter or one of its relatives,
> and having an unstable filter is far worse than having a mismatch between
> the pole and zero locations.  The operation of reflecting the zeros around
> the unit circle isn't done because the noise does magic; it's done to get
> sane answers.
I don't disagree with that statement - this was my reasoning too - but how
to get at the non-minimum-phase zeros?

Tam
Stan Pawlukiewicz wrote:

> Tim Wescott wrote:
-snip-
>>>> Your assertion that adding white noise to the output will change the
>>>> essential nature of the answer is incorrect.  Any ARMA model extraction
>>>> procedure should yield an answer that converges on the correct one with
>>>> enough sample points -- it's just that the more noise you have, the more
>>>> points you will need to get a good answer.
-snip-
> The only beef I have is the claim that ARMA estimation is consistent and
> unbiased.
Where did I say it was unbiased?

I assume that by "consistent" you mean that the answer converges to the
correct result -- I haven't done a formal proof, but I'm pretty sure that
for a strictly linear system that strictly adheres to the model then yes,
it will converge.

The problem, of course, is that whatever real process you work with will
neither be linear, nor will it have exactly the number of states you think
it does, so your estimate will necessarily be inaccurate.

But none of that will make the zeros teleport from one spot to another.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
HelpmaBoab wrote:
> "Tim Wescott" <tim@seemywebsite.com> wrote in message > news:BL-dnaQrfcWwrIbZnZ2dnUVZ_t-dnZ2d@web-ster.com... >> Stan Pawlukiewicz wrote: >> >>> Tim Wescott wrote: >>> >>>> HelpmaBoab wrote: >>>> >>>>> If we have a non-minimum phase FIR system (1-2z^-1) when it is quite >>>>> easy to >>>>> identify this system (assuming the input is known white driving >>>>> noise) using >>>>> LMS or recursive-least squares. >>>>> >>>>> However, if we add uncorrelated white noise at the output then the >>>>> equivalent system appears to be an innovations model where the zero >>>>> is found >>>>> from spectral factorisation ie the LMS algorithm identifies the > spectral >>>>> factor directly and this is minimum phase. Is it possible to idenify > the >>>>> underlying non-min phase zero? >>>>> >>>>> >>>>> Tam >>>>> >>>>> >>>> Your assertion that adding white noise to the output will change the >>>> essential nature of the answer is incorrect. Any ARMA model >>>> extraction procedure should yield an answer that converges on the >>>> correct one with enough sample points -- it's just that the more noise >>>> you have, the more points you will need to get a good answer. >>>> >>>> On what do you base the assertion? >>>> >>> This example is out of Van Trees vol IV (page 408 or so) >>> >>> Take an AR process f(n) with power spectrum sigma_f^2 / | A(w)|^2. >>> >>> add white noise w(n) with variance sigma_w^2 >>> >>> let x(n)=f(n)+w(n). >>> >>> The power spectrum of x, >>> >>> Pxx= sigma_w^2 + sigma_f^2/|A(w)|^2 >>> >>> = sigma_f^2 ( 1 + (sigma_w^2/sigma_f^2) |A(w)|^2) >>> ------------------------------------------------- >>> |A(w)|^2 >>> >>> >>> Additive noise changes an AR process to an ARMA process. >>> >> Just as clothing makes me appear to be civilized, additive noise makes >> an AR process appear to be an ARMA process. But if there's an AR >> process there it's still AR. >> >> And the locations of the zeros don't magically reflect around the unit >> circle when you add noise. If you have the input vector in hand you can >> find said zero locations in a manner which is only ambiguous because of >> noise, not because of teleporting zeros. >> >> If you need to take the zeros to be minimum-phase it is likely because >> you are setting out to construct a Wiener filter or one of its >> relatives, and having an unstable filter is far worse than having a >> mismatch between the pole and zero locations. The operation of >> reflecting the zeros around the unit circle isn't done because the noise >> does magic, it's done to get sane answers. >> >> -- >> > I don't disagree with that statement - this was my reasoning too - but how > to get at the nmp zeros? > > > Tam > >
I'm going by memory here, so forgive me if I make a mistake.

If you are essentially using second-order statistics to identify your
process, then you automatically have a phase ambiguity. Any system with the
same magnitude response (but a different phase response) will satisfy the
LMS criterion. So not only do you have the minimum-phase and maximum-phase
solutions, but everything in between as well. The minimum-phase solution is
usually chosen for convenience, since it leads to stable and causal inverses.

Note: it is often forgotten that poles outside the unit circle are not
necessarily unstable. It depends where you choose your region of convergence
(ROC) for the Z transform. If you take the ROC inside the pole's magnitude,
so that it still includes the unit circle, the response is stable -
unfortunately it is also non-causal. Usually a causal response is assumed,
so the ROC is taken outside the pole; it then excludes the unit circle and
the response is unstable.

To factorize the power spectrum you may want to look at cepstrum techniques.
Minimum- and maximum-phase responses can be derived from the cepstrum.
Unfortunately I don't know the details off hand. Some references would be:

Oppenheim & Schafer - Discrete-Time Signal Processing
Vaidyanathan - Multirate Systems and Filter Banks

Spectral factorization often occurs in filter-bank design - you have a
magnitude response and need to factorize it into a minimum-phase filter.
The algorithm used is described in an appendix in Vaidyanathan.

Good luck - hope that helps.

Cheers,
David
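
[Editorial aside, assuming NumPy, with an arbitrary FFT length: a rough
illustration of the cepstral factorization David refers to, recovering the
minimum-phase factor from the power spectrum of the example system
1 - 2 z^-1.]

import numpy as np

nfft = 4096
H = np.fft.fft([1.0, -2.0], nfft)
P = np.abs(H) ** 2                         # power spectrum; all phase information is gone

c = np.fft.ifft(0.5 * np.log(P)).real      # real cepstrum of log|H| (half of log P)
fold = np.zeros(nfft)
fold[0] = c[0]                             # keep the zero-quefrency term
fold[1:nfft // 2] = 2 * c[1:nfft // 2]     # double the causal part ...
fold[nfft // 2] = c[nfft // 2]             # ... and drop the anti-causal part
h_min = np.fft.ifft(np.exp(np.fft.fft(fold))).real

print(h_min[:2])   # approximately [2, -1]: the minimum-phase factor 2 - z^-1,
                   # i.e. the zero has been reflected from z = 2 to z = 0.5

[By construction this returns only the minimum-phase spectral factor, which
is exactly the ambiguity discussed above; it cannot return the underlying
non-minimum-phase zero.]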
Tim Wescott wrote:
> -snip-
>
> I assume that by "consistent" you mean that the answer converges to the
> correct result -- I haven't done a formal proof, but I'm pretty sure that
> for a strictly linear system that strictly adheres to the model then yes,
> it will converge.
I don't think that's true for AR with additive noise. The underlying model
is no longer descriptive of what is measured. Consistent means that the
error reduces as the sample size goes up.
> The problem, of course, is that whatever real process you work with will
> neither be linear, nor will it have exactly the number of states you think
> it does, so your estimate will necessarily be inaccurate.
>
> But none of that will make the zeros teleport from one spot to another.
Not part of my beef.