Should work but doesn't (and vice versa)

Started by woodpecker 9 months ago · 17 replies · latest reply 9 months ago · 213 views

Hi everyone,  

Not exclusively DSP this one, but please bear with me.

I expect that most of us (especially those of more advanced years, such as myself) have dealt with software / hardware / systems / processes where you do everything correctly according to the book (sometimes more than one book). Then you check it again and again, but it simply Will Not Work!  For instance, perhaps a system which should be unconditionally stable (except for the one condition I hadn't thought of 😲)

Less common, but particularly worrying, is where something works but, according to the experts / books / laws of gravity etc., simply cannot happen. You simply can't let it go, especially if it is mission-critical, or even worse, safety-critical. But if you tell a client or someone important, you know they'll say "if it works, go with it, we're short of time."  So you tell no-one and work like crazy 'til you know why it works as well as how it works. 🙂

I once had this situation while working on an avionics system.  Has anyone had something similar?



Reply by rbj, August 6, 2023

About a quarter century ago, I was schooled a little by James McCartney (creator of SuperCollider) about this simple sinusoidal oscillator:

    y[n] = (2 cos(Ω))y[n-1] - y[n-2]

initial states (given the parameters, A, Ω, φ):

    y[-1] = A cos(-Ω + φ)

    y[-2] = A cos(-2Ω + φ)

what you get out from the math (for n ≥ 0) is

    y[n] = A cos(Ω n + φ)

Now, those of us who are electrical engineers (as opposed to computer science grads) might recognize that this is a "*marginally* stable" 2nd-order filter with two poles sitting right on top of the unit circle at e^(±jΩ).  So think of the initial states as the impulse that starts this marginally-stable filter ringing and it simply rings forever.
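To make that concrete, here is a minimal C sketch of the recursion (my own quick version; the variable names, the Ω value, and the print cadence are arbitrary, not from James's code):

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        const float A = 1.0f, omega = 0.1f, phi = 0.0f;
        const float c = 2.0f * cosf(omega);        /* the only coefficient */
        float y1 = A * cosf(-omega + phi);         /* y[-1] */
        float y2 = A * cosf(-2.0f * omega + phi);  /* y[-2] */
        float peak = 0.0f;

        for (long n = 0; n < 2000000000L; n++) {
            float y = c * y1 - y2;   /* y[n] = (2cos(Ω))y[n-1] - y[n-2] */
            y2 = y1;
            y1 = y;
            if (fabsf(y) > peak) peak = fabsf(y);
            if ((n & 0x0FFFFFFF) == 0) {           /* occasional peek at the envelope */
                printf("n = %ld, window peak = %.9f\n", n, peak);
                peak = 0.0f;
            }
        }
        return 0;
    }

With single-precision floats throughout, the printed peak stays at A for as long as you care to run it, which is the surprising part.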

Now consider a more general version of this filter, with pole radius r and an input x[n]:

    y[n] = (2r cos(Ω))y[n-1] - r^2 y[n-2] + x[n]

This filter has transfer function:

    Y(z)/X(z) = z^2/(z^2 - 2r cos(Ω) z + r^2)

              = (z-0)/(z - r e^(+jΩ))  ×  (z-0)/(z - r e^(-jΩ))

(Two zeros at z = 0 and two poles at z = r e^(±jΩ).)  Now it turns out that, because y[n-2] is simply subtracted, unscaled, r = 1 exactly.  But what about numerical issues?  Can we expect, with x[n] = 0, this marginally stable oscillator to go on forever without either slow exponential growth or decay?  The engineer in me (with my experience of designing a similar quadrature oscillator) said "no, we have to worry about stability with poles resting directly on the unit circle".

In this specific case, I was wrong.  You can run this simple oscillator until the cows come home and it seems to be fine.

But I still worry about quantization error in the (2 cos(Ω))y[n-1] term, where there is a multiplication and the result has to be quantized back to the original word width.  I worry that the quantization noise will accumulate and not die out because there is no decay.  But I could not make this simple oscillator blow up or die out.  It seemed to run fine, all day, balancing on the edge of stability.  Over a billion samples and no sign of change in amplitude.

Reply by Rick Lyons, August 6, 2023

Hi.

We can build oscillators from 2nd-order recursive networks with conjugate poles that always lie *EXACTLY* on the z-plane's unit circle. And the oscillator's output peak amplitude never decreases or increases. The effect of quantized coefficients is that the oscillator's poles stay on the unit circle but their angles become shifted from the desired angle. That is, the oscillator's output frequency is shifted away from our desired frequency.
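To spell out why coefficient quantization can't push the poles off the circle: with the y[n-2] coefficient fixed at exactly 1, the quantized recursion has the characteristic polynomial

    z^2 - (2 cos(Ω')) z + 1

where 2 cos(Ω') is the rounded coefficient. The product of the two roots of a monic quadratic equals its constant term, which here is exactly 1. So long as |2 cos(Ω')| < 2, the roots are a conjugate pair p and p*, and p p* = |p|^2 = 1 forces |p| = 1. Only the pole angle Ω' can move.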

Reply by rbj, August 6, 2023

"The effect of quantized coefficients is that the oscillator's poles stay on the unit circle but their angles become shifted from the desired angle."

I understand that, Rick.  My question is about the signal quantization that is necessary in the z^(-1) term.  Some non-zero component of that quantization error must be living at the effective resonant frequency of the oscillator.  So, with that additive noise source added in, doesn't a small amount of energy get added to the resonant frequency, causing, eventually, the amplitude at the resonant frequency to increase?

Reply by Rick Lyons, August 6, 2023

Hi Robert (I hope you're doin' well.)

The notion of input signal quantization error affecting the resonant frequency of a 2nd-order resonator is an idea that's new to me. I'll have to think more about that notion.

Reply by rbj, August 6, 2023

If it's additive, it's a source.  Use superposition.  But, of course, modeling quantization error as additive noise is an assumption.  With the non-linear reality, I dunno exactly how the quantization error in the only multiplicative term (2cos(Ω))×y[n-1] will interact with the full-scale sinusoid that already exists.

It just seems to me that if it were a filter starting with zero states and you drove such a filter with an external broadband noise source at very small amplitude, it would resonate and create a sinusoid where none existed before.

Reply by Rick Lyons, August 6, 2023

Hi Robert. You wrote,

"It just seems to me that if it were a filter starting with zero states and you drove such a filter with an external broadband noise source at very small amplitude, it would resonate and create a sinusoid where none existed before."

I agree with you on that.

Reply by rbj, August 6, 2023

So then, I would expect that, even if the resonant sinusoid from the noise is at some random (and slowly changing) phase, it would team up with the intended sinusoid that you get with the initial conditions of the input-less filter (as I have shown above) and, after a billion samples or more, the sinusoid would increase in amplitude, slowly, over time.

But James McCartney told me it never blew up on him, so (this was back in the old C days for me) I took the challenge and coded this up real quick with single-precision floats (32-bit IEEE) and ran it all day and overnight (with a counter counting how many simulated sample periods transpired), and the amplitude was unchanged.  No limiting was applied and this oscillator was rock-solid.  I had to eat my words.  But I still am not sure I would bet my life on it remaining solid.

Reply by Rick Lyons, August 6, 2023

My my Robert, you're up late tonight!

I applied a random sequence to a 2nd-order resonator (whose poles are on the unit circle) and the resonator did resonate at its expected frequency. However, the peak-to-peak amplitude of the resonator's output sinusoid jumped all over the place. And the peak-to-peak output amplitude sometimes grew into the thousands when the random input sequence length was in the millions.
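For anyone who wants to reproduce that behavior, here is a minimal C sketch along the same lines (my own quick version, not Rick's actual test; the uniform noise below is only a stand-in for his unit-variance Gaussian sequence):

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    /* zero-mean noise in [-1, 1]; a crude stand-in for a Gaussian sequence */
    static double noise(void) { return 2.0 * rand() / RAND_MAX - 1.0; }

    int main(void)
    {
        const double omega = 0.1;              /* resonant frequency, rad/sample */
        const double c = 2.0 * cos(omega);
        double y1 = 0.0, y2 = 0.0, peak = 0.0;

        for (long n = 0; n < 10000000L; n++) {
            double y = c * y1 - y2 + noise();  /* poles exactly on the unit circle */
            y2 = y1;
            y1 = y;
            if (fabs(y) > peak) peak = fabs(y);
        }
        printf("peak |y[n]| after 1e7 samples: %g\n", peak);
        return 0;
    }

The resonant mode does a random walk, so the envelope wanders and grows roughly like √n instead of settling, consistent with the peak values in the thousands mentioned above.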

Reply by rbj, August 6, 2023

Did you do this?:

    y[n] = (2 cos(Ω))y[n-1] - y[n-2]

initial states (given the parameters, A, Ω, φ):

    y[-1] = A cos(-Ω + φ)

    y[-2] = A cos(-2Ω + φ)

And it was randomly-varying amplitude?

Floating-point, I presume.  Single or double? 


Or did you do this:

    y[n] = (2 cos(Ω))y[n-1] - y[n-2] + x[n]

and drive it with a noisy x[n]?

Reply by Rick Lyons, August 6, 2023

Robert,

I implemented: y[n] = (2cos(Ω))y[n-1] - y[n-2] + x[n] where 2cos(Ω) was a single-precision floating-point number.

I forgot to say that my random input sequence was made up of double-precision floating-point numbers having mean = 0 and standard deviation = 1.

Also, different initial conditions (what you call y[n-1] and y[n-2]) didn’t seem to have any discernible effect on the output of my resonator.

Again, my resonator’s poles were on the z-plane’s unit circle. If I moved the poles inside the unit circle, the resonator’s output sinusoidal frequency was no longer equal to the resonator’s specified resonant frequency.

Reply by rbj, August 6, 2023

>"Also, different initial conditions (what you call y[n-1] and y[n-2]) didn’t seem to have any discernable effect on the output of my resonator."

Of course, Rick, I was portraying this two different ways.  One is as a stand-alone oscillator with no input.  The other is as a filter with an input x[n].  In the latter, I am assuming zeroed states to begin with.  In the former, having no input (which can be modeled as x[n] = 0 for all n), it's *only* the non-zero initial states that can get this going.

However, if y[n-1] and y[n-2] are initialized to *much* greater values than the noise x[n] (remember, the noisy x[n] is supposed to emulate the quantization error, which is much smaller in amplitude than the output of the oscillator), then I would expect the input x[n] to have very little effect in the short term.  My only surprise (that I had a quarter century ago) was that the quantization error had no discernible effect after billions of samples.  The amplitude of the sinusoid was still very close to A.  After billions of samples I thought that the amplitude would change visibly, since there are no limiters or anything to keep the amplitude constant.

I have to confess, I can't get Visual Studio to do even a Hello World program for me.  The IDEs of the olden daze were pretty easy to start up with.  I can't get VS to do shit and I am uninterested in learning its idiosyncrasies.  So I've only been programming in MATLAB of late (and there is a lot to hate in MATLAB).  There is a way to force MATLAB to use a specified word size; I'll try to figure it out.

So, here is what I think would be interesting: run the oscillator with single-precision floats for *both* the single coefficient and the signal states.  But, in that single multiplication, (2cos(Ω)) × y[n-1], first cast both to double, multiply to a double-precision result, then cast a copy to float, then subtract the double value from the float value.  That is the quantization error.  Then apply that quantization error, as the input x[n], to another identical system.  *If* the output of that parallel system grows without bound after billions of samples, then I don't understand why the original oscillator doesn't show that with a change in amplitude.
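In C, that test might look something like this (a quick sketch of the experiment just described; the names, Ω value, and print cadence are arbitrary):

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        const float A = 1.0f, omega = 0.1f, phi = 0.0f;
        const float c = 2.0f * cosf(omega);

        /* the oscillator itself: single-precision states, no input */
        float y1 = A * cosf(-omega + phi), y2 = A * cosf(-2.0f * omega + phi);

        /* identical parallel system: zero states, driven only by the error */
        float e1 = 0.0f, e2 = 0.0f;

        for (long n = 0; n < 2000000000L; n++) {
            double exact = (double)c * (double)y1;      /* multiply in double */
            float  quant = (float)exact;                /* cast a copy to float */
            float  x = (float)((double)quant - exact);  /* float minus double: the quantization error */

            float y = quant - y2;       /* y[n] = (2cos(Ω))y[n-1] - y[n-2] */
            float e = c * e1 - e2 + x;  /* same filter, driven by the error */

            y2 = y1; y1 = y;
            e2 = e1; e1 = e;

            if ((n & 0x0FFFFFFF) == 0)
                printf("n = %ld, |y[n]| = %g, |e[n]| = %g\n", n, fabsf(y), fabsf(e));
        }
        return 0;
    }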

Somehow, that quantization error should make a difference in a marginally stable filter that is resting directly on the knife edge.  The marginally stable filter is vulnerable to something pushing it over.  But it appears that nothing does, and that is an example of something working *better* than it theoretically would be expected to.

Reply by Rick Lyons, August 6, 2023

Hi Robert.

I'm a bit busy right now. Maybe you have the time to perform the interesting test that you describe in the fifth paragraph of your Reply.

In case you hadn't seen it, in Appendix C of the PDF file associated with my blog at:

https://www.dsprelated.com/showarticle/1467.php

I present a mathematical proof of why a second-order resonator, with poles on the unit circle, can have a stable (never decreasing nor increasing) output sinusoidal peak amplitude.


Reply by rbj, August 6, 2023

Thanks for the link, Rick.  I did the Rader/Gold quadrature oscillator myself on the 56K back around 1992.  I had to have the fixed-point coefficients round up in magnitude (instead of rounding down) so that, when they were squared and added, the sum was guaranteed not to be less than 1.  But if they added up to more than 1, the output would grow exponentially (but very slowly).  I simply put the natural 56K limiter (using saturation) on the output to fix the output amplitude.  But the poles were resting just barely outside the unit circle.
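For reference, the coupled ("quadrature") recursion being described looks like this in a minimal C sketch (floating point here rather than 56K fixed point, so the round-up-in-magnitude trick and the saturating limiter are only indicated in comments):

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        const double omega = 0.1;
        /* On the 56K these were fixed-point coefficients rounded UP in
           magnitude so that c*c + s*s >= 1; here they are exact doubles. */
        const double c = cos(omega), s = sin(omega);
        double u = 1.0, v = 0.0;         /* unit amplitude: u^2 + v^2 = 1 */

        for (int n = 1; n <= 1000; n++) {
            double un = c * u - s * v;   /* cosine output */
            double vn = s * u + c * v;   /* sine (quadrature) output */
            u = un;
            v = vn;
            /* if c^2 + s^2 > 1, the amplitude grows slowly; a saturating
               limiter on u and v is what capped it on the 56K */
            if (n % 200 == 0)
                printf("n = %d, amplitude = %.15f\n", n, sqrt(u*u + v*v));
        }
        return 0;
    }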

I'm not familiar with Levine/Vicanek nor the Frerking thing.  I still wonder what they do about coefficient rounding and signal quantization.  These poles sitting on the knife edge just seem a little "edgy" to me.  Without some way of determining the amplitude, comparing that amplitude to a set value, and then gently tweaking the coefficients, I don't feel comfortable guaranteeing that the amplitude does not drift toward zero or away from zero.

Reply by Platybel, August 6, 2023

My comments are two stories related to woodpecker's, similar but not quite the same. One is about 3rd-year undergrads in a Time Series Analysis (TSA) course (before 'DSP' was coined) several decades ago, which involved teaching Fortran (Oh yes! Fortran); the second is about a piece of research that I did in seismology.

We are talking mid-1970s. No TSA software was available for undergrads, at least none that we could afford. The TSA course was a two-semester course, with the aim of teaching the students both TSA and Fortran programming. Most students had no prior exposure to computing, let alone writing a program.  In the 2nd semester they started writing their own basic programs for filtering, convolution, noise reduction, etc. The beginning was difficult. In time they became proficient, and even came to love writing their own programs.  A girl told me: 'Sir, you have turned me into a nerd. All I think of is your programming assignments; I spend all my time debugging.'  So many times I heard from the students something similar to woodpecker's remark: "checked it again and again but it simply Will Not Work!"  Unlike woodpecker's, the students' problems had solutions.

The second story involves the derivation of the displacement, stress, and strain fields of inclined geological faults. Deriving the theory took me about a year; then, to test the theory, I computed the fields for a vertical fault, for which the solution was known. The two solutions did not agree. I checked my derivation again and again. It was correct.  I checked the derivation of the known solution for the vertical fault several times.  It was correct. That's when, so many times, in despair, I would think of woodpecker's phrase 'checked it again and again but it simply Will Not Work!'. I pondered this conundrum for a year, thinking and dreaming about it 24/7. Then one day I had a Eureka! moment: I had discovered the problem.  It lay in discarding some higher-order terms in a Taylor expansion because they were 'too small'.  It turned out the discarded terms were significant for the vertical-fault case. Two years of my life spent deriving a few simple-looking equations.

Reply by kaz, August 6, 2023

Hi,

It is hard to define what is meant by "working" and what is "not working".

You don't need to look far. All the software tools we use in our daily lives, the various operating systems and application tools as well as test equipment, are buggy at release; then the users do the debugging (called feedback) for free, and the producers send out a repair (called an update). No one waits until all input states have been checked.

Having standards will help to limit the test states. 

Reply by Treefarmer, August 6, 2023

Your comment that "someone important" tells you "if it works, go with it, we're short of time" is what happens when no individual is accountable. Classic examples are the Boeing 737 MAX sensor failures and the Corvair cars that overturned because GM didn't want to add the small expense of an axle stabilizer. The late John DeLorean had a chapter in his book "On a Clear Day You Can See General Motors" that dealt with why good people let bad things happen. To summarize, we can say that shared responsibility is no responsibility. And as kaz replied, having standards will help. I would add that a good regression-testing checklist will help too.

Reply by woodpecker, August 6, 2023

Thanks to all for the interesting replies. Good to hear from you again, Rick.

For beginners who might be rather baffled by the intricacies of recursive filters, I can highly recommend Chapter 6 of Rick Lyons' book "Understanding Digital Signal Processing", 3rd Edition, especially Section 6.7, "Pitfalls in Building IIR Filters".

This is perhaps why I'm more comfortable with FIR filters for anything the least bit complicated . . .