Reply by Tim Wescott August 20, 2012
On Tue, 14 Aug 2012 10:17:47 -0700, benjamin.couillard wrote:

> Hi everyone,
>
> I've implemented a 6th-order IIR in an FPGA (technically an order 2
> with scattered look-ahead with pole-zero cancellation, check
> http://www.ece.umn.edu/users/parhi/SLIDES/chap10.pdf).
>
> I've tried using noise shaping (1st-order) to get away with using less
> bits in my feedback loop (as I did successfully in a 1st-order IIR) but
> it only added oscillation to the output. So basically, I was wondering
> are there ways to use noise-shaping with high-order IIR filters? Can
> anyone suggest a good reference on the subject?
>
> Thanks
>
> Benjamin
You do realize that a 1st-order filter with noise shaping (or fraction
saving, whatever you want to call it) is going to give you an
oscillation?

Where did you apply your noise shaping?  What form of filter did you use
(or, alternately, what is your difference equation?)

Noise shaping (or fraction saving, or whatever you want to call it) is a
nonlinear operation, so the filter topology matters a lot.

I would be inclined to try breaking the filter into a 1st-order low-pass
filter and an integrator, with feedback to implement the resonant pole:

in -- + --> H1(z) ---> H2(z) --o-->
      A                        |
      '--------- z^-1 ---------'

where H1(z) = (1-d)z/(z-d), H2(z) = kz/(z-1), and d and k are adjusted
to give the desired response (you should be able to get an arbitrary
response this way).  Then do noise shaping at both H1 and H2.

I couldn't guarantee that you'd get less oscillation, but it may fly.

--
My liberal friends think I'm a conservative kook.
My conservative friends think I'm a liberal kook.
Why am I not happy that they have found common ground?

Tim Wescott, Communications, Control, Circuits & Software
http://www.wescottdesign.com
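A minimal fixed-point sketch of this two-section structure with fraction
saving applied at both quantization points.  The Q-format, the sign of
the feedback, and the coefficient handling below are illustrative
assumptions, not anything from the post above:

#include <stdint.h>

#define QBITS 14                 /* states and coefficients kept in Q14 */
#define ONE   (1 << QBITS)

typedef struct {
    int32_t y1, y2;              /* states of H1 and H2                 */
    int64_t frac1, frac2;        /* saved fractions (1st-order shaping) */
} resonator_t;

/* d and k are Q14 coefficients: H1(z) = (1-d)/(1 - d z^-1) is the
 * 1st-order lowpass, H2(z) = k/(1 - z^-1) is the integrator, and the
 * delayed output is subtracted at the summing node to close the loop. */
static int32_t resonator_step(resonator_t *s, int32_t x, int32_t d, int32_t k)
{
    int32_t u = x - s->y2;       /* summing node: input minus fed-back output */

    /* H1: y1 = d*y1 + (1-d)*u, quantized with fraction saving */
    int64_t acc1 = (int64_t)d * s->y1 + (int64_t)(ONE - d) * u + s->frac1;
    s->y1    = (int32_t)(acc1 >> QBITS);              /* keep the high bits    */
    s->frac1 = acc1 - ((int64_t)s->y1 << QBITS);      /* save what was dropped */

    /* H2: y2 = y2 + k*y1, quantized the same way */
    int64_t acc2 = ((int64_t)s->y2 << QBITS) + (int64_t)k * s->y1 + s->frac2;
    s->y2    = (int32_t)(acc2 >> QBITS);
    s->frac2 = acc2 - ((int64_t)s->y2 << QBITS);

    return s->y2;                /* also fed back through z^-1 on the next call */
}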
Reply by Robert Adams August 19, 2012
Sorry for using confusing terminology.  What I meant was that it is
common to use double precision in the feedback path, but just use the
high accumulator bits as the final filter output. In this case the
output noise is dominated by simple truncation noise, or you could add
dither before the truncation to make things nicer. The truncation that
happens in processing the low order bits is still shaped by the filter
poles, but unless the filter Q is extreme it's unlikely it will rise to
the level where it dominates the quantization noise introduced by the
output truncation.

Bob
Reply by robert bristow-johnson August 19, 2012
On 8/19/12 12:58 PM, Robert Adams wrote:
> Vlad
>
> I agree with your points. I think you are agreeing with me that if the
> goal is to minimize the un-weighted noise from dc to pi then matching
> the coefficients (= double precision) is optimal.
yeah, but Bob, that's the case with, say, simple dither: the dither with
the lowest unweighted mean-square is flat or white dither.  now you
remember that Gerzon-Craven result using information theory.  the
integral of log(S/N) adds up to the bit rate, and noise shaping just
trades this log(S/N) around between frequencies; the minimum total noise
power comes out in the case where it's flat.  but the point is we trade
off quantization noise power for audibility of quantization noise.  the
minimum audible noise is not likely the same as the minimum power noise.
> With regards to the output noise, if you keep the full precision of the output then you are correct that you still have truncation noise that is shaped by the poles, but much further down in amplitude. I was assuming that you truncate to single precision on the way out in which case the truncation noise would swamp out the shaped noise,
what does this mean, Bob? what is the difference between the (net) truncation noise and "shaped noise"? the truncation noise is always shaped, even if you're doing nothing other than truncation, no?
> but this depends on how it is coded.
>
> I know that in audio there was a proposal for psychoacoustic noise
> shaping that was called Sony super-bit-mapping or something like that.
> I believe it came from a guy at the University of Waterloo. In this
> case the goal was to minimize the psychoacoustically weighted noise and
> I believe his shaping section was fairly high order. So that is an
> example of shaping the quantization noise to meet a specific non-white
> target spectrum, although this was not in the context of an iir filter.
yup.  and they even had (or proposed) SBM-2 which was dynamically
adjusted the way compressed-audio quantization error is.  but this is
not compression; it's just dynamically-shaped quantization noise,
intended to be shaped in such a way as to be minimally audible.  perhaps
the best way to put (or "master") 32-bit music onto 16-bit red book CDs.

you can kinda emulate it with MATLAB by segmenting the audio, computing
(with FFT) the magnitude of the spectrum, and adding a specially
weighted copy of that to something that looks like the 0-dB curve of the
Fletcher-Munson set (or something more modern).  then define a bunch of
feedback coefs to hit that target spectrum (one way to compute such
coefs is sketched at the end of this post).  the noise would be
"pumping", but if you synchronize it well (you have to delay the audio a
little to line it up), it should pump up and down with the audio energy
on a frequency band-by-band basis.  if the audio goes down to zero, the
noise spectrum becomes shaped like the Fletcher-Munson curve, so it
should be minimally audible in that case.

On 8/19/12 9:50 AM, Vladimir Vassilevsky wrote:
> "Robert Adams"<robert.adams@analog.com> wrote in message > news:95388eeb-d03d-45af-8244-d71053625da2@googlegroups.com... > >> , and that other alignments of the noise-shaping filter give worse >> performance. > > It depends.
yeah, i'm not sure that all non-flat alignments are worse.  but i agree,
in a filter where the quantized signals are fed back, you have to
include the shaping the poles do.  you would also have to do that for
regular non-shaped quantization (dithered or not).

--

r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."
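Coming back to the "define a bunch of feedback coefs to hit that target
spectrum" step mentioned above: one way to compute such coefs, sketched
here under my own assumptions (made-up names, an arbitrary target curve,
and a plain LPC fit rather than whatever SBM actually did), is to run
Levinson-Durbin on the inverse of the target power spectrum.  The
resulting monic, minimum-phase A(z) has |A(e^jw)|^2 approximately
proportional to the target, and feeding the dropped quantizer fraction
back through its taps gives a noise transfer function equal to A(z).

#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define NFREQ 256     /* frequency grid points covering 0 .. pi */
#define ORDER 8       /* order of the noise-shaping filter      */

/* target[] is the desired noise power spectrum sampled at
 * w = pi*m/(NFREQ-1), m = 0..NFREQ-1, all values > 0.  On return,
 * a[1..ORDER] are the taps of A(z) = 1 - a[1] z^-1 - ... - a[ORDER] z^-ORDER,
 * which doubles as the error-feedback filter.                          */
static void shaper_from_target(const double *target, double *a)
{
    double r[ORDER + 1];

    /* autocorrelation of the inverse spectrum (inverse DFT by direct sum) */
    for (int k = 0; k <= ORDER; k++) {
        r[k] = 0.0;
        for (int m = 0; m < NFREQ; m++)
            r[k] += (1.0 / target[m]) * cos(M_PI * k * m / (NFREQ - 1));
        r[k] /= NFREQ;
    }

    /* Levinson-Durbin recursion for the LPC coefficients */
    double err = r[0], tmp[ORDER + 1];
    for (int i = 1; i <= ORDER; i++) {
        double acc = r[i];
        for (int j = 1; j < i; j++) acc -= a[j] * r[i - j];
        double k = acc / err;
        for (int j = 1; j < i; j++) tmp[j] = a[j] - k * a[i - j];
        for (int j = 1; j < i; j++) a[j] = tmp[j];
        a[i] = k;
        err *= 1.0 - k * k;
    }
}

int main(void)
{
    /* illustrative target: allow more noise toward high frequencies */
    double target[NFREQ], a[ORDER + 1] = { 0.0 };
    for (int m = 0; m < NFREQ; m++) {
        double w = M_PI * m / (NFREQ - 1);
        target[m] = 0.01 + (1.0 - cos(w));   /* arbitrary shape, strictly > 0 */
    }
    shaper_from_target(target, a);
    for (int i = 1; i <= ORDER; i++)
        printf("a[%d] = %f\n", i, a[i]);
    return 0;
}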
Reply by Robert Adams August 19, 2012
Vlad

I agree with your points. I think you are agreeing with me that if the
goal is to minimize the un-weighted noise from dc to pi then matching
the coefficients (= double precision) is optimal.

With regards to the output noise, if you keep the full precision of the
output then you are correct that you still have truncation noise that is
shaped by the poles, but much further down in amplitude.  I was assuming
that you truncate to single precision on the way out in which case the
truncation noise would swamp out the shaped noise, but this depends on
how it is coded.

I know that in audio there was a proposal for psychoacoustic noise
shaping that was called Sony super-bit-mapping or something like that. I
believe it came from a guy at the University of Waterloo.  In this case
the goal was to minimize the psychoacoustically weighted noise and I
believe his shaping section was fairly high order.  So that is an
example of shaping the quantization noise to meet a specific non-white
target spectrum, although this was not in the context of an iir filter.

Bob
Reply by Vladimir Vassilevsky August 19, 2012
"Robert Adams" <robert.adams@analog.com> wrote in message 
news:95388eeb-d03d-45af-8244-d71053625da2@googlegroups.com...

> 1) Assume you have a direct form 1 biquad with a single adder that sums
> together all the feed-forward and feedback terms. This adder will have
> extended resolution due to the multiplication of the data and the
> coefficients.
That is one particular case. There could be more than one biquad in
sequence, and not necessarily implemented as DF1.
> 3) The traditional view of error shaping is that you view the feedback
> filter as a noise-shaping filter, and design it to minimize the output
> noise.
Noise shaping is intended to minimize the noise in the band of interest.
That is not the same as minimizing the total noise.
> 5) The last step is to realize that this is the same as double precision
> just applied to the recursive section of the filter, assuming that the
> coefficients in the recursive part of the biquad filter are the same as
> the coefficients used in the noise-shaping filter;
Yes, on the assumption of unweighted noise from 0 to Nyquist and a
single-section DF1.
> I have read papers that indicate that if you want to minimize the rms
> noise, the best thing you can do is to whiten the noise spectrum, so I
> think this scheme is optimal
The output noise of this scheme won't be white; it would be the
double-precision LSB noise shaped by the recursive part of the filter.
> , and that other alignments of the noise-shaping filter give worse
> performance.
It depends.
> In other words, putting an infinite noise-shaping notch at the pole
> frequency may minimize the noise in that particular part of the
> spectrum, but you will pay the price in that the energy in other parts
> of the spectrum will become larger.
You can draw the noise spectrum as log(energy) versus frequency. If you
dig a hole at some place in that spectrum, you will have to put no less
than the same amount of ground on top at some other place in that
spectrum.

Vladimir Vassilevsky
DSP and Mixed Signal Consultant
www.abvolt.com
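In formulas, this is the Gerzon-Craven / Bode-type area constraint being
invoked: for any stable error-feedback quantizer whose noise transfer
function NTF(z) is causal with leading coefficient 1,

\frac{1}{2\pi}\int_{-\pi}^{\pi} \ln\bigl|\mathrm{NTF}(e^{j\omega})\bigr|\, d\omega \;\ge\; 0 ,

with equality only when NTF(z) is minimum phase; any dip of |NTF| below
0 dB in one band has to be paid for by a rise above 0 dB somewhere else.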
Reply by Robert Adams August 18, 2012
Robert

I think we agree. If you are custom designing your own hardware, using
noise shaping gives you the freedom to use a lower order or maybe fewer
bits, resulting in less hardware. However when you are coding on a
commercial dsp, doing the double precision feedback may only take a few
more cycles, and you have the hardware anyway so you might as well use
it. But even on a commercial dsp I agree that 1st order shaping could be
done with less overhead than full double precision.

Bob
Reply by robert bristow-johnson August 18, 2012
On 8/14/12 9:42 PM, Robert Adams wrote:
> On Tuesday, August 14, 2012 8:59:01 PM UTC-4, robert bristow-johnson wrote:
>> On 8/14/12 6:41 PM, Vladimir Vassilevsky wrote:
>>
>>> "Robert Adams"<robert.adams@analog.com> wrote:
>>
>>>> However when you follow this through to its logical conclusion, you end
>>>> up with using the same coefficients in your noise shaping
>>>> loop as you use in the recursive iir section, which turns out to be simply
>>>> double precision.
>>
>>> This is not the same as doing filter in double precision.
>>
>> yeah, Bob.  i agree with Vlad.  it's not the same.
>
> Assume you have a 2nd order biquad using direct form 1. The only
> quantization error occurs at the output of the summer that combines the
> b0, b1, b2, a1, and a2 terms. The output contains the quantization
> shaped by 1/(1 + a1*z^-1 + a2*z^-2).
>
> So now lets take the low bits from that summer that we would have
> thrown away, and apply to 2 cascaded delays with coefficients aa1 and
> aa2 feeding back to the same summation node. Since we know that the
> quantization noise response has a peak at the pole frequency, we might
> want to have the aa1 and aa2 coefficients selected such that the error
> feedback 2-tap FIR has a corresponding dip to minimize the output
> quantization noise. If you work through the math you will discover that
> to whiten the output noise you should set aa1 = a1 and aa2 = a2. When
> this is done, it can be seen that you have really just applied
> double-precision math to the recursive portion of the IIR.
>
> I'll try to dig up the references on this, but it was well documented
> some years ago.
i agree that in the special case of the feedback coefs for noise shaping
matching the IIR feedback coefs, that the result is white and it's
equivalent to single times double precision (where you toss the least
significant word).  that's pretty clear if the quantization is always
rounding down (just dropping the bits and feeding back those dropped
bits, zero-extended).  it's just splitting the LS word and MS word and
applying the distributive property we learn in grade 7.

but noise shaping is more general than that.  the simplest noise
shaping, where whatever bits you drop are fed back zero-extended (with a
gain of 1) to the same quantization for the following sample, that is
not equivalent to just doing double-precision.  this method has been
called "fraction saving" by Randy Yates and is *very* inexpensive and
takes care of that limit-cycle problem where the IIR gets stuck on a
non-zero DC value when the input goes to zero.

in my opinion, *every* fixed-point audio IIR should use this method at
every quantization point.  and Direct Form 2 should never be used, but
DF1 still works pretty good for me, unless there is a lot of coefficient
modulation, then some lattice structure is usually better.

--

r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."
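A minimal fixed-point sketch of fraction saving in a DF1 biquad, as
described above.  The Q-format, names, and the sign convention (feedback
written as +a1, +a2, i.e. denominator 1 - a1 z^-1 - a2 z^-2) are
illustrative assumptions:

#include <stdint.h>

#define QBITS 14

typedef struct {
    int32_t b0, b1, b2, a1, a2;  /* Q14 coefficients                       */
    int32_t x1, x2, y1, y2;      /* DF1 state                              */
    int64_t frac;                /* bits dropped at the last quantization  */
} biquad_t;

static int32_t biquad_step(biquad_t *f, int32_t x)
{
    /* one wide accumulator sums all feed-forward and feedback products,
     * plus the fraction saved from the previous sample                  */
    int64_t acc = (int64_t)f->b0 * x
                + (int64_t)f->b1 * f->x1
                + (int64_t)f->b2 * f->x2
                + (int64_t)f->a1 * f->y1
                + (int64_t)f->a2 * f->y2
                + f->frac;

    int32_t y = (int32_t)(acc >> QBITS);         /* quantize: drop low bits */
    f->frac = acc - ((int64_t)y << QBITS);       /* save them for next time */

    f->x2 = f->x1;  f->x1 = x;
    f->y2 = f->y1;  f->y1 = y;
    return y;
}

The dropped bits come back with a one-sample delay, so the quantization
error is shaped by (1 - z^-1), which is what kills the stuck-at-DC limit
cycles.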
Reply by Robert Adams August 17, 2012
Yes I think you have the right idea. I would start with 


D3 = a3
D6 = a6
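A minimal fixed-point sketch of that matched error feedback for the
recursive part 1/(1 - a3*z^-3 - a6*z^-6): the fraction dropped at the
quantizer is delayed and fed back through the same a3 and a6 taps, which
whitens the output quantization noise (and is the same thing as carrying
double precision in the feedback).  The Q-format and names are
illustrative assumptions, and the sign of the d3/d6 taps depends on how
the quantization error is defined:

#include <stdint.h>

#define QBITS 14                  /* a3, a6 in Q14 */

typedef struct {
    int32_t a3, a6;
    int32_t y[6];                 /* y[n-1] .. y[n-6]                   */
    int32_t e[6];                 /* dropped fractions, same delay line */
} rec6_t;

/* x is the output of the numerator Num(z) for the current sample */
static int32_t rec6_step(rec6_t *s, int32_t x)
{
    int64_t acc = ((int64_t)x << QBITS)
                + (int64_t)s->a3 * s->y[2]                /* a3 * y[n-3]       */
                + (int64_t)s->a6 * s->y[5]                /* a6 * y[n-6]       */
                + (((int64_t)s->a3 * s->e[2]) >> QBITS)   /* d3 = a3 error tap */
                + (((int64_t)s->a6 * s->e[5]) >> QBITS);  /* d6 = a6 error tap */

    int32_t y = (int32_t)(acc >> QBITS);                  /* keep high bits */
    int32_t e = (int32_t)(acc - ((int64_t)y << QBITS));   /* save low bits  */

    for (int i = 5; i > 0; i--) { s->y[i] = s->y[i-1]; s->e[i] = s->e[i-1]; }
    s->y[0] = y;
    s->e[0] = e;
    return y;
}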

Bob

Reply by August 17, 2012
Back to my case

My transfer function looks something like this

H(z) = Num(z)/(1 - a3*z^-3 - a6 * z^-6).

If I understand correctly, the noise shaping block should have a transfer function like (1 + d3 * z^-3 + d6 * z^-6)? 
Reply by Robert Adams August 16, 2012
On Tuesday, August 14, 2012 1:17:47 PM UTC-4, benjamin....@gmail.com wrote:
> Hi everyone,
>
> I've implemented a 6th-order IIR in an FPGA (technically an order 2
> with scattered look-ahead with pole-zero cancellation, check
> http://www.ece.umn.edu/users/parhi/SLIDES/chap10.pdf).
>
> I've tried using noise shaping (1st-order) to get away with using less
> bits in my feedback loop (as I did successfully in a 1st-order IIR) but
> it only added oscillation to the output. So basically, I was wondering
> are there ways to use noise-shaping with high-order IIR filters? Can
> anyone suggest a good reference on the subject?
>
> Thanks
>
> Benjamin
Sorry I think you have to be a member of the AES and subscribe to the E
library to get that.

I realize that my previous postings left out a bit of information, so
let me try to explain my reasoning here again, bearing in mind that I
might be all wrong about this.

1) Assume you have a direct form 1 biquad with a single adder that sums
together all the feed-forward and feedback terms. This adder will have
extended resolution due to the multiplication of the data and the
coefficients. Typically the output of this adder is truncated before
feeding back into recursive delay memory. In a commercial fixed-point
DSP this extended resolution is typically carried in the LOW portion of
the accumulator. Most DSP's allow you to access these low bits
separately in order to do double-precision operations. If we assume the
injected truncation noise is uncorrelated, then the output spectrum will
look like white noise filtered by the recursive section of the filter,
so if your filter has high-Q poles then there will be a large peak in
the output noise spectrum. If the injected quantization noise is
correlated then you can get limit cycles but let's not worry about that
here.

2) Now assume you want to do 2nd-order shaping of the quantization noise
that is injected into the system when those LOW accumulator bits are
truncated. The way this is done is to take those LOW bits and apply them
to a 2-tap FIR filter which is then fed back to the summer (with
appropriate scaling to account for the fact that you shifted the LOW
bits up to process them, so they need to be shifted back down again
before the addition). This is the "textbook" signal flow diagram of
error shaping that you will find in the paper I referenced.

3) The traditional view of error shaping is that you view the feedback
filter as a noise-shaping filter, and design it to minimize the output
noise. In systems where you have high-Q poles in the recursive section
of the filter, you would normally design the noise transfer function to
have notches at that frequency.

4) If you take it one step further and place the zeros of the
noise-shaping function directly on top of the poles of the filter, then
the feedback coefficients of the noise-shaping filter and the feedback
coefficients of the recursive section of the biquad are exactly the
same. In this case the quantization noise will be whitened. Remember
that just because the error is white doesn't mean it hasn't been shaped;
you need to compare the spectrum with and without the shaping filter
applied.

5) The last step is to realize that this is the same as double precision
just applied to the recursive section of the filter, assuming that the
coefficients in the recursive part of the biquad filter are the same as
the coefficients used in the noise-shaping filter:

   a1*HIGH_BITS + a1*LOW_BITS = a1*(HIGH_BITS + LOW_BITS) = double-precision

and the same of course for a2. Note that technically this is "single X
double", since the coefficients are still single-precision.

Note that we are assuming that the final output of the filter is the
high bits only, so maybe that is one difference between true double
precision and this scheme, in that for true double precision you would
pass the full accumulator (HIGH + LOW) to the outside world, whereas we
are only passing on the HIGH bits.
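A quick numerical check of point 5), under arbitrary choices of
coefficients, Q-format and test signal (all of which are my assumptions,
not from the post): the matched error-feedback biquad and the
single-x-double-precision biquad produce bit-identical HIGH-bit outputs.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define Q 14
enum { B0 = 3277, B1 = 0, B2 = -3277,   /* some Q14 numerator                */
       A1 = 31000, A2 = -15500 };       /* denominator 1 - A1 z^-1 - A2 z^-2 */

int main(void)   /* assumes the usual arithmetic right shift for negatives */
{
    /* version 1: error feedback through the same a1, a2 coefficients */
    int32_t x1 = 0, x2 = 0, yh1 = 0, yh2 = 0, el1 = 0, el2 = 0;
    /* version 2: full accumulator (HIGH+LOW) kept in the feedback     */
    int32_t u1 = 0, u2 = 0;
    int64_t yf1 = 0, yf2 = 0;

    for (int n = 0; n < 1000; n++) {
        int32_t x = (rand() & 0xFFFF) - 0x8000;    /* arbitrary test input */

        /* --- version 1: aa1 = a1, aa2 = a2 error feedback ------------- */
        int64_t acc1 = (int64_t)B0 * x + (int64_t)B1 * x1 + (int64_t)B2 * x2
                     + (int64_t)A1 * yh1 + (int64_t)A2 * yh2
                     + (((int64_t)A1 * el1) >> Q)
                     + (((int64_t)A2 * el2) >> Q);
        int32_t yh = (int32_t)(acc1 >> Q);                 /* HIGH bits */
        int32_t el = (int32_t)(acc1 - ((int64_t)yh << Q)); /* LOW bits  */
        x2 = x1; x1 = x;  yh2 = yh1; yh1 = yh;  el2 = el1; el1 = el;

        /* --- version 2: single x double (wide accumulator fed back) --- */
        int64_t acc2 = (int64_t)B0 * x + (int64_t)B1 * u1 + (int64_t)B2 * u2
                     + (((int64_t)A1 * yf1) >> Q)
                     + (((int64_t)A2 * yf2) >> Q);
        int32_t y2 = (int32_t)(acc2 >> Q);
        u2 = u1; u1 = x;  yf2 = yf1; yf1 = acc2;

        assert(yh == y2);          /* HIGH bits agree sample by sample */
    }
    printf("matched error feedback == single x double precision\n");
    return 0;
}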
I have read papers that indicate that if you want to minimize the rms
noise, the best thing you can do is to whiten the noise spectrum, so I
think this scheme is optimal, and that other alignments of the
noise-shaping filter give worse performance. In other words, putting an
infinite noise-shaping notch at the pole frequency may minimize the
noise in that particular part of the spectrum, but you will pay the
price in that the energy in other parts of the spectrum will become
larger. I think there was a famous result by Gerzon and Craven in AESJ
about this, but the math was over my head.

Again I might have this all wrong, and I know rbj has enormous
experience here so the fact that he is skeptical worries me greatly. If
I end up embarrassing myself I reserve the right to use the Men-in-Black
memory-erase option, adapted for newsgroups.

Bob