
Quantizing the filter coefficients, what are the criteria?

Started by Mr. Ken June 8, 2006
On Sat, 10 Jun 2006 01:14:53 GMT, robert bristow-johnson
<rbj@audioimagination.com> wrote:

> |1 - (e^-jw)*G(e^jw)|^2 = A*|H(e^jw)|^2
>
> for some real constant A > 0.
>
> so given the "ideal" FIR, h[n], with spectrum H(e^jw), you want to design
> G(z) so that the above equation is as accurate as possible, then run your
> h[n] through the noise-shaped quantizer:
>
>                .-------------.
>     e1[n] .---| (z^-1)*G(z) |<----.----- e[n] = hq[n] - h1[n]
>           |   '-------------'     |
>           v -                     v
> h[n] ---->(+)-------------------->(+)-------> hq[n]
>            +             h1[n]

I'm wondering about the practicality of this approach. It seems like h[n]
might be highly constrained -- not only in word length, but in number of
coefficients. But hq[n] could be of infinite length, so you have basically
traded word length for filter length.

For "small" filters it would be possible to "brute force" the answer. If
you constrain each quantization to either "round up" or "round down", then
for an N-length filter there are 2^N possible quantization sets. With
modern computational capabilities it would be possible to test every one
of them in a reasonable amount of time, for values of N up to about 32 or
so -- double that if there is symmetry in the coefficients and
quantization set.

For filters larger than that, or to accommodate individual quantization
errors larger than ±1 LSB, some other approach would be necessary.

Greg
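A rough sketch of Greg's exhaustive search (Python; the peak-spectral-error
figure of merit and all names here are illustrative assumptions, not
something specified in the thread, and plain interpreted code limits it to
far fewer than 32 taps):

    import itertools
    import numpy as np

    def brute_force_quantize(h, lsb):
        # try all 2^N round-up / round-down combinations and keep the
        # set whose spectrum stays closest (peak deviation on a dense
        # frequency grid) to the ideal one -- feasible only for small N
        h = np.asarray(h, dtype=float)
        lo = lsb * np.floor(h / lsb)          # every coef "rounded down"
        hi = lo + lsb                         # every coef "rounded up"
        H_ideal = np.fft.rfft(h, 1024)
        best, best_err = None, np.inf
        for bits in itertools.product((0, 1), repeat=len(h)):
            hq = np.where(bits, hi, lo)       # pick up/down per coef
            err = np.max(np.abs(np.fft.rfft(hq, 1024) - H_ideal))
            if err < best_err:
                best, best_err = hq, err
        return best

Coefficient symmetry, as Greg notes, halves the number of independent
up/down choices, so a symmetric N-tap filter needs only 2^(N/2) candidate
sets.
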
> does that make sense?

sure..tnx for all the effort describing this!
in article kgco825bn1nsatbi8i5lgksv6eojqd7gdd@4ax.com, Greg Berchin at
76145.2455@compuswerve.com wrote on 06/11/2006 11:38:

> On Sat, 10 Jun 2006 01:14:53 GMT, robert bristow-johnson
> <rbj@audioimagination.com> wrote:
>
>> |1 - (e^-jw)*G(e^jw)|^2 = A*|H(e^jw)|^2
>>
>> for some real constant A > 0.
>>
>> so given the "ideal" FIR, h[n], with spectrum H(e^jw), you want to design
>> G(z) so that the above equation is as accurate as possible, then run your
>> h[n] through the noise-shaped quantizer:
>>
>>                .-------------.
>>     e1[n] .---| (z^-1)*G(z) |<----.----- e[n] = hq[n] - h1[n]
>>           |   '-------------'     |
>>           v -                     v
>> h[n] ---->(+)-------------------->(+)-------> hq[n]
>>            +             h1[n]
>
> I'm wondering about the practicality of this approach.

it's a good thing to wonder about. i'm still thinking about how to design G(z). the Jens Jørgen Nielsen paper deals with implementation a bit more.
> It seems like h[n] might be highly constrained -- not only in word
> length, but in number of coefficients. But hq[n] could be of infinite
> length, so you have basically traded word length for filter length.

yup, but i don't think that, during the quantization process (where hq[n] gets discovered), once h[n] goes to zero, the "ringing" of the 1 - (z^-1)*G(z) system will exceed the LSB step of the quantizer for very long (so those samples of hq[n] will be zero once e1[n] gets small enough). i have no proof of that, of course.
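A minimal sketch of that quantization loop, following the diagram quoted
above (Python; the function name is invented here, and G(z) is assumed to
be supplied as a plain FIR tap vector g, a choice the thread leaves open):

    import numpy as np

    def noise_shape_quantize(h, g, lsb):
        # quantize h[n] to a grid of step `lsb`, feeding the error
        # e[n] = hq[n] - h1[n] back through (z^-1)*G(z), i.e.
        # e1[n] = sum_k g[k]*e[n-1-k]  and  h1[n] = h[n] - e1[n]
        e_hist = np.zeros(len(g))             # e[n-1], e[n-2], ...
        hq = np.empty(len(h))
        for n in range(len(h)):
            e1 = np.dot(g, e_hist)
            h1 = h[n] - e1
            hq[n] = lsb * np.round(h1 / lsb)  # the quantizer
            e_hist = np.r_[hq[n] - h1, e_hist[:-1]]  # newest error first
        return hq

To capture the "ringing" past the end of h[n], one could zero-pad h and
keep the loop running until the recirculated e1[n] stays below lsb/2 --
though, as rbj says, there is no proof that it always will.
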
> For "small" filters it would be possible to "brute force" the answer. > If you constrain each quantization to either "round up" or "round down", > then for an N-length filter there are 2^N possible quantization sets. > With modern computational capabilities it would be possible to test > every one of them in a reasonable amount of time, for values of N up to > about 32 or so -- double that if there is symmetry in the coefficients > and quantization set.
probably something like adding some small random numbers before quantizing it and "annealing" the set could also be done. dunno how to optimize it.

i think that Parks-McClellan could be modified to search for the extrema on the spectrum of the *quantized* h[n] during each particular recursive pass. i s'pose for those FPGA guys who like to zero some coefficients (us DSP guys wouldn't mind zeroing some regular set of coefs like in half-band filters), you could also reprogram P-McC in the later recursions to set the smallest coefs to zero and work with optimizing the remaining ones.
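The dither-and-"anneal" idea above, reduced to a plain random search, might
look like this sketch (Python; the shrinking-dither schedule and the
spectral error metric are illustrative guesses, and a true
simulated-annealing acceptance rule is omitted for brevity):

    import numpy as np

    def dither_search(h, lsb, iters=20000, seed=0):
        # perturb the ideal coefficients with shrinking random dither,
        # round to the grid, and keep the best set found so far
        rng = np.random.default_rng(seed)
        h = np.asarray(h, dtype=float)
        H_ideal = np.fft.rfft(h, 1024)

        def err(hq):
            return np.max(np.abs(np.fft.rfft(hq, 1024) - H_ideal))

        best = lsb * np.round(h / lsb)        # round-to-nearest start
        best_err = err(best)
        for i in range(iters):
            amp = 0.5 * lsb * (1.0 - i / iters)   # dither decays to zero
            cand = lsb * np.round((h + rng.uniform(-amp, amp, len(h))) / lsb)
            cand_err = err(cand)
            if cand_err < best_err:
                best, best_err = cand, cand_err
        return best
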
> For filters larger than that, or to accommodate individual quantization
> errors larger than ±1 LSB, some other approach would be necessary.

i guess i don't understand why or where there would be quantization errors of larger than ±1/2 LSB.

in article 448c5e14$0$5538$ba620e4c@news.skynet.be, NewLine at umts_remove_all_this@skynet.be wrote on 06/11/2006 14:16:

>> does that make sense?
>
> sure..tnx for all the effort describing this!

it beats finding any old posts where i or someone else went through it. (i made a mistake and typed "q[n]" when i meant to say "e[n]".)

the Jens Jørgen Nielsen paper should probably be the next place to go with this. although i have done noise-shaping on signals in my life (and still do), i have never done this to get some kind of quantized h[n]. i just knew of its existence.

--

r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."

On Sun, 11 Jun 2006 20:48:23 GMT, robert bristow-johnson
<rbj@audioimagination.com> wrote:

> i guess i don't understand why or where there would be quantization errors
> of larger than ±1/2 LSB.

If "Eq" is the quantization error, and you allow EITHER "rounding up" OR "rounding down", then 0 <= |Eq| < |1 LSB|. If you constrain 0 <= |Eq| <= |&#4294967295; LSB|, then you must always "round toward nearest". Quantizing all filter coefficients by "rounding toward nearest" yields a unique solution that minimizes both the absolute error (DC component) and the total squared error, but offers no degrees of freedom to optimize the spectral characteristics of the quantized filter. Removing the "round toward nearest" constraint, but constraining 0 <= |Eq| < |1 LSB|, gives 2^N degrees of freedom for the entire set of N quantized coefficients. If you define some other measure of "goodness of fit", based for example upon the spectrum of the quantized filter, then there will be one of those 2^N sets of quantized coefficients that is "best" under your alternate figure of merit, even though it is not "best" for absolute error or total squared error. If you allow |Eq| to be even larger than |1 LSB| then you have even more degrees of freedom, but finding that "best" quantization scheme quickly becomes computationally untenable. Having |Eq| >= |1 LSB| is not without precedent -- it happens all the time in a noise-shaped quantizer like the one that you described, because of the feedback of previous error terms. Greg