"Jon Harris" <goldentully@hotmail.com> wrote in message news:<br5kc7$28m75d$1@ID-210375.news.uni-berlin.de>...
> "Jerry Avins" <jya@ieee.org> wrote in message
> news:3fd5e38a$0$14969$61fed72c@news.rcn.com...
> > Piergiorgio Sartor wrote:
> >
> > > Randy Yates wrote:
> > >
> > > My idea is not to use random dithering, but error distribution.
> > >
> > > In my application it is found to be more effective.
> > >
> > > The problem I have is that the quantization error is not
> > > clear to me.
> > >
> > > Specifically in this situation, let's say the input sequence is:
> > >
> > > 1, 1, 1, 1, 2, 2, 2, 2, ...
> > >
> > > output will be:
> > >
> > > 5, 5, 5, 5, 11, 11, 11, 11, ...
> > >
> > > Now, what's happening is that a slow ramp becomes a
> > > sharp step, which I do not want, I would prefer:
> > >
> > > 5, 6, 7, 8, 9, 10, 11, 11, ...
> > >
> > > as output sequence, which is closer to the original
> > > intended signal (at least, I can say it would be better).
> > >
> > > Of course such a result is quite unrealistic, but I have
> > > the hope dithering could help.
> > >
> > > Any ideas?
> > >
> > > bye,
> >
> > It's evident that I'm hardly a good guide here, but suppose that the
> > original input were in fact 1.04, 1.00, 1.01, 2.10, 2.06, 2.03, ...
> > (a noisy step), quantized to 1, 1, 1, 2, 2, 2, .... Your "improved"
> > sequence generator would degrade the result.
> >
> > Jerry
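[As an aside, Piergiorgio's "error distribution" idea, as I read it, is
first-order error diffusion. A few lines of Python sketch it; the gain of
5.5 (so 1 -> 5.5 and 2 -> 11) and all the names are my own illustration,
not his actual code:]

```python
# First-order error diffusion on Piergiorgio's example sequence.
# The rounding error of each output is carried forward into the
# next sample, so the quantized output preserves the running average.
def diffuse(seq, gain):
    out, err = [], 0.0
    for s in seq:
        ideal = s * gain + err      # add back the error we still owe
        q = int(round(ideal))       # quantize to an integer output code
        err = ideal - q             # remember what was just lost
        out.append(q)
    return out

print(diffuse([1, 1, 1, 1, 2, 2, 2, 2], 5.5))
```

[Note the result is [6, 5, 6, 5, 11, 11, 11, 11]: the flat run dithers
around 5.5 so the average is preserved, but it does not invent the ideal
ramp 5, 6, 7, ... that Piergiorgio called unrealistic.]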
>
> Jerry alludes to a point that I've been thinking about lately. It has been
> stated in other threads (e.g. "Adding dither when increasing signal level?")
> that if you have a 16-bit input value and multiply it by a 16-bit
> coefficient, you end up with a 31-bit number and every one of those bits is
> significant/important. Throwing any of these bits away in re-quantizing is
> a "bad thing".
>
> Now, back in my 1st year science courses, we learned the rules of
> significant digits for base 10 numbers. IIRC, a product of 2 numbers only
> has as many significant digits as the _minimum_ of the 2 inputs. Applying
> this logic to base 2, multiplying 2 16-bit numbers should only give you 16
> significant bits as a result.
>
> Looking at this another way, let's say your input is hex 0x7333 (29491
> decimal) or about 0.9 and your gain coefficient is 0x5A82 or about -3.01dB.
> The product is 0x28BA6DE6 which has 31 bits. I'll assume that the gain
> coefficient has infinite precision for the sake of argument. Now, if you
> could also say that your input was _exactly_ 29491.000... then I would say
> the extra bits are significant. But if the input signal came from say a
> 16-bit ADC, you don't know if the value is really 29490.501, 29491.000, or
> 29491.499, etc. So it seems that the extra bits really are not
> significant.
>
> So why does everyone claim that the extra bits generated in a multiply are
> important? I know that dithering is important so I must be missing
> something. What's wrong with my logic?
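Jon's figures do check out, by the way. A quick script (Q15 fixed-point
scaling assumed; the variable names are mine) shows the full product and
exactly what truncating back to 16 bits would discard:

```python
# Check of the 16 x 16 -> 31 bit multiply quoted above, assuming Q15.
x = 0x7333               # 29491, about 0.9 in Q15
g = 0x5A82               # 23170, about 0.7071 (~ -3.01 dB) in Q15
p = x * g                # full-precision product, 31 significant bits
assert p == 0x28BA6DE6   # matches the value in the post above
p16 = p >> 15            # re-quantized back to a 16-bit Q15 result
lost = p & 0x7FFF        # the 15 low-order bits truncation throws away
print(hex(p), p16, lost)
```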
If all you are doing is a single operation then your logic is OK, but
what if your process involves many multiplications in a serial
operation (e.g. implementing a fixed-point FFT or an FIR filter)?
Each re-quantisation after each multiplication will introduce some
quantisation noise, and if enough operations are involved the noise
can become very significant.
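To put a number on that, here is a small sketch (the two gain values and
the input sweep are my own choices) comparing truncation after every
multiply against a single truncation at the end of a two-stage gain:

```python
# Two gain stages in Q15: re-quantize at every stage versus carrying
# the full product and truncating only once at the end.
G1, G2 = 0x5A82, 0x6666                       # about 0.707 and 0.8 in Q15

worst_two = worst_one = 0.0
for x in range(0, 32768, 17):                 # sweep of 16-bit input codes
    two = (((x * G1) >> 15) * G2) >> 15       # truncate after each multiply
    one = (x * G1 * G2) >> 30                 # truncate once at the end
    exact = x * G1 * G2 / 2 ** 30             # unquantized reference
    worst_two = max(worst_two, exact - two)   # worst error, in output LSBs
    worst_one = max(worst_one, exact - one)
print(worst_two, worst_one)
```

With one truncation the worst error stays below one LSB of the output;
truncating at each stage lets the first stage's error leak through the
second gain on top of the second truncation, so the worst case grows, and
it keeps growing as more stages are cascaded.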
The general approach is not to carry all the bits resulting from a
multiplication, but to use enough extra bits in your arithmetic to
ensure that the added noise is less than the noise inherent in the
quantisation of the initial data. For a fixed-point FIR filter
operating on 16-bit data, that can be achieved by maintaining a
32-bit accumulated sum of tap products and truncating / dithering once
at the end of the summation. Then you have one truncation noise source
instead of N (for an N-tap filter).
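A toy version of that accumulator trick (random filter and data, all
names mine; Python's exact integers stand in for the 32-bit accumulator):

```python
import random

# Wide accumulation with one truncation at the end, versus truncating
# every tap product back to 16 bits before summing.
random.seed(1)
taps = [random.randint(-0x2000, 0x2000) for _ in range(32)]   # Q15 coeffs
data = [random.randint(-0x8000, 0x7FFF) for _ in range(32)]   # 16-bit samples

acc = sum(c * s for c, s in zip(taps, data))   # wide accumulator, lossless
y_wide = acc >> 15                             # single truncation at the end
y_narrow = sum((c * s) >> 15                   # truncate every tap product
               for c, s in zip(taps, data))

exact = acc / 2 ** 15                          # unquantized reference
print(y_wide - exact, y_narrow - exact)
```

Python's `>>` floors toward minus infinity, so each per-tap truncation
loses up to one LSB; 32 of those losses pile up, while the wide
accumulator pays the sub-LSB truncation cost exactly once.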
Regards,
Paavo Jumppanen
Author of HarBal Harmonic Balancer
http://www.har-bal.com