DSPRelated.com Forums

Dithering, gain and again...

Started by Piergiorgio Sartor December 8, 2003
Hello,

I was following the nice thread about dithering a signal
when a gain (> 1) is applied.

I've a couple of naive questions.

Let's say the gain factor is 5.6, let's say there will
be no clipping (target integer is big enough or input
range is small enough).

Now, the transfer function will be something like:

0, 1,   2,    3,    4,    5,  6,    ...
0, 5.6, 11.2, 16.8, 22.4, 28, 33.6, ...

Truncated to:

0, 5, 11, 16, 22, 28, 33, ...

It would make sense to apply dithering to get some
better performance.

What's the dithering level that must be applied?
Something between -0.5 and 0.5?

Assuming I would rather not use random dither, but
error diffusion, is this the error to be distributed:

0, 0.6, 0.2, 0.8, 0.4, 0, 0.6, ...?

Is there any possibility that log(5.6)/log(2) somehow
represents the quantization "level"?
If so, should the error be in the range 0 to 5.6, or not?

Just some confused questions... :-)

bye,

-- 
   Piergiorgio Sartor

Piergiorgio Sartor wrote:

> Now, the transfer function will be something like:
>
> 0, 1,   2,    3,    4,    5,  6,    ...
> 0, 5.6, 11.2, 16.8, 22.4, 28, 33.6, ...
>
> Truncated to:
>
> 0, 5, 11, 16, 22, 28, 33, ...
>
> It would make sense to apply dithering to get some
> better performance.
Wouldn't it be better to round than to truncate? Then 0, 1, 2, 3, 4, 5, ...
scales to 0, 5.6, 11.2, 16.8, 22.4, 28, ... and rounds to 0, 6, 11, 17, 22,
28, ... The error then becomes 0, +.4, -.2, +.2, -.4, 0, ..., which is better
even if left uncorrected, and easier to distribute.

According to R.B-J., a good dither waveform is a two-bit triangle wave, by
which I assume he means 0, 1, 2, 1, 0, -1, -2, -1, ...

Truncation, rounding, and error distribution (fraction saving) can all
introduce high-frequency artifacts. Subsequent filtering can reintroduce
fractional samples. I'm confused too.

Jerry

P.S. Rounding when dividing by a power of two is easy. After the arithmetic
shift, ADD (immediate) 0 with carry.

-- 
Engineering is the art of making what you want from things you can get.
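For concreteness, a small C sketch comparing truncation and rounding for the
5.6 gain above. It is purely illustrative; the loop range and the error
convention (quantized value minus exact product) are my own choices:

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double gain = 5.6;                 /* gain from the example above */

    for (int x = 0; x <= 6; x++) {
        double y = gain * x;                 /* exact real-valued product   */
        int truncated = (int)floor(y);       /* truncate                    */
        int rounded   = (int)floor(y + 0.5); /* round to nearest            */
        printf("x=%d  y=%4.1f  trunc=%2d (err %+.1f)  round=%2d (err %+.1f)\n",
               x, y, truncated, truncated - y, rounded, rounded - y);
    }
    return 0;
}

The truncation errors fall in (-1, 0] while the rounding errors stay within
[-0.5, +0.5], which is the "better even if left uncorrected" point.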
Piergiorgio Sartor wrote:

> It would make sense to apply dithering to get some
> better performance.
>
> What's the dithering level that must be applied?
> Something between -0.5 and 0.5?
Hey Piergiorgio,

The dithering level depends on the probability density function of the
dither (which is of course an IID sequence), and the quantization step size.
Since we're presumably talking about two's complement here, the quantization
step size is 1. That means our error is between -0.5 and +0.5 (assuming we
round). If we use the ideal TPDF, then the density would range from -1 to +1.
This is from Robert Wannamaker's PhD thesis.

This is just as if we were quantizing an analog voltage using an A/D. In a
real sense (pun intended!), the product of the gain factor with the integer
two's complement values being scaled is a real number. That number has to be
re-quantized to integers. This is precisely the same situation as an A/D
converter, so one would use the same quantization process.

-- 
Randy Yates, Fuquay-Varina, NC
<yates@ieee.org>
http://home.earthlink.net/~yatescr
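To make the A/D analogy concrete, here is a rough C sketch of re-quantizing
the real-valued product with 2-LSB-wide TPDF dither and round-to-nearest. It
is only an illustration; rand() stands in for a proper uniform generator and
the input sequence is arbitrary:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* rectangular (uniform) dither in [-0.5, +0.5) LSB */
static double rpdf(void)
{
    return (double)rand() / ((double)RAND_MAX + 1.0) - 0.5;
}

/* triangular dither in (-1, +1) LSB: sum of two independent RPDF samples */
static double tpdf(void)
{
    return rpdf() + rpdf();
}

int main(void)
{
    const double gain = 5.6;
    const int x[8] = { 0, 1, 2, 3, 4, 5, 6, 7 };

    for (int n = 0; n < 8; n++) {
        double y = gain * x[n];               /* exact real-valued product    */
        int q = (int)floor(y + tpdf() + 0.5); /* add dither, round to nearest */
        printf("%d ", q);
    }
    printf("\n");
    return 0;
}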
In article 3fd4cc07$0$14956$61fed72c@news.rcn.com, Jerry Avins at
jya@ieee.org wrote on 12/08/2003 14:07:

> Wouldn't it be better to round than to truncate?
it might be if there is no dithering. rounding is essentially the same as
biasing up by 1/2 LSB and truncating, and in some implementations truncating
(using floor()) is easier to do and has consistent results. if you're
dithering, you can bias the dither signal by 1/2 LSB instead and still just
truncate.
> According to R.B-J., a good dither waveform is a two-bit triangle wave,
> by which I assume he means 0, 1, 2, 1, 0, -1, -2, -1, ...
no, i meant that the dither (a *random* waveform) has a triangular
probability density function (TPDF) of 2 LSB maximum width. it might be
white, but you can easily make high-pass dither by computing rectangular-PDF
(RPDF) dither of 1 LSB width and subtracting the previous RPDF sample from
the present RPDF sample (that is, running it through a differentiator filter,
1 - z^-1).

adding the 1 LSB RPDF dither decouples the mean of the error (the DC
component) from the input signal, but not the mean square. so it decouples
the first moment, but not the second. adding 2 LSB TPDF dither decouples both
the first and second moments, and we believe psychoacoustically that no one
can hear couplings of higher moments.

r b-j
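A small C sketch of the three dither generators described above (RPDF, TPDF,
and the high-pass variant made by differentiating RPDF). It is only an
illustration; rand() is a stand-in for a proper uniform source, and the
widths are in LSBs of the target quantizer:

#include <stdlib.h>

/* RPDF: uniform in [-0.5, +0.5) LSB, i.e. 1 LSB wide */
static double dither_rpdf(void)
{
    return (double)rand() / ((double)RAND_MAX + 1.0) - 0.5;
}

/* TPDF: triangular over (-1, +1) LSB (2 LSB wide), sum of two RPDF samples */
static double dither_tpdf(void)
{
    return dither_rpdf() + dither_rpdf();
}

/* high-pass TPDF: current RPDF sample minus the previous one,
 * i.e. RPDF run through the differentiator 1 - z^-1 */
static double dither_hp_tpdf(void)
{
    static double prev = 0.0;
    double r = dither_rpdf();
    double d = r - prev;   /* still triangular over (-1, +1), but high-pass */
    prev = r;
    return d;
}

Any of the three would be added to the real-valued sample before rounding;
they differ only in spectrum and in which error moments get decoupled.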
robert bristow-johnson wrote:
> second moment, and we believe psychoacoustically that no one can hear
> couplings of higher moments.
A belief whose foundation I've sometimes wondered about. Conversely, why is
the second moment noticeable?

-- 
Randy Yates
In article v5bBb.6707$Ho3.476@newsread1.news.atl.earthlink.net, Randy Yates
at yates@ieee.org wrote on 12/08/2003 22:13:

> robert bristow-johnson wrote:
>
>> second moment, and we believe psychoacoustically that no one can hear
>> couplings of higher moments.
>
> A belief whose foundation I've sometimes wondered about.
>
> Conversely, why is the second moment noticeable?
noise power modulation.

suppose that your RPDF dither is zero mean and your quantizer is in
round-to-nearest mode. if your input is DC and sits exactly on a quantization
level, you will hear no noise at all coming out, because the dither will not
be enough to cause the quantizer to land on the next step.

now gradually move your DC input from dead-on a step to mid-tread (half way
between two quantization levels). now you will hear maximum noise, because
half of the time the quantizer will round down and the other half it will
round up.

so the amount of noise you hear depends on the level, and the second moment
is correlated. but if you LPF the output, the DC coming out is the same as
the DC going in, so there is no DC error and it is decoupled from the input
(or: the first moment is uncorrelated).

r b-j
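The noise power modulation is easy to check numerically. The following C
sketch (illustrative only; rand() as a stand-in uniform generator, arbitrary
DC levels) measures the output noise power with 1-LSB RPDF dither and
round-to-nearest, first with the DC exactly on a quantization level and then
half way between two levels:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* uniform dither in [-0.5, +0.5) LSB */
static double rpdf(void)
{
    return (double)rand() / ((double)RAND_MAX + 1.0) - 0.5;
}

/* mean squared quantization error for a DC input with RPDF dither */
static double noise_power(double dc, int n)
{
    double acc = 0.0;
    for (int i = 0; i < n; i++) {
        double q = floor(dc + rpdf() + 0.5); /* dither, then round to nearest */
        double e = q - dc;                   /* error relative to the input   */
        acc += e * e;
    }
    return acc / n;
}

int main(void)
{
    printf("DC on a level   : noise power = %f\n", noise_power(100.0, 100000));
    printf("DC at mid-tread : noise power = %f\n", noise_power(100.5, 100000));
    return 0;
}

With RPDF dither the first case gives essentially zero noise power and the
second about 0.25 LSB^2: exactly the level-dependent (second-moment) coupling
described above. With 2-LSB TPDF dither both cases give the same noise power.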
Randy Yates wrote:

> The dithering level depends on the probability density
> function of the dither (which is of course an IID sequence),
> and the quantization step size.
Hi to all,

I reply here just for convenience.

My idea is not to use random dithering, but error distribution. In my
application it has been found to be more effective.

The problem I have is that the quantization error is not clear to me.
Specifically, in this situation, let's say the input sequence is:

1, 1, 1, 1, 2, 2, 2, 2, ...

The output will be:

5, 5, 5, 5, 11, 11, 11, 11, ...

Now, what happens is that a slow ramp becomes a sharp step, which I do not
want. I would prefer:

5, 6, 7, 8, 9, 10, 11, 11, ...

as the output sequence, which is closer to the originally intended signal
(at least, I can say it would be better).

Of course such a result is quite unrealistic, but I have the hope that
dithering could help.

Any ideas?

bye,

-- 
   Piergiorgio Sartor
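One common form of error distribution is the fraction saving Jerry mentions:
carry each sample's quantization error into the next product. A minimal C
sketch of that idea (my own illustration, not code from the thread; the gain
and input are taken from the example above):

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double gain = 5.6;
    const int x[8] = { 1, 1, 1, 1, 2, 2, 2, 2 };
    double err = 0.0;                  /* accumulated fractional error */

    for (int n = 0; n < 8; n++) {
        double y = gain * x[n] + err;  /* add back the saved fraction     */
        int q = (int)floor(y + 0.5);   /* round to nearest integer        */
        err = y - q;                   /* save the new quantization error */
        printf("%d ", q);
    }
    printf("\n");
    return 0;
}

This keeps the running average of the output close to the ideal 5.6*x[n]
(the first four samples average 5.5, the next four 11.25), but it cannot by
itself recover the smooth ramp above; nothing in the integer input says where
between 1 and 2 the underlying signal really was.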
Piergiorgio Sartor wrote:

> Now, what happens is that a slow ramp becomes a sharp step, which I do
> not want. I would prefer:
>
> 5, 6, 7, 8, 9, 10, 11, 11, ...
>
> as the output sequence, which is closer to the originally intended signal
> (at least, I can say it would be better).
>
> Of course such a result is quite unrealistic, but I have the hope that
> dithering could help.
>
> Any ideas?
It's evident that I'm hardly a good guide here, but suppose that the
original input were in fact 1.04, 1.00, 1.01, 2.10, 2.06, 2.03, ... (a noisy
step), quantized to 1, 1, 1, 2, 2, 2, .... Your "improved" sequence generator
would degrade the result.

Jerry

-- 
Engineering is the art of making what you want from things you can get.
"Jerry Avins" <jya@ieee.org> wrote in message
news:3fd5e38a$0$14969$61fed72c@news.rcn.com...
> It's evident that I'm hardly a good guide here, but suppose that the
> original input were in fact 1.04, 1.00, 1.01, 2.10, 2.06, 2.03, ... (a
> noisy step), quantized to 1, 1, 1, 2, 2, 2, .... Your "improved" sequence
> generator would degrade the result.
>
> Jerry
Jerry alludes to a point that I've been thinking about lately.

It has been stated in other threads (e.g. "Adding dither when increasing
signal level?") that if you have a 16-bit input value and multiply it by a
16-bit coefficient, you end up with a 31-bit number, and every one of those
bits is significant/important. Throwing any of these bits away when
re-quantizing is a "bad thing".

Now, back in my first-year science courses, we learned the rules of
significant digits for base-10 numbers. IIRC, a product of two numbers only
has as many significant digits as the _minimum_ of the two inputs. Applying
this logic to base 2, multiplying two 16-bit numbers should only give you
16 significant bits as a result.

Looking at this another way, let's say your input is hex 0x7333 (29491
decimal), or about 0.9, and your gain coefficient is 0x5A82, or about
-3.01 dB. The product is 0x28BA6DE6, which has 31 bits. I'll assume that the
gain coefficient has infinite precision for the sake of argument. Now, if you
could also say that your input was _exactly_ 29491.000... then I would say
the extra bits are significant. But if the input signal came from, say, a
16-bit ADC, you don't know whether the value is really 29490.501, 29491.000,
or 29491.499, etc. So it seems that the extra bits really are not
significant.

So why does everyone claim that the extra bits generated in a multiply are
important? I know that dithering is important, so I must be missing
something. What's wrong with my logic?
robert bristow-johnson wrote:
> so the amount of noise you hear depends on the level, and the second
> moment is correlated. but if you LPF the output, the DC coming out is the
> same as the DC going in, so there is no DC error and it is decoupled from
> the input (or: the first moment is uncorrelated).
A beautiful explanation, Robert! Thanks!

But... do we really hear this? I mean, we're not usually listening to
slowly-varying DC levels. Have there been any listening tests done to compare
RPDF vs. TPDF? Just curious.

-- 
Randy Yates