Reply by Randy Yates December 11, 2003
Ben Bradley wrote:
> In comp.dsp, Randy Yates <yates@ieee.org> wrote:
>
>> ...
>>
>> A beautiful explanation, Robert! Thanks!
>>
>> But..., do we really hear this? I mean, we're not usually listening
>> to slowly-varying DC levels.
>
> A musical instrument creating significant bass could, when
> recorded, be considered to be generating "slowly-varying DC levels."
Not really. Not even close. Any acoustic instrument is going to have harmonics, attack transients, etc., not to mention room reverberation if it isn't recorded anechoically.

But let's say we simply digitally generated a 20 Hz sine wave that was practically perfect, and let's assume that its amplitude is at the quantization threshold, i.e., +/- 1/2 LSB. Then even in this idealized situation, the noise power would be modulating at 40 Hz. You might be able to hear that, but mostly an audio signal seems like it'd be too complex for this to be an issue. In any case, it is easy enough to make sure it isn't an issue by using TPDF dither.
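For what it's worth, here is a minimal C sketch of that situation (my own illustration, not anything from the thread): a 20 Hz sine of +/- 1/2 LSB amplitude is quantized with no dither, with 1-LSB RPDF dither, and with 2-LSB TPDF dither, and the error power is printed per sub-block of the cycle so the 40 Hz noise-power modulation, and the way TPDF flattens it, can be seen. The sample rate, block size, and use of rand() are arbitrary choices for illustration.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define TWO_PI 6.28318530717958647692

static double rpdf(void)                    /* uniform in [-0.5, +0.5) LSB */
{
    return (double)rand() / ((double)RAND_MAX + 1.0) - 0.5;
}

int main(void)
{
    const double fs = 44100.0, f0 = 20.0;
    const int cycle = (int)(fs / f0);       /* samples in one 20 Hz cycle   */
    const int nblk  = 32;                   /* error-power sub-blocks       */
    const int blk   = cycle / nblk;

    for (int b = 0; b < nblk; b++) {
        double p_none = 0.0, p_rpdf = 0.0, p_tpdf = 0.0;
        for (int i = 0; i < blk; i++) {
            int    n = b * blk + i;
            double x = 0.5 * sin(TWO_PI * f0 * n / fs);        /* +/- 0.5 LSB */

            double e0 = floor(x + 0.5) - x;                    /* no dither   */
            double e1 = floor(x + rpdf() + 0.5) - x;           /* 1-LSB RPDF  */
            double e2 = floor(x + rpdf() + rpdf() + 0.5) - x;  /* 2-LSB TPDF  */

            p_none += e0 * e0;  p_rpdf += e1 * e1;  p_tpdf += e2 * e2;
        }
        printf("block %2d: none %.3f  rpdf %.3f  tpdf %.3f\n",
               b, p_none / blk, p_rpdf / blk, p_tpdf / blk);
    }
    return 0;
}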
>> Have there been any listening tests done
>> to compare RPDF vs. TPDF?
>
> A search of rec.audio.pro would be fruitful. There are various
> "brand names" of noise-shaped dither used over there. Do a search on
> Chris Johnson, who has posted quite a lot on dither. He's in a recent
> thread over there titled "Dither cource code?" and his page is here:
> http://www.airwindows.com/dithering
Thanks for the suggestions, Ben. I did take a look. I'd like to know what this new unnamed dither algorithm is.
--
% Randy Yates % "...the answer lies within your soul
%% Fuquay-Varina, NC % 'cause no one knows which side
%%% 919-577-9882 % the coin will fall."
%%%% <yates@ieee.org> % 'Big Wheels', *Out of the Blue*, ELO
http://home.earthlink.net/~yatescr
Reply by Ben Bradley December 11, 2003
In comp.dsp, Randy Yates <yates@ieee.org> wrote:

> ...
> A beautiful explanation, Robert! Thanks!
>
> But..., do we really hear this? I mean, we're not usually listening
> to slowly-varying DC levels.
A musical instrument creating significant bass could, when recorded, be considered to be generating "slowly-varying DC levels."
> Have there been any listening tests done
> to compare RPDF vs. TPDF?
A search of rec.audio.pro would be fruitful. There are various "brand names" of noise-shaped dither used over there. Do a search on Chris Johnson, who has posted quite a lot on dither. He's in a recent thread over there titled "Dither cource code?" and his page is here: http://www.airwindows.com/dithering
> Just curious.
> --
> % Randy Yates % "...the answer lies within your soul
> %% Fuquay-Varina, NC % 'cause no one knows which side
> %%% 919-577-9882 % the coin will fall."
> %%%% <yates@ieee.org> % 'Big Wheels', *Out of the Blue*, ELO
> http://home.earthlink.net/~yatescr
----- http://mindspring.com/~benbradley
Reply by Paavo Jumppanen December 9, 2003
"Jon Harris" <goldentully@hotmail.com> wrote in message news:<br5kc7$28m75d$1@ID-210375.news.uni-berlin.de>...
> "Jerry Avins" <jya@ieee.org> wrote in message
> news:3fd5e38a$0$14969$61fed72c@news.rcn.com...
> > Piergiorgio Sartor wrote:
> > > ...
> >
> > It's evident that I'm hardly a good guide here, but suppose that the
> > original input were in fact 1.04, 1.00, 1.01, 2.10, 2.06, 2.03, ...
> > (a noisy step), quantized to 1, 1, 1, 2, 2, 2, .... Your "improved"
> > sequence generator would degrade the result.
> >
> > Jerry
>
> Jerry alludes to a point that I've been thinking about lately. It has been
> stated in other threads (e.g. "Adding dither when increasing signal level?")
> that if you have a 16-bit input value and multiply it by a 16-bit
> coefficient, you end up with a 31-bit number and every one of those bits is
> significant/important. Throwing any of these bits away in re-quantizing is
> a "bad thing".
>
> Now, back in my 1st year science courses, we learned the rules of
> significant digits for base 10 numbers. IIRC, a product of 2 numbers only
> has as many significant digits as the _minimum_ of the 2 inputs. Applying
> this logic to base 2, multiplying 2 16-bit numbers should only give you 16
> significant bits as a result.
>
> Looking at this another way, let's say your input is hex 0x7333 (29491
> decimal) or about 0.9, and your gain coefficient is 0x5A82 or about -3.01 dB.
> The product is 0x28BA6DE6, which has 31 bits. I'll assume that the gain
> coefficient has infinite precision for the sake of argument. Now, if you
> could also say that your input was _exactly_ 29491.000... then I would say
> the extra bits are significant. But if the input signal came from, say, a
> 16-bit ADC, you don't know if the value is really 29490.501, 29491.000, or
> 29491.499, etc. So it seems that the extra bits really are not
> significant.
>
> So why does everyone claim that the extra bits generated in a multiply are
> important? I know that dithering is important, so I must be missing
> something. What's wrong with my logic?
If all you are doing is a single operation then your logic is OK, but what if your process involves many multiplications in a serial operation (e.g. implementing a fixed-point FFT or an FIR filter)? Each re-quantisation of each multiplication will introduce some quantisation noise, and if there are enough operations involved the noise can become very significant.

The general approach is not to carry all the bits resulting from a multiplication, but to use enough extra bits in your arithmetic to ensure that the added noise is less than the noise inherent in the quantisation of the initial data. For a fixed-point FIR filter implementation on 16-bit data that could be achieved by maintaining a 32-bit accumulated sum of tap products and truncating / dithering once at the end of the summation. Then you have one truncation noise source instead of N (for an N-tap filter).

Regards,

Paavo Jumppanen
Author of HarBal Harmonic Balancer
http://www.har-bal.com
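To make Paavo's suggestion concrete, here is a minimal fixed-point FIR sketch in C (my own, not his code): 16-bit samples times Q15 coefficients accumulate in a 32-bit sum, and TPDF dither plus requantization back to 16 bits happen only once, at the end of the summation. The dither generator, the use of rand(), the arithmetic right shift, and the assumption that the filter gain keeps the 32-bit sum from overflowing are all illustrative choices.

#include <stdint.h>
#include <stdlib.h>

/* TPDF dither spanning +/- 1 output LSB, i.e. +/- 2^15 at the Q15
 * accumulator scale: the sum of two uniform samples in [-2^14, 2^14). */
static int32_t tpdf_dither(void)
{
    int32_t r1 = (rand() & 0x7FFF) - 16384;
    int32_t r2 = (rand() & 0x7FFF) - 16384;
    return r1 + r2;
}

/* One output sample of y[n] = sum_k h[k] * x[n-k]; x[] holds the delay
 * line, most recent sample first, and h[] holds Q15 coefficients. */
int16_t fir_sample(const int16_t *h, const int16_t *x, int ntaps)
{
    int32_t acc = 0;                          /* 32-bit accumulated sum      */
    for (int k = 0; k < ntaps; k++)
        acc += (int32_t)h[k] * (int32_t)x[k]; /* 16x16 -> 31-bit product     */

    acc += tpdf_dither();                     /* dither once, at the end     */
    acc >>= 15;                               /* drop the Q15 fraction
                                                 (assumes arithmetic shift)  */
    if (acc >  32767) acc =  32767;           /* saturate to 16 bits         */
    if (acc < -32768) acc = -32768;
    return (int16_t)acc;
}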
Reply by Randy Yates December 9, 2003
robert bristow-johnson wrote:
> In article v5bBb.6707$Ho3.476@newsread1.news.atl.earthlink.net, Randy Yates
> at yates@ieee.org wrote on 12/08/2003 22:13:
>
>> robert bristow-johnson wrote:
>>
>>> second moment, and we believe psychoacoustically that no one can hear
>>> couplings of higher moments.
>>
>> A belief which I've sometimes wondered is well founded.
>>
>> Conversely, why is the second moment noticeable?
>
> noise power modulation.
>
> suppose that your RPDF is zero mean and your quantizer is in
> round-to-nearest mode. if your input is DC and sits exactly on a
> quantization level, you will hear no noise at all coming out because the
> dither will not be enough to cause the quantizer to land on the next step.
>
> now gradually move your DC input from dead-on step to mid-tread (half way
> between two quantization levels). now you will hear maximum noise because
> half of the time the quantizer will round down and the other half it will
> round up.
>
> so the amount of noise you hear is dependent on the level and the second
> moment is correlated. but, if you LPF the output, the DC coming out is the
> same as the DC going in, so there is no DC error and it is decoupled from
> the input (or the first moment is uncorrelated).
A beautiful explanation, Robert! Thanks!

But..., do we really hear this? I mean, we're not usually listening to slowly-varying DC levels. Have there been any listening tests done to compare RPDF vs. TPDF?

Just curious.
--
% Randy Yates % "...the answer lies within your soul
%% Fuquay-Varina, NC % 'cause no one knows which side
%%% 919-577-9882 % the coin will fall."
%%%% <yates@ieee.org> % 'Big Wheels', *Out of the Blue*, ELO
http://home.earthlink.net/~yatescr
Reply by Jon Harris December 9, 2003
"Jerry Avins" <jya@ieee.org> wrote in message
news:3fd5e38a$0$14969$61fed72c@news.rcn.com...
> Piergiorgio Sartor wrote:
>
> > Randy Yates wrote:
> >
> > My idea is not to use random dithering, but error distribution.
> >
> > In my application it is found to be more effective.
> >
> > The problem I have is that the quantization error is not
> > clear to me.
> >
> > Specifically in this situation, let's say the input sequence is:
> >
> > 1, 1, 1, 1, 2, 2, 2, 2, ...
> >
> > output will be:
> >
> > 5, 5, 5, 5, 11, 11, 11, 11, ...
> >
> > Now, what's happening is that a slow ramp becomes a
> > sharp step, which I do not want, I would prefer:
> >
> > 5, 6, 7, 8, 9, 10, 11, 11, ...
> >
> > as output sequence, which is closer to the original
> > intended signal (at least, I can say it would be better).
> >
> > Of course such a result is quite unrealistic, but I have
> > the hope dithering could help.
> >
> > Any ideas?
> >
> > bye,
>
> It's evident that I'm hardly a good guide here, but suppose that the
> original input were in fact 1.04, 1.00, 1.01, 2.10, 2.06, 2.03, ...
> (a noisy step), quantized to 1, 1, 1, 2, 2, 2, .... Your "improved"
> sequence generator would degrade the result.
>
> Jerry
Jerry alludes to a point that I've been thinking about lately. It has been stated in other threads (e.g. "Adding dither when increasing signal level?") that if you have a 16-bit input value and multiply it by a 16-bit coefficient, you end up with a 31-bit number and every one of those bits is significant/important. Throwing any of these bits away in re-quantizing is a "bad thing".

Now, back in my 1st year science courses, we learned the rules of significant digits for base 10 numbers. IIRC, a product of 2 numbers only has as many significant digits as the _minimum_ of the 2 inputs. Applying this logic to base 2, multiplying 2 16-bit numbers should only give you 16 significant bits as a result.

Looking at this another way, let's say your input is hex 0x7333 (29491 decimal) or about 0.9, and your gain coefficient is 0x5A82 or about -3.01 dB. The product is 0x28BA6DE6, which has 31 bits. I'll assume that the gain coefficient has infinite precision for the sake of argument. Now, if you could also say that your input was _exactly_ 29491.000... then I would say the extra bits are significant. But if the input signal came from, say, a 16-bit ADC, you don't know if the value is really 29490.501, 29491.000, or 29491.499, etc. So it seems that the extra bits really are not significant.

So why does everyone claim that the extra bits generated in a multiply are important? I know that dithering is important, so I must be missing something. What's wrong with my logic?
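A few lines of C reproduce Jon's numbers and show exactly what a straight requantization throws away (a throwaway sketch of mine, not anything from the post):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int32_t x = 0x7333;                 /* 29491, about 0.9 of full scale  */
    int32_t g = 0x5A82;                 /* 23170, about -3.01 dB in Q15    */
    int32_t p = x * g;                  /* 16x16 -> 31-bit product         */
    int16_t y = (int16_t)(p >> 15);     /* requantized back to 16 bits     */

    printf("product      = 0x%08X\n", (unsigned)p);            /* 0x28BA6DE6 */
    printf("16-bit value = 0x%04X\n", (unsigned)(uint16_t)y);
    printf("discarded    = 0x%04X\n", (unsigned)(p & 0x7FFF)); /* low 15 bits */
    return 0;
}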
Reply by Jerry Avins December 9, 2003
Piergiorgio Sartor wrote:

> Randy Yates wrote:
>
>> The dithering level depends on the probability density
>> function of the dither (which is of course an IID sequence),
>> and the quantization step size.
>
> Hi to all,
>
> I reply here just for convenience.
>
> My idea is not to use random dithering, but error distribution.
>
> In my application it is found to be more effective.
>
> The problem I have is that the quantization error is not
> clear to me.
>
> Specifically in this situation, let's say the input sequence is:
>
> 1, 1, 1, 1, 2, 2, 2, 2, ...
>
> output will be:
>
> 5, 5, 5, 5, 11, 11, 11, 11, ...
>
> Now, what's happening is that a slow ramp becomes a
> sharp step, which I do not want, I would prefer:
>
> 5, 6, 7, 8, 9, 10, 11, 11, ...
>
> as output sequence, which is closer to the original
> intended signal (at least, I can say it would be better).
>
> Of course such a result is quite unrealistic, but I have
> the hope dithering could help.
>
> Any ideas?
>
> bye,
It's evident that I'm hardly a good guide here, but suppose that the original input were in fact 1.04, 1.00, 1.01, 2.10, 2.06, 2.03, ... (a noisy step), quantized to 1, 1, 1, 2, 2, 2, .... Your "improved" sequence generator would degrade the result.

Jerry
--
Engineering is the art of making what you want from things you can get.
Reply by Piergiorgio Sartor December 9, 2003
Randy Yates wrote:

> The dithering level depends on the probability density
> function of the dither (which is of course an IID sequence),
> and the quantization step size.
Hi to all,

I reply here just for convenience.

My idea is not to use random dithering, but error distribution.

In my application it is found to be more effective.

The problem I have is that the quantization error is not clear to me.

Specifically in this situation, let's say the input sequence is:

1, 1, 1, 1, 2, 2, 2, 2, ...

output will be:

5, 5, 5, 5, 11, 11, 11, 11, ...

Now, what's happening is that a slow ramp becomes a sharp step, which I do not want. I would prefer:

5, 6, 7, 8, 9, 10, 11, 11, ...

as the output sequence, which is closer to the original intended signal (at least, I can say it would be better).

Of course such a result is quite unrealistic, but I have the hope dithering could help.

Any ideas?

bye,
--
Piergiorgio Sartor
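Since Piergiorgio mentions error distribution rather than random dither, here is a minimal first-order error-feedback requantizer in C (my own sketch, not his code; the gain of 5.5 is only an assumption chosen to reproduce the 1 -> 5, 2 -> 11 mapping in his example). Each output's quantization error is carried forward into the next input, so a constant input that maps near x.5 alternates between the two neighbouring codes and the local average tracks the ideal value, rather than producing the smooth ramp 5, 6, 7, ... he hoped for.

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double gain = 5.5;            /* ASSUMED scale factor, picked only
                                           to give 1 -> ~5.5 and 2 -> 11    */
    const int x[8] = { 1, 1, 1, 1, 2, 2, 2, 2 };
    double err = 0.0;                   /* carried quantization error       */

    for (int n = 0; n < 8; n++) {
        double want = gain * x[n] + err;       /* ideal value + old error   */
        int    y    = (int)floor(want + 0.5);  /* round to nearest          */
        err = want - y;                        /* error fed to next sample  */
        printf("%d ", y);
    }
    printf("\n");                       /* prints: 6 5 6 5 11 11 11 11      */
    return 0;
}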
Reply by robert bristow-johnson December 9, 2003
In article v5bBb.6707$Ho3.476@newsread1.news.atl.earthlink.net, Randy Yates
at yates@ieee.org wrote on 12/08/2003 22:13:

> robert bristow-johnson wrote:
>
>> second moment, and we believe psychoacoustically that no one can hear
>> couplings of higher moments.
>
> A belief which I've sometimes wondered is well founded.
>
> Conversely, why is the second moment noticeable?
noise power modulation.

suppose that your RPDF is zero mean and your quantizer is in round-to-nearest mode. if your input is DC and sits exactly on a quantization level, you will hear no noise at all coming out because the dither will not be enough to cause the quantizer to land on the next step.

now gradually move your DC input from dead-on step to mid-tread (half way between two quantization levels). now you will hear maximum noise because half of the time the quantizer will round down and the other half it will round up.

so the amount of noise you hear is dependent on the level and the second moment is correlated. but, if you LPF the output, the DC coming out is the same as the DC going in, so there is no DC error and it is decoupled from the input (or the first moment is uncorrelated).

r b-j
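A quick numerical check of this description (my sketch, not r b-j's): sweep a DC input from exactly on a quantization step to mid-tread, add 1-LSB RPDF dither, round to nearest, and print the error power. On-step, the dither never pushes the quantizer off the step and the noise power is zero; at mid-tread it is maximal, 1/4 LSB^2. rand() stands in for a better uniform RNG.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

static double rpdf(void)               /* uniform in [-0.5, +0.5) LSB */
{
    return (double)rand() / ((double)RAND_MAX + 1.0) - 0.5;
}

int main(void)
{
    const int ntrials = 100000;
    for (double dc = 0.0; dc < 0.55; dc += 0.1) {      /* LSBs off-step */
        double p = 0.0;
        for (int i = 0; i < ntrials; i++) {
            double e = floor(dc + rpdf() + 0.5) - dc;  /* total error   */
            p += e * e;
        }
        printf("offset %.1f LSB: error power %.4f LSB^2\n", dc, p / ntrials);
    }
    return 0;
}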
Reply by Randy Yates December 8, 2003
robert bristow-johnson wrote:
> second moment, and we believe psychoacoustically that no one can hear
> couplings of higher moments.
A belief which I've sometimes wondered is well founded.

Conversely, why is the second moment noticeable?
--
% Randy Yates % "...the answer lies within your soul
%% Fuquay-Varina, NC % 'cause no one knows which side
%%% 919-577-9882 % the coin will fall."
%%%% <yates@ieee.org> % 'Big Wheels', *Out of the Blue*, ELO
http://home.earthlink.net/~yatescr
Reply by robert bristow-johnson December 8, 2003
In article 3fd4cc07$0$14956$61fed72c@news.rcn.com, Jerry Avins at
jya@ieee.org wrote on 12/08/2003 14:07:

> Wouldn't it be better to round than to truncate?
it might be if there is no dithering. rounding is essentially the same as biasing up by 1/2 LSB and truncating, and in some implementations truncating (using floor()) is easier to do and has consistent results. if you're dithering, you can bias the dither signal instead.
> According to R.B-J., a good dither waveform is a two-bit triangle wave,
> by which I assume he means 0, 1, 2, 1, 0, -1, -2, -1, ...
no, i meant that the dither (a *random* waveform) has a triangle probability density function (TPDF) of 2 LSB maximum width. it might be white, but you can easily make high-pass dither by computing rectangular dither (of 1 LSB width) and subtracting the previous RPDF sample from the present RPDF sample (that is, running it through a differentiator, 1 - z^-1, filter).

adding the 1 LSB RPDF dither decouples the mean of the error (DC component) from the input signal, but not the mean square. so it decouples the first moment, but not the second. adding 2 LSB TPDF decouples both the first and second moment, and we believe psychoacoustically that no one can hear couplings of higher moments.

r b-j
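In C, the two dither generators described above might look something like the following (my own sketch, with rand() standing in for a better uniform RNG): 2-LSB TPDF dither as the sum of two independent 1-LSB RPDF samples, and high-pass TPDF dither as the present RPDF sample minus the previous one, i.e. RPDF run through a 1 - z^-1 differentiator.

#include <stdlib.h>
#include <math.h>

static double rpdf(void)                /* uniform in [-0.5, +0.5) LSB      */
{
    return (double)rand() / ((double)RAND_MAX + 1.0) - 0.5;
}

/* White TPDF dither, 2 LSB peak-to-peak width. */
double tpdf_white(void)
{
    return rpdf() + rpdf();
}

/* High-pass TPDF dither, 2 LSB wide: d[n] = r[n] - r[n-1].
 * (The very first call just returns r[0], since prev starts at 0.) */
double tpdf_highpass(void)
{
    static double prev = 0.0;
    double r = rpdf();
    double d = r - prev;
    prev = r;
    return d;
}

/* Non-subtractive dithered requantization: add the dither, then round
 * to the nearest integer LSB. */
long quantize(double x, double dither)
{
    return (long)floor(x + dither + 0.5);
}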