DSPRelated.com
Forums

What's the use of a 192 kHz sample rate?

Started by Green Xenon [Radium] May 3, 2008
rajesh <getrajeshin@gmail.com> writes:
> [...]
> Remember Shannon's theorem which places a trade off between error
> correcting codes and bandwidth.
No, Shannon's theorem places an upper limit on the rate at which one can
reliably communicate over a white Gaussian noise channel, based on
bandwidth, signal power, and noise power.

--Randy

@article{shannon,
  title   = "Communication in the Presence of Noise",
  author  = "Claude E. Shannon",
  journal = "Proceedings of the Institute of Radio Engineers",
  year    = "1949",
  volume  = "37",
  pages   = "10-21"}

--
Randy Yates, Fuquay-Varina, NC, 919-577-9882, <yates@ieee.org>
http://www.digitalsignallabs.com
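For reference, the theorem Randy cites bounds the rate of reliable
communication as C = B * log2(1 + S/N). A quick sketch with illustrative
numbers only (the channel parameters below are hypothetical, not taken
from the thread):

```python
import math

def awgn_capacity(bandwidth_hz, signal_power, noise_power):
    """Shannon capacity of a white Gaussian noise channel, in bits/s:
    C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1.0 + signal_power / noise_power)

# Hypothetical example: a 20 kHz channel at 30 dB SNR (S/N = 1000)
# supports reliable communication at up to about 199 kbit/s.
print(awgn_capacity(20e3, 1000.0, 1.0))
```

Note the bound says nothing about *which* error correcting code achieves
it, only that codes approaching it exist.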
On May 5, 8:47 am, rajesh <getrajes...@gmail.com> wrote:
> On May 4, 8:10 pm, dpierce.cartchunk....@gmail.com wrote:
> > On May 3, 9:16 am, rajesh <getrajes...@gmail.com> wrote:
> > > Its also about how you store data.
> > > here is a simplified analogy.
> >
> > Yes, simplified to the point of being factually wrong.
> >
> > > say you need 44.1k samples per second to hear properly.
> > > If the disk is corrupted with scratches and samples in this
> > > region are lost your sound is distorted or lost for that period
> > > of time.
> >
> > Wrong. First, you have a pretty robust error correction
> > scheme built in to the disk. The encoding and decoding
> > is such that significant amounts of data can be lost
> > but can be EXACTLY reconstructed on playback with NO
> > loss. And if the disk is severely scratched to the point where
> > the error correction algorithm fails, interpolation takes place.
> >
> > One can see thousands of uncorrected errors in the raw
> > data coming off the disk, and once the error correction
> > has been applied, the result might be a SMALL handful
> > (like, oh, 4?) of uncorrectable but interpolated errors.
> >
> > > Now if there are 196k samples even if (196/44.1)
> > > samples are lost there is no difference to what you
> > > hear.
> >
> > False. Since you're cramming more data into the same
> > area, and the physical faults take up the same area
> > regardless of the data density, more bits, according to
> > YOUR theory, will be lost on the higher density disk
> > than on the lower density disk.
> >
> > That means MORE data is missing, that means the
> > error correction algorithm is subject to higher rates of
> > non-correctable errors, and so on. Your theory is
> > bogus if for no other reason than it simply ignores the
> > facts.
> >
> > But, in EITHER case, unless the disk is SERIOUSLY
> > damaged, the data loss in either case is repaired.
> > > DVD's come with high density of data; due to this
> > > they are highly vulnerable to scratches. This can
> > > be avoided with better waveform matching achieved
> > > by high sampling rate.
> >
> > Sorry, this is nothing but technobabble nonsense.
>
> Thanks! Your facts are proving my point.
> Repeating samples is the simplest form of error correcting codes.
> All your error correcting codes and interpolation techniques become
> 196/44.1 times more robust on a 196 kHz signal compared to a 44.1 kHz
> signal.
No, they don't. If the same FEC techniques are used on both discs, then
for the same proportion of raw (read) errors, the number of uncorrectable
errors will be the same. However, as dpierce already pointed out, given
the same physical damage to both discs, the high-density disc will
experience a proportionally higher density of raw errors, and therefore
the number of uncorrected errors will be higher. The same logic applies
to interpolation techniques. Your "explanation" could only work if the
high-density player were designed to interpolate over errors and then
downsample to 44.1.

--
Oli
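Oli's point can be put in numbers: a scratch of fixed physical length
wipes out a fixed duration of track, so it destroys proportionally more
samples at the higher data density. A toy sketch, with a hypothetical
track velocity and ignoring the real CIRC/RSPC interleaving used on CDs
and DVDs:

```python
def samples_lost(scratch_mm, track_mm_per_s, sample_rate_hz):
    """Samples destroyed by a scratch of fixed physical length:
    the scratch covers scratch_mm / track_mm_per_s seconds of audio."""
    return (scratch_mm / track_mm_per_s) * sample_rate_hz

# Same 1 mm scratch, same (hypothetical) 1200 mm/s linear velocity:
low = samples_lost(1.0, 1200.0, 44_100)    # low-density disc
high = samples_lost(1.0, 1200.0, 192_000)  # high-density disc
print(high / low)  # 192/44.1: the denser disc loses MORE raw samples
```

So raising the stored sample rate multiplies the raw error count, not the
robustness.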
On May 5, 3:47 am, rajesh <getrajes...@gmail.com> wrote:
> On May 4, 8:10 pm, dpierce.cartchunk....@gmail.com wrote:
> > Sorry, this is nothing but technobabble nonsense.
>
> Thanks! Your facts are proving my point.
Yes, and you continue below to spew pure nonsense.
> Repeating samples is the simplest form of
> error correcting codes.
Repeated samples ARE NOT "error correcting codes." And increasing the
sample rate DOES NOT "repeat samples." In both cases, you demonstrate
your complete lack of understanding of the principles involved.
> All your error correcting codes and interpolation
> techniques become 196/44.1 times
> more robust on a 196 kHz signal compared to a 44.1 kHz signal.
No, they do not.
> " remembering and quoting facts is no big deal, > you have to learn to analyze them"
And when people like you demonstrate a complete lack of understanding of
the facts, you substitute just making sh*t up. How is that better?
On May 5, 4:05 am, rajesh <getrajes...@gmail.com> wrote:
> Remember Shannon's theorem which places a
> trade off between error correcting codes and bandwidth.
Again, pure nonsense. Shannon's theorem never discusses error correcting codes AT ALL.
On May 5, 3:46 pm, dpierce.cartchunk....@gmail.com wrote:
> On May 5, 4:05 am, rajesh <getrajes...@gmail.com> wrote:
> > Remember Shannon's theorem which places a
> > trade off between error correcting codes and bandwidth.
>
> Again, pure nonsense. Shannon's theorem
> never discusses error correcting codes AT ALL.
Hi Pierce, BTW, from which school did you learn DSP?
"Green Xenon [Radium]" <glucegen1@excite.com> a &#4294967295;crit dans le message de 
news: 481becfe$0$5141$4c368faf@roadrunner.com...
> Hi:
>
> Why does DVD-Audio use 192 kHz sample rate? What's the advantage over
> 44.1 kHz? Humans can't hear the full range of a 192 kHz sample rate?
I think that the answer is aliasing avoidance. Take it this way:

- The audio band is limited to 16 KHz, say 20 KHz to get some extra
margin for the most perfect ears on earth.

- As far as I know ANY audio digitization circuit uses a low pass filter
at around 20 KHz, so even a 192 Ksps ADC or DAC will be band limited to
20 KHz signals, as there is absolutely no need to manage audio signals
with a higher frequency.

- If you use 44 Ksps then you must ensure that there is no power above
44/2 = 22 KHz, thanks to M. Nyquist, so your low pass filter must have a
very sharp transition. As the filter will never be perfect you will get
aliases. For example, even if you use a 12th order filter (already
difficult and expensive to build) the attenuation will be "only" 72
dB/octave, meaning that a 16 KHz low pass filter will have an attenuation
of only some 33 dB at 22 KHz. And that is not enough for good listeners,
as a -50 dBc "noise" is clearly audible.

- However, if you use a 192 Ksps sampling rate then the required
performance of the low pass filter is drastically relaxed. This filter
can keep a corner frequency at 16 or 20 KHz, but even a 6th order filter
will provide some 86 dB attenuation at 192/2 = 96 KHz...

And as a 192 Ksps sampling rate is far cheaper to build than a very very
good low pass filter... That's the beauty of oversampling...

Does it make sense?

Cheers,
Robert Lacoste
www.alciom.com
The mixed signal experts
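Robert's dB figures can be checked against the standard Butterworth
magnitude response, |H(f)|^2 = 1 / (1 + (f/fc)^(2n)), i.e. roughly 6n
dB/octave above the corner. A sketch, assuming Butterworth roll-off (the
post does not name a filter family, so that is an assumption):

```python
import math

def butterworth_atten_db(f_hz, corner_hz, order):
    """Attenuation in dB of an order-n Butterworth lowpass at f_hz,
    from |H(f)|^2 = 1 / (1 + (f/fc)^(2n))."""
    return 10.0 * math.log10(1.0 + (f_hz / corner_hz) ** (2 * order))

# 12th-order filter, 16 kHz corner, at 22.05 kHz (44.1 Ksps Nyquist):
print(butterworth_atten_db(22_050, 16_000, 12))  # ~33 dB: too little
# 6th-order filter, 20 kHz corner, at 96 kHz (192 Ksps Nyquist):
print(butterworth_atten_db(96_000, 20_000, 6))   # ~82 dB: comfortable
```

Either way the qualitative point stands: the wide guard band between
20 KHz and 96 KHz lets a cheap, gentle filter do the job.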
On Mon, 5 May 2008 13:18:57 +0200, "Robert Lacoste"
<use-contact-at-www-alciom-com-for-email> wrote:

>"Green Xenon [Radium]" <glucegen1@excite.com> a &#4294967295;crit dans le message de >news: 481becfe$0$5141$4c368faf@roadrunner.com... >> Hi: >> >> Why does DVD-Audio use 192 kHz sample rate? What's the advantage over 44.1 >> kHz? Humans can't hear the full range of a 192 kHz sample rate? > >I think that the answer is aliasing avoidance. Take it this way : > >- The audio band pass is limited to 16KHz, say 20KHz to get some extra >marging for the most perfect ears on earth. > >- As far as I know ANY audio digitization circuit uses a low pass filter at >around 20KHz, so even a 192Ksps ADC or DAC will be band pass limited to >20KHz signals, as there is absolutely no need to manage audio signals with a >higher frequency. > >- If you use 44Ksps then you must insure that there is no power above >44/2=22KHz thanks to M. Nyquist, so your low pass filter must have a very >sharp transition. As the filter will never be perfect you will get aliases. >For example even if you use a 12th order filter (already difficult and >expensive to build) then the attenuation will be "only" 72dB/octave, meaning >that a 16KHz low pass filter will have an attenuation of only 50dB or so at >22KHz. And 50dB is not enough for good listeners as a -50dBc "noise" is >clearly audible. > >- However if you use a 192Kbps sampling rate then the required performances >on the low pass filter are drastically relaxed. This filter can keep a >corner frequency at 16 or 20KHz, but even a 6th order filter will provide a >at 86dB attenuation at 192/2=96KHz... > >And as a 192Ksps sampling rate is far cheaper to build than a very very good >low pass filter... That's the beauty of oversampling... > >Does it make sense ? > >Cheers, >Robert Lacoste >www.alciom.com >The mixed signal experts >
Not a lot. As far as I'm aware there are NO ADCs that sample at the data
rate of the output signal. For example, the 44.1 ksps ADC in my PC
samples at 2.8224 MHz. When you sample at that rate it is trivially easy
to make a gently sloping analogue lowpass filter that guarantees a lack
of alias products. All further filtering and decimation is done
digitally, where it is easy. THAT is what oversampling is all about, not
using a 192 ksps sampling rate.

d

--
Pearce Consulting
http://www.pearce.uk.com
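Don's scheme (sample fast behind a gentle analogue filter, then filter
and decimate digitally) can be sketched with SciPy's `decimate`, which
applies a digital anti-alias filter before discarding samples. The 64x
factor below matches his 2.8224 MHz / 44.1 kHz ratio; the staged 4*4*4
split follows SciPy's advice to keep each decimation factor small:

```python
import numpy as np
from scipy.signal import decimate

fs_adc = 2_822_400                     # oversampled rate: 64 x 44.1 kHz
t = np.arange(fs_adc // 100) / fs_adc  # 10 ms of signal
x = np.sin(2 * np.pi * 1000 * t)       # 1 kHz test tone

# Decimate 64x in stages; each stage lowpass-filters digitally before
# keeping every 4th sample, so no alias products are introduced.
y = x
for q in (4, 4, 4):
    y = decimate(y, q)

print(len(x), len(y))  # 28224 -> 441 samples: a 44.1 kHz output rate
```

The heavy, sharp filtering happens in the digital domain, where it is
cheap and exact, which is Don's point.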
On May 5, 12:18 pm, "Robert Lacoste" <use-contact-at-www-alciom-com-
for-email> wrote:
> "Green Xenon [Radium]" <gluceg...@excite.com> a &#4294967295;crit dans le message denews: 481becfe$0$5141$4c368__BEGIN_MASK_n#9g02mG7!__...__END_MASK_i?a63jfAD$z__@roadrunner.com... > - However if you use a 192Kbps sampling rate then the required performances > on the low pass filter are drastically relaxed. This filter can keep a > corner frequency at 16 or 20KHz, but even a 6th order filter will provide a > at 86dB attenuation at 192/2=96KHz... > > And as a 192Ksps sampling rate is far cheaper to build than a very very good > low pass filter... That's the beauty of oversampling...
Oversampled conversion does not require one to *store* information at the
oversampled rate.

--
Oli
dpierce.cartchunk.org@gmail.com writes:
> [...]
> Repeated samples ARE NOT "error correcting codes."
Yes they are, Dick. They are commonly called "repetition codes." See for
example one of the following:

@book{berlekamp,
  title     = "Algebraic Coding Theory",
  author    = "Elwyn R. Berlekamp",
  publisher = "Aegean Park Press",
  edition   = "revised 1984 edition",
  year      = "1984"}

@book{wicker,
  title     = "Error Control Systems for Digital Communication and Storage",
  author    = "Stephen B. Wicker",
  publisher = "Prentice Hall",
  year      = "1995"}
> [...]
> And increasing the sample rate DOES NOT "repeat
> samples."
I agree with this, so the point above is probably moot.

--
Randy Yates, Fuquay-Varina, NC, 919-577-9882, <yates@ieee.org>
http://www.digitalsignallabs.com
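For reference, the repetition code Randy names is the textbook degenerate
case of an error correcting code: transmit each bit n times, decode by
majority vote. A minimal sketch (and note that, as both posters agree,
raising the sample rate does not implement one):

```python
from collections import Counter

def rep_encode(bits, n=3):
    """Repetition code: transmit each bit n times."""
    return [b for bit in bits for b in [bit] * n]

def rep_decode(received, n=3):
    """Majority-vote decode; corrects up to (n-1)//2 errors per symbol."""
    return [Counter(received[i:i + n]).most_common(1)[0][0]
            for i in range(0, len(received), n)]

msg = [1, 0, 1, 1]
tx = rep_encode(msg)          # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
tx[1] ^= 1                    # flip one channel bit
assert rep_decode(tx) == msg  # the single error is corrected
```

The code rate is 1/n, which is why real discs use far more efficient
Reed-Solomon codes instead.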