DSPRelated.com
Forums

What's the use of a 192 kHz sample rate?

Started by Green Xenon [Radium] May 3, 2008
On May 4, 8:10 pm, dpierce.cartchunk....@gmail.com wrote:
> On May 3, 9:16 am, rajesh <getrajes...@gmail.com> wrote:
> > It's also about how you store data. Here is a simplified analogy.
>
> Yes, simplified to the point of being factually wrong.
>
> > Say you need 44.1k samples per second to hear properly. If the disc
> > is corrupted by scratches and the samples in that region are lost,
> > your sound is distorted or lost for that period of time.
>
> Wrong. First, you have a pretty robust error-correction scheme built
> into the disc. The encoding and decoding are such that significant
> amounts of data can be lost but EXACTLY reconstructed on playback with
> NO loss. And if the disc is severely scratched to the point where the
> error-correction algorithm fails, interpolation takes place.
>
> One can see thousands of uncorrected errors in the raw data coming off
> the disc, and once the error correction has been applied, the result
> might be a SMALL handful (like, oh, 4?) of uncorrectable but
> interpolated errors.
>
> > Now if there are 192k samples, even if (192/44.1) samples are lost
> > there is no difference to what you hear.
>
> False. Since you're cramming more data into the same area, and the
> physical faults take up the same area regardless of the data density,
> more bits, according to YOUR theory, will be lost on the higher-density
> disc than on the lower-density disc.
>
> That means MORE data is missing, which means the error-correction
> algorithm is subject to higher rates of non-correctable errors, and so
> on. Your theory is bogus if for no other reason than that it simply
> ignores the facts.
>
> But in EITHER case, unless the disc is SERIOUSLY damaged, the data
> loss is repaired.
>
> > DVDs come with a high density of data; because of this they are
> > highly vulnerable to scratches. This can be avoided with better
> > waveform matching achieved by a high sampling rate.
>
> Sorry, this is nothing but technobabble nonsense.
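[The interpolation step described in the post above, where the few samples that error correction cannot fix are estimated from their neighbours, can be sketched in a few lines. This is an illustrative example using simple linear interpolation, not the actual concealment logic of any real CD player:]

```python
import math

# Minimal sketch of error concealment: when error correction flags a
# sample as unrecoverable, estimate it from its neighbours. Real players
# do something similar for the rare uncorrectable samples.

def conceal(samples, bad_indices):
    """Replace each flagged (non-edge) sample with the mean of its neighbours."""
    out = list(samples)
    for i in bad_indices:
        out[i] = 0.5 * (samples[i - 1] + samples[i + 1])
    return out

# A 1 kHz tone sampled at 44.1 kHz, with one sample "lost".
fs = 44100.0
tone = [math.sin(2 * math.pi * 1000 * n / fs) for n in range(64)]
damaged = list(tone)
damaged[30] = 0.0  # uncorrectable sample, zeroed by the decoder

repaired = conceal(damaged, [30])

# For a smooth, oversampled signal the interpolated value is far closer
# to the truth than the dropout was.
assert abs(repaired[30] - tone[30]) < 0.05
assert abs(damaged[30] - tone[30]) > 0.5
```

[Note that the concealment error shrinks as the signal becomes more oversampled, which is the grain of truth in the robustness argument that follows.]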
Thanks! Your facts are proving my point. Repeating samples is the simplest form of error-correcting code. All your error-correcting codes and interpolation techniques become 192/44.1 times more robust on a 192 kHz signal compared to a 44.1 kHz signal. You just have to accept this point of view, although it may not justify going to 192 kHz. "Remembering and quoting facts is no big deal; you have to learn to analyze them."
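[The "repeating samples" idea invoked above can be made concrete with a toy example. This is an illustrative sketch only, not the cross-interleaved Reed-Solomon coding actually used on CDs: a rate-1/3 repetition code with majority voting corrects any single flipped copy per group, at the cost of tripling the bandwidth, which is precisely the trade-off being argued about in this thread.]

```python
# Illustrative rate-1/3 repetition code with majority-vote decoding.
# A sketch of the "repeating samples" idea, not the CIRC coding used
# on real CDs.

def rep3_encode(bits):
    """Repeat each bit three times."""
    return [b for b in bits for _ in range(3)]

def rep3_decode(coded):
    """Majority vote over each group of three copies."""
    return [1 if sum(coded[i:i + 3]) >= 2 else 0
            for i in range(0, len(coded), 3)]

message = [1, 0, 1, 1, 0]
coded = rep3_encode(message)

# Flip one copy in every group: majority voting still recovers the message.
corrupted = coded[:]
for group in range(len(message)):
    corrupted[3 * group] ^= 1

assert rep3_decode(corrupted) == message
# The price: three times the bits for the same information (rate 1/3).
assert len(coded) == 3 * len(message)
```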
On May 5, 12:47 pm, rajesh <getrajes...@gmail.com> wrote:
> [...]
> All your error correcting codes and interpolation techniques become
> 192/44.1 times more robust on a 192 kHz signal compared to a 44.1 kHz
> signal.
Try writing a 44.1 kHz signal on that high-density disc (greed to store more music)... one small scratch and the disc is busted.
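[The density argument here can be checked with back-of-envelope arithmetic, using the nominal published geometry of the two formats (CD minimum pit length about 0.83 um, DVD about 0.40 um); the figures are approximate and the scratch length is purely illustrative:]

```python
# Back-of-envelope sketch: a physical defect of fixed size wipes out
# more recorded features on a denser disc. Figures are nominal.

CD_MIN_PIT_UM = 0.83   # minimum pit length on CD, micrometres (approx.)
DVD_MIN_PIT_UM = 0.40  # minimum pit length on DVD, micrometres (approx.)

def features_destroyed(scratch_um, min_pit_um):
    """Roughly how many minimum-length features a scratch spans."""
    return scratch_um / min_pit_um

scratch = 1000.0  # an illustrative 1 mm scratch along the track
cd_loss = features_destroyed(scratch, CD_MIN_PIT_UM)
dvd_loss = features_destroyed(scratch, DVD_MIN_PIT_UM)

# The same scratch spans roughly twice as many features on a DVD, so
# the error-correction code has correspondingly more damage to repair.
assert dvd_loss > 2 * cd_loss
```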
On May 5, 12:47 pm, rajesh <getrajes...@gmail.com> wrote:
> [...]
> All your error correcting codes and interpolation techniques become
> 192/44.1 times more robust on a 192 kHz signal compared to a 44.1 kHz
> signal.
Remember Shannon's theorem, which places a trade-off between error-correction redundancy and bandwidth.
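[The trade-off invoked above can be sketched with simple arithmetic: the disc delivers a fixed number of channel bits per second, so spending more of them on audio samples leaves fewer for error-correction redundancy. The channel budget below is a hypothetical round number chosen for illustration, not the actual CD or DVD bit budget:]

```python
# Sketch of the capacity trade-off: a fixed channel-bit budget must be
# split between payload samples and error-correction redundancy.
# The budget figure is hypothetical, for illustration only.

CHANNEL_BITS_PER_SEC = 4_000_000  # illustrative fixed channel budget
BITS_PER_SAMPLE = 16

def payload_fraction(sample_rate_hz):
    """Fraction of the channel budget consumed by stereo payload;
    whatever is left over is available for redundancy."""
    payload = sample_rate_hz * BITS_PER_SAMPLE * 2  # two channels
    return payload / CHANNEL_BITS_PER_SEC

r_cd = payload_fraction(44_100)
r_hi = payload_fraction(192_000)

# At 44.1 kHz most of this budget remains free for error-correction
# redundancy; at 192 kHz the payload alone overflows it.
assert r_cd < 0.5
assert r_hi > 1.0
```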
"Green Xenon [Radium]" <glucegen1@excite.com> wrote in message
news:481becfe$0$5141$4c368faf@roadrunner.com...
> Why does DVD-Audio use a 192 kHz sample rate? What's the advantage over
> 44.1 kHz? Humans can't hear the full range of a 192 kHz sample rate.
This is the same guy who wanted a 3 GHz sample rate in an earlier post!!!!
> On average, what is the minimum sample rate for a guy in his early to
> mid 20s who likes treble?
44.1 ks/s.
> I agree there are a small percentage of humans who can hear above 20
> kHz. However, DVD-Audio uses a sample rate of 192 kHz, which allows a
> maximum frequency of 96 kHz. There is no known case of any human being
> able to hear sounds nearly as high as 96 kHz. I can agree with a 48 kHz
> sample rate and even a 96 kHz sample rate [maybe], but 192 kHz is just
> stupid.
>
> So what's the justification for using 192 kHz? If you ask me, it's just
> a total waste of bandwidth and energy. Any proof to the contrary?
The advertising sounds better.
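[The Nyquist arithmetic quoted in the exchange above is easy to check: the highest frequency a sample rate can represent is half that rate.]

```python
# Nyquist limit: a sample rate of fs can represent frequencies up to fs/2.

def nyquist_khz(sample_rate_khz):
    """Highest representable frequency for a given sample rate, in kHz."""
    return sample_rate_khz / 2

assert nyquist_khz(44.1) == 22.05   # CD: just above the ~20 kHz hearing limit
assert nyquist_khz(48.0) == 24.0
assert nyquist_khz(96.0) == 48.0
assert nyquist_khz(192.0) == 96.0   # far beyond any documented human hearing
```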
> Please correct me if I'm wrong, but AFAIK it's a waste of time, money,
> and energy to move to 192 kHz.
So why did you want 3 GHz for audio, then?! Please explain your sudden change of heart. (And yes, I know he's just a troll.) MrT.
"rickman" <gnuarm@gmail.com> wrote in message
news:0edc0747-6d9c-4cc7-9ec5-509523553e2e@b64g2000hsa.googlegroups.com...
> If it really is a waste of time and money to use 192 kHz ADCs and DACs,
> why do you think they would do it? Don't you think the people
> designing DVD equipment understand the economics of consumer
> products?
>
> Try to think about it and see if you can come up with a couple of
> reasons yourself. I'll be interested in hearing what you think.
Because it costs them no more and the advertising sounds better to the uninformed. What did you come up with? MrT.