DSPRelated.com
Forums

What's the use of a 192 kHz sample rate?

Started by Green Xenon [Radium] May 3, 2008
On May 3, 9:16 am, rajesh <getrajes...@gmail.com> wrote:
> Its also about how you store data.
>
> here is an simplified analogy.
Yes, simplified to the point of being factually wrong.
> say you need 44.1k samples per second to hear properly.
> If the disk is corrupted with scrathes and 1 samples in his
> region are lost your sound is distorted or lost for that period
> of time.
Wrong. First, you have a pretty robust error correction scheme built into the disk. The encoding and decoding are such that significant amounts of data can be lost yet EXACTLY reconstructed on playback with NO loss. And if the disk is scratched so severely that the error correction algorithm fails, interpolation takes place. One can see thousands of uncorrected errors in the raw data coming off the disk, and once the error correction has been applied, the result might be a SMALL handful (like, oh, 4?) of uncorrectable but interpolated samples.
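For what it's worth, here's a minimal Python sketch of that last interpolation step. It is NOT the actual CIRC decoder a CD player uses; it only illustrates the idea that a sample flagged as uncorrectable gets concealed by averaging its neighbours, using made-up PCM values.

# Simplified illustration (not the real CIRC decoder on a CD):
# a sample flagged as uncorrectable is concealed by interpolating
# between its valid neighbours.

def conceal(samples, bad_indices):
    """Replace flagged samples with the average of their neighbours."""
    out = list(samples)
    for i in bad_indices:
        left = out[i - 1] if i > 0 else 0
        right = out[i + 1] if i < len(out) - 1 else 0
        out[i] = (left + right) // 2   # linear interpolation between neighbours
    return out

# Example: one sample in a PCM stream was lost and flagged.
pcm = [100, 200, 300, 0, 500, 600]   # index 3 is garbage after a failed correction
print(conceal(pcm, [3]))             # -> [100, 200, 300, 400, 500, 600]

The listener gets a plausible guess in place of the damaged sample instead of a click or a dropout, which is exactly why a handful of interpolated errors is inaudible.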
> Now if there are 196k samples even if (196/44.1)
> samples are lost there is no difference to what you
> hear.
False. Since you're cramming more data into the same area, and a physical fault takes up the same area regardless of the data density, more bits, according to YOUR theory, will be lost on the higher-density disk than on the lower-density disk. That means MORE data is missing, which means the error correction algorithm is subject to higher rates of non-correctable errors, and so on. Your theory is bogus if for no other reason than that it simply ignores the facts. But in EITHER case, unless the disk is SERIOUSLY damaged, the lost data is repaired.
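To make the density point concrete, here's a back-of-the-envelope Python sketch. The track speed and scratch length are assumed, purely illustrative numbers (not real CD or DVD geometry); the only point is that a scratch of fixed physical size wipes out proportionally more samples when the samples are packed more densely on the same disk.

# Back-of-the-envelope sketch with made-up numbers: a scratch blanks a fixed
# length of track, so the number of samples it covers scales with how densely
# the samples are packed (i.e. with the sample rate, for the same track speed).

def samples_lost(scratch_mm, track_mm_per_second, sample_rate_hz):
    """Samples covered by a scratch, assuming a uniform layout along the track."""
    seconds_covered = scratch_mm / track_mm_per_second
    return seconds_covered * sample_rate_hz

TRACK_SPEED_MM_S = 1300      # assumed linear track speed, ~1.3 m/s
SCRATCH_MM = 2               # assumed scratch length along the track

for rate in (44_100, 192_000):
    print(rate, "Hz ->", round(samples_lost(SCRATCH_MM, TRACK_SPEED_MM_S, rate)), "samples lost")
# 44100 Hz  ->  ~68 samples lost
# 192000 Hz -> ~295 samples lost

Same scratch, several times more samples gone at the higher rate, which is the opposite of the claim being rebutted.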
> DVD's come wih high density of data due to this
> they are highly vulnerable to scratches this can
> be avoided with better waveform matching achieved
> by high sampling rate.
Sorry, this is nothing but technobabble nonsense.