DSPRelated.com
Forums

What's the use of a 192 kHz sample rate?

Started by Green Xenon [Radium] May 3, 2008
"geoff" <geoff@nospam-paf.co.nz> wrote in message
news:M5CdnfvR7O51q4PVnZ2dnUVZ_oSunZ2d@giganews.com
> rickman wrote:
>> If it really is a waste of time and money to use 192 kHz ADC and DAC,
>> why do you think they would do it? Don't you think the people designing
>> DVD equipment understand the economics of consumer products?
>>
>> Try to think about it and see if you can come up with a couple of
>> reasons yourself. I'll be interested in hearing what you think.
>
> There are 3 reasons why people design and manufacture 192 kHz equipment:
> 1 - They imagine it makes a difference.
> 2 - The technology is available, so why not.
> 3 - Everybody already has 44.1k/48k gear, so what would we sell them otherwise...
That is pretty well it. As time marches on, 192/24 converters with >115 dB dynamic range are becoming jelly-bean (highly inexpensive) chips. A number of years back, we quality bugs smiled and ponied up about $800 for Lynx L33 cards, but now you can get pretty much the same converters and performance in <$200 eMu cards.
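As an aside on what "115 dB dynamic range" means for a 24-bit part: the theoretical SNR of an ideal N-bit quantizer driven by a full-scale sine is about 6.02 N + 1.76 dB, and the gap between that ideal and the datasheet number is analog noise in the converter itself. A minimal sketch (plain Python, illustrative numbers only, not from any particular datasheet):

def ideal_dynamic_range_db(bits: int) -> float:
    """Theoretical SNR of an ideal N-bit quantizer driven by a full-scale sine."""
    return 6.02 * bits + 1.76

for bits in (16, 24):
    print(f"{bits}-bit ideal quantizer: {ideal_dynamic_range_db(bits):.1f} dB")

# 16-bit ideal quantizer: 98.1 dB
# 24-bit ideal quantizer: 146.2 dB
# Real 24-bit converters are limited by their own analog noise to roughly
# 110-120 dB, which is why a "115 dB" figure is typical even though the
# word length promises far more.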
On May 4, 3:40 pm, nos...@nospam.com (Don Pearce) wrote:
> On Sun, 4 May 2008 12:28:34 -0700 (PDT), rickman <gnu...@gmail.com> wrote:
>
>> On May 4, 12:18 pm, DigitalSignal <digitalsignal...@yahoo.com> wrote:
>>> Hi rickman, can you refer the company who can produce or analyze the
>>> signals with 140 dB SNR? I have never seen one. I guess they are not
>>> using any digital technology, or they are looking at very low frequency
>>> signals. Right?
>>>
>>> James
>>> www.go-ci.com
>>
>> Thales Communications, makers of military radios. This figure came up
>> in the context of digital interference with the RF sections. I was
>> told that the digital noise had to be below -140 dB to prevent
>> desensitization of the receiver. These units are *very* overbuilt and
>> far surpass anything commercial I have seen. One spur anywhere in a
>> huge range, 30 MHz to 500 MHz, IIRC, and they are back to the testing
>> room to add more copper tape or to put more resistors in clock lines,
>> etc.
>
> A desensitization level is going to be measured in dBm, and -140 is a
> very ordinary figure. If external noise or interference is present at
> the input of a receiver, it will add to the inherent noise already
> there, raising it. This is called desensitization. A common
> interference requirement is that a desensitization of no more than
> 1 dB is permitted.
>
> The -140 in this instance is just a receiver input level, and nothing
> to do with S/N ratios, dynamic ranges or anything like that.
>
> I would be very surprised if anything military surpassed the spec of a
> commercial design. Far too few units are built to justify the number
> of trips round the design cycle that are needed to remove every spur
> and maximise performance. The fact that they have to bodge units with
> copper tape to make them work rather bears this out.
>
> Certainly when I was designing low-noise converters for domestic
> satellite systems I achieved a guaranteed total noise figure of 0.25 dB
> at 12 GHz, and the entire unit had a works cost price of 11 dollars.
> Nothing military has ever come close to that kind of performance or
> price.
I am not an RF guy and the figure I actually remember was 150 dB. I hedged it a bit as 150 sounded rather extreme to me. I dunno if 150 is anything outside of ordinary or not.

When you say that it costs too much to go around the design cycle many times, you clearly don't know much about the military procurement process. On the last radio that they built while I was there, they were up to rev 14 of the board that had nothing but the UI controller and external interfaces. You need to remember that often the development process is cost plus and the customer is *asking* for tough specs. It is only when they can't be delivered that they back off.

What is really funny is that you are getting wrapped around the axle about my use of this figure when that was really just an aside to an aside of my original point. Funny how these discussions get so far off topic.

Did you read my post which used the original 140 dB figure?
On Sun, 4 May 2008 19:54:57 -0700 (PDT), rickman <gnuarm@gmail.com>
wrote:

> On May 4, 3:40 pm, nos...@nospam.com (Don Pearce) wrote:
>> [snip - quoted in full above]
>
> I am not an RF guy and the figure I actually remember was 150 dB. I
> hedged it a bit as 150 sounded rather extreme to me. I dunno if 150
> is anything outside of ordinary or not.
OK, 150 - different bandwidth, then. The basic thermal noise at the front end of a receiver is -174 dBm + 10 log(bandwidth) + noise figure. A noise or interference level that desensitizes that by 1 dB will be about 10 dB lower than that level. You need to know the bandwidth and noise figure to know the significance.

What is significant is that this receiver has a desensitization limit for interference that it throws at itself. I have never, ever come across that before - it is always an external spec; you design the internals so it doesn't interfere.
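To put numbers on that formula, here is a minimal sketch (plain Python; the 25 kHz bandwidth and 8 dB noise figure are made-up illustrative values, not anything from this thread or from the Thales radio):

import math

def noise_floor_dbm(bandwidth_hz: float, noise_figure_db: float) -> float:
    """Receiver input noise floor: -174 dBm/Hz thermal density + bandwidth + NF."""
    return -174.0 + 10.0 * math.log10(bandwidth_hz) + noise_figure_db

def desensitization_db(floor_dbm: float, interferer_dbm: float) -> float:
    """Rise in the effective noise floor when an interferer adds to it (powers add)."""
    total = 10.0 * math.log10(10 ** (floor_dbm / 10) + 10 ** (interferer_dbm / 10))
    return total - floor_dbm

floor = noise_floor_dbm(25e3, 8.0)   # about -122 dBm for a 25 kHz channel, NF = 8 dB
print(f"noise floor: {floor:.1f} dBm")
print(f"desense from a -140 dBm interferer: {desensitization_db(floor, -140.0):.2f} dB")
print(f"desense from an interferer 6 dB below the floor: {desensitization_db(floor, floor - 6):.2f} dB")

# noise floor: -122.0 dBm
# desense from a -140 dBm interferer: 0.07 dB
# desense from an interferer 6 dB below the floor: 0.97 dB (about the common 1 dB limit)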
> When you say that it costs too much to go around the design cycle many
> times, you clearly don't know much about the military procurement
> process. On the last radio that they built while I was there, they
> were up to rev 14 of the board that had nothing but the UI controller
> and external interfaces. You need to remember that often the
> development process is cost plus and the customer is *asking* for
> tough specs. It is only when they can't be delivered that they back
> off.
Iteration 14 and the company was still in business? Anyone can afford to do it again if they can't do it right. Was this a "cost-plus" contract? Three or perhaps four was more the number I had in mind.
> What is really funny is that you are getting wrapped around the axle
> about my use of this figure when that was really just an aside to an
> aside of my original point. Funny how these discussions get so far
> off topic.
If you like. It was clearly a number you threw in because it sounded impressively big, even though you hadn't a clue what it meant.
> Did you read my post which used the original 140 dB figure?
Yes. It was nonsense.

d

--
Pearce Consulting
http://www.pearce.uk.com
On May 4, 8:10 pm, dpierce.cartchunk....@gmail.com wrote:
> On May 3, 9:16 am, rajesh <getrajes...@gmail.com> wrote:
>> It's also about how you store data. Here is a simplified analogy.
>
> Yes, simplified to the point of being factually wrong.
>
>> Say you need 44.1k samples per second to hear properly. If the disk
>> is corrupted with scratches and the samples in that region are lost,
>> your sound is distorted or lost for that period of time.
>
> Wrong. First, you have a pretty robust error correction scheme built
> into the disk. The encoding and decoding is such that significant
> amounts of data can be lost but can be EXACTLY reconstructed on
> playback with NO loss. And if the disk is severely scratched to the
> point where the error correction algorithm fails, interpolation takes
> place.
>
> One can see thousands of uncorrected errors in the raw data coming
> off the disk, and once the error correction has been applied, the
> result might be a SMALL handful (like, oh, 4?) of uncorrectable but
> interpolated errors.
>
>> Now if there are 192k samples, even if (192/44.1) samples are lost
>> there is no difference to what you hear.
>
> False. Since you're cramming more data into the same area, and the
> physical faults take up the same area regardless of the data density,
> more bits, according to YOUR theory, will be lost on the higher
> density disk than on the lower density disk.
>
> That means MORE data is missing, which means the error correction
> algorithm is subject to higher rates of non-correctable errors, and
> so on. Your theory is bogus, if for no other reason than it simply
> ignores the facts.
>
> But in EITHER case, unless the disk is SERIOUSLY damaged, the data
> loss is repaired.
>
>> DVDs come with a high density of data; due to this they are highly
>> vulnerable to scratches. This can be avoided with better waveform
>> matching achieved by a high sampling rate.
>
> Sorry, this is nothing but technobabble nonsense.
Thanks! Your facts are proving my point. Repeating samples is the simplest form of error-correcting code. All your error-correcting codes and interpolation techniques become 192/44.1 times more robust on a 192 kHz signal compared to a 44.1 kHz signal. You just have to accept this point of view, although it may not justify going to 192 kHz.

"Remembering and quoting facts is no big deal; you have to learn to analyze them."
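For what it is worth, the interpolation step dpierce mentions can be illustrated with a toy sketch (plain Python; the 1 kHz test tone and single dropped sample are made-up illustrative choices, and this models only playback concealment, not CIRC error correction): replacing one lost sample with the average of its neighbours costs far less at 192 kHz than at 44.1 kHz, simply because adjacent samples of a band-limited signal are more alike at the higher rate.

import math

def worst_concealment_error(fs: float, f_tone: float = 1000.0) -> float:
    """Worst-case error when one sample of a sine at f_tone is replaced by the
    mean of its two neighbours (a crude model of playback concealment)."""
    w = 2 * math.pi * f_tone / fs
    worst = 0.0
    for n in range(1, int(fs / f_tone) + 1):   # scan sample positions over one cycle
        s_prev, s_mid, s_next = (math.sin(w * k) for k in (n - 1, n, n + 1))
        worst = max(worst, abs((s_prev + s_next) / 2.0 - s_mid))
    return worst

for fs in (44_100.0, 192_000.0):
    print(f"fs = {fs/1000:5.1f} kHz: worst concealment error = {worst_concealment_error(fs):.2e}")

# fs =  44.1 kHz: worst concealment error ~ 1.0e-02
# fs = 192.0 kHz: worst concealment error ~ 5.4e-04
# The ratio is roughly (192/44.1)^2, about 19, for a band-limited tone. Note
# that this says nothing about CIRC, which corrects errors on channel bits,
# not on audio samples, and works the same at either sample rate.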