DSPRelated.com
Forums

What's the use of a 192 kHz sample rate?

Started by Green Xenon [Radium] May 3, 2008
On May 5, 4:48 pm, rickman <gnu...@gmail.com> wrote:
> On May 5, 1:27 pm, dpierce.cartchunk....@gmail.com wrote:
> > On May 5, 9:44 am, rajesh <getrajes...@gmail.com> wrote:
> > > If we cant percieve freq higher than 20k doesnt
> > > mean that they are not present
> >
> > As the lawyers say, true but irrelevant.
> >
> > If we can't perceive them, then their presence or
> > absence is irrelevant.
>
> Are you guys still arguing over this??? The issue is
> not whether ultrasonic signals can be perceived, it is
> about the sample rate.
That's right. Now sit down, there's a chance for learning to happen.
> The microphone may well have a cutoff at or
> below 20 kHz so that there is nothing in the
> inaudible range. Still, a sample rate higher
> than 40 kHz can be a good thing.
But, clearly, you miss the point. A sample rate of 44.1 kHz is good enough for 20 kHz and below. A sample rate of MORE than 44.1 kHz IS NOT ANY BETTER. Got it?
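For anyone who wants to check that claim numerically, here is a minimal Python/numpy/scipy sketch (not from the thread; the test tones and the 10 ms window are arbitrary choices, picked so the tones are periodic in the window and FFT resampling is exact): the 44.1 kHz samples of a signal band-limited below ~20 kHz already determine its 192 kHz samples.

```python
import numpy as np
from scipy.signal import resample

dur = 0.01                                # 10 ms window
def x(t):                                 # tones at 1, 7 and 19 kHz, all below 20 kHz
    return (np.sin(2*np.pi*1000*t) + 0.5*np.sin(2*np.pi*7000*t)
            + 0.25*np.sin(2*np.pi*19000*t))

x_cd = x(np.arange(int(44100*dur)) / 44100.0)     # sampled at 44.1 kHz (441 samples)
x_hi = x(np.arange(int(192000*dur)) / 192000.0)   # sampled at 192 kHz (1920 samples)
x_up = resample(x_cd, int(192000*dur))            # 44.1 kHz -> 192 kHz by FFT interpolation

print("max difference:", np.max(np.abs(x_up - x_hi)))   # floating-point small: the extra rate adds nothing
```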
On Sat, 3 May 2008 22:14:00 -0700, dplatt@radagast.org (Dave Platt)
wrote:

>In article <96fq141o7bh82pho575op2q1upefkq61bv@4ax.com>,
>Rick Lyons <R.Lyons@_BOGUS_ieee.org> wrote:
>
>>Hi Randy,
>> you remind me of the transistor radios when I was
>>kid (back when the air was clean, and sex was dirty).
>>If a transistor radio manufacturer could claim that
>>their radio had had more transistors than their competition,
>>then that was strong "selling point". As such,
>>some transistor radio manufacturers were using transitors
>>in place of the diodes needed in AM demodulation.
>>So instead of having four transistors and one diode,
>>those manufacturers could claim "5-transistor performance",
>>in the hope of increasing sales. Ha ha.
>
>I've read that there were some "7-transistor" radios, in which one or
>two of the transistors were soldered to unconnected pads on the board.
>They had no function at all, and they were often "floor sweeping"
>parts known to be defective... but they _were_ transistors and were
>present in the radio, and so the radio could be advertised (legally if
>not all that ethically) as a "7-transistor" model.
When I was about 12-14 years old (early '70s) I looked carefully at the circuit board of such a transistor radio, and I saw two transistors that were: 1. each connected in a diode configuration, and 2. one of the two pads for each resulting "diode" did not connect anywhere. I turned that circuit board over dozens of times to make sure I was tracing the right transistor leads to the right pads. I couldn't figure out for the life of me how that thing worked.
On May 5, 4:34 am, "Mr.T" <MrT@home> wrote:
> "rickman" <gnu...@gmail.com> wrote in message > > news:0edc0747-6d9c-4cc7-9ec5-509523553e2e@b64g2000hsa.googlegroups.com... > > > If it really is a waste of time and money to use 192 kHz ADC and DAC, > > why do you think they would do it? &#4294967295;Don't you think the people > > designing DVD equipment understand the economics of consumer > > products? > > > Try to think about it and see if you can come up with a couple of > > reasons yourself. &#4294967295;I'll be interested in hearing what you think. > > Because it costs them no more and the advertising sounds better to the > uninformed. > What did you come up with? > > MrT.
If you look at 192KHz or SACD releases, they have been mastered and recorded with more care and skill than ordinary CD recordings. So they may indeed sound better, but probably not because of the sampling rate.

If you are interested in a related subject, check out: http://www.holosonics.com
This company creates sound by intersecting two ultrasonic beams in air, and the non-linearity of the air demodulates the AM modulation applied to one of the beams to create audio that appears to come from out of nowhere. But there is controversy: how do you know that the non-linearity occurs in the air, and not the ear, which is known to have intermodulation distortion? Well, you could pick up the sound with a microphone and look at the FFT, but the problem is that microphones also have non-linearity when presented with 140dB ultrasonic signals!

But I don't think non-linear folding of ultrasonic signals by the ear is very relevant to the 192K argument, as the SPL levels must be extreme before any effect is apparent.

Bob Adams
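Here is a minimal numpy sketch (not from Bob's post; the 40 kHz carrier, 2 kHz tone and modulation depth are made-up illustration values) of the demodulation mechanism he describes: a square-law non-linearity acting on an AM-modulated ultrasonic carrier produces a baseband copy of the audio.

```python
import numpy as np

fs = 1_000_000.0                          # simulation rate, well above the carrier
t = np.arange(0, 0.01, 1/fs)
audio   = np.sin(2*np.pi*2000*t)          # 2 kHz "program" tone
carrier = np.sin(2*np.pi*40000*t)         # 40 kHz ultrasonic carrier
am      = (1 + 0.5*audio) * carrier       # amplitude-modulated ultrasonic beam

demod = am**2                             # square-law non-linearity (air, ear or mic)
spec  = np.abs(np.fft.rfft(demod)) / t.size
freqs = np.fft.rfftfreq(t.size, 1/fs)

audible = freqs < 20000                   # look only at the audio band
top = np.argsort(spec[audible])[-3:]      # three strongest audible components
print(np.sort(freqs[audible][top]))       # expect 0 Hz (DC), 2000 Hz, 4000 Hz
```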
Robert Adams wrote:
> If you look at 192KHz or SACD releases, they have been mastered and
> recorded with more care and skill than ordinary CD recordings. So they
> may indeed sound better, but probably not because of the sampling
> rate.
But that same remastered version can be put onto CD and sound (maybe) exactly the same. Sure, there may be differences in the DACs and the bit-depth conversion, but these are unlikely to be as significant as the mastering changes, which were possibly only done to kid us that there really is an 'improvement'. Talking about simultaneously released versions on both media, such as the DSOTM a few years ago ...

geoff
On May 5, 8:14 am, rajesh <getrajes...@gmail.com> wrote:
> I don't mean literal repetition. Take for example a sine wave
> of 10khz and sample it with 1 mhz. One does think of
> samples getting repeated are at least they are close.
okay, rajesh, i'll try to clarify (or muddy) the waters here:

for a single sine wave (if it is already established that this is what you're looking at - a single sine wave), three sufficiently close samples (and samples spaced at a 44.1 kHz or a 1 MHz sampling rate are both sufficiently close) will contain all the information you need. of course, if there are errors (like quantization errors) in those three samples, you'll get a different sine wave reconstructed.

so rather than looking at a single sine wave at 10 kHz, let's consider a general waveform, but with one restriction of generality: a waveform *bandlimited* to just under 10 kHz. we will call that bandlimit "B".

now, if you're sampling at 1 MHz, it wouldn't be precise to say that there are samples getting repeated, but it is true that there is redundancy. that is 50 times oversampled, because the waveform only needs samples once every 1/20th millisecond. you don't have 49 copies of each necessary sample, but the 49 samples in-between each 50th sample can be constructed from only the knowledge of those samples spaced by 1/20th millisecond. so there is a *sense* of repeated samples (in that, in both the repeated-samples case and this oversampled case, 49 outa 50 samples are redundant), but it is, in fact, not the case.

now we *do* know that oversampling does, in the virtually ideal case, reduce noise, and this (along with noise-shaping) is one of the neat properties of sigma-delta conversion. for an N-bit converter, the theoretical roundoff noise is

   roundoff noise energy = (1/12)*( (full scale)*2^(1-N) )^2

that energy must be the area under the curve of the noise power spectrum. note that the sampling rate does not appear in that. if the roundoff noise is truly random in nature (a bad assumption for very small signals, but not so bad for signals closer to full scale), then we think that the noise is white or flat, from -Fs/2 to +Fs/2. so, if the area under that constant function is

   constant area = (1/12)*( (full scale)*2^(1-N) )^2

and the width is (+Fs/2 - (-Fs/2)) = Fs, then the height is

   1/Fs * (1/12)*( (full scale)*2^(1-N) )^2

so, as Fs gets larger, the height of this noise level (the amount of noise per Hz) gets lower, and if you can sacrifice some of the spectrum above your bandlimit, you can filter out all of the noise from your bandlimit to Fs/2, and the level of noise has been reduced by a factor of B/(Fs/2), which is the reciprocal of the oversampling factor, Fs/(2B).

this is not unheard of, but it comes into play when there is a limit to the number of bits in the word, N. if that is the case (you have a very fast converter with fewer bits), you can make meaningful samples with a word width wider than the A/D converter. but you have to oversample by a factor of 4 to get one extra meaningful bit. that's how the math works out. (this is not assuming noise-shaping.)

now, in audio practice, in the studio they get super-high quality A/D converters with, say, around 24 bits, sampling at a much higher sampling rate, maybe 192 kHz. this is for initial recording, mixing, editing, effects, etc. i don't disapprove of them throwing extra bits at this, whether the need is disputed or not. but eventually they are going onto a CD of 16-bit words, two channels, and a 44.1 kHz sample rate. that's 1411200 bits per second flying out at you. (or a DVD or SACD with a lot more bits per second.) now, if the sample rate is increased (thus increasing the bit rate), how are we gonna "compare apples to apples"?
if the bit rate increases with the sample rate, and if your bit error rate, measured in bits of error per second, remains constant, of course it will sound better as you increase Fs. if it's 1 MHz sampling vs. 20 kHz sampling and you drop one sample per second in both cases, the 1 MHz data will care a lot less (in fact, 50 times less) than the 20 kHz data. you are knocking out a larger portion of the data in the 20 kHz case.

but what if the portion were the same? what if, in the 1 MHz case, you lost 50 samples per second compared to the 20 kHz case where you lose 1 sample per second? which is worse? in a recording or transmission environment, which is the case? can you expect an equally noisy channel to have fewer bit errors per bit of data for the higher sampling rate case? what would be the mechanism for that?

r b-j
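As a concrete illustration of the oversampling arithmetic above, here is a minimal numpy sketch (not from the post; the 12-bit word length, 997 Hz test tone and dither are arbitrary illustration choices, and no noise shaping is modeled): quantize a near-full-scale tone at several sample rates, keep only the quantization noise that lands below a 20 kHz band edge, and the in-band noise power drops by the oversampling ratio, i.e. about one extra usable bit per factor of 4.

```python
import numpy as np

N   = 12                                  # quantizer word length (illustrative)
B   = 20000.0                             # audio band edge, Hz
dur = 0.5                                 # half a second of signal
rng = np.random.default_rng(0)

def inband_noise_db(fs):
    t = np.arange(0, dur, 1/fs)
    # near-full-scale tone plus a little dither so the roundoff error is noise-like
    x = 0.9*np.sin(2*np.pi*997*t) + rng.uniform(-0.5, 0.5, t.size)*2.0**(1-N)
    q = np.round(x * 2.0**(N-1)) / 2.0**(N-1)       # N-bit quantization
    e = q - x                                       # quantization error
    E = np.abs(np.fft.rfft(e))**2 / e.size          # error spectrum (power per bin, one-sided)
    f = np.fft.rfftfreq(e.size, 1/fs)
    return 10*np.log10(np.sum(E[f < B]) / e.size)   # error power landing below B, in dB

for fs in (44100.0, 176400.0, 705600.0):
    print(f"fs = {fs/1e3:6.1f} kHz   in-band quantization noise = {inband_noise_db(fs):6.1f} dB")
```

Each 4x increase in fs should print roughly 6 dB less in-band noise, matching the "one extra bit per factor of 4" rule above.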
In article 
<92a11c44-b117-44cc-93b9-c61b10caffcc@q27g2000prf.googlegroups.com>,
 rajesh <getrajeshin@gmail.com> wrote:

> On May 4, 8:10 pm, dpierce.cartchunk....@gmail.com wrote:
> > On May 3, 9:16 am, rajesh <getrajes...@gmail.com> wrote:
> > > Its also about how you store data.
> > >
> > > here is an simplified analogy.
> >
> > Yes, simplified to the point of being factually wrong.
> >
> > > say you need 44.1k samples per second to hear properly.
> > > If the disk is corrupted with scrathes and 1 samples in his
> > > region are lost your sound is distorted or lost for that period
> > > of time.
> >
> > Wrong. First, you have a pretty robust error correction
> > scheme built in to the disk. The encoding and decoding
> > is such that significant amounts of data can be lost
> > but can be EXACTLY reconstructed on playback with NO
> > loss. And if the disk is severely scratched to the point where
> > the error correction algorith fails, interpolation takes place.
> >
> > One can see thousands of uncorrected errors in the raw
> > data coming of the disk, and once the error correction
> > has been applied, the result might be a SMALL handful
> > (like, oh, 4?) uncorrectable but interpolated errors
> >
> > > Now if there are 196k samples even if (196/44.1)
> > > samples are lost there is no difference to what you
> > > hear.
> >
> > False. Since you're cramming more data into the same
> > area, and the physical faults take up the same area
> > regardless of the data density, more bits, according to
> > YOUR theory, will be lost on the higher density disk
> > than on the lower density disk.
> >
> > That means MORE data is missing, that means the
> > error correction algorith is subject to higher rates of
> > non-correctable errors, and so on. Your theory is
> > bogus if for no other reason than it simply ignores the
> > facts.
> >
> > But, in EITHER case, unless the disk is SERIOUSLY
> > damaged, the data loss in either case is repaired.
> >
> > > DVD's come wih high density of data due to this
> > > they are highly vulnerable to scratches this can
> > > be avoided with better waveform matching achieved
> > > by high sampling rate.
> >
> > Sorry, this is nothing but technobabble nonsense.
>
> Thanks ! Your facts are proving my point.
> Repeating samples is the most simplest form of error correcting codes.
And one of the least efficient. It's easy to get a lot better performance with a lot less than twice as many bits. If you compare the two streams and they are different, *which one is the correct one*?

Isaac
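To make that inefficiency concrete, here is a small Python/numpy sketch (not from the thread): simply duplicating the data (a rate-1/2 repetition code) can only *detect* a single bit error, because when the two copies disagree you cannot tell which one is right, while a Hamming(7,4) code *corrects* any single-bit error using just 3 check bits per 4 data bits.

```python
import numpy as np

G = np.array([[1,0,0,0,1,1,0],     # Hamming(7,4) generator matrix [I4 | P]
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],     # parity-check matrix [P^T | I3]
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

data = np.array([1,0,1,1])
code = data @ G % 2                # 7-bit codeword

rx = code.copy()
rx[2] ^= 1                         # the channel flips one bit

syndrome = H @ rx % 2              # nonzero syndrome identifies the flipped bit:
err_pos = np.where((H.T == syndrome).all(axis=1))[0][0]   # it matches a column of H
rx[err_pos] ^= 1                   # correct it
print("decoded data:", rx[:4], "matches original:", np.array_equal(rx[:4], data))
```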
In article 
<3c3dd891-b05f-4865-8177-8ee079bfec05@w4g2000prd.googlegroups.com>,
 rajesh <getrajeshin@gmail.com> wrote:

> On May 5, 12:47 pm, rajesh <getrajes...@gmail.com> wrote:
> > On May 4, 8:10 pm, dpierce.cartchunk....@gmail.com wrote:
> > > On May 3, 9:16 am, rajesh <getrajes...@gmail.com> wrote:
> > > > Its also about how you store data.
> > > >
> > > > here is an simplified analogy.
> > >
> > > Yes, simplified to the point of being factually wrong.
> > >
> > > > say you need 44.1k samples per second to hear properly.
> > > > If the disk is corrupted with scrathes and 1 samples in his
> > > > region are lost your sound is distorted or lost for that period
> > > > of time.
> > >
> > > Wrong. First, you have a pretty robust error correction
> > > scheme built in to the disk. The encoding and decoding
> > > is such that significant amounts of data can be lost
> > > but can be EXACTLY reconstructed on playback with NO
> > > loss. And if the disk is severely scratched to the point where
> > > the error correction algorith fails, interpolation takes place.
> > >
> > > One can see thousands of uncorrected errors in the raw
> > > data coming of the disk, and once the error correction
> > > has been applied, the result might be a SMALL handful
> > > (like, oh, 4?) uncorrectable but interpolated errors
> > >
> > > > Now if there are 196k samples even if (196/44.1)
> > > > samples are lost there is no difference to what you
> > > > hear.
> > >
> > > False. Since you're cramming more data into the same
> > > area, and the physical faults take up the same area
> > > regardless of the data density, more bits, according to
> > > YOUR theory, will be lost on the higher density disk
> > > than on the lower density disk.
> > >
> > > That means MORE data is missing, that means the
> > > error correction algorith is subject to higher rates of
> > > non-correctable errors, and so on. Your theory is
> > > bogus if for no other reason than it simply ignores the
> > > facts.
> > >
> > > But, in EITHER case, unless the disk is SERIOUSLY
> > > damaged, the data loss in either case is repaired.
> > >
> > > > DVD's come wih high density of data due to this
> > > > they are highly vulnerable to scratches this can
> > > > be avoided with better waveform matching achieved
> > > > by high sampling rate.
> > >
> > > Sorry, this is nothing but technobabble nonsense.
> >
> > Thanks ! Your facts are proving my point.
> > Repeating samples is the most simplest form of error correcting codes.
> > All your error correcting codes and interpolation techniques become
> > 196/44.1 folds
> > more robust on 196 kHz signal compared 44.1 kHz signal.
> >
> > You just have to accept this point of view although it may not justify
> > for going 196 kHz.
> >
> > " remembering and quoting facts is no big deal, you have to learn to
> > analyze them"
>
> Remember the shannon's theorem which places a trade off between error
> correcting codes and bandwidth.
It's not a "trade off". What Shannon says is that by adding bits for the proper sort of error correction, it's possible to send *more* data through the channel at a given error rate than it is if you send only the data with no correction capability. Proper error correction is a flat-out win for channel capacity, provided you can afford the increase in processing required at both ends of the channel. Isaac
On May 5, 10:33 pm, dpierce.cartchunk....@gmail.com wrote:
> On May 5, 10:09 am, rajesh <getrajes...@gmail.com> wrote:
> > On May 5, 7:05 pm, Oli Charlesworth <ca...@olifilth.co.uk> wrote:
> > > If we can, then of course a higher sampling rate will sound better.
> > > But that goes against the premises of the OP, and is nothing to do
> > > with the ECC or interpolation that you've been going on about!
> > >
> > > --
> > > Oli
> >
> > I said we cant percieve, but i didnt say they arent there..
>
> Again, true but irrelevant.
>
> > i will continue the dicussion on ECC tomorrow.
>
> Hopefully, you will be much better prepared.
>
> As a hint: the issue of proper sampling vs bandwidth
> is a topic COMPLETELY separate from ECC. You
> might want to keep that in mind during your preparations.
Take for example H.264 video. Apart from having many sophisticated techniques, it also recommends simple ones like repeating packets.
On May 6, 9:53 am, rajesh <getrajes...@gmail.com> wrote:
> On May 5, 10:33 pm, dpierce.cartchunk....@gmail.com wrote:
> > On May 5, 10:09 am, rajesh <getrajes...@gmail.com> wrote:
> > > On May 5, 7:05 pm, Oli Charlesworth <ca...@olifilth.co.uk> wrote:
> > > > If we can, then of course a higher sampling rate will sound better.
> > > > But that goes against the premises of the OP, and is nothing to do
> > > > with the ECC or interpolation that you've been going on about!
> > > >
> > > > --
> > > > Oli
> > >
> > > I said we cant percieve, but i didnt say they arent there..
> >
> > Again, true but irrelevant.
> >
> > > i will continue the dicussion on ECC tomorrow.
> >
> > Hopefully, you will be much better prepared.
> >
> > As a hint: the issue of proper sampling vs bandwidth
> > is a topic COMPLETELY separate from ECC. You
> > might want to keep that in mind during your preparations.
>
> take for example h.264 video
> Apart from having many sophisticated techniques it also
> recommends simple one like repeating packets.
By sophisticated techniques I mean error resilience tools.