OT - Temperature readings in the 1890s
Started by ●January 7, 2015
There are commonly found graphs of a warming trend going back to the
1880s. The warming is about 0.8 C altogether.
http://www.skepticalscience.com/surface-temperature-measurements.htm
I was wondering whether people back in 1880 took measurements to that
sort of accuracy. I would have thought they would have rounded up or
down to the nearest degree F or C. The people in the know tell me that
this doesn't matter, since over time the people who round up will
cancel the people who round down!

I then thought of ADCs and quantization error. Suppose you measure in
degrees Celsius and round to the nearest integer. Then the error has a
uniform distribution from -0.5 to +0.5 of width 1, i.e. the area under
the pdf is unity. Work out its mean and you get zero, as the good
folks claim. However, that only tells us there is no DC or constant
offset. The rms value is 1/sqrt(12), about 0.29 C. At 3 sigma, would
this not mean a single reading could be in error by almost 0.9 C,
larger than the 0.8 C trend itself?
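A quick numerical check of those rounding-error statistics (a sketch
in Python with NumPy; the -10 to 30 C range of "true" temperatures is
an arbitrary choice for illustration):

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated "true" temperatures, rounded to the nearest whole
    # degree C as an 1880s observer might have recorded them.
    true_temps = rng.uniform(-10.0, 30.0, size=1_000_000)
    err = np.round(true_temps) - true_temps   # quantization error, deg C

    print(err.mean())    # ~0.0: no DC offset, as claimed
    print(err.std())     # ~0.289 = 1/sqrt(12), not 1.0

So a single rounded reading is typically off by about 0.3 C; the
interesting question is what happens to that error when many readings
are combined.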
Reply by ●January 7, 2015
gyansorova@gmail.com wrote:
> There are commonly found graphs of a warming trend going back to the
> 1880s. The warming is about 0.8 C altogether.
> http://www.skepticalscience.com/surface-temperature-measurements.htm
> I was wondering whether people back in 1880 took measurements to
> that sort of accuracy. I would have thought they would have rounded
> up or down to the nearest degree F or C. The people in the know tell
> me that this doesn't matter, since over time the people who round up
> will cancel the people who round down!

If the rounding is consistent, it doesn't matter much.

When you average N numbers, the uncertainty of the average is
1/sqrt(N) times the uncertainty of the values you are averaging.

The trend will be both space (around the globe) and time averaged,
such that it doesn't take all that many points to get a 0.1 C
uncertainty, most likely a lot better.

Random error you can average out, but systematic error you can't.

-- glen
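Glen's 1/sqrt(N) rule is easy to verify numerically (a sketch, under
the assumption that the rounding errors at different stations are
independent; the station counts and trial count are arbitrary):

    import numpy as np

    rng = np.random.default_rng(1)

    for n in (10, 100, 1000, 10000):
        # 20000 trials: n stations, each reporting its temperature
        # rounded to the nearest degree C; error of the n-station mean.
        true = rng.uniform(-10.0, 30.0, size=(20000, n))
        avg_err = (np.round(true) - true).mean(axis=1)
        print(n, avg_err.std(), (1 / np.sqrt(12)) / np.sqrt(n))

The measured spread of the averaged error tracks (1/sqrt(12))/sqrt(N),
so 1000 rounded stations already give better than 0.01 C.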
Reply by ●January 7, 2015
On Thursday, January 8, 2015 10:43:05 AM UTC+13, glen herrmannsfeldt
wrote:
> gyansorova@gmail.com wrote:
(snip)
> If the rounding is consistent, it doesn't matter much.
>
> When you average N numbers, the uncertainty of the average is
> 1/sqrt(N) times the uncertainty of the values you are averaging.
>
> The trend will be both space (around the globe) and time averaged,
> such that it doesn't take all that many points to get a 0.1 C
> uncertainty, most likely a lot better.
>
> Random error you can average out, but systematic error you can't.
>
> -- glen

Thanks, but that doesn't really answer the question. If that were the
case, then there would be no quantization noise at all in an ADC.
Reply by ●January 8, 2015
gyansorova@gmail.com wrote:

(snip, I wrote)
>> If the rounding is consistent, it doesn't matter much.
>> When you average N numbers, the uncertainty of the average is
>> 1/sqrt(N) times the uncertainty of the values you are averaging.
>> Random error you can average out, but systematic error you can't.

> Thanks, but that doesn't really answer the question. If that were
> the case, then there would be no quantization noise at all in an
> ADC.

I don't understand the point. For one, averaging will reduce the
quantization noise, but it doesn't go away. Also, you need to do some
averaging. For periodic signals, you can make multiple measurements
and average them, to reduce noise.

But the original question was a spatial average, which also reduces
noise. Generating a least squares fit also reduces the noise relative
to the uncertainties on individual points.

-- glen
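The least-squares point is the one most directly relevant to the
temperature record: a trend fitted through many rounded readings can
be far more accurate than any single reading. A sketch (the
0.8 C / 135 yr trend, the 2 C weather scatter, and the 1000-station
network are made-up illustration values):

    import numpy as np

    rng = np.random.default_rng(2)

    years = np.arange(1880, 2015)
    true_slope = 0.8 / 135   # deg C per year: 0.8 C over the record
    n_stations = 1000

    # Each station sees the year's anomaly plus local weather scatter,
    # and reports it rounded to the nearest whole degree C.
    truth = true_slope * (years - 1880)
    noise = rng.normal(0.0, 2.0, (years.size, n_stations))
    annual_mean = np.round(truth[:, None] + noise).mean(axis=1)

    slope, _ = np.polyfit(years, annual_mean, 1)
    print("recovered warming: %.2f C (true: 0.80 C)" % (slope * 135))

Even though no individual reading is better than 1 C, the fitted
trend comes out within a few hundredths of a degree of the truth.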
Reply by ●January 8, 2015
On Thursday, January 8, 2015 6:30:33 PM UTC+13, glen herrmannsfeldt
wrote:
(snip)
> I don't understand the point. For one, averaging will reduce the
> quantization noise, but it doesn't go away. Also, you need to do
> some averaging. For periodic signals, you can make multiple
> measurements and average them, to reduce noise.
>
> But the original question was a spatial average, which also reduces
> noise. Generating a least squares fit also reduces the noise
> relative to the uncertainties on individual points.
>
> -- glen

Sure, but once something is digitised you are surely stuck with the
noise; otherwise you could sample with 8 bits and get a 90 dB
signal-to-quantization-noise ratio. (Of course, sigma-delta designs
do in fact do some filtering in the digital domain.) Hence once you
make the decision to round up or round down, you are stuck with that
amount of noise.
Reply by ●January 8, 2015
On Wed, 07 Jan 2015 22:16:30 -0800, gyansorova wrote:

(snip)
> Sure, but once something is digitised you are surely stuck with the
> noise; otherwise you could sample with 8 bits and get a 90 dB
> signal-to-quantization-noise ratio. (Of course, sigma-delta designs
> do in fact do some filtering in the digital domain.) Hence once you
> make the decision to round up or round down, you are stuck with
> that amount of noise.

Nonsense.

I suggest that you avail yourself of a basic text in statistics and
study the effects of taking the average of a large number of samples,
each of which is corrupted by noise with finite variance.

-- 
www.wescottdesign.com
Reply by ●January 8, 2015
gyansorova@gmail.com wrote:

(snip, I wrote)
>> I don't understand the point. For one, averaging will reduce the
>> quantization noise, but it doesn't go away.

(snip)
> Sure, but once something is digitised you are surely stuck with the
> noise; otherwise you could sample with 8 bits and get a 90 dB
> signal-to-quantization-noise ratio.

There are books and whole courses on the subject, usually with names
like "Statistical Treatment of Experimental Data". The important
point is that much of the time you can make more than one measurement
of the same data point. If not, you are stuck.

OK, one example. The half life of the uranium-238 isotope is, from
Wikipedia, 4.468 billion years. One thing you can be sure of is that
no-one timed a single atom for 4.468 billion years. Instead, they
measure a large number of atoms for a much shorter time; apparently
enough atoms to figure out the answer to four significant digits.

There is one problem with this measurement: your sample might have
some U-235 in it, with a 703.8 million year half life. Fortunately it
isn't hard to get a large number of such atoms to do the counts on.

-- glen
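The numbers behind that example are worth a back-of-envelope check
(a sketch; the one-gram sample size is an arbitrary choice):

    import numpy as np

    T_HALF = 4.468e9 * 365.25 * 24 * 3600   # U-238 half-life, seconds
    N_ATOMS = 6.022e23 / 238                # atoms in 1 g of U-238

    decay_const = np.log(2) / T_HALF        # lambda, per second
    print(decay_const * N_ATOMS)            # ~1.2e4 decays/s from 1 g

At roughly 12,000 decays per second from a single gram, a few hours
of counting yields hundreds of millions of events, which is why four
significant digits are reachable without waiting billions of years.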
Reply by ●January 8, 2015
On Wed, 7 Jan 2015 22:16:30 -0800 (PST), gyansorova@gmail.com wrote:

(snip)
> Sure, but once something is digitised you are surely stuck with the
> noise; otherwise you could sample with 8 bits and get a 90 dB
> signal-to-quantization-noise ratio. (Of course, sigma-delta designs
> do in fact do some filtering in the digital domain.) Hence once you
> make the decision to round up or round down, you are stuck with
> that amount of noise.

The trick is to digitize it more than once, and average the readings.
This only works to reduce quantization noise if there is also some
signal noise, such that on repeated readings the signal + noise hits
the quantizer step at a different point. This is the principle behind
dither, where the averaging may take place in the auditory system.
(See "Dither Demonstration" at <http://www.daqarta.com/dw_yydd.htm>.)

The averaging also reduces signal noise, as long as each frame you
average is exactly aligned with the signal. This can be used to
measure neural auditory responses in lab animals, infants, or others
who can't respond otherwise. You present the subject with a short
tone burst of the test frequency, and record (say) 1024 samples of
the response waveform from scalp electrodes. Then you do that again,
add the new 1024 samples to the old, and repeat 1000s of times. Each
auditory brainstem response is totally buried in the noise of the
ongoing neural firings of the rest of the brain, but those aren't in
sync with the auditory response. On the next tone burst, the auditory
response will be the same, but the rest of the background will be
different and will thus average away over many repeats. (Noise goes
down as the square root of the number of frames averaged.)

Using this approach you gain 3 dB of signal-to-noise ratio, half a
bit of effective ADC resolution, for every doubling of frames, so an
8-bit ADC can approach 16-bit resolution (better than 90 dB) with
65,536 frames averaged.

See "Synchronous Waveform Averaging: Magic Bullet For Noise" at
<http://www.daqarta.com/tm01.htm>.

Best regards,

Bob Masta
DAQARTA v7.60
Data AcQuisition And Real-Time Analysis
www.daqarta.com
Scope, Spectrum, Spectrogram, Sound Level Meter
Frequency Counter, Pitch Track, Pitch-to-MIDI
FREE Signal Generator, DaqMusiq generator
Science with your sound card!
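A minimal sketch of the dither effect described above (the 0.3-LSB
signal level and 1-LSB noise amplitude are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(3)

    true_value = 0.3      # a constant signal, 0.3 LSB above zero
    n_frames = 100000

    # Without noise, every reading quantizes to the same code, and no
    # amount of averaging recovers the missing 0.3 LSB.
    print(np.round(np.full(n_frames, true_value)).mean())   # 0.0

    # With ~1 LSB of noise ahead of the quantizer, readings straddle
    # the step, and the average converges on the true value.
    print(np.round(true_value + rng.normal(0.0, 1.0, n_frames)).mean())

The second mean comes out near 0.3, with its error shrinking as the
square root of the frame count.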
Reply by ●January 8, 2015
On Wed, 7 Jan 2015 22:16:30 -0800 (PST), gyansorova@gmail.com wrote:

(snip)
> Sure, but once something is digitised you are surely stuck with the
> noise; otherwise you could sample with 8 bits and get a 90 dB
> signal-to-quantization-noise ratio.

If that were true then it wouldn't be possible to resolve small
signals by exploiting processing gain.

Fortunately, it is possible to resolve small signals and signal
features by exploiting processing gain, so there are plenty of
example proofs that your assertion is not correct.

Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com
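One concrete form of processing gain: an N-point FFT concentrates a
tone into a single bin while spreading white noise across N/2 bins,
lifting a tone that is invisible in the time domain well clear of the
per-bin noise floor. A sketch (tone level, bin number, and FFT size
are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(4)

    n = 65536
    t = np.arange(n)
    # A tone about 23 dB below the noise power in the time domain.
    x = 0.1 * np.sin(2 * np.pi * 1000 * t / n) + rng.normal(0.0, 1.0, n)

    spec = np.abs(np.fft.rfft(x)) ** 2
    peak = np.argmax(spec[1:]) + 1          # skip the DC bin
    print(peak)                             # 1000: the tone is found
    print(10 * np.log10(spec[peak] / np.median(spec)))  # ~20+ dB clear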
Reply by ●January 9, 2015
gyansorova@gmail.com writes:

(snip)
> Sure, but once something is digitised you are surely stuck with the
> noise; otherwise you could sample with 8 bits and get a 90 dB
> signal-to-quantization-noise ratio.

It is true that you are stuck with the noise in the entire Fs/2
bandwidth. However, if the signal spectrum is not wideband, then you
can filter (lowpass, bandpass, etc.) and remove some of the noise
while keeping all (or most) of the signal. This goes for noise in the
original signal (e.g., measurement error, if you assume errors are
uncorrelated) as well as quantization noise. This is essentially what
averaging does.

-- 
Randy Yates
Digital Signal Labs
http://www.digitalsignallabs.com
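For instance, a plain moving-average lowpass filter applied to a
coarsely quantized, slowly varying signal removes most of the
quantization noise lying above the signal band (a sketch; the signal
shape, dither level, and filter length are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(5)

    n = 10000
    t = np.arange(n)
    signal = 5.0 * np.sin(2 * np.pi * 3 * t / n)   # slow, narrowband

    # Quantize to whole units, with a little noise acting as dither.
    quantized = np.round(signal + rng.normal(0.0, 0.5, n))

    # 101-tap moving average: passes the slow signal, rejects the
    # wideband quantization + dither noise.
    smoothed = np.convolve(quantized, np.ones(101) / 101, mode="same")

    def rms(e):
        return np.sqrt(np.mean(e ** 2))

    print(rms(quantized - signal))   # ~0.58 before filtering
    print(rms(smoothed - signal))    # roughly 10x smaller after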






