
Sampling: What Nyquist Didn't Say, and What to Do About It

Started by Tim Wescott December 20, 2010
On 12/20/2010 10:30 AM, Les Cargill wrote:
> John Larkin wrote: >> On Mon, 20 Dec 2010 12:03:37 -0500, Randy Yates<yates@ieee.org> >> wrote: >> >>> On 12/20/2010 11:46 AM, John Larkin wrote: >>>> [...] >>>> I sold a couple hundred thousand channels of an AC power meter, used >>>> for utility end-use surveys, that sampled the power line voltage and >>>> current signals at 27 Hz. I had a hell of a time arguing with >>>> "Nyquist" theorists who claimed I should be sampling at twice the >>>> frequency of the highest line harmonic, like the 15th maybe. >>> >>> John, >>> >>> If your AC signal had more than 13.5 Hz of bandwidth, how were you >>> able to accurately sample them at 27 Hz? As far as I know, even >>> subsampling assumes the _bandwidth_ is less than half the sample rate >>> (for real sampling). >> >> Read Tim's paper! >> >> The thing about an electric meter is that you're not trying to >> reconstruct the waveform, you're only gathering statistics on it. The >> 27.xxx Hz sample rate was chosen so that its harmonics would dance >> between the line harmonics up to some highish harmonic of 60 Hz, so as >> to not create any slow-wobble aliases in the reported values (trms >> volts, amps, power, PF) that would uglify the local realtime display >> or the archived time-series records. >> > > Is this something like heterodyning, then? You're building a detector, > not a ... recorder. Right?
Pretty much -- read my paper!

You're taking advantage of the fact that the signal you're acquiring is very cyclic in character. So (for instance), instead of taking samples every 1/600 seconds, you could take samples every 1/60 + 1/600 seconds, and get the _effect_ of taking samples faster.

John chose a frequency that would let him get decent statistics faster and more reliably, but he's just building on the basic idea that I present.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Do you need to implement control loops in software?
"Applied Control Theory for Embedded Systems" was written for you.
See details at http://www.wescottdesign.com/actfes/actfes.html
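A quick numerical sketch of the idea Tim describes above (not from his paper; the test waveform and rates are illustrative): sampling a repetitive 60 Hz signal every 1/60 + 1/600 s lands each new sample 1/600 s further into the cycle, so ten of those slow samples recover the same ten-points-per-cycle picture that sampling at 600 Hz would give.

# Sketch of equivalent-time sampling: a 60 Hz repetitive waveform sampled
# every 1/60 + 1/600 s yields the same per-cycle sample set as sampling
# at 600 Hz, just spread over ten real cycles.  (Illustrative values only.)
import numpy as np

F = 60.0                        # repetition rate of the waveform, Hz
def waveform(t):                # any strictly periodic test signal will do
    return np.sin(2*np.pi*F*t) + 0.3*np.sin(2*np.pi*3*F*t + 0.5)

# "Fast" reference: 10 samples within one cycle (600 Hz).
t_fast = np.arange(10) / 600.0

# Equivalent-time: one sample per cycle-plus-a-bit, 1/60 + 1/600 s apart.
Ts = 1/60.0 + 1/600.0
t_slow = np.arange(10) * Ts

# Fold the slow-sample times back into a single cycle and sort them.
phase = np.sort(np.mod(t_slow, 1/F))

print(np.allclose(np.sort(np.mod(t_fast, 1/F)), phase))                    # True
print(np.allclose(np.sort(waveform(t_fast)), np.sort(waveform(t_slow))))   # True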
On 12/20/2010 10:00 AM, Randy Yates wrote:
> On 12/20/2010 02:34 AM, Tim Wescott wrote: >> [...] >> http://www.wescottdesign.com/articles/Sampling/sampling.pdf > > Tim, > > First let me say that overall the new paper looks really great! I > am pleased that you've chosen to utilize (La)TeX - it has served > me well for over two decades. There are a few rough edges such > as bitmapped fonts (which aren't necessary) and errors in spacing, > but I'm sure you'll get those worked out. > > What does concern me, however, is some of the theory you've presented. > Specifically, this section on p.11: > > Sampling at some frequency that is equal to the repetition rate > divided by a prime number will automatically stack these narrow bits > of signal spectrum right up in the same order that they were in the > original signal, only jammed much closer together in frequency which > is the roundabout frequency-domain way of saying that you can sample > at just the right rate, and interpret the resulting signal as a > slowed-down replica of the input waveform. > > There are two points in which I challenge the veracity of your assertions: > > 1. Sampling at a rate of F/N when N is integer will never help > subsample a signal since the period of the sampling, N/F, is always a > multiple of the repetition rate period 1/F. > > 2. It seems that to truely, completely sample a repetitive signal in > such a way, you would need a sampling period that will never be a > multiple of the repetition period. For example, for the 60 Hz example > you could use a sample rate of 60 Hz / sqrt(2). But then, even if you > sample at such a rate, it would take an INFINITE amount of time to > fully sample this signal. It's equivalent to sampling an interval on > the real line a point at a time; real analysis tells us that there are > an uncountably infinite number of points in such an interval! > > So, I'm afraid I cannot agree that an accurate sampling of a repetitive > waveform can be made in this manner. If you disagree, please show me > where my reasoning is wrong.
0: thanks for the kind words. I wrote my Master's thesis in LaTeX, and have been living in a continual state of disappointment since. I'm actually using Lyx, because I'm lazy, but it's still LaTeX underneath.

1 & 2: I felt that my arguments were not well stated in the paper. Since I have to re-post it _anyway_, I'll spend a bit of time with the math.

I just replied to another post, and in the process realized, tentatively, a relationship: if you have a cycle interval T = 1/F and you want to capture N samples of a cycle, then sampling at Ts = (M + P/N) * T will do the job as long as M, N and P are integers, and P and N are relatively prime and both non-zero. Reordering things for P != 1 is a challenge, but not impossible.

Whether I'm right and didn't argue my case well, or I'm just wrong, I need to change things there.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Do you need to implement control loops in software?
"Applied Control Theory for Embedded Systems" was written for you.
See details at http://www.wescottdesign.com/actfes/actfes.html
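A quick check of the relationship Tim states above (mine, not the paper's; the particular M, N, P values are arbitrary): with Ts = (M + P/N)*T, the fractional position of sample n within the cycle is (n*P mod N)/N, so N consecutive samples hit N distinct, evenly spaced phases exactly when P and N are relatively prime.

# Check: sampling a T-periodic signal at Ts = (M + P/N)*T visits N distinct,
# evenly spaced phases in N samples iff gcd(P, N) == 1.  Arbitrary example values.
from fractions import Fraction
from math import gcd

def phases(M, P, N):
    """Fractional positions (in units of T) of N consecutive samples within the cycle."""
    Ts_over_T = Fraction(M) + Fraction(P, N)
    return [(n * Ts_over_T) % 1 for n in range(N)]

M, N = 2, 256
for P in (57, 64):                      # gcd(57, 256) = 1, gcd(64, 256) = 64
    distinct = len(set(phases(M, P, N)))
    print(f"P={P}: {distinct} distinct phases out of {N} "
          f"(coprime: {gcd(P, N) == 1})")
# P=57: 256 distinct phases out of 256 (coprime: True)
# P=64: 4 distinct phases out of 256 (coprime: False)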
Hi Grant,

On 12/20/2010 9:57 AM, Grant Edwards wrote:
> On 2010-12-20, Cesar Rabak<csrabak@bol.com.br> wrote: > >> I gave a diagonal look at the paper, as I got curious about the
[Cesar: what is the intent of "diagonal look"? Sorry, I can't fathom what you mean, here :< Too early in the day...]
>> complaints on the font. They look OK to me :-) I'm used to read math's >> articles written in CMR fonts so perhaps I'm not a good judge on this. > > I don't see anything at all wrong with the font. The one thing that I > would change is the line length. It looks like a typical line is > upwards of 110 characters. That's a bit too much to read comfortably. > If you want to use a font that small, and don't want wide margins, I'd > recommend going to a two-column format.
<frown> I agree with your point re: line length. But, I adopted a two column format (3/8" gutter) years ago when I started my "notes" series. It *really* complicates layout.

You end up having to create lots of "page width" boxes anchored to your text. This ends up breaking up the text columns A LOT. Especially if you have lots of illustrations, tables, etc. For example, putting code snippets in-line constrains the length of each code line severely (unless you go to the page wide boxes). <shrug>

So far, I've not had to resort to "rotated pages" but that's only because I've been aggressive at keeping tables, illustrations, etc. tightly bound. :-/
In comp.dsp Randy Yates <yates@ieee.org> wrote:
 
(snip)

> What does concern me, however, is some of the theory you've presented. > Specifically, this section on p.11:
> Sampling at some frequency that is equal to the repetition rate > divided by a prime number will automatically stack these narrow bits > of signal spectrum right up in the same order that they were in the > original signal, only jammed much closer together in frequency which > is the roundabout frequency-domain way of saying that you can sample > at just the right rate, and interpret the resulting signal as a > slowed-down replica of the input waveform.
> There are two points in which I challenge the veracity of your assertions:
> 1. Sampling at a rate of F/N when N is integer will never help > subsample a signal since the period of the sampling, N/F, is always a > multiple of the repetition rate period 1/F.
One has to choose carefully.
> 2. It seems that to truely, completely sample a repetitive signal in > such a way, you would need a sampling period that will never be a > multiple of the repetition period. For example, for the 60 Hz example > you could use a sample rate of 60 Hz / sqrt(2). But then, even if you > sample at such a rate, it would take an INFINITE amount of time to > fully sample this signal. It's equivalent to sampling an interval on > the real line a point at a time; real analysis tells us that there are > an uncountably infinite number of points in such an interval!
That would be true for signals with infinite bandwidth. At least for the AC power meter, you won't have that. Harmonics from SCR (or triac) based light dimmers likely get into the MHz range, so one should be able to see that far. The usual computer power supply is a voltage doubler off the AC line, which shouldn't be as bad as the SCR, but still has significant harmonics.

But as was said previously, the goal is not to sample the 60 Hz waveform itself but, to borrow the term used for modulated signals, its envelope.
> So, I'm afraid I cannot agree that an accurate sampling of a repetitive > waveform can be made in this manner. If you disagree, please show me > where my reasoning is wrong.
If one samples 60 Hz power usage at 60 Hz, one would lose much important information. At 27 Hz, where do the aliases end up?

60 Hz --> 6 Hz
120 Hz --> 12 Hz
180 Hz --> -9 Hz
240 Hz --> -3 Hz
300 Hz --> 3 Hz
360 Hz --> 9 Hz
420 Hz --> -12 Hz
480 Hz --> -6 Hz
540 Hz --> 0 Hz

It seems that you don't want exactly 27 Hz; maybe that is what he said previously. What you want to measure, though, is the RMS power over some period of time, taking into account the significant harmonics.

Now, say you have a signal with harmonics up to a few MHz, and say, for example, that one of those aliases to 0 Hz, so you don't see it. How much of a problem is that? If you have all of the other floor(1000000/60) harmonics up to that point, then you are likely pretty close. Floor(1000000/60) is 16666, so you would sample at 1000000/16666, for a sampling rate of 60.0024... Hz. If you want something near 27 Hz that doesn't have harmonics that are multiples of 60 until 1000020, then it looks like 27.000027 Hz is about right.

It seems to me that you pick the harmonic that you can afford not to see, and plan the sampling rate accordingly. However, as that is getting close to crystal tolerance, I might suggest that phase locking to a multiple of 60 Hz, and then dividing down, would be a good way to generate the sampling clock.

-- glen
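Glen's alias table is easy to reproduce; a small sketch (mine, not his) that folds each 60 Hz harmonic into the first Nyquist zone of a 27 Hz sample rate gives the same values:

# Fold harmonics of 60 Hz into the [-fs/2, +fs/2) band for a 27 Hz sample rate,
# reproducing the alias table above.
fs = 27.0
for k in range(1, 10):
    f = 60.0 * k
    alias = (f + fs/2) % fs - fs/2      # nearest alias, folded into [-fs/2, +fs/2)
    print(f"{f:5.0f} Hz --> {alias:+5.1f} Hz")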
Hi David,

On 12/20/2010 3:19 AM, David Brown wrote:
> On 20/12/10 10:16, D Yuniskis wrote: >> For example, when I include detailed photos, I deliberately >> chose high enough resolutions that allow the user (reader) >> to "zoom" to examine high levels of detail without >> the image being rendered with jaggies, etc. > > That's a good plan, and something people often forget about - the result > being documents that look good on-screen, but poor in printout. In this > particular case, however, it seems the graphics are in a vector format > (pdf files support eps), which is the best choice for drawings.
Actually, I've found the opposite to be the case, more often than not. Printers are pretty much what they are. OTOH, on screen, you can zoom into an image to arbitrary levels to see greater detail. With images resampled at lower resolutions, you quickly end up with jaggies that wouldn't have been obvious to the unaided eye in paper form. But very high resolution photographs quickly eat up lots of bytes, so you have to strike a balance somewhere.
>> Also, note that cropping an image in the PDF doesn't discard >> the "invisible" portion of the image. This can be embarassing >> if you think you've hidden (not included) a portion of the >> image that isn't "visible" :> > > This is seldom an issue with pdf files (though it be, depending on the > tools used to create it) - it is commonly found in MS Word files. But > Tim has used pdfLaTeX - the pdf file contains exactly what he wants it > to contain.
Dunno, I don't use Word or pdfLaTeX. I use FrameMaker for all my DTP as it's "quickest" to merge sources into a presentable form (and ~20 years of experience with it has a significant bit of inertia).

One typical technique I use is to include a photo of <something>. Then, create another "window" (not in the GUI sense) overlapping the original photo's "window". In this smaller, overlapping window, I paste yet another copy of the photo -- but zoomed to much higher magnification. Then, pan that image to the part of the underlying photo that is "of detailed interest". I.e., I end up with a "closeup" of some portion of the basic photo to which I want to draw attention. It's more economical on real estate than a separate "closeup photo" would be. And, it gives viewers of the print edition the detail that would otherwise only be visible "on screen" in an interactive environment.

Since FrameMaker writes PS, it relies on PS's innate abilities to do this cropping on its behalf. As a result, you end up with the whole image *in* the document, layered *under* a viewport built in PS. :-/

My point was to understand what your tool is doing to your "input"/data so that you aren't "leaking" anything that you don't want to leak (nor adding to the size of the resulting file, needlessly).

Next, I want to try embedding audio in some documents (e.g., it would be far more informative for folks to *hear* certain phonetic sounds than to *see* visual symbols thereof).
On 20/12/10 19:49, John Devereux wrote:
> David Brown<david.brown@removethis.hesbynett.no> writes: > >> On 20/12/10 13:47, John Devereux wrote: >>> Jan Panteltje<pNaonStpealmtje@yahoo.com> writes: >>> >>>> On a sunny day (Mon, 20 Dec 2010 01:40:32 -0800) it happened Robert Baer >>>> <robertbaer@localnet.com> wrote in >>>> <UO-dnUZyBpEGuZLQnZ2dnUVZ_gidnZ2d@posted.localnet>: >>>> >>>>> Mikolaj wrote: >>>>>> Dnia 20-12-2010 o 08:34:44 Tim Wescott<tim@seemywebsite.com> napisa&#322;(a): >>>>>> >>>>>>> I know there's a few people out there who actually read the papers that I >>>>>>> post on my web site. >>>>>>> >>>>>>> I also know that the papers have gotten a bit ragged, and that I haven't >>>>>>> been maintaining them. >>>>>>> >>>>>>> So here: I've made a start. >>>>>>> >>>>>>> http://www.wescottdesign.com/articles/Sampling/sampling.pdf >>>>>>> >>>>>>> My intent (with apologies to all of you with dial-up), is to convert the >>>>>>> ratty HTML documents to pdf as time permits, and in a way that leaves the >>>>>>> documents easily maintainable and in a form that is easy to look at from >>>>>>> the web or to print out, as you desire. >>>>>>> >>>>>> >>>>>> My first thought was that fonts look a little bit to thin and bright. >>>>>> I use AcrobatReader 9.4.1, preferences/rendering: LCD,all options checked. >>>>>> >>>>> I agree, the font makes it very difficult to read, and is not >>>>> conducive to enhancing reading over a long term, namely longer than one >>>>> page.. >>>> >>>> I think the fonts look great, watching full screen on a 1680x1050 LCD with >>>> xpdf in Linux. >>>> wget http://www.wescottdesign.com/articles/Sampling/sampling.pdf >>>> xpdf sampling.pdf >>> >>> No, they do look a bit "bitmapped" I'm afraid. I am also using xpdf in >>> linux. A minor detail though, still quite readable IMO. >>> >> >> Getting /almost/ on-topic again, the issue is, I think, that xpdf >> doesn't do anti-aliasing very well and so the fonts look a bit poor at >> low resolution. Evince does better. But in general, CMR fonts are >> better on high-resolution devices - they were designed for use on >> laser printers, not to look nice on screens. > > You're right. Acrobat does better still. I guess I'm not used to this > since I don't see many bitmapped fonts. (Even with xpdf it is not at all > "terrible" by the way, and thanks Tim for posting it). >
CMR fonts are not actually bitmapped fonts, but they are by the time they end up in the pdf file. They are metafont fonts, described by a metafont program. But the pdf format does not support metafont fonts - so pdfLaTeX uses a bitmapped CMR font built for something like a 300 dpi laser printer, and this is not optimal for screen usage.

When used as intended - using dvi files on a system with the metafont sources and metafont program available - metafont fonts have much more flexibility than truetype, postscript or type 1 fonts, and will give you results that are fine-tuned to the exact printer you are using. But that information is lost with pdf files.

The easiest way to improve the pdfs generated by pdfLaTeX is to add some usepackage lines:

\usepackage{times}
\usepackage{mathpazo}
\usepackage{courier}
\usepackage{helvet}

This will result in the common fonts Times, Helvetica (Arial), and Courier being used as the serif, sans serif and typewriter fonts, which work well on all systems. Of course, you still get the better font handling of LaTeX - things like kerning and ligatures work as you would want. And it's always possible to use any one of a gazillion other font packages that are common in TeX installations - or to build the required metric files from any other fonts you might have.
On Mon, 20 Dec 2010 13:30:00 -0500, Les Cargill
<lcargill99@comcast.net> wrote:

>John Larkin wrote: >> On Mon, 20 Dec 2010 12:03:37 -0500, Randy Yates<yates@ieee.org> >> wrote: >> >>> On 12/20/2010 11:46 AM, John Larkin wrote: >>>> [...] >>>> I sold a couple hundred thousand channels of an AC power meter, used >>>> for utility end-use surveys, that sampled the power line voltage and >>>> current signals at 27 Hz. I had a hell of a time arguing with >>>> "Nyquist" theorists who claimed I should be sampling at twice the >>>> frequency of the highest line harmonic, like the 15th maybe. >>> >>> John, >>> >>> If your AC signal had more than 13.5 Hz of bandwidth, how were you >>> able to accurately sample them at 27 Hz? As far as I know, even >>> subsampling assumes the _bandwidth_ is less than half the sample rate >>> (for real sampling). >> >> Read Tim's paper! >> >> The thing about an electric meter is that you're not trying to >> reconstruct the waveform, you're only gathering statistics on it. The >> 27.xxx Hz sample rate was chosen so that its harmonics would dance >> between the line harmonics up to some highish harmonic of 60 Hz, so as >> to not create any slow-wobble aliases in the reported values (trms >> volts, amps, power, PF) that would uglify the local realtime display >> or the archived time-series records. >> > >Is this something like heterodyning, then? You're building a detector, >not a ... recorder. Right?
It records rms volts, amps, power, but doesn't try to reconstruct the raw waveforms; so the Sampling Theorem doesn't apply. That didn't stop all sorts of people from arguing that the sample rate had to be twice that of the highest reasonable AC line harmonic. As Tim says, lots of people fling "Nyquist Rate" around without really thinking about it.

If the voltage waveform is a sine wave (which it pretty much is) then the current harmonics carry no real power anyhow.

John
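A small numerical illustration of John's last point (mine, not his; the waveform numbers are made up): with a purely sinusoidal voltage, the harmonic content of the current contributes nothing to the average (real) power, since cross-products of different harmonics average to zero over a full cycle.

# With a pure sine voltage, only the fundamental component of the current
# delivers real power; the current harmonics average out.  Illustrative numbers.
import numpy as np

f0 = 60.0
t = np.linspace(0, 1/f0, 100000, endpoint=False)     # one full line cycle

v = 170.0 * np.sin(2*np.pi*f0*t)                     # sinusoidal line voltage
i_fund = 10.0 * np.sin(2*np.pi*f0*t - 0.3)           # fundamental current, lagging
i_harm = 4.0*np.sin(2*np.pi*3*f0*t) + 2.0*np.sin(2*np.pi*5*f0*t + 1.0)
i = i_fund + i_harm                                  # "ratty" total current

p_total = np.mean(v * i)             # average power with the full ratty current
p_fund  = np.mean(v * i_fund)        # average power from the fundamental alone
print(p_total, p_fund)               # the two agree: the harmonics add nothing
print(np.isclose(p_total, p_fund))   # True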
On 20/12/10 17:57, Grant Edwards wrote:
> On 2010-12-20, Cesar Rabak<csrabak@bol.com.br> wrote: >> Em 20/12/2010 05:34, Tim Wescott escreveu: >>> I know there's a few people out there who actually read the papers that I >>> post on my web site. >>> >>> I also know that the papers have gotten a bit ragged, and that I haven't >>> been maintaining them. >>> >>> So here: I've made a start. >>> >>> http://www.wescottdesign.com/articles/Sampling/sampling.pdf >>> >>> My intent (with apologies to all of you with dial-up), is to convert the >>> ratty HTML documents to pdf as time permits, and in a way that leaves the >>> documents easily maintainable and in a form that is easy to look at from >>> the web or to print out, as you desire. >> >> I gave a diagonal look at the paper, as I got curious about the >> complaints on the font. They look OK to me :-) I'm used to read math's >> articles written in CMR fonts so perhaps I'm not a good judge on this. > > I don't see anything at all wrong with the font. The one thing that I > would change is the line length. It looks like a typical line is > upwards of 110 characters. That's a bit too much to read comfortably. > If you want to use a font that small, and don't want wide margins, I'd > recommend going to a two-column format. >
I agree that the line width is /slightly/ too wide for comfort, but I'd avoid two-column format unless I were trying to save the last few trees on the planet. For papers with maths and figures, it makes things a lot more complicated, and it's always a pain to read if your screen is not large enough to fit a whole page comfortably on-screen. It's better to add a bit wider margins - they can be useful for notes or for binding on a printed version.

If we are going to nit-pick on the typography (which seems a little unfair, given that it is vastly better than in most papers), I'd like to see a little more vertical space before the footnote delimiter line. There are occasional mistakes in the spacing (such as an extra space after "2f_0" in line three of page 1, and occasionally after figure references). I prefer not to have spaces around an em dash, but that's perhaps just a personal preference. It is considered poor style to start a line with a number, such as on page 3. I also think a small space before a unit (such as "100\,kHz") looks nice.

It is clearer if you consider your title page as page 1 - it makes the document page number and the pdf page number consistent. Try the "varioref" package for generating references - then you avoid things like "Figure 6 on page 7" appearing on page 7. It can be useful to have a table of contents in the pdf file, though I'm not sure how to generate one without including one in the document itself.

I still haven't got round to reading the document itself - I hope the contents are worth the effort in the presentation!
On Mon, 20 Dec 2010 10:59:14 -0800, Tim Wescott <tim@seemywebsite.com>
wrote:

>On 12/20/2010 10:30 AM, Les Cargill wrote: >> John Larkin wrote: >>> On Mon, 20 Dec 2010 12:03:37 -0500, Randy Yates<yates@ieee.org> >>> wrote: >>> >>>> On 12/20/2010 11:46 AM, John Larkin wrote: >>>>> [...] >>>>> I sold a couple hundred thousand channels of an AC power meter, used >>>>> for utility end-use surveys, that sampled the power line voltage and >>>>> current signals at 27 Hz. I had a hell of a time arguing with >>>>> "Nyquist" theorists who claimed I should be sampling at twice the >>>>> frequency of the highest line harmonic, like the 15th maybe. >>>> >>>> John, >>>> >>>> If your AC signal had more than 13.5 Hz of bandwidth, how were you >>>> able to accurately sample them at 27 Hz? As far as I know, even >>>> subsampling assumes the _bandwidth_ is less than half the sample rate >>>> (for real sampling). >>> >>> Read Tim's paper! >>> >>> The thing about an electric meter is that you're not trying to >>> reconstruct the waveform, you're only gathering statistics on it. The >>> 27.xxx Hz sample rate was chosen so that its harmonics would dance >>> between the line harmonics up to some highish harmonic of 60 Hz, so as >>> to not create any slow-wobble aliases in the reported values (trms >>> volts, amps, power, PF) that would uglify the local realtime display >>> or the archived time-series records. >>> >> >> Is this something like heterodyning, then? You're building a detector, >> not a ... recorder. Right? > >Pretty much -- read my paper! > >You're taking advantage of the fact that the signal you're acquiring is >very cyclic in character. So (for instance), instead of taking samples >every 1/600 seconds, you could take samples every 1/60 + 1/600 seconds, >and get the _effect_ of taking samples faster. > >John chose a frequency that would let him get decent statistics faster >and more reliably, but he's just building on the basic idea that I present.
I thought about sampling close to 60 Hz. I could have taken a block of 256 samples at, say, 60+1/256 Hz, and walked the whole sine wave in a few seconds at equivalent steps of 1.406 degrees. But that had ugly side effects for sampled harmonics, specifically reporting the RMS value of ratty current waveforms. And I didn't have enough compute power anyhow.

So I sampled at 26.9947 Hz, which is 800.156 degrees at 60 Hz, which still gives 256 evenly-spaced samples, but the harmonic aliasing behavior is entirely different. Messy stuff.

John
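For what it's worth, John's numbers check out if the exact rate is read as 60*256/569 Hz (my interpretation of the rounded 26.9947 figure, not necessarily the actual design value): each sample advances 569/256 cycles of the 60 Hz wave, about 800.16 degrees, and since 569 and 256 are coprime, 256 consecutive samples land on 256 distinct, evenly spaced phases.

# Check the 26.9947 Hz numbers, assuming the exact rate was 60*256/569 Hz.
from fractions import Fraction
from math import gcd

step_cycles = Fraction(569, 256)            # cycles of 60 Hz advanced per sample
fs = 60 / float(step_cycles)                # ~26.9947 Hz
deg_per_sample = 360 * float(step_cycles)   # ~800.156 degrees

phase_idx = {(n * 569) % 256 for n in range(256)}   # which of 256 phases each sample hits
print(f"fs = {fs:.4f} Hz, step = {deg_per_sample:.3f} deg")
print(len(phase_idx) == 256, gcd(569, 256) == 1)    # True True: all 256 phases covered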
Op 20-Dec-10 17:12, Tim Wescott schreef:
> On 12/20/2010 01:40 AM, Robert Baer wrote: >> Mikolaj wrote: >>> Dnia 20-12-2010 o 08:34:44 Tim Wescott <tim@seemywebsite.com> >>> napisa&#4294967295;(a): >>> >>>> I know there's a few people out there who actually read the papers >>>> that I >>>> post on my web site. >>>> >>>> I also know that the papers have gotten a bit ragged, and that I >>>> haven't >>>> been maintaining them. >>>> >>>> So here: I've made a start. >>>> >>>> http://www.wescottdesign.com/articles/Sampling/sampling.pdf >>>> >>>> My intent (with apologies to all of you with dial-up), is to convert >>>> the >>>> ratty HTML documents to pdf as time permits, and in a way that leaves >>>> the >>>> documents easily maintainable and in a form that is easy to look at >>>> from >>>> the web or to print out, as you desire. >>>> >>> >>> My first thought was that fonts look a little bit to thin and bright. >>> I use AcrobatReader 9.4.1, preferences/rendering: LCD,all options >>> checked. >>> >> I agree, the font makes it very difficult to read, and is not conducive >> to enhancing reading over a long term, namely longer than one page.. > > What reader are you using? I'm getting a two-valued distribution here: > "looks great!", and "looks nasty!". If it's a reader issue -- > particularly if you're using Adobe -- then I'd like to test on the 'bad' > reader.
I agree with the people who don't like the font, though I wouldn't go as far as "looks nasty". On a 1920x1200 screen with Acrobat Reader the text isn't as comfortable to read as in most PDFs. When viewing it with pages side-by-side (as I do with most documents) or at 100%, the fonts are too thin/light. When I zoom in, the fonts do indeed look bitmapped; the jaggies get worse as the zoom increases. The fonts used in the graphs look perfectly fine though, even when zoomed in.