Fundamental frequency -- limited resources
Started by ●August 18, 2004

Hi --

Hope this is being posted to the appropriate group(s) -- I didn't want to carpet-bomb all the NG's, but I figured these 2 groups would be the most helpful.

After banging my head into a wall for many days on a particular problem, I realized I might be able to benefit from the wisdom of these newsgroups.

I have inherited a very resource-constrained embedded system, primarily performing data acquisition. Memory is the biggest constraint; processor horsepower is not as big a constraint.

Anyway, suppose I have a system sampling (via ADC) at 10,000 samples/sec. Suppose several times per second I get an "event", which is a rectified voltage spike. For the sake of this example, let's suppose I get an event every 10 ms, i.e. every 100 samples. Thus every 100th sample would be non-zero and the rest would be zero. (Let's assume no noise, each event is one sample in width, etc.)

So if I had an array of 50,000 elements, representing 5 seconds of data, most of the samples would read "0", and about 500 elements would read non-zero where an event occurred. So far, so good....

Now here is the issue: there isn't memory to store 50k, 100k, etc. samples. This is a small 16-bit processor board designed years ago with limited RAM, not the common 32-bit processor running with 128M of RAM like so many "embedded" systems today.

Also, to make things "worse", sometimes an event is missed, and sometimes a "ghost event" shows up half-way between real events.

So what I am saying is this: instead of an event every 100 samples, sometimes I will get an additional event at sample 150, 550, 750, etc., and sometimes we will get an event at 600 and 800, but not 700. These "event glitches" are due to shortcomings in the system hardware and cannot be fixed.

The system is used to measure the linear speed of a moving system; the closer the events are to each other (i.e. the smaller the period), the faster the machinery is moving. The resource-constrained system must frequently estimate the speed of the system by measuring the time between events.

In a perfect world, with no missed events and no false events, we could just measure the period between the last 2 events, and that would be the best (most recent) estimate of the system speed.

In an almost-perfect world (i.e. non-real-time, post-processing on a PC, etc.), an FFT could be used on a huge array of time-domain samples, and the fundamental frequency of 100 would come through in spite of the "ghost events" and missing events.

But this system has limited memory, and it interrupts every 500 samples or so, and the driver stores indices of non-zero events. In other words, over 1 second (10k samples), if events are happening every 100 samples, the driver would store the values 100, 200, 300, ... 9900, 10,000 in a small array.

The reality is that the array will actually hold values like ...4200, 4250, 4300, 4400, 4600, 4650, 4700... due to false & duplicate events.

I have tried things like a histogram (find the "mode" of the period) and a median filter, but they are all susceptible to problems.... I keep thinking, "if I could just do an FFT and take the dominant freq, I'd be set!"

So the $64,000 question is this: is there some way to find the "fundamental frequency" of this data (which would be 100 in this case), assuming I don't have the memory to save (or re-create) a huge time-domain array of samples and perform an FFT?
(Humility note: When I took this little project over, I thought, "no problem -- visually, I can easily see what the period is; this will be a piece of cake in firmware....")
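(For anyone who wants to experiment, here is a quick host-side C harness that generates an event-index stream like the one described above. The ghost and dropout rates are made up for illustration only, and the names are not from the real system.)

#include <stdio.h>
#include <stdlib.h>

#define NOMINAL_PERIOD 100   /* samples between true events */
#define N_EVENTS       100   /* true events to simulate     */

int main(void)
{
    unsigned idx = 0;
    int i;

    for (i = 0; i < N_EVENTS; i++) {
        idx += NOMINAL_PERIOD;
        if (rand() % 10 == 0)           /* ~10% of events missed      */
            continue;
        printf("%u\n", idx);            /* the real event index       */
        if (rand() % 10 == 0)           /* ~10% ghosts, half-way along */
            printf("%u\n", idx + NOMINAL_PERIOD / 2);
    }
    return 0;
}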
Reply by ●August 18, 2004
Short question: why is a median filter not good enough? These filters are insensitive to glitches. It just takes the middle value of your sorted array. Or in your case, sort the periods, and take the middle one as reference.

gl.

Rob.

"Email Unread" <aa99aa@verizonmail.com> wrote in message news:81c69dd.0408180350.2c79beea@posting.google.com...
> Hi --
> - snip -
Reply by ●August 18, 2004
I'd just maintain a running average of inter-event times over the last N intervals, but discard from this history/average any intervals too far outside the current average (e.g. 40%+ lower, or 40%+ higher). This will reject those 50%-too-low and 100%-too-high values while allowing for genuine instantaneous speed changes of up to +/-40%, which presumably is more than adequate. Choose the N as a trade-off between quick reaction to speed changes and smoothness of reported speed.

If you don't want the limitation of that "never more than +/-40% legitimate change", then maintain two running averages: one as above (the measured speed), and one of ALL last M intervals (i.e. without rejecting any), and make the event-reject decision based on +/-40% from this 2nd average. Choose the M as sufficiently large for the average to be close to the true value despite glitches, but no larger, so as to introduce minimal lag in measured speed when very large (> +/-40%) instantaneous speed changes occur.

Ben

Email Unread wrote:
> Hi --
> - snip -
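A minimal sketch of the single-average variant, assuming intervals measured in samples; the history length and all names are illustrative, not from the original system:

#define N 8   /* history length: trade reaction speed against smoothness */

static unsigned long history[N];   /* accepted inter-event intervals */
static unsigned long sum;          /* running sum of accepted values */
static int head, count;

/* Feed one measured interval; returns the current average interval. */
unsigned long update_average(unsigned long interval)
{
    if (count > 0) {
        unsigned long avg = sum / count;
        /* Reject intervals more than ~40% from the running average:
           this catches ghosts (~50% low) and dropouts (~100% high). */
        if (interval < avg - (avg * 2) / 5 ||
            interval > avg + (avg * 2) / 5)
            return avg;             /* glitch: discard, keep old average */
    }
    if (count == N)
        sum -= history[head];       /* drop the oldest interval */
    else
        count++;
    history[head] = interval;
    sum += interval;
    head = (head + 1) % N;
    return sum / count;
}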
Reply by ●August 18, 2004
Email Unread wrote:
> So if I had an array of 50,000 elements, which would represent 5
> seconds of data, with most of the samples reading "0", and about 500
> elements reading non-zero when an event occurred. So far, so good....

Wouldn't most of your problems disappear if you store the data as the number of zeroes between events? If events are on average 500 elements apart, you require only 100 word-length data entries to store a record of 50,000 elements.

-jim
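A sketch of what that might look like in the acquisition interrupt (array size and names are illustrative):

#define MAX_GAPS 128

static unsigned short gaps[MAX_GAPS];  /* inter-event gaps, in samples */
static unsigned short n_gaps;
static unsigned short since_last;      /* samples since the last event */

/* Called once per ADC sample. */
void on_sample(unsigned short adc_value)
{
    since_last++;
    if (adc_value != 0) {              /* an event (spike) */
        if (n_gaps < MAX_GAPS)
            gaps[n_gaps++] = since_last;
        since_last = 0;
    }
}

With 16-bit entries, 128 gaps cover better than a second of nominal data in 256 bytes, and the stored gaps are exactly the periods the other suggestions in this thread operate on.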
Reply by ●August 18, 2004
"Email Unread" <aa99aa@verizonmail.com> wrote in message news:81c69dd.0408180350.2c79beea@posting.google.com...> Hi -- > > Hope this is being posted to the appropriate group(s) -- I didn't want > to carpet-bomb all the NG's but I figured these 2 groups would be the > most helpful. > > > After banging my head into a wall for many days on a particular > problem, I realized I might be able to benefit from the wisdom of > these newsgroups. > > I have inherited a very resource-constrained embedded system, > primarily performing data acquisition. Memory is the biggest > constraint, the processor horsepower is not as big a constraint. > > Anyways, suppose I have system sampling (via ADC) at 10,000 > samples/sec. Suppose several times per second, I get an "event", > which is a rectified voltage spike. For the sake of this example, > let's suppose I get an event every 10ms, i.e. every 100 samples. Thus > every 100th sample would be non-zero, the rest would be zero. (Let's > assume no noise, each event is one sample in width, etc...) > > So if I had an array of 50,000 elements, which would represent 5 > seconds of data, with most of the samples reading "0", and about 500 > elements reading non-zero when an event occurred. So far, so good.... > > Now here is the issue... there isn't memory to store 50k, 100k, etc... > samples. This is a small 16 bit processor board designed years ago > with limited RAM, not the common 32-bit processor running with 128M of > RAM like so many "embedded" systems today. > > Also, to make things "worse", sometimes an event is missed, and > sometimes a "ghost event" shows up 1/2 way between real events. > > So what I am saying is this... instead of an event every 100 samples, > sometimes I will get an additional event at sample 150, 550, 750, > etc... and sometimes we will get an event at 600 and 800, but not 700. > These "event glitches" are due to shortcomings in the system hardware > and cannot be fixed. > > The system is used to measure the linear speed of a moving system, the > closer the events are to each other (i.e. smaller period), the faster > the machinery is moving. The resource constrained system must > frequently estimate the speed of the system by measuring the time > between events. > > In a perfect world, with no missed events and no false events, we > could just measure the period between the last 2 events, and that > would be the best (most recent) estimate of the system speed. > > In an almost-perfect world (i.e. non-real time, post-processing on a > PC, etc...), an FFT could be used on a huge array of time-domain > samples, and the fundamental frequency of 100 would come through, is > spite of the "ghost events" and missing events. > > But this system has limited memory, and it interrupts every 500 > samples or so, and the driver stores indices of non-zero events. In > other words, over 1 second (10k samples), if events are happening > every 100 samples, the driver would store the values 100, 200, 300, > etc... 9900, 10,000 in a small array. > > The reality is that the array will actually hold values like ....4200, > 4250, 4300, 4400, 4600, 4650, 4700, etc.... due to false & duplicate > events.I'd look at this as having two challenges: 1) The data and signal processing 2) Memory management If you can solve the first one, then you can get clever in solving the second one. In the end, you know when the ticks occur. You know that some of the ticks are "noise" to be rejected. One question is: how much impact of the noise is allowable? 
Something like a phase-locked loop might be helpful to avoid the noise.... It seems like having an algorithm to throw out the noisy ticks would be more robust than any averaging method. For example, you could average the period between ticks if there were few noise ticks. But the longer the average, the greater the lag between changes in frequency and the measurement. The same thing holds for an FFT: the longer the data sequence, the older its average content and the higher the frequency resolution. But if the desired frequency changes during the sequence, then the result will be smeared.

Something like this might work: calculate the time difference between ticks and average a number of them. This is the "candidate" period. Assuming that there are noisy/unwanted ticks occurring in between, the candidate period will always be a bit low. (Adjust the averaging time to match your situation.)

Now, using the candidate period, start looking for ticks that are within some margin of the candidate period. Let's say that the candidate period is 95% of the "true" period (a number we obtain experimentally). We'll call this C = 0.95*P. Let's say that the true period can change +/-2% in every succeeding period. This is +/-0.02*P. So we predict that the minimum period M is:

M = (1-0.02)*P = 0.98*P, and P = C/0.95, so M = 0.98*C/0.95 = 1.03*C

where the 0.98 and 0.95 are numbers you determine from your situation.

Now, IF the next difference between ticks is < M, then throw out the newest tick, wait for the next one, and take the difference. IF the new difference is > 1.02*P = 1.02*C/0.95 = 1.07*C, then maybe reset? Assuming this works, you then have a measure of P, can use that instead of C, and can perhaps narrow the margins.

On the other hand, the margins I've suggested may be way too tight - because I only asked about the system and didn't consider other "noise". An optimum margin would:

- Allow all the good data points to be used, for sure, with no "false rejections"
- Stop all the bad data points from being used, for sure, with no "false acceptances"

This implies a threshold that might be much wider than the 2% system character. You will deviate from the ideal if the noise ticks are truly random in time of occurrence. You will not be able to distinguish a noisy tick that is very close to a true tick, because it will fall in the "window". But the closer it is, the less error it generates.

The process looks like this:

- Average a string of maybe 10 time differences. This becomes the candidate period, which defines the window.
- Thereafter, start with the next two ticks T1 and T2. Calculate the time difference T2-T1 between the two ticks.
- If the time difference is in the window, accept the time difference T2-T1. Record this value in a sequence of such values.
- Otherwise, reject the newest tick and wait for the next one. Calculate the time difference T3-T1.
- If that time difference is out of the window... figure out what to do next (e.g. reset). If it is in the window, accept the time difference T3-T1 and record it.
- Repeat by comparing T4 to T2 or T3.
- Average the sequence thus obtained as may be useful.

This is similar to a "range gate" used in radar and sonar: returns or echoes are only looked for at a range that's consistent with the range and range rate representative of a real target. Noise outside of that range (i.e. time) can be ignored. So, once you know where to look, you can eliminate other "events".
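In code, the gate itself might look something like this sketch. The 1.03/1.07 factors come from the example numbers above; 'candidate' is seeded by averaging ~10 raw differences, and the gentle first-order update stands in for the "use P instead of C" refinement. All names are illustrative:

/* candidate == 0 means unlocked; the caller re-seeds it by
   averaging ~10 fresh raw differences, as described above. */
static unsigned long last_tick;    /* time of last accepted tick */
static unsigned long candidate;    /* candidate period C         */

/* Feed each raw tick time; returns 1 if the tick is accepted. */
int gate_tick(unsigned long t)
{
    unsigned long diff = t - last_tick;
    unsigned long lo = (candidate * 103UL) / 100UL;   /* M = 1.03*C */
    unsigned long hi = (candidate * 107UL) / 100UL;   /* 1.07*C     */

    if (diff < lo)
        return 0;               /* too soon: assume a ghost, ignore it */
    if (diff > hi) {
        candidate = 0;          /* lost lock: flag for a reset         */
        return 0;
    }
    last_tick = t;
    candidate = (candidate * 7UL + diff) / 8UL;  /* gently track P */
    return 1;
}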
When there's a "lost target" then the gate / window needs to be opened up - which is equivalent to doing a reset here by recalculating the candidate period perhaps. I hope this helps. Fred
Reply by ●August 18, 2004
With all the technical, theoretical, hypothetical posts about this subject, it's time for a simple and practical solution. :-)

Do not store all the samples, but just the length of time between events. This saves a lot of memory and leaves less room for erroneous source code.

Suppose you want to measure the fundamental frequency once every second. Put all the period lengths that occurred in the last second in an array, sort it, and use the middle value. I don't mean the average, but the actual middle of the array (the median). When using averages you smear the false glitches out over your actual value, whilst with the median you discard the extremely low and high values. And that is what you want.

And it's quick. I suppose your array doesn't get larger than 10-20 items, so there's absolutely no need for an ingenious sorting algorithm. A simple Bubblesort is even faster with small arrays than a Radix- or Quicksort.

Try it. You'll like it. :-)

Good luck.

Rob Vermeulen

"Rob Vermeulen" <rvermeulen@arbor-audio-antispam-.com> wrote in message news:10i75jk8jv5r92@corp.supernews.com...
> Short question: why is a median filter not good enough?
> - snip -
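The whole filter fits in a few lines of C. A sketch, assuming 16-bit periods and a small array; note it sorts in place, so work on a copy if the order matters:

/* Median of n periods; Bubblesort is fine for 10-20 elements. */
unsigned short median_period(unsigned short p[], int n)
{
    int i, j;

    for (i = 0; i < n - 1; i++)
        for (j = 0; j < n - 1 - i; j++)
            if (p[j] > p[j + 1]) {
                unsigned short t = p[j];
                p[j] = p[j + 1];
                p[j + 1] = t;
            }
    return p[n / 2];   /* the middle value, not the average */
}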
Reply by ●August 18, 2004
Email Unread wrote:
> Hi --
> - snip -
> (Humility note: When I took this little project over, I thought, "no
> problem, visually, I can easily see what the period is, this will be a
> piece of cake in firmware....")

Piece of cake indeed! What do you do that the computer won't? It seems to me that you look at the event flags and discard the odd ones. Presumably, your system has inertia and the power into it is bounded, so there's a limit to its acceleration.

Scan the events in order. As long as the intervals are consistent, accumulate their sum and number. If an event yields an inconsistent interval, discard the event, but keep the interval if it's short. There's a good chance that the next interval will be short too, and that the sum of the two short intervals will be what you want.

One thing: when it works (or if it doesn't :-( ), please let me know.

Jerry
--
... the worst possible design that just meets the specification - almost a definition of practical engineering. .. Chris Bore
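In C, that scan might look something like this sketch. The ~10% consistency band is a guessed tolerance, not a number from the post, and all names are illustrative:

static unsigned long sum;      /* sum of accepted intervals      */
static unsigned int  n;        /* count of accepted intervals    */
static unsigned long pending;  /* held-back short interval, or 0 */

static int consistent(unsigned long iv, unsigned long est)
{
    return iv * 10 >= est * 9 && iv * 10 <= est * 11;  /* within ~10% */
}

void scan_interval(unsigned long iv)
{
    unsigned long est = n ? sum / n : iv;

    if (pending) {                     /* do two short halves make one period? */
        if (consistent(pending + iv, est)) {
            sum += pending + iv;
            n++;
        }
        pending = 0;
        return;
    }
    if (consistent(iv, est)) {
        sum += iv;
        n++;
    } else if (iv < est) {
        pending = iv;                  /* short: probably a ghost split */
    }
    /* long inconsistent intervals are simply discarded here */
}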
Reply by ●August 19, 2004
Ben Bridgwater wrote:
> I'd just maintain a running average of inter-event times over the last
> N intervals, but discard from this history/average any intervals too
> far outside the current average (e.g. 40%+ lower, or 40%+ higher).
> - snip -

-- Or just keep the one average, but let large excursions from the average only pull the estimate a little bit. Simple would be a 1st-order lowpass with a ramp limit; more sophisticated would be a 1st-order lowpass with a velocity limit that drops to some small (but nonzero) number.

-- Or go ahead and ignore all large excursions unless you lose lock, then just do a simple average.

-- Or get real sophisticated and build a "ghost detector" in your software that sees the ghosts and snips them out, while also filling in the lost pulses (I've done the lost-pulse thing before on a motor speed control; it can work very well).

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
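The first option is only a few lines. A sketch, with guessed gain and ramp-limit values:

static long filtered;   /* current period estimate, in samples */

void lowpass_update(long period)
{
    long step, limit;

    if (filtered == 0) {               /* first measurement seeds the filter */
        filtered = period;
        return;
    }
    step  = (period - filtered) / 8;   /* 1/8 first-order lowpass gain   */
    limit = filtered / 20;             /* ramp limit: at most +/-5%/step */
    if (step >  limit) step =  limit;
    if (step < -limit) step = -limit;
    filtered += step;
}

With these numbers, a ghost interval (roughly 50% low) can pull the estimate down by at most 5% per event, and the estimate recovers on the next good pulse.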
Reply by ●August 19, 2004
Tim Wescott wrote:
...
> -- Or get real sophisticated and build a "ghost detector" in your
> software that sees the ghosts and snips them out, while also filling in
> the lost pulses (I've done the lost-pulse thing before on a motor speed
> control; it can work very well).

My suggestion amounted to detecting and discarding ghosts without filling in the lost pulses, merely discarding the intervals that contain them. That's simpler and, I believe, adequate. One can effectively fill in lost pulses by adding the long intervals to the running sum and bumping the event counter by more than one (usually, two). It's probably not worth the code.

Jerry
--
... the worst possible design that just meets the specification - almost a definition of practical engineering. .. Chris Bore
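For what it's worth, a sketch of that fill-in; the 1.5x threshold is an assumption, not a number from the post:

/* Accumulate one interval; count an obvious double-length interval
   (a missed pulse) as two periods instead of one. */
void accumulate(unsigned long iv, unsigned long *sum, unsigned int *n)
{
    unsigned long est = *n ? *sum / *n : iv;

    *sum += iv;
    *n   += (iv > est + est / 2) ? 2 : 1;   /* > ~1.5x: spans two periods */
}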
Reply by ●August 19, 2004
Jerry Avins <jya@ieee.org> writes:
> My suggestion amounted to detecting and discarding ghosts without
> filling in the lost pulses, merely discarding the intervals that contain
> them. That's simpler and, I believe, adequate.
> - snip -

Another thought: if it's possible to capture a representative set of raw data, examine it for any characteristics that may prove useful for distinguishing between real pulses and the ghost ones. Perhaps one is wider than the other, ramps up and down, etc. It could also prove useful to determine limits on the acceleration/deceleration of the thing being measured. These characteristics may suggest a method for then filtering the data in real time.
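If such a characteristic exists, the realtime test can be cheap. A speculative sketch, assuming ghosts turn out to differ in pulse width; the thresholds are pure placeholders to be learned from the captured data:

#define MIN_WIDTH 1   /* placeholders: derive from captured raw data */
#define MAX_WIDTH 3

static unsigned short width;   /* width of the pulse in progress, in samples */

/* Called once per sample; returns 1 once per plausibly-real pulse. */
int classify_sample(unsigned short adc_value)
{
    int real = 0;

    if (adc_value != 0) {
        width++;               /* still inside a pulse */
    } else {
        if (width >= MIN_WIDTH && width <= MAX_WIDTH)
            real = 1;          /* pulse just ended with a sane width */
        width = 0;
    }
    return real;
}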