DSPRelated.com
Forums

Glitch/inconsistency detection in navigation data

Started by Rune Allnor June 26, 2006
Rune Allnor wrote:
> > If you have 3 points, A, B and C, B can be considered an exception if the
> > distance between B and the axis A-C is greater than some amount.
>
> I think this approach might be useful. Do you have any references
> (books, articles) to how these things are done in aviation?
No. The ideas were given to me here (sci.geo.satellite-nav) years ago while I was post-processing some track data I had accumulated during a bike trip through Australia. I built logic to simplify the number of points (remove those that do not define a change in direction, and remove those that are "stray").

Another thing to consider: if you are a large cargo ship operating in the St. Lawrence Seaway, a stray point may be really easy to spot, because a ship simply cannot turn on a dime. But if you are a canoe on a rapidly flowing river, you may catch an eddy current that quickly pushes you off course.

But for an aircraft, you need to really define the requirements and the handling of any glitch. Are you allowed to filter out data which will then not get logged onto the flight recorder? The advantage of aviation is that you can correlate GPS with the primary instruments (INS and barometric altimeter). So if you see a glitch on the GPS but no glitch on the other systems, then the GPS probably experienced a glitch. But if it is visible on all three systems, then it is probably turbulence, etc.
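For reference, a minimal sketch of the A-B-C test quoted at the top of this message, assuming the coordinates are already projected to a local metric frame (the struct, names and threshold are illustrative, not from any particular package):

#include <cmath>

struct Point { double x, y; };

// Perpendicular distance from b to the infinite line through a and c.
double deviation(const Point &a, const Point &b, const Point &c)
{
    const double dx = c.x - a.x;
    const double dy = c.y - a.y;
    const double len = std::hypot(dx, dy);
    if (len == 0.0)                  // A and C coincide: fall back to |AB|
        return std::hypot(b.x - a.x, b.y - a.y);
    // |cross product| is twice the triangle area; divide by the base length.
    return std::fabs(dx * (b.y - a.y) - dy * (b.x - a.x)) / len;
}

// B is flagged as a stray point if it deviates too far from the A-C axis.
bool isStray(const Point &a, const Point &b, const Point &c,
             double thresholdMetres)
{
    return deviation(a, b, c) > thresholdMetres;
}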
Ulrich Bangert wrote:
> This problem is the classical application for so-called "robust statistical
> methods".
How would your method handle a situation where one is riding a bike at 30 km/h on a very flat road for 6 hours at a steady speed, and for one minute you have this kamikaze downhill where you reach 80 km/h, and then ride up a hill at 10 km/h for 7 minutes? Would it "filter out" those readings of going down and then up the hill?
Another aspect to consider:

When I processed my Australian track logs, those points, taken at 1-minute
intervals, that were very near to each other were in fact VERY
significant, because this indicated a place where I stopped (in some
cases, I want those to stand out to show places where water was available).

So for a simplification of the trackpoints to draw a route, those
points can be removed, but you want to mark those points as significant
stops and possibly create a waypoint for them.

So one must really be familiar with the activity and how the data was
collected before selecting some logic to simplify a series of points. 
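A minimal sketch of one way to flag such stops before simplifying, assuming fixes arrive at roughly regular intervals in a local metric frame (the radius and minimum run length are illustrative, not from any particular receiver):

#include <cmath>
#include <cstddef>
#include <vector>

struct Fix { double x, y; };     // position in a local metric frame

// Return the indices of fixes that begin a "stop": a run of at least
// minRun consecutive fixes, all within radius of the first one.
std::vector<std::size_t> findStops(const std::vector<Fix> &track,
                                   double radius, std::size_t minRun)
{
    std::vector<std::size_t> stops;
    std::size_t i = 0;
    while (i < track.size()) {
        std::size_t j = i + 1;
        while (j < track.size() &&
               std::hypot(track[j].x - track[i].x,
                          track[j].y - track[i].y) <= radius)
            ++j;
        if (j - i >= minRun)     // long enough to count as a stop
            stops.push_back(i);
        i = (j > i + 1) ? j : i + 1;
    }
    return stops;
}

With 1-minute fixes, a radius of 25 m and a minRun of 5 would mark any stop of five minutes or more as a waypoint candidate before the stray-point and direction-change simplification runs.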

Another aspect: many GPS units have an "auto" feature to write track points
into the log. Generally, track points are written when there is a
change in speed/direction, and possibly at some regular intervals. But if
you do not know the exact logic used by the unit to decide whether a
trackpoint is to be written or not, you cannot really decide how to
remove trackpoints statistically.
Ulrich Bangert wrote:
> [lots of interesting stuff]

Thanks. Sounds like something to look into. Processing speed is
(as yet) insignificant if it can free up man-hours for other duties.
Where I am right now, man-hours are expensive. If a computer
needs 12 hours for this sort of job, then so be it, if it can be done
in the human operator's time off watch. 

Rune

JF Mezei wrote:
> [...] So one must really be familiar with the activity and how the data
> was collected before selecting some logic to simplify a series of points.
> [...]
Related fun things happen in radar tracking. There you usually have reasonable x and y estimates (based on the r and theta you measure), and a weak z measurement (based on a very limited-accuracy angular measurement). You can track a target and predict based on what is a reasonable side-to-side turn rate. However, a fighter can roll over, then do a hard downward half loop and go back in the direction it came. Your nicely filtered track basically stops abruptly and goes backwards, and your weak height information doesn't always help that much in tracking in 3D. Airliners are easy to track, but the interesting targets are a real pain.

This kind of filtering is like the tale I heard of the world's most accurate weather forecasts. I was told a DJ in the Caribbean spent years saying the weather would be whatever it was yesterday. Because the weather there has long periods of similar conditions punctuated by abrupt changes, his accuracy was very high, but his usefulness was zero. :-)

Regards,
Steve
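A minimal sketch of the effect Steve describes, using a one-axis alpha-beta (g-h) tracker; the gains, update rate and maneuver are made up for illustration, not taken from any real radar:

#include <cstdio>

int main()
{
    const double alpha = 0.5, beta = 0.1, dt = 1.0;
    double x = 0.0, v = 200.0;           // filtered position and velocity

    for (int k = 1; k <= 20; ++k) {
        // Truth: fly at +200 m/s, then reverse hard to -200 m/s at k = 10.
        const double z = (k <= 10) ? 200.0 * k
                                   : 200.0 * 10 - 200.0 * (k - 10);

        const double xPred = x + v * dt; // constant-velocity prediction
        const double r = z - xPred;      // innovation (residual)
        x = xPred + alpha * r;           // correct position...
        v = v + beta * r / dt;           // ...and velocity by gain fractions
        std::printf("k=%2d  residual=%8.1f\n", k, r);
    }
    return 0;
}

Up to the reversal the residual sits at zero; from k = 11 it jumps to hundreds of metres, which is exactly the "track stops abruptly and goes backwards" signature.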
To Rune:

On a typical PC with a window width of 100 I process 600,000 data points in
1-2 minutes, so it is not as slow as my first mail may have indicated. I
use this algorithm, for example, in a freeware program named "Plotter". You
can download "Plotter" from my homepage

www.ulrich-bangert.de

If you manage to load your data files with it (chances are...) you can
immediately test the quality and the speed of the outlier detection.

To JF Mezei:

If you managed to figure out exactly what the algorithm does, you will have
noticed that for detecting outliers only what is INSIDE the window is
significant, nothing else. For that reason, if this algorithm is
applied to the scenario you present, the first thing to say is that it does
not matter at all whether you have been riding for 6, 12, 18 or however many
hours before you meet the hill. The algorithm is completely insensitive to
that!

The window is something like "if you want to detect outliers, look only at
values in the neighbourhood and decide what is normal and what is not for
them". Please note also that your scenario raises the question of a
definition of "outlier". Other people would perhaps think that the "hill
scenario" IS indeed an outlier that should be removed, while you think it is
very significant. Note that the algorithm can fit BOTH kinds of views by
adapting the window length. If you make the window length greater than 2X
the "hill length", then the hill will be completely removed from the data. If
you find that the hill is significant, then make the window length smaller
than 2X the "hill length"; in this case the hill will not be filtered out.
By applying the rule "an event shorter than n/2 may be an outlier" YOU decide
what is an outlier, not the algorithm.
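The exact algorithm inside "Plotter" is not spelled out in this thread, so the following is only a sketch of one standard robust-statistics version of the windowed idea: judge each sample against the median and MAD of its neighbours, which a few outliers cannot drag around. The window half-width and threshold are illustrative:

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Median of v (taken by value so sorting does not disturb the caller's data).
static double median(std::vector<double> v)
{
    std::sort(v.begin(), v.end());
    const std::size_t n = v.size();
    return (n % 2) ? v[n / 2] : 0.5 * (v[n / 2 - 1] + v[n / 2]);
}

// Flag x[i] as an outlier if it lies more than nSigma robust standard
// deviations from the median of its window (half-width h samples).
std::vector<bool> flagOutliers(const std::vector<double> &x,
                               std::size_t h, double nSigma)
{
    std::vector<bool> out(x.size(), false);
    for (std::size_t i = 0; i < x.size(); ++i) {
        const std::size_t lo = (i > h) ? i - h : 0;
        const std::size_t hi = std::min(x.size(), i + h + 1);
        std::vector<double> win(x.begin() + lo, x.begin() + hi);

        const double med = median(win);
        std::vector<double> dev;
        for (double w : win) dev.push_back(std::fabs(w - med));
        const double sigma = 1.4826 * median(dev); // MAD -> sigma, Gaussian

        if (sigma > 0.0 && std::fabs(x[i] - med) > nSigma * sigma)
            out[i] = true;
    }
    return out;
}

This also reproduces the window-length rule above: with window length n = 2h + 1, an event shorter than about n/2 samples cannot capture the window median and gets flagged, while a longer "hill" takes the median over and survives.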

I cannot accept your second objection; it is an outlier detection algorithm,
not a biker's-rest detection algorithm. But if you want to put forward the
question whether the rest will be detected as an outlier or not, the same
rules apply as above: if the window length is set to a value such that the
length of the braking action before the stop and the window length "match",
then the stop will be recognized as a "normal" change in the data.

Regards
Ulrich

"Rune Allnor" <allnor@tele.ntnu.no> schrieb im Newsbeitrag
news:1151393854.224220.97860@p79g2000cwp.googlegroups.com...
> > Ulrich Bangert wrote: lots of interesting stuff. > > Thanks. Sounds like something to look into. Processing speed is > (as of yet) insignificant if it can release man-hours for other duties. > Where I am right now, man-hours are expensive. If a computer > needs 12 hours for this sort of job, then so be it, if it can be done > in the human operator's time off watch. > > Rune >
Ulrich Bangert wrote:
> On a typical PC with a window width of 100 I process 600,000 data points in
> 1-2 minutes [...] You can download "Plotter" from my homepage [...]
I'll definitely have a look into this. Your first post indicated you have
programmed these things in Matlab? If so, there is a speed-up potential
here. I usually get a speed-up on the order of 10-50x when I port from
Matlab to C or C++.

Rune
Ulrich Bangert wrote:
> Note that the algorithm can fit BOTH kinds of views by adapting the window
> length. [...]
OK, fair enough. But that still leaves the requirement that the user know about the type of data that he has to process, the types of irregularities which must be retained, and those that can be removed, because this will be needed to decide on the window size. And one also needs to know how the data was collected.

Say on a long straight road, a car turns off and drives 100 m to a water hole/pump. With periodic trackpoint recording, you could have a couple of stray points. With "auto" track recording, chances are very good that the GPS would record a point at the turnoff, one point at the stop for water, and again a point once the car gets back to the main road and turns back onto the normal course.

Now, both would have a couple of stray points from a purely "mathematical" point of view. But in the second case, a human could more clearly see a path away from the road and back to the road at the same intersection to resume course.

So one must really understand the "event" as well as how the data was recorded for that event before starting to process such data and eliminate points judged to be "bad".
Rune,

As a dedicated follower of PASCAL I program in Borland DELPHI, which produces
native code that I do not suspect to be significantly slower than C/C++-
generated code. But over the years I have found that the Matlab help system
gives me information about mathematical topics at exactly the level that
seems to match me; that's why I pointed to it. If Plotter does not read your
files, then (in case they are ASCII) send me a few lines of them. I am very
interested in making my file-read routines as universal as possible, so every
no-go is an object of interest.

Regards
Ulrich


"Rune Allnor" <allnor@tele.ntnu.no> schrieb im Newsbeitrag
news:1151475859.983140.83250@d56g2000cwd.googlegroups.com...
> > Ulrich Bangert wrote: > > To Rune: > > > > On a typical pc with a window width of 100 I process 600000 data points
in
> > 1-2 minutes, so it is not THAT slow that my first mail may have
indicated. I
> > use this algorithm for example in a freeware software named "Plotter".
You
> > can download "Plotter" from my homepage > > > > www.ulrich-bangert.de > > > > If you manage to load your data files with that (chances are..) you can > > immediatly test the quality and the speed of the outlier detection. > > I'll definately have a look into this. Your first post indicated you > have programmed these things in matlab? If so, there is a speed-up > potential here. I usually get a speed-up on the order of 10-50x when > I port from matlab to C or C++. > > Rune >
Hello JF Mezei,

> OK, fair enough. But that still leaves the requirement that the user
> know about the type of data that he has to process, the types of
> irregularities which must be retained, and those that can be removed,
> because this will be needed to decide on the window size. And one also
> needs to know how the data was collected.
Agreed!
> of stray points. With "auto" track recording, chances are very good that
> the GPS would record a point at the turnoff, one point at the stop for
> water, and again a point once the car gets back to the main road and
> turns back onto the normal course.
I am not sure if I interpret the term "auto track recording" in the right way. Perhaps it is even a "standard" term in navigation that I am not aware of (I have seen the question of outlier detection purely from a mathematical point of view). But if it is some kind of "event-driven" track recording, you are of course right that the proposed algorithm cannot handle data acquired in this way, because some frontend entity has already made the decision what is an event and what is not, and has failed to acquire the "surrounding data" that are necessary for the algorithm.

Regards
Ulrich