I'm trying to build a vision-guided robotic aircraft at uni, and I want
it to land on an 'H'-shaped marker on the ground. The aircraft will
navigate using only the information from its mounted video camera.
I have been looking into ways of identifying the helipad seen in the
video, which will appear rotated, scaled, and perspective-distorted.
These properties will be used to determine the orientation and 3D
position of the aircraft w.r.t. the landing marker.
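To make the pose part concrete: once four known points of the 'H' (e.g. its corners) are located in the image, the ground-plane-to-image mapping is a homography, and with the camera intrinsics the rotation and translation can then be factored out of it. Here is a minimal numpy sketch of the first step, a plain direct linear transform (the function name and point lists are my own, not from any library):

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst from 4+ point pairs.
    Each correspondence (x, y) -> (u, v) gives two linear equations in the
    nine entries of H; the SVD null vector is the least-squares solution."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the arbitrary scale (assumes H[2,2] != 0)
```

Given intrinsics K, the pose then follows from decomposing K^-1 H into two rotation columns and a translation; this sketch only covers the homography itself.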
The processing will need to be very fast, close to real-time.
I understand that the Fourier-Mellin transform can be used for
identifying rotated objects, but I am not sure how robust it is to
perspective and scaling (zooming).
I have written a program to segment and label the objects seen by the
camera; now I want to test each labelled object to see whether it is my 'H'.
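One rotation- and scale-invariant test I have been considering is to compare moment invariants (Hu moments) of each labelled blob against those of a reference 'H'. A minimal numpy sketch computing the first two invariants from a binary mask (the function name is my own; the mask would come from the labelling stage):

```python
import numpy as np

def hu_invariants(mask):
    """First two Hu moment invariants of a binary mask.
    Invariant to translation, scale, and in-plane rotation (not perspective)."""
    ys, xs = np.nonzero(mask)
    m00 = len(xs)                       # blob area
    x, y = xs - xs.mean(), ys - ys.mean()  # centre on the centroid

    # normalized central moments: eta_pq = mu_pq / m00**((p+q)/2 + 1)
    def eta(p, q):
        return (x ** p * y ** q).sum() / m00 ** ((p + q) / 2 + 1)

    e20, e02, e11 = eta(2, 0), eta(0, 2), eta(1, 1)
    return (e20 + e02, (e20 - e02) ** 2 + 4 * e11 ** 2)
```

Comparing these (or the full set of seven) against the reference values for the 'H' would reject most non-marker blobs cheaply, before any more expensive pose estimation.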
During my university career I have specialised mainly in aircraft
control systems and simulation programming. Image recognition and
processing is very new to me, and I expect there are better ways of
doing things than the methods I know of.
If anyone out there could throw some suggestions my way, I would
really appreciate it.
Sorry for posting this in more than one forum; I just want as many
people as possible to see it.