Can anyone think of applications for a system that fuses data from two
sensors into one visual stream? What could you do if you had a single image
with both visible and IR information, overlaid and processed so that the best
of each could be accentuated? It seems like there could be some good uses, and
that in certain circumstances it could help produce images that a computer can
interpret more easily. What do you think? What about when combined with a
field-programmable gate array (FPGA) based processing engine?
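To make the idea concrete, here is a rough Python/NumPy sketch of the simplest
kind of pixel-level fusion, a weighted blend of two frames. It assumes the
visible and IR frames are already co-registered and the same size; the function
name, the alpha weight, and the synthetic test frames are just illustrative,
not any particular product's method.

    import numpy as np

    def blend_fusion(visible, ir, alpha=0.6):
        # Weighted blend of two co-registered, same-size grayscale frames.
        # 'alpha' is the weight given to the visible channel.
        vis = visible.astype(np.float32)
        ir_f = ir.astype(np.float32)
        fused = alpha * vis + (1.0 - alpha) * ir_f
        return np.clip(fused, 0, 255).astype(np.uint8)

    # Synthetic stand-ins for real sensor frames, just to exercise the function
    visible = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
    ir = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
    fused = blend_fusion(visible, ir)

A fixed blend like this maps naturally onto an FPGA pipeline (one multiply-add
per pixel per stream), which is part of why the FPGA angle seems interesting.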
Here is one application I've thought of:
Traffic engineers use smart cameras to monitor conditions on the roadway,
including deciding when to change a traffic light or photographing a car that
runs a red light. Sometimes this has to happen in very poor weather, with snow
or fog for example. With an IR camera and a visible-light camera you pick up
different elements of the scene, and by merging them together you get a better
view of the car (its presence and location) and its characteristics. Traffic
engineers might value this. The traffic camera market is also fairly crowded.
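For that kind of scene, one option beyond a fixed blend is to pick, pixel by
pixel, whichever sensor shows more local detail, so the IR return fills in
where snow or fog washes out the visible image. A rough sketch, again assuming
co-registered, same-size frames; the window size and the Laplacian-energy
activity measure are just one possible choice:

    import numpy as np
    from scipy.ndimage import laplace, uniform_filter

    def select_by_contrast(visible, ir, window=9):
        # Per-pixel selection: keep whichever sensor has more local detail,
        # measured as the squared Laplacian response averaged over a window.
        vis = visible.astype(np.float32)
        ir_f = ir.astype(np.float32)
        act_vis = uniform_filter(laplace(vis) ** 2, size=window)
        act_ir = uniform_filter(laplace(ir_f) ** 2, size=window)
        return np.where(act_vis >= act_ir, vis, ir_f).astype(np.uint8)

Real systems would also need registration between the two cameras and some
smoothing of the selection map, but the basic idea is the same.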
The question is: if you had this, what would you do with it?
c...@rokaconsulting.com