Real Time Panoramic View Generation from rotating camera via DM642 EVM
Hello,
I am new to embedded systems and DSPs. I have to design and implement a real-time image stitching algorithm to generate a 360-degree panoramic view on the DM642 Evaluation Module (version 3).
I have successfully run (and understood) example code for the DM642 EVM that acquires frames (FVID) from an NTSC source and displays them on screen after converting the captured frames to RGB format.
I have some queries which need to be clarified:
1. I want to work at 320 by 240 resolution to improve processing speed. However, I cannot configure the capture and display channels to acquire a frame at this resolution, process it and display it. From what I can tell, I can only capture in NTSC720 or PAL720 modes and display in XGA, SVGA or VGA modes. Will I have to down-sample, process and then up-sample the acquired FVID frame, or is there a more efficient and easier option?
2. To do image stitching, do I need to store previous frames? If so, how do I store them efficiently? Do I need external memory, or are the on-board 4 MB Flash and 32 MB SDRAM enough?
3. Please recommend a book, online course, article or post (I could not find one) that explains the basics and can help me accomplish this project.
Thanks in anticipation.
Best Regards,
Hassan Iqbal
#DM642 #EVM #imageProcessing
I can't answer your questions, but I may be able to give you some more information.
1: NTSC video is 525 lines, of which around 480 are active (most TVs in the US don't show that many lines). So if you're capturing complete frames, you're stuck with 480 lines. You might be able to get into the guts of the acquisition code and capture only 320 samples per line -- but maybe not, and getting good color resolution may be a challenge.
The output modes have similar constraints, so downsampling and then upsampling may be your best bet.
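For what it's worth, here's a minimal sketch of the kind of decimation I mean, assuming a planar 8-bit luma buffer and made-up names (the DM642 capture driver actually hands you interleaved YCbCr 4:2:2 lines, so the indexing would need adapting, and averaging neighborhoods instead of dropping pixels would give a cleaner result):

```c
/* Hypothetical sketch: nearest-neighbor decimation of a 720x480
 * 8-bit luma plane down to 320x240.  Buffer names and the planar
 * layout are illustrative assumptions only. */
#include <stdint.h>

#define SRC_W 720
#define SRC_H 480
#define DST_W 320
#define DST_H 240

void decimate_luma(const uint8_t *src, uint8_t *dst)
{
    int x, y;
    for (y = 0; y < DST_H; y++) {
        /* map each destination row back into the source */
        const uint8_t *srow = src + (y * SRC_H / DST_H) * SRC_W;
        for (x = 0; x < DST_W; x++) {
            /* map each destination column back into the source row */
            dst[y * DST_W + x] = srow[x * SRC_W / DST_W];
        }
    }
}
```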
2: Well, shouldn't you know the answer to that? Where are you going to store the panoramic view if not in RAM someplace?
I strongly suggest that you sit down with a pencil and paper and work this out. You should have an idea of what angle one pixel subtends horizontally. From that you should be able to work out how many of them will fit into 360 degrees. That'll give you the number of pixels in a line; multiply that by 240 to find out how many pixels overall.
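To make that arithmetic concrete, here's a hedged back-of-the-envelope sketch; the 45-degree field of view is an assumed example value, not a property of your lens:

```c
/* Back-of-the-envelope panorama sizing.  The 45-degree horizontal
 * field of view is an assumed example value. */
#include <stdio.h>

int main(void)
{
    const double hfov_deg = 45.0;   /* assumed lens field of view   */
    const int    frame_w  = 320;    /* working frame width          */
    const int    frame_h  = 240;    /* working frame height         */
    const int    bytes_pp = 3;      /* e.g. RGB, one byte/channel   */

    double deg_per_pixel = hfov_deg / frame_w;            /* ~0.14 deg    */
    int    pano_w        = (int)(360.0 / deg_per_pixel);  /* ~2560 pixels */
    long   pano_bytes    = (long)pano_w * frame_h * bytes_pp;

    printf("panorama: %d x %d, %ld bytes (~%.1f MB)\n",
           pano_w, frame_h, pano_bytes, pano_bytes / 1048576.0);
    return 0;
}
```

With numbers in that ballpark the panorama works out to a couple of megabytes, which would fit easily in 32 MB of SDRAM (though not in on-chip memory); your actual field of view will change the width accordingly.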
3: I suggest a good book on video. The title I remember (from 10 years ago!) is "The Complete Idiot's Guide to Video". I probably don't have the title exactly, and you'd want to check to make sure that the thing covers NTSC and PAL video. That'll get you the video part, at least.
Stitching successive frames must account for inevitable* perspective distortion, but it deals only with the edges of the frames. NTSC resolution is effectively 640 by 480. Is downsampling by 2 in order to save processing time later a good trade-off?
*Thought experiment: Fasten a long shelf to a wall. Fasten disks to the wall just under it, all the same size. Place balls on the shelf of the same diameter as the disks. Record an image of the setup with the shelf at the horizontal midline from such a distance that it subtends, say, 90 degrees. Near the center of the image the balls and disks will both look round. What happens near the edges? What does "distortion-free" mean?
A "process" lens or a pinhole will render all the disks round and the balls increasingly oblate toward the edges of the image. A lens that renders all the balls round will exhibit barrel distortion. You will need heuristics that might depend on the subject matter. When I did this with film, I avoided the problem by using only narrow strips from the center of the frames. Good luck.