Hello-

Disclaimer: I have no background in optics or image signal processing, so I apologize in advance for abusing any terminology. I just discovered this list via Google, and I hope that some of you might be able to help point me in the right direction. Here goes...

I'm working on a software project that involves recognizing a certain pattern in extremely blurry images. I have experimented with standard deconvolution techniques in Photoshop, GIMP, ImageMagick, etc., but none have sharpened the image enough for the recognition algorithm to work. So instead, I thought I might try it the other way around -- that is, distort the pattern that I'm looking for and then compare the distorted pattern to the actual image. I already have a series of specimen images taken with a particular camera, and for each one, I can precisely define what the image signal is supposed to be. So I assume that the next step is to determine the point-spread function or convolution kernel that would best translate my model signal into the specimen images.

So my questions are:

Does this sort of recognition-by-reverse-deconvolution approach sound sane?

What sort of parameters do I need to know about my camera and the images to compute the PSF?

Can the PSF be estimated empirically, perhaps through some sort of regression analysis?

Is there a particular book or body of research that you think would be most helpful to me?

Thanks for your time.

-Jeff
How does one "determine" the point spread function?
Started by ●August 10, 2008
Reply by ●August 10, 2008
On 10 Aug, 06:07, j...@imaginative-software.com wrote:

> Does this sort of recognition-by-reverse-deconvolution approach sound sane?

It *sounds* sane. Whether it *is* sane depends on the resources at your disposal; these kinds of things might not be trivial to implement.

> What sort of parameters do I need to know about my camera and the images to compute the PSF?

In principle, all you need is a complete and total knowledge of every tiny detail of the camera. Of course, *obtaining* this complete and total knowledge of every tiny detail of the camera might prove to be quite a hassle.

> Can the PSF be estimated empirically, perhaps through some sort of regression analysis?

The PSF can be estimated, but only within limits. In time series analysis it is usually no problem to estimate a power spectrum, but one needs the phase terms as well if one wants an accurate PSF. Obtaining the phase spectrum is not necessarily easy.

> Is there a particular book or body of research that you think would be most helpful to me?

Gonzalez & Woods, "Digital Image Processing," 3rd ed. (2008) http://www.amazon.com/Digital-Image-Processing-Rafael-Gonzalez/dp/013168728X/ref=pd_bbs_1?ie=UTF8&s=books&qid=1218356784&sr=8-1 is the standard intro to these sorts of problems. It shows a comparison between a number of image reconstruction algorithms. Chapter 5 deals with image restoration and reconstruction techniques; Section 5.6 deals with estimating the degradation function.

Rune
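Rune's point that one needs the complex transfer function (magnitude *and* phase) can be sketched in a few lines of numpy: dividing the spectrum of a blurred image by the spectrum of its known model recovers both at once, which a power spectrum alone cannot. Everything below (the scene, the Gaussian kernel, the `eps` guard) is synthetic and illustrative, not a production estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64

# A synthetic "model" scene and a known Gaussian PSF.
model = rng.random((n, n))

y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
psf /= psf.sum()
psf = np.fft.ifftshift(psf)            # put the kernel's peak at the origin

# Forward model: blurred = model convolved (circularly) with psf.
H_true = np.fft.fft2(psf)
blurred = np.real(np.fft.ifft2(np.fft.fft2(model) * H_true))

# Ratio of spectra.  The complex ratio carries BOTH magnitude and phase;
# a power spectrum alone would lose the phase, which is the point above.
eps = 1e-8                             # guard against division by ~zero
H_est = np.fft.fft2(blurred) / (np.fft.fft2(model) + eps)

psf_est = np.real(np.fft.ifft2(H_est))
```

With noiseless synthetic data this recovers the kernel essentially exactly; with real images the naive ratio blows up wherever the model spectrum is small, which is where the regularized methods discussed later come in.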
Reply by ●August 10, 2008
<jeff@imaginative-software.com> wrote in message news:4d628ada-cbb1-48a5-ad26-3d421e9018d4@i24g2000prf.googlegroups.com...

> ... none have sharpened the image enough for the recognition algorithm to work. So instead, I thought I might try it the other way around -- that is, distort the pattern that I'm looking for and then compare the distorted pattern to the actual image.
>
> Does this sort of recognition-by-reverse-deconvolution approach sound sane?

It seems to me that if the output of your program is "Yes, the image is present" or "No, the image is not present," you might have to convince someone by letting them find the image with their own eyes. The "standard of proof" might be very high, depending on what else might be done with the knowledge that the image is or is not present. In other words, no matter what techniques your program uses, you might have to sharpen the actual image enough that a human can convince himself that your identification is correct.
Reply by ●August 10, 2008
On Aug 9, 10:07 pm, j...@imaginative-software.com wrote:

> Can the PSF be estimated empirically, perhaps through some sort of regression analysis?

Your PSF will have the same shape as the iris aperture of your camera. The PSF can be estimated from an image of a point source of light such as a star. Maybe a laser pointer will work, but I never tried it.
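The point-source method amounts to cropping around the bright dot, subtracting the background level, and normalizing the result to sum to one. A minimal numpy sketch, with a synthetic frame standing in for the real exposure (the helper name and the crop half-width are made up for illustration):

```python
import numpy as np

# Synthetic exposure: uniform sensor background plus a small bright dot.
# With a real camera you would load a dark-background photo of the dot.
n, sigma = 128, 1.5
y, x = np.mgrid[:n, :n]
frame = 40.0 + 5.0 * np.exp(-((x - 70)**2 + (y - 55)**2) / (2 * sigma**2))

def psf_from_point_source(frame, half=8):
    """Crop around the brightest pixel, remove background, normalize to sum 1."""
    cy, cx = np.unravel_index(np.argmax(frame), frame.shape)
    patch = frame[cy - half:cy + half + 1, cx - half:cx + half + 1].astype(float)
    patch -= np.median(frame)          # crude sensor-bias / background removal
    patch[patch < 0] = 0.0
    return patch / patch.sum()

psf = psf_from_point_source(frame)     # a (17, 17) kernel summing to 1
```

The resulting kernel can be dropped straight into a convolution-based forward model; the median-subtraction step is deliberately crude and would need care with real, noisy frames.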
Reply by ●August 10, 2008
On Aug 10, 6:09 am, "John E. Hadstate" <jh113...@hotmail.com> wrote:

> In other words, no matter what techniques your program uses, you might have to sharpen the actual image enough so that a human can convince himself that your identification is correct.

It's a little more complicated than just "yes or no." The images contain one of several different patterns, so the objective is to determine 1) if a pattern is present, and 2) which pattern it is. Each pattern is well defined and can be mathematically described.

You raise an interesting point, perhaps with respect to "logical correctness." However, in this case, we can assume that we know exactly which pattern appears in each image. And that's why I'm looking for some way to estimate the PSF by comparing the sample images with the model signals.
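Since the model signal is known exactly for each specimen image, one way to do exactly this comparison is a pooled, regularized least-squares estimate of the transfer function in the frequency domain. A numpy sketch on synthetic data (the PSF, the random patterns, the noise level, and the `lam` value are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64

# Several known "model" patterns and their blurred, noisy observations.
# Here the observations are simulated with a known PSF; in practice they
# would be the specimen images, and the models the ideal signals.
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
psf_true = np.exp(-(x**2 + y**2) / (2 * 1.5**2))
psf_true = np.fft.ifftshift(psf_true / psf_true.sum())
H_true = np.fft.fft2(psf_true)

models = [rng.random((n, n)) for _ in range(5)]
observed = [np.real(np.fft.ifft2(np.fft.fft2(m) * H_true))
            + 0.01 * rng.standard_normal((n, n)) for m in models]

# Regularized least-squares estimate of the transfer function, pooled
# over all model/observation pairs:
#   H = sum(conj(M_i) * O_i) / (sum(|M_i|^2) + lam)
num = sum(np.conj(np.fft.fft2(m)) * np.fft.fft2(o)
          for m, o in zip(models, observed))
den = sum(np.abs(np.fft.fft2(m))**2 for m in models)
lam = 1e-3                             # regularization; tune to the noise level
H_est = num / (den + lam)

psf_est = np.real(np.fft.ifft2(H_est))
```

Pooling over several image pairs is what makes this robust: frequencies that are weak in one model image are usually strong in another, so the denominator rarely gets dangerously small.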
Reply by ●August 11, 2008
On Aug 10, 5:07 am, j...@imaginative-software.com wrote:

Try ImageJ (public domain software), and look for the deconvolution plug-in -- this does pretty much what you are trying to do. It uses an adaptive iterative approach. Personally I find that these kinds of things don't work very well except on the example images they use to demonstrate, but ImageJ's is one of the better ones. It doesn't need you to have a PSF first.

> Can the PSF be estimated empirically

To measure the point spread function (PSF) of a camera, I proceed as follows: In the dark, get someone to hold a laser pointer shining a dot onto card at a distance far enough that the size of the dot, subtended onto your camera's pixel array, is less than the size of one pixel. For instance, suppose you have a 3.1 Mpixel camera whose field of view is 45 degrees. Then one pixel at the center subtends an angle of about 1' of arc. At 10 m, a 4 mm dot subtends an angle of about 1' of arc, so a laser dot smaller than 4 mm will be smaller than one pixel. (You use a laser so you get enough light to be seen at this distance.) Take a photo of the dot. This is a direct measure of the PSF.

You can verify this by calculation -- I have done this for many cameras and it usually works out about as expected. Another check is to take a photo with a sharp edge (black to white is best); the shape of the edge should match the shape of one half of your PSF.

You can reverse this by measuring the Modulation Transfer Function (MTF -- essentially, the spatial frequency spectrum of the PSF). Generate an image made of vertical bars whose profile is sinusoidal and whose spatial frequency increases up to, and a bit beyond, the Nyquist. Photograph this. The reduction in contrast plots out an MTF that is the Fourier transform of the PSF.

If you already have a set of 'ideal' and real images for each camera, then your PSF can be determined by de-convolving one with the other. Personally, I find that measuring the PSF with the laser dot is as good as anything, satisfyingly close to practical reality, and kind of fun too.

> Does this sort of recognition-by-reverse-deconvolution approach sound sane?

Your recognition by reverse de-convolution sounds sane, but why not transpose the problem into the frequency domain and do the recognition there? The features may be more recognizable, and you will be freed from the constraints of superimposing the image patterns.

You can estimate the PSF quite easily, because most cameras just have a blobby blur of two or three pixels' width. Make sure to disable any auto sharpening if you can (although strictly, that sort of thing also counts towards the PSF anyway).

> Is there a particular book or body of research that you think would be most helpful to me?

Well, at the risk of shameless plugging, I teach all about this in my short training class Image Processing for Consumer Electronics, and we do this PSF measurement and deconvolution as a practical exercise. http://www.bores.com/courses/ipce_ipce.htm We usually do these classes on demand for consumer electronics and mobile phone companies, but I can always ask if you could 'piggy back' onto one of those if you wanted to. Email me -- chris@bores.com -- and I'll see what I can do.

Chris
=========================
Chris Bore
BORES Signal Processing
www.bores.com
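The frequency-domain recognition idea can be sketched as a matched filter: blur the known *template* with the measured PSF, then locate it in the blurry image by FFT cross-correlation. All data below is synthetic, and the circular correlation ignores border effects:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 128

# Toy template and a scene containing the same shape, shifted by (50, 30).
pattern = np.zeros((n, n))
pattern[10:20, 10:30] = 1.0
scene = np.zeros((n, n))
scene[60:70, 40:60] = 1.0
scene += 0.05 * rng.standard_normal((n, n))

# A synthetic Gaussian PSF standing in for the measured one.
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
H = np.fft.fft2(np.fft.ifftshift(psf / psf.sum()))

blurry = np.real(np.fft.ifft2(np.fft.fft2(scene) * H))
template = np.real(np.fft.ifft2(np.fft.fft2(pattern) * H))   # pre-blurred template

# Circular cross-correlation via the FFT; the peak gives the shift.
corr = np.real(np.fft.ifft2(np.fft.fft2(blurry) * np.conj(np.fft.fft2(template))))
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)       # (50, 30)
```

Because the template is blurred with the same PSF as the scene, no deconvolution is needed at all, which is the appeal of doing recognition in the frequency domain.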
Reply by ●August 11, 2008
On 11 Aug, 10:35, Chris Bore <chris.b...@gmail.com> wrote:

> To measure the point spread function (PSF) of a camera, I proceed as follows: In the dark, get someone to hold a laser pointer shining a dot onto card at a distance...

A very nifty trick! Just be aware that you need a reasonably good quality laser pointer to do this, or you measure the PSF of the pointer.

I have two pointers: one red, which is intended as an aid during presentations, and one green, to point at astronomical objects in the night sky. The red pointer gives a rather large light spot because it needs to be seen from the back of a large auditorium. The green pointer is not at all 'pointed' but has a very bright spot surrounded by a fainter 'halo'.

As long as one is aware of these things and doesn't try to invert for components that are caused by imperfections in the laser pointer, this approach ought to be very useful.

Come to think of it, that's why you said 'point at a card' and not 'point at a wall,' right? The main lobe hits and reflects from the card and there is nothing to reflect the sidelobes?

Rune
Reply by ●August 11, 2008
On Aug 11, 10:07 am, Rune Allnor <all...@tele.ntnu.no> wrote:

> Coming to think of it, that's why you said 'point at a card' and not 'point at a wall,' right? The main lobe hits and reflects from the card and there is nothing to reflect the sidelobes?

Yeah. The other point to note is that the laser pointer is held close to the card so the spot is small. People get confused and try to shine it from the camera, which gives a big dot. So you need two people to do the experiment (it is more fun if you are of opposite sex, and very friendly, because it is best done in total darkness).
Reply by ●August 11, 2008
aruzinsky wrote:

>> Does this sort of recognition-by-reverse-deconvolution approach sound sane?

Yes. It is one of the ways that regularised deconvolutions are done. Essentially, take a model of the true outside world, convolve it with the idealised point spread function of your imaging system, and compare it with the actual observed data. The trick is in knowing the PSF and choosing a suitable regularising function.

>> What sort of parameters do I need to know about my camera and the images to compute the PSF?

To compute it accurately -- just about everything, including the geometry of the optics, the aperture used, and the properties of the glass. But you might get away with a crude disk.

>> Can the PSF be estimated empirically, perhaps through some sort of regression analysis?

If there is a point specular reflection in the blurred image you can guesstimate the PSF from that.

>> Is there a particular book or body of research that you think would be most helpful to me?

Try looking for "blind deconvolution"; that might do what you want.

> Your PSF will have the same shape as the iris aperture of your camera.

Provided that the image is not diffraction limited.

> The PSF can be estimated from an image of a point source of light such as a star. Maybe a laser pointer will work, but I never tried it.

Any specular reflection in the original image will give a rough idea of the PSF at that particular depth of field. Unless everything is the same distance away, the PSF varies with subject distance too. Astronomers have it easy, since their subjects are effectively at infinity.

Regards,
Martin Brown
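For a fixed set of candidate patterns, the forward-model route Martin describes (convolve a model of the world with the PSF and compare against the data) reduces to picking the candidate with the smallest residual. A synthetic numpy sketch, with an assumed-known Gaussian PSF and random images standing in for the real patterns:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64

# An assumed-known PSF (synthetic Gaussian stand-in).
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
psf = np.exp(-(x**2 + y**2) / (2 * 3.0**2))
H = np.fft.fft2(np.fft.ifftshift(psf / psf.sum()))

def blur(img):
    """Circular convolution with the PSF via the FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

# Three candidate patterns; the observation is candidate 1 plus noise.
candidates = [rng.random((n, n)) for _ in range(3)]
observed = blur(candidates[1]) + 0.02 * rng.standard_normal((n, n))

# Compare observed data against each blurred candidate model.
residuals = [np.sum((observed - blur(c))**2) for c in candidates]
best = int(np.argmin(residuals))                 # -> 1
```

No deconvolution is performed at all, which sidesteps the ill-posedness; the regularising function Martin mentions only enters when one also wants to *reconstruct* the scene rather than classify it.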
Reply by ●August 11, 2008
jeff@imaginative-software.com wrote:

> However, in this case, we can assume that we know exactly which pattern appears in each image. And that's why I'm looking for some way to estimate the PSF by comparing the sample images with the model signals.

There are several things not so clear in your problem statement:

Is the problem to identify the image content or to identify the PSF from any given image? I mean, is the PSF always going to be the same, and is your intent to identify it once in advance, or are you trying to figure out how to extract it from the image alone?

Are the patterns in the image always in the same position and orientation? The "mathematically described" statement suggests they might be.

And also, are your "extremely blurry images" due to the camera being out of focus, or because the object is beyond the limits of the camera's resolution?

-jim






