How does one "determine" the point spread function?

Started by Jeffrey Thalhammer August 10, 2008
Hello-

Disclaimer: I have no background in optics or image processing, so I
apologize in advance for abusing any terminology. I just discovered
this mailing list via Google, and I hope that some of you might be
able to help point me in the right direction. Here goes...

I'm working on a software project that involves recognizing a certain
pattern in extremely blurry images. I have experimented with standard
deconvolution techniques in Photoshop, GIMP, ImageMagick, etc., but
none have sharpened the image enough for the recognition algorithm to
work. So instead, I thought I might try it the other way around --
that is, distort the pattern that I'm looking for and then compare the
distorted pattern to the actual image. I already have a series of
specimen images taken with a particular camera, and for each one, I
can precisely define what the image signal is supposed to be. So I
assume that the next step is to determine the point-spread function or
convolution kernel that would best translate my model signal into the
specimen images.

So my questions are:

Does this sort of recognition-by-reverse-deconvolution approach sound
sane?

What sort of parameters do I need to know about my camera and the
images to compute the PSF?

Can the PSF be estimated empirically, perhaps through some sort of
regression analysis?

Is there a particular book or body of research that you think would be
most helpful to me?

Thanks for your time.

-Jeff

Well, the general idea is to assume a model, for example:

J = I * P

Where J is the output image, I is the original (unblurred) image, P is the
point spread function, and * is the convolution operation. Then you need to
solve for P. By the convolution theorem:

I * P = F^-1(F(I)F(P))

Where F is the Fourier transform operator. Then:

F(J) = F(I)F(P)

And:

P = F^-1(F(J)/F(I))
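
In numpy terms, that last formula might look something like the sketch
below (the function and variable names are mine, not anything standard;
model is the known unblurred image I and observed is the blurry specimen
J, both 2-D float arrays of the same shape):

    import numpy as np

    def naive_psf_estimate(model, observed, eps=1e-3):
        # Textbook inverse filter: P = F^-1(F(J)/F(I)).
        # model: known unblurred image I; observed: blurry specimen J.
        I = np.fft.fft2(model)
        J = np.fft.fft2(observed)
        # Wherever |F(I)| is near zero the division amplifies noise
        # without bound; flooring the denominator is a crude guard,
        # not a fix.
        I_safe = np.where(np.abs(I) < eps, eps, I)
        return np.real(np.fft.ifft2(J / I_safe))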

Note, though, that if you knew P and J but not I, solving for I would take
exactly the same form. Mathematically, solving for the original image given
the point spread function and solving for the point spread function given
the original image are the same problem. So whichever way you look at it,
it's still deconvolution, and naive deconvolution as I've shown here is a
really dumb idea (it's numerically unstable and exquisitely sensitive to
noise). You need to do it in a smarter way.

As for the regression idea you had, it could work. Convolution is a linear
transformation, so you can write it out as a matrix equation. Let p be the
point spread function vector, j the output image vector, and M the matrix
corresponding to the original image, written as a convolution matrix. Then:

j = Mp
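
To make that concrete, here is a quick 1-D sketch using scipy's
convolution_matrix helper (the variable names are mine; the 2-D case is
the same idea on flattened images, just with a much larger,
block-structured M):

    import numpy as np
    from scipy.linalg import convolution_matrix

    # M is built from the known signal i, so j = M @ p is linear in
    # the unknown PSF p.
    rng = np.random.default_rng(0)
    i = rng.standard_normal(64)           # stand-in for the model signal
    p_true = np.array([0.25, 0.5, 0.25])  # a small blur kernel
    M = convolution_matrix(i, len(p_true))  # mode='full' by default
    j = M @ p_true
    assert np.allclose(j, np.convolve(i, p_true))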

Assume Gaussian noise:

j = Mp + epsilon

Where epsilon ~ N(0, I) is a vector of i.i.d. unit-variance Gaussian noise.
Then:

p(p|M,j) ∝ p(j|M,p) p(p)

By Bayes' theorem; the dropped factor 1/p(j) is just a normalizing
constant. But p(j|M,p) ∝ exp(-||j - Mp||^2 / 2) for Gaussian noise. Taking
logs (up to an additive constant):

log p(p|M,j) = log p(j|M,p) + log p(p)

Then you can solve for the maximum a posteriori (MAP) estimate, which is
the argmax over p of log p(p|M,j):

MAP = the p which maximizes -||j - Mp||^2 / 2 + log p(p)

Or:

MAP = the p which minimizes ||j - Mp||^2 / 2 - log p(p)

In other words, "penalized least-squares". As to p(p), the prior, it could
be flat, which would reduce to ordinary least squares, or it could be a
Gaussian density, in which case you would get various forms of ridge
regression (or Tikhonov regularization), or it could be a Laplacian prior,
which is often used in compressed sensing and wavelet denoising, etc. etc.
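
For the Gaussian-prior case the MAP estimate even has a closed form, the
familiar ridge solution (M^T M + lambda*Id)^-1 M^T j, where Id is the
identity matrix. A sketch in the same 1-D setting as above (lambda is a
noise-to-prior weight you would have to tune, e.g. by cross-validation)
might be:

    import numpy as np
    from scipy.linalg import convolution_matrix

    def ridge_psf(i, j, kernel_len, lam=1e-2):
        # Minimizes ||j - M p||^2 / 2 + lam * ||p||^2 / 2, i.e. a
        # zero-mean Gaussian prior on p; the closed form is
        # (M^T M + lam * Id)^-1 M^T j.
        M = convolution_matrix(i, kernel_len)  # i: known model signal
        A = M.T @ M + lam * np.eye(kernel_len)
        return np.linalg.solve(A, M.T @ j)

For real images you would not form M^T M explicitly; an iterative solver
(conjugate gradients, say) that only needs products by M and M^T, i.e.
convolutions with the known image, is the usual route.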

In any case, you can pick up any book on deconvolution, or, if you want to
learn about Bayesian estimation, get MacKay's book (Information Theory,
Inference, and Learning Algorithms).

Patrick
