image out-of-focus blur identification

Started by Ling Chen November 30, 2011
On Dec 1, 5:40 pm, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
> Clay <c...@claysturner.com> wrote:
> (snip, I wrote)
>
>>> It will also mostly ignore light of the wrong wavelength,
>>> generating what is called a white-light hologram.
>>> Last I remember (when I took an optics lab class), you make the
>>> hologram with red (HeNe) light, but, due to shrinkage of the
>>> emulsion on processing, the image is green.
>
>> The white-light hologram (the volume type and not the rainbow type)
>> requires the object and reference beams to approach the film from
>> opposite sides to create the standing-wave (Bragg) pattern in the
>> film. Thus you get a "reflection" hologram that is quite wavelength
>> selective. If the volume hologram is made with both beams
>> approaching from the same side, you don't get a reflection
>> frequency-selective enough to make the image very viewable in
>> white light.
>
> I remember making them that way, but I forgot the math by now.
> You send the beam down through the plate, where it reflects
> off the object. We did them on 2 in square plates.
>
> Some time ago, I saw some big (8 in by 10 in, I believe) white-light
> reflection holograms of microscopes. You could then look into the
> microscope and see an object through the microscope.
>
> (snip of more useful information about white-light holograms.)
>
> -- glen
Glen, I've seen the microscope hologram - it is a pretty famous one. I
recall it was a double exposure: one of the microscope and the other of
the magnified image that you would see in the microscope. The magnified
image was projected through two small pupils corresponding to where one
places one's eyes. The way you know it was not originally recorded as a
single shot is that the magnified image is on the plane of the hologram -
hence being very sharp even in white light. A real microscope's image is
focused at infinity and would not work well in a white-light hologram.

I spent a lot of time back in the 80s making holograms - it was a fun
hobby and I did some research into the field. Too bad one can't buy the
film any more. I used Agfa 8E75HD 4 by 5 inch sheets of film sandwiched
between two plates of glass, with a couple of drops of xylene as an
index-matching fluid.

I wrote a holocad program for designing hologram recording setups where
the interference efficiency at the film plate was optimized. This goes
beyond the simple matching of beam path lengths. Since I was using
randomly polarized lasers (they cost a lot less at the time), I had to
worry about elliptical polarization caused by the mirrors. I still
recall the index of refraction for Al at 632.8 nm being 1 + 4.45i. I had
students measure this by setting up labs where one starts with
circularly polarized light and finds the angle where a single reflection
converts it to linearly polarized - easily tested with a hand-held
polarizer. Then you can cascade two mirrors and find the angles where
two bounces convert circular to linear, and so on, up to five or six
reflections. Then the index of refraction in Fresnel's equations is
adjusted until the results (relative phase shift between plane parallel
and plane perpendicular) match the data. I used the ellipsometry in my
holocad to balance the polarization in all beam paths from the laser to
the film. This worked very well.

Remembering the old days,

Clay
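Clay's mirror-bounce measurement is easy to check numerically. The
sketch below (not from the thread; the sign convention and the angle
grid are my own choices) plugs his quoted complex index into the
standard Fresnel reflection coefficients and searches for the incidence
angle at which a single aluminum bounce shifts the s/p relative phase
by 90 degrees, i.e. where one reflection turns circular polarization
into linear:

    import numpy as np

    N = 1 + 4.45j                # complex index for Al at 632.8 nm (Clay's value)
    theta = np.deg2rad(np.linspace(0.1, 89.9, 10000))
    root = np.sqrt(N**2 - np.sin(theta)**2)
    r_s = (np.cos(theta) - root) / (np.cos(theta) + root)
    r_p = (N**2 * np.cos(theta) - root) / (N**2 * np.cos(theta) + root)
    delta = np.angle(r_p / r_s, deg=True)  # relative s/p phase shift per bounce

    # A single bounce converts circular to linear where the shift is +/-90 deg.
    i = np.argmin(np.abs(np.abs(delta) - 90.0))
    print("one-bounce circular-to-linear angle: %.1f degrees" % np.rad2deg(theta[i]))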
jim <"sjedgingN0Sp"@m@mwt,net> wrote:

>> Suppose an image is out-of-focus blurred. We have the original image
>> and its blurred version. What is the best approach to estimate the
>> blur function? That is, given y(m), x(m), how do we estimate the
>> point spread function (or impulse response function) h(m):
>> y(m) = h(m)*x(m) + n(m)
>> where n(m) is some kind of background noise.
I am not sure what m is, but in any case the blurring can be considered
a convolution, which is why the subject is called deconvolution. The
math says that convolution in one domain is equivalent to multiplication
in the Fourier transform domain.

For linear deconvolution, you transform, divide by the transform of the
PSF, and transform back. Sounds nice and simple, except for two things.

1) What do you do where the transform of the PSF is zero (or very
   small)?

2) What about noise?

At any frequency where the PSF transform is small and the noise is not
small, the division produces a very large result, or sometimes a very
negative result.
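As a concrete illustration of Glen's two problems, here is a minimal
2-D sketch in Python (my own; the eps floor is a crude stand-in for a
proper Wiener filter, not anything proposed in the thread):

    import numpy as np

    def deconvolve(y, h, eps=1e-3):
        # y: blurred image, h: PSF (both 2-D); circular convolution is
        # assumed, with the PSF wrapped so its center sits at index [0, 0].
        Y = np.fft.fft2(y)
        H = np.fft.fft2(h, s=y.shape)
        # Plain Y/H blows up wherever H is near zero; the eps floor tames
        # the division at the cost of leaving those frequencies
        # under-restored.
        X = Y * np.conj(H) / (np.abs(H)**2 + eps)
        return np.real(np.fft.ifft2(X))

With eps = 0 this reduces to the naive division; any noise at a
near-zero of H then dominates the output.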
> If this is a strictly digital problem then you should be
> able to do this without the n(m) term. Since you say you have
> the original and the blurred version, this sounds like a digital
> problem with no involvement of any analog operations.
Well, the first thing is to determine the PSF (or its transform) as well
as possible from this data. If one can do that digitally, then one
should also be able to do the inverse on the same data.

It may or may not be useful for any other image, as the PSF may not be
exactly the same. If you can compute the PSF accurately, as they did for
Hubble, then you should get good results, assuming the noise is not too
big. The first Hubble pictures were from bright (low-noise) sources for
that reason.
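Since the poster has both the original x and the blurred y, a
per-frequency least-squares estimate of the PSF can be sketched along
the same lines (again my own illustration, with the same kind of guard
where the original image has little energy at a given frequency):

    import numpy as np

    def estimate_psf(x, y, eps=1e-3):
        # x: original image, y: blurred image (same shape, 2-D).
        X = np.fft.fft2(x)
        Y = np.fft.fft2(y)
        # Least-squares estimate of H = Y/X at each frequency, guarded
        # where the original image itself has little energy.
        H = Y * np.conj(X) / (np.abs(X)**2 + eps)
        h = np.real(np.fft.ifft2(H))
        return np.fft.fftshift(h)  # shift so the kernel peak sits mid-array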
> If the image is blurred with a digital filter of the typical sort
> used to blur images (e.g., Gaussian blur) then the frequency response
> of the blurring function can be decomposed into entirely positive
> cosines, and the inverse function that will unblur the image can
> be precisely defined. The point is that this type of blurring filter
> preserves all frequency and phase information that the digital image
> can support. Higher frequencies are attenuated but not completely
> eliminated.
In the past, one might have wanted to do this for photographic film
images, which have the non-linearities of the film response to consider.
For digital images, say blurred due to focus errors, you only have the
CCD and A/D converter between the image and you.
> The only limitation to accuracy is the precision of the media
> used to store the inverse coefficients and the blurred image. If
> you put the blurred version of the image back into 8-bit data
> containers, that is going to seriously limit recovery of the
> original. If you store the blurred image and the inverse filter as
> double-precision floats, you should be able to recover the original
> so that the eye can't tell the difference.
As long as there are no frequencies where the PSF transform is zero.
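jim's precision point is easy to demonstrate in one dimension. In this
sketch (mine; the Gaussian blur's transform gets tiny at high frequency
but never reaches zero), exact division recovers a double-precision
signal almost perfectly, while quantizing the blurred signal to 8 bits
first ruins the recovery:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.random(256)                    # stand-in for one image row
    t = np.arange(-8, 9)
    h = np.exp(-t**2 / 8.0)
    h /= h.sum()                           # Gaussian blur, sigma = 2 samples
    H = np.fft.fft(h, len(x))              # tiny at high frequency, never zero
    y = np.real(np.fft.ifft(np.fft.fft(x) * H))

    y8 = np.round(y * 255) / 255           # blurred signal stored in 8 bits

    for yy, tag in ((y, "double"), (y8, "8-bit")):
        xr = np.real(np.fft.ifft(np.fft.fft(yy) / H))
        print(tag, "max recovery error:", np.abs(xr - x).max())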
> Another way to recover the original from the blurred image is to use
> an unsharp filter. If the original blurring filter is H[x,y], then
> adding an image that was created by applying 1-H[x,y] to the original
> will produce the original.
An unsharp filter has to greatly increase the higher frequencies, as
they are the ones that get reduced the most. It will also increase
high-frequency noise. The constraint that the result cannot be negative
allows for some restriction on the reconstruction.
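That boost shows up directly in the frequency response. For the common
unsharp-mask form sharpened = y + k*(y - blur(y)) (slightly different
from jim's construction above), the net gain is 1 + k*(1 - H(f)), which
grows exactly where the blur response H is small. A toy sketch with
made-up numbers:

    import numpy as np

    f = np.linspace(0.0, 0.5, 6)   # normalized spatial frequency
    H = np.exp(-(f / 0.15)**2)     # toy Gaussian blur response (assumed)
    k = 1.0                        # unsharp "amount" (illustrative)
    G = 1 + k * (1 - H)            # net gain of y + k*(y - blur(y))
    print(np.round(G, 2))          # climbs toward 1 + k at high frequency,
                                   # boosting noise along with detail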
> Again, all of this only applies if you are using digital processes
> with enough precision, but it can work to some extent for images
> where the blurring occurred outside the digital domain. However, if
> the image has not been digitally blurred, I can't see how you would
> have the unblurred version.
Look at the early Hubble images. They did some pretty amazing deconvolution on them, until the repair was installed. -- glen
glen herrmannsfeldt wrote:
> (snip)
>
> For linear deconvolution, you transform, divide by the transform
> of the PSF, and transform back. Sounds nice and simple, except for
> two things.
>
> 1) What do you do where the transform of the PSF is zero (or very
>    small)?
For a typical digital image blur there are no frequencies where the
transform is exactly zero.
> 2) What about noise?
There isn't any noise (except quantization noise) in a typical digital
image blur.

-jim
On Dec 2, 3:07 pm, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
> (snip)
>
> 1) What do you do where the transform of the PSF is zero (or very
>    small)?
>
> 2) What about noise?
>
> At any frequency where the PSF transform is small and the noise is
> not small, the division produces a very large result, or sometimes
> a very negative result.
>
> -- glen
Actually - deconvolution is more problematic than that. If you consider
a general filter of the form H(z) which has zeros outside the unit
circle, then the inverse has poles outside the unit circle. These poles
have two different interpretations depending on where the region of
convergence (ROC) is chosen for the Z-transform. The poles can be
treated as stable but the response is non-causal, which may or may not
be problematic for image processing. Or the response is causal but
unstable, so if you use a Fourier transform (which requires an ROC that
includes the unit circle) you end up with an unstable response.

Note: Since the original H(z) didn't have zeros directly on the unit
circle, you don't have the divide-by-zero problem in the inverse.

Cheers,
Dave
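Dave's stable-versus-causal trade-off shows up in the smallest possible
example. Take H(z) = 1 - a z^-1 with a = 1.5, so the zero sits at
z = 1.5, outside the unit circle; the two series expansions of 1/H(z)
behave as follows (a quick sketch of my own):

    import numpy as np

    a = 1.5                          # H(z) = 1 - a*z**-1 has a zero at z = 1.5
    n = np.arange(8)
    causal = a**n                    # ROC |z| > 1.5: causal expansion, blows up
    anticausal = -(1.0 / a)**(n + 1) # ROC |z| < 1.5: decays, but these taps
                                     # sit at negative time, i.e. non-causal
    print(causal)
    print(anticausal)

Only the second ROC contains the unit circle, so FFT-based division
implicitly selects the stable but non-causal inverse.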
On Fri, 2 Dec 2011 07:46:49 -0800 (PST), Clay <clay@claysturner.com> wrote:

   [...]

> I spent a lot of time back in the 80s making holograms - it was a fun
> hobby and I did some research into the field.
[...]

Clay,

I don't know if you still have any interest in the field, but I thought
I'd mention that the November/December 2011 issue of the American
Scientist journal (Sigma Xi) has an article by Sean Johnston titled
"Whatever Became of Holography?".

Frank McKenney
--
  We should take care not to make the intellect our god; it has, of
  course, powerful muscles, but no personality.  -- Albert Einstein
--
Frank McKenney, McKenney Associates
Richmond, Virginia / (804) 320-4887
Munged E-mail: frank uscore mckenney aatt mindspring ddoott com
On 4 Dec, 17:36, Dave <dspg...@netscape.net> wrote:
> On Dec 2, 3:07 pm, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
>> (snip)
>
> Actually - deconvolution is more problematic than that. If you
> consider a general filter of the form H(z) which has zeros outside
> the unit circle, then the inverse has poles outside the unit circle.
> These poles have two different interpretations depending on where
> the region of convergence (ROC) is chosen for the Z-transform. The
> poles can be treated as stable but the response is non-causal, which
> may or may not be problematic for image processing. Or the response
> is causal but unstable, so if you use a Fourier transform (which
> requires an ROC that includes the unit circle) you end up with an
> unstable response.
Very little of this has any relevance for image processing:

- In DSP, the ZT is used to describe the LTI system, which has no
  counterpart in image processing.
- Causality has no meaning in image processing. In DSP it limits a
  response to after the time of the impulse. The analogous argument in
  the spatial domain would be that a PSF lies strictly to, say, the
  right of and below the point of the impulse, which clearly is
  meaningless.
> Note: Since the original H(z) didn't have zeros directly on the unit
> circle, you don't have the divide-by-zero problem in the inverse.
That's correct, assuming

- a pole/zero description makes sense, and
- the pole/zero description is available,

neither of which is true in image processing.

Rune
On Dec 4, 12:10 pm, Frank McKenney
<fr...@far.from.the.madding.crowd.com> wrote:
> (snip)
>
> I don't know if you still have any interest in the field, but I
> thought I'd mention that the November/December 2011 issue of the
> American Scientist journal (Sigma Xi) has an article by Sean
> Johnston titled "Whatever Became of Holography?".
Frank,

Thanks for the heads up.

Clay
On Dec 4, 12:14 pm, Rune Allnor <all...@tele.ntnu.no> wrote:
> (snip)
>
> Very little of this has any relevance for image processing:
>
> - In DSP, the ZT is used to describe the LTI system, which has no
>   counterpart in image processing.
> - Causality has no meaning in image processing.
>
> (snip)
>
> Rune
Rune - Most of my background is in time-series analysis, so I would have
thought there would be some similar issues. My apologies if I was off
base.

Dave