DSPRelated.com
Forums

Unmasked Tempel 1

Started by tontoko December 10, 2006
In the following URL;

http://www.unmannedspaceflight.com/index.php?act=Attach&type=post&id=1866

the left side is the original image of Tempel 1 taken by Deep Impact
probe and
the right side is the processed image deconvoluted by Focus Corrector.

For detail of Focus Corrector, visit;

http://139.134.5.123/tiddler2/c22508/focus.htm

tontoko wrote:
> In the following URL;
>
> http://www.unmannedspaceflight.com/index.php?act=Attach&type=post&id=1866
>
> the left side is the original image of Tempel 1 taken by Deep Impact
> probe and the right side is the processed image deconvoluted by Focus
> Corrector.
>
> For detail of Focus Corrector, visit;
>
> http://139.134.5.123/tiddler2/c22508/focus.htm
Tontoko,

You just don't learn. Your "Focus Corrector" doesn't correct focus, and
the people here are smart enough to know that, even if you aren't.

Jerry
--
Engineering is the art of making what you want from things you can get.
Jerry Avins wrote:
> tontoko wrote:
>> In the following URL;
>>
>> http://www.unmannedspaceflight.com/index.php?act=Attach&type=post&id=1866
>>
>> the left side is the original image of Tempel 1 taken by Deep Impact
>> probe and the right side is the processed image deconvoluted by Focus
>> Corrector.
>>
>> For detail of Focus Corrector, visit;
>>
>> http://139.134.5.123/tiddler2/c22508/focus.htm
>
> Tontoko,
>
> You just don't learn. Your "Focus Corrector" doesn't correct focus, and
> the people here are smart enough to know that, even if you aren't.
Just to elaborate: no one disputes that the processed images look nicer
than the originals. The question is how one got there. Mask operators are
taught in image processing 101 and are well known. One of the most hotly
debated technical issues here is the comparison between fixed-point and
floating-point arithmetic. Another hot topic here is precise terminology.

So the OP basically pushed a few buttons by discussing elementary material
in terms of basic technology using imprecise terminology.

Rune

Rune Allnor wrote:

> Just to elaborate: no one disputes that the processed images
> look nicer than the originals.

Actually they look extremely nice. If you apply the simple mask operator
that you are accusing him of applying to his image, you don't get anything
even close to as nice as his result. He is obviously doing something a lot
more sophisticated than his detractors imagine. It's not just a simple
unsharp mask. What it is (or how it's implemented) is not at all clear from
his descriptions.

-jim

> The question is how one got there. Mask operators are taught in image
> processing 101 and are well known. One of the most hotly debated
> technical issues here is the comparison between fixed-point and
> floating-point arithmetic. Another hot topic here is precise terminology.
>
> So the OP basically pushed a few buttons by discussing elementary
> material in terms of basic technology using imprecise terminology.
>
> Rune
Rune Allnor wrote:

> Jerry Avins wrote:
>> tontoko wrote:
>>> In the following URL;
>>>
>>> http://www.unmannedspaceflight.com/index.php?act=Attach&type=post&id=1866
>>>
>>> the left side is the original image of Tempel 1 taken by Deep Impact
>>> probe and the right side is the processed image deconvoluted by Focus
>>> Corrector.
>>>
>>> For detail of Focus Corrector, visit;
>>>
>>> http://139.134.5.123/tiddler2/c22508/focus.htm
>>
>> Tontoko,
>>
>> You just don't learn. Your "Focus Corrector" doesn't correct focus, and
>> the people here are smart enough to know that, even if you aren't.
>
> Just to elaborate: no one disputes that the processed images look nicer
> than the originals. The question is how one got there. Mask operators are
> taught in image processing 101 and are well known. One of the most hotly
> debated technical issues here is the comparison between fixed-point and
> floating-point arithmetic. Another hot topic here is precise terminology.
>
> So the OP basically pushed a few buttons by discussing elementary
> material in terms of basic technology using imprecise terminology.
>
> Rune
I, the ultimate uninformed neophyte, was bothered by two things:

1. when challenged, started a new thread
2. the subject image was not of the same object

Credibility now ~0.
jim wrote:
> Rune Allnor wrote:
>> Just to elaborate: no one disputes that the processed images
>> look nicer than the originals.
>
> Actually they look extremely nice. If you apply the simple mask operator
> that you are accusing him of applying to his image, you don't get
> anything even close to as nice as his result. He is obviously doing
> something a lot more sophisticated than his detractors imagine. It's not
> just a simple unsharp mask. What it is (or how it's implemented) is not
> at all clear from his descriptions.
I don't know or understand what he is doing; that's why I asked the
questions in the first place, and why I followed up by asking if he could
post the originals.

The description of what he calls a "focus corrector" is, as far as I can
see, merely a mask operator implemented in floating-point arithmetic.
Either this guy is doing something other than, or more than, what he says
he does, or the fixed/floating-point issue is far more important than I,
at least, would have imagined at the outset.

Rune
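Rune's fixed- versus floating-point remark is easy to make concrete. Below is a minimal sketch, assuming Python/SciPy, a 2-D 8-bit grayscale array called image, and an illustrative 3x3 kernel; none of the names or values come from tontoko's program. It compares the same mask run with floating-point intermediates against a pipeline that rounds and clips back to 8 bits after every pass.

    import numpy as np
    from scipy.ndimage import convolve

    mask = np.full((3, 3), -1.0 / 9.0)   # illustrative 3x3 mask, centre weight 1
    mask[1, 1] = 1.0

    def passes_float(image, n):
        # keep everything in floating point between passes
        out = image.astype(float)
        for _ in range(n):
            out = convolve(out, mask, mode='reflect')
        return out

    def passes_fixed(image, n):
        # quantise back to 8 bits after every pass, as a fixed-point pipeline might
        out = np.asarray(image, dtype=np.uint8)
        for _ in range(n):
            f = convolve(out.astype(float), mask, mode='reflect')
            out = np.clip(np.rint(f), 0, 255).astype(np.uint8)
        return out

Comparing the two outputs after a few passes shows how much of the difference, if any, is down to arithmetic rather than to the mask itself.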
jim wrote:
> Rune Allnor wrote:
>> Just to elaborate: no one disputes that the processed images
>> look nicer than the originals.
>
> Actually they look extremely nice. If you apply the simple mask operator
> that you are accusing him of applying to his image, you don't get
> anything even close to as nice as his result. He is obviously doing
> something a lot more sophisticated than his detractors imagine. It's not
> just a simple unsharp mask. What it is (or how it's implemented) is not
> at all clear from his descriptions.
The mask he claims to use is clear from his description. IIRC:

   -1/9 -1/9 -1/9                                       -1 -1 -1
   -1/9   1  -1/9   which he claims is superior to      -1  9 -1
   -1/9 -1/9 -1/9                                       -1 -1 -1

in some (to me) magical way.

Those are sharpening masks, not focus-correction masks. Defocusing amounts
to convolving with a blur pattern, usually circular and not far from
Gaussian. For any given blur profile, there exists a mask that will
deconvolve the blurred image to produce a sharp one. As with any other
deconvolution, greed usually results in instability. Calculated deblur
masks are run once, without iteration.

The proof of a real deblurrer consists of three photos. One is a sharp
image of a resolution chart. The second is the same image taken out of
focus. The third is the second image processed. The more it resembles the
first, the better the technique. It is usual to determine the blur profile
by examining the first two, but it can also be measured directly from the
image of an isolated out-of-focus point.

Jerry
--
Engineering is the art of making what you want from things you can get.
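For concreteness, here is a minimal sketch of applying the two masks Jerry quotes, assuming Python/SciPy and a 2-D float image array; it is not tontoko's code. One incidental numerical difference worth noting: the left-hand kernel's weights sum to 1/9, while the right-hand kernel's weights sum to 1, so the left one also scales the overall level down.

    import numpy as np
    from scipy.ndimage import convolve

    mask_left = np.full((3, 3), -1.0 / 9.0)   # the kernel he describes; weights sum to 1/9
    mask_left[1, 1] = 1.0

    mask_right = np.full((3, 3), -1.0)        # the textbook sharpening kernel; weights sum to 1
    mask_right[1, 1] = 9.0

    def apply_mask(image, mask):
        # one pass of a sharpening mask over a 2-D grayscale array
        return convolve(image.astype(float), mask, mode='reflect')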

Jerry Avins wrote:
> jim wrote:
>> Rune Allnor wrote:
>>> Just to elaborate: no one disputes that the processed images
>>> look nicer than the originals.
>>
>> Actually they look extremely nice. If you apply the simple mask operator
>> that you are accusing him of applying to his image, you don't get
>> anything even close to as nice as his result. He is obviously doing
>> something a lot more sophisticated than his detractors imagine. It's not
>> just a simple unsharp mask. What it is (or how it's implemented) is not
>> at all clear from his descriptions.
>
> The mask he claims to use is clear from his description. IIRC:
>
>    -1/9 -1/9 -1/9                                       -1 -1 -1
>    -1/9   1  -1/9   which he claims is superior to      -1  9 -1
>    -1/9 -1/9 -1/9                                       -1 -1 -1
>
> in some (to me) magical way.

As I said, that standard unsharpen mask you show on the left would not
produce results as good as the ones he is showing. Assuming his algorithm
did produce those images, then obviously his programming skills are better
than his explanation skills.

> Those are sharpening masks, not focus-correction masks.

And the difference in meaning is what?

Subtracting a blurred version from the original is more or less the same as
the darkroom technique photographers have used for a long time to deblur an
image. I'm guessing what he is claiming as unique is his method of arriving
at the blurred version. The blur part of the filter is obviously a little
better than the box-car blur you show above.

> Defocusing amounts to convolving with a blur pattern, usually circular
> and not far from Gaussian. For any given blur profile, there exists a
> mask that will deconvolve the blurred image to produce a sharp one. As
> with any other deconvolution, greed usually results in instability.
> Calculated deblur masks are run once, without iteration.

And the difference in meaning between something calculated and something
arrived at by iteration is what?

> The proof of a real deblurrer consists of three photos. One is a sharp
> image of a resolution chart. The second is the same image taken out of
> focus. The third is the second image processed. The more it resembles the
> first, the better the technique.

Well, you could ask - he might offer that evidence. But you are talking
about only one out-of-focus state. He claims his method is adjustable and
can be used for a variety of out-of-focus states. And if you don't have the
original as a reference (most people don't), you still might want to deblur
an image.

-jim

> It is usual to determine the blur profile by examining the first two, but
> it can also be measured directly from the image of an isolated
> out-of-focus point.
jim wrote:
> Jerry Avins wrote:
>> jim wrote:
>>> Rune Allnor wrote:
>>>> Just to elaborate: no one disputes that the processed images
>>>> look nicer than the originals.
>>>
>>> Actually they look extremely nice. If you apply the simple mask
>>> operator that you are accusing him of applying to his image, you don't
>>> get anything even close to as nice as his result. He is obviously doing
>>> something a lot more sophisticated than his detractors imagine. It's
>>> not just a simple unsharp mask. What it is (or how it's implemented) is
>>> not at all clear from his descriptions.
>>
>> The mask he claims to use is clear from his description. IIRC:
>>
>>    -1/9 -1/9 -1/9                                       -1 -1 -1
>>    -1/9   1  -1/9   which he claims is superior to      -1  9 -1
>>    -1/9 -1/9 -1/9                                       -1 -1 -1
>>
>> in some (to me) magical way.
>
> As I said, that standard unsharpen mask you show on the left would not
> produce results as good as the ones he is showing. Assuming his algorithm
> did produce those images, then obviously his programming skills are
> better than his explanation skills.
That's a sharpening mask, not unsharpening. A typo, I presume?
>> Those are sharpening masks, not focus-correction masks.
>
> And the difference in meaning is what?
A sharpening mask increases contrast. A focus-correction mask undoes blur
from a known cause. Recall the poor images yielded by Hubble's misshapen
mirror. Because the actual shape was known, it was possible to calculate a
filter to reverse its effect and produce strikingly good final images.
Merely turning up the contrast would not have done it.
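The distinction can be sketched in a few lines: if the blur (the point-spread function) is known, a deconvolution filter can be calculated to undo it, which is different in kind from boosting contrast. The sketch below, assuming Python/NumPy, is an ordinary Wiener-style inverse filter written only for illustration; it is neither the Hubble correction nor the Focus Corrector, and psf and the regularisation constant k are assumed inputs.

    import numpy as np

    def wiener_deblur(blurred, psf, k=0.01):
        # Deconvolve 'blurred' by a known PSF; k keeps the inverse from blowing up
        # where the blur's transfer function is near zero.
        padded = np.zeros_like(blurred, dtype=float)
        padded[:psf.shape[0], :psf.shape[1]] = psf
        # centre the PSF on the origin so the result is not translated
        padded = np.roll(padded, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
        H = np.fft.fft2(padded)
        G = np.fft.fft2(blurred.astype(float))
        F_hat = G * np.conj(H) / (np.abs(H) ** 2 + k)
        return np.real(np.fft.ifft2(F_hat))

With k made too small, the "greed" Jerry mentions shows up immediately as amplified noise and ringing.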
> Subtracting a blurred version from the original is more or less the same
> as the darkroom technique photographers have used for a long time to
> deblur an image. I'm guessing what he is claiming as unique is his method
> of arriving at the blurred version. The blur part of the filter is
> obviously a little better than the box-car blur you show above.
What I show above is what he says he uses. See http://139.134.5.123/tiddler2/c22508/focus.htm
>> Defocusing amounts to convolving with a blur pattern, usually circular
>> and not far from Gaussian. For any given blur profile, there exists a
>> mask that will deconvolve the blurred image to produce a sharp one. As
>> with any other deconvolution, greed usually results in instability.
>> Calculated deblur masks are run once, without iteration.
>
> And the difference in meaning between something calculated and something
> arrived at by iteration is what?
The iteration I referred to is described by Tontoko. He passes the filter
kernel over the image repeatedly. Of course, that is equivalent to
convolving the kernel with itself once per additional pass and applying the
result in a single pass, producing a larger (and closer to round) kernel,
but he doesn't seem to realize that.
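Jerry's equivalence is easy to check numerically. A short sketch, assuming Python/SciPy and the 3x3 kernel discussed above; the equality holds exactly only away from the image borders, which is where the edge garbage mentioned later comes from.

    import numpy as np
    from scipy.signal import convolve2d

    kernel = np.full((3, 3), -1.0 / 9.0)
    kernel[1, 1] = 1.0

    # Convolving the kernel with itself once per extra pass...
    effective = kernel.copy()
    for _ in range(7):
        effective = convolve2d(effective, kernel, mode='full')
    print(effective.shape)   # (17, 17): eight passes of a 3x3 act like one 17x17 pass

    # ...so, away from the borders, eight passes over an image match one pass
    # of 'effective':
    def eight_passes(image):
        out = image.astype(float)
        for _ in range(8):
            out = convolve2d(out, kernel, mode='same')
        return out

    def one_big_pass(image):
        return convolve2d(image.astype(float), effective, mode='same')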
>> The proof of a real deblurrer consists of three photos. One is a sharp
>> image of a resolution chart. The second is the same image taken out of
>> focus. The third is the second image processed. The more it resembles
>> the first, the better the technique.
>
> Well, you could ask - he might offer that evidence. But you are talking
> about only one out-of-focus state. He claims his method is adjustable and
> can be used for a variety of out-of-focus states. And if you don't have
> the original as a reference (most people don't), you still might want to
> deblur an image.
In that case, one assumes blur profiles and tries them out, choosing the
one that yields the best result. (The kind of iteration you had in mind is
appropriate here. Call it binary search.) The general nature of the profile
is known /a priori/ as the effect of a maladjusted circular lens
intrinsically able to make a good image. Different parts of an image
benefit most from different blur profiles, depending on distance. With
enough work, one can vastly improve depth of field. With satellite
surveillance photos, the blur removed is primarily a result of the
diffraction limit, and a very different blur profile is appropriate.
(Mexican hat, anyone?)

...

There is an anomaly in Tontoko's images. Each pass of the filter kernel
leaves garbage at the edges of the image, yet none is visible in the
sharpened images. Eight passes with a 3x3 kernel are equivalent to a single
pass with a 17x17 kernel, and there should be a degraded border, eight
pixels wide, all the way around. It is reasonable to suppose that the
squashed pixels are cropped out of both the original and the sharpened
images, but if so, he should have said so.

Jerry
--
Engineering is the art of making what you want from things you can get.
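A rough sketch of "assume blur profiles and try them out", assuming Python/SciPy: build candidate Gaussian PSFs of different widths, deconvolve with each, and keep whichever result scores best. The gradient-energy score is only a stand-in for the human judgement Jerry describes, and nothing here is a reconstruction of tontoko's method.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def deblur_assuming_sigma(blurred, sigma, k=0.01):
        # Wiener-style deconvolution assuming an isotropic Gaussian blur of width sigma.
        impulse = np.zeros_like(blurred, dtype=float)
        impulse[0, 0] = 1.0
        H = np.fft.fft2(gaussian_filter(impulse, sigma, mode='wrap'))  # assumed transfer function
        G = np.fft.fft2(blurred.astype(float))
        return np.real(np.fft.ifft2(G * np.conj(H) / (np.abs(H) ** 2 + k)))

    def sharpness(img):
        gy, gx = np.gradient(img)        # crude focus measure: mean gradient energy
        return float(np.mean(gx ** 2 + gy ** 2))

    def best_deblur(blurred, sigmas=(0.5, 1.0, 1.5, 2.0, 3.0)):
        candidates = [deblur_assuming_sigma(blurred, s) for s in sigmas]
        return candidates[int(np.argmax([sharpness(c) for c in candidates]))]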

Jerry Avins wrote:


> That's a sharpening mask, not unsharpening. A typo, I presume?

No, I meant what I said. An unsharp filter is when you subtract a blurred
version from the original:

    g(x,y) = f(x,y) - f'(x,y)

where f'(x,y) is a blurred version of f(x,y).

The smoothing filter here is just a box-car filter, which is a bit crude.
But if you apply the smoothing (the box-car part) with many iterations it
becomes more Gaussian-like. And if you give the user control over the
amount of the smoothed part that gets subtracted from the original, you
have an interactive de-blurring tool that could produce results like the
ones he is showing.

This is a standard technique for improving blurry images that has been used
by photographers since before computers were around. I'm guessing what he
is claiming as unique is the method of creating the blurred version,
f'(x,y).

-jim
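What jim describes can be sketched in a few lines, assuming Python/SciPy and a 2-D grayscale array. The parameter names are illustrative, and this is one conventional reading of an adjustable unsharp mask built from an iterated box-car blur, not a reconstruction of the Focus Corrector.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def unsharp_mask(image, passes=3, amount=1.0):
        # g = f + amount * (f - f'), where f' is an iterated box-car blur of f.
        f = image.astype(float)
        blurred = f
        for _ in range(passes):
            blurred = uniform_filter(blurred, size=3, mode='reflect')  # box-car; tends toward Gaussian
        return f + amount * (f - blurred)

Raising passes widens the blur (and so the scale of detail being boosted); raising amount controls how strongly that detail is fed back, which is the interactive knob jim refers to.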