Reply by walala October 20, 2003
"Olli Niemitalo" <o@iki.fi> wrote in message
news:Pine.GSO.4.58.0310200558460.289@paju.oulu.fi...
> On Sun, 19 Oct 2003, walala wrote:
>
> > [...] I am doing "floating-point" search and trial & error ... so the
> > search space is huge. I just try to search for a few days...
>
> If you want to make this time much shorter, try evolutionary algorithms.
> We have written a small c++ library for that:
>
> http://www.iki.fi/o/opti/ok
>
> -olli
Olli,

Thanks a lot for the pointer... I have looked at the program but have a
problem: basically I am doing everything in Matlab, since I need the easy
manipulation of image data, the easy scripting, the filtering operations,
etc. And your program is in C++...

I tried to compile my Matlab script into a standalone executable and C++
source code and then combine it with yours... Before I could see any
results, it gave me a lot of errors during compilation. Also, the C++
source code generated by Matlab is quite weird and hard to understand, so
I wonder how to combine it with yours.

I suggest you make a Matlab version, or at least a Matlab interface for
your optimization code. Otherwise people may have to go through a lot of
trouble before they can try it.

Thanks a lot for your work!

-Walala
Reply by Rune Allnor October 20, 2003
"walala" <mizhael@yahoo.com> wrote in message news:<bmvju8$dgl$1@mozo.cc.purdue.edu>...
> > Wiener filters come to mind... again, there are plenty of filters and
> > tricks available for image processing. Whether you can do stuff
> > completely automatic or need a human operator in the loop depends on
> > the application.
> >
> > Rune
>
> Rune,
>
> You said it... after a lot of searching, I am tired of trial and error.
> That is not something we can write a report from. I will try to apply a
> Wiener filter... just a headache about how to get the Rxx of the input
> image... which is the covariance matrix of the input? Can you give me
> information on this?
Sorry, can't help. I am not even sure Wiener filters work for image
processing, but somehow I think I have seen the term used with images...
perhaps you should ask in an image processing newsgroup?

As for writing reports... I hold the extremely controversial view that a
report should reflect actual progress, not some imaginary path towards
the desired goals. Sometimes solving a problem is just too difficult, or
requires more resources than a project has available, and I believe the
report should say so. That way, a follow-up project on the same theme has
a chance to succeed. Used properly, such progress reports would actually
be a great tool for showing which managers know their job and which would
be better off somewhere else.

But beware! These counterproductive "non-team player" attitudes of mine
forced me to resign from one job and got me into a long sick leave at the
next. You may want to play your cards differently.

Rune
Reply by walala October 20, 2003
> Wiener filters come to mind... again, there are plenty of filters and
> tricks available for image processing. Whether you can do stuff
> completely automatic or need a human operator in the loop depends on
> the application.
>
> Rune
Rune,

You said it... after a lot of searching, I am tired of trial and error.
That is not something we can write a report from. I will try to apply a
Wiener filter... just a headache about how to get the Rxx of the input
image... which is the covariance matrix of the input? Can you give me
information on this?

Thanks,

-wa
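P.S. Digging around, the closest thing I can find is the pixelwise
adaptive Wiener filter (Matlab's Image Processing Toolbox has this as
wiener2), which sidesteps the full covariance matrix Rxx and uses local
mean/variance estimates over a small window instead - not sure this is
what you had in mind, Rune. In Python/NumPy terms, roughly:

  import numpy as np
  from scipy.signal import wiener

  # x stands in for a decoded (blocky) grayscale image, scaled to [0, 1].
  rng = np.random.default_rng(0)
  x = rng.random((64, 64))

  # Pixelwise adaptive Wiener: estimate the local mean m and variance v
  # in a 5x5 window and output m + max(v - noise, 0)/v * (x - m). When
  # the noise power is not given, it is estimated as the mean of the
  # local variances - so no explicit Rxx is ever formed.
  y = wiener(x, mysize=(5, 5))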
Reply by Olli Niemitalo October 20, 2003
On Sun, 19 Oct 2003, walala wrote:

> [...] I am doing "floating-point" search and trial & error ... so the
> search space is huge. I just try to search for a few days...
If you want to make this time much shorter, try evolutionary algorithms.
We have written a small c++ library for that:

http://www.iki.fi/o/opti/ok

-olli
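P.S. To make the idea concrete, here is a toy (1+1) evolution strategy
sketched in Python/NumPy (just an illustration, not our library): mutate
the three free coefficients of a symmetric 3x3 kernel and keep a mutation
only when it lowers the MSE against the source image. That is the same as
raising the PSNR, since PSNR = 10*log10(peak^2/MSE).

  import numpy as np
  from scipy.ndimage import convolve

  def kernel(p):
      # Build a symmetric 3x3 kernel from 3 parameters:
      # corner, edge, center.
      c, e, m = p
      k = np.array([[c, e, c],
                    [e, m, e],
                    [c, e, c]])
      return k / k.sum()  # normalise to unity DC gain

  def evolve(source, degraded, steps=2000, sigma=0.02, seed=0):
      # source and degraded are float arrays of the same shape.
      rng = np.random.default_rng(seed)
      p = np.array([0.0, 0.1, 0.6])  # start near a mild low-pass
      best = np.mean((source - convolve(degraded, kernel(p))) ** 2)
      for _ in range(steps):
          q = p + rng.normal(0.0, sigma, 3)      # mutate all 3 genes
          if abs(4 * q[0] + 4 * q[1] + q[2]) < 0.1:
              continue                           # avoid near-zero DC gain
          mse = np.mean((source - convolve(degraded, kernel(q))) ** 2)
          if mse < best:                         # keep improvements only
              p, best = q, mse
      return kernel(p)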
Reply by Ben Pope October 19, 2003
jim" <"N0sp wrote:
> And most important, the compression in jpegs is done in part by
> quantizing the frequency levels of each block.
...which is one of the reasons why a simple low-pass filter increases
PSNR. Predominantly, though, I guess it is because it reduces the
high-contrast edges, which is where the error is greatest. Since those
edges tend to be the worst perceived artifact, a low-pass filter helps
drastically there.

(Bearing in mind that PSNR is not entirely equivalent to perceived image
quality.)

Ben
--
I'm not just a number. To many, I'm known as a String...
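P.S. Since dB figures get thrown around a lot in this thread, for the
record: PSNR is just 10*log10(peak^2 / MSE). The plain version below (a
minimal Python/NumPy sketch) weighs every pixel and every channel
equally - which is precisely why it tracks perceived quality so poorly.

  import numpy as np

  def psnr(reference, test, peak=255.0):
      # Mean squared error between source and processed image; every
      # pixel counts the same, edge or flat area alike.
      mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
      return 10.0 * np.log10(peak ** 2 / mse)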
Reply by jim October 19, 2003

walala wrote:

> The input images are some jpeg images with block artifacts. As you
> know, JPEG images have block artifacts at low bit rates. Someone else
> in my group did some experiments pre-filtering the images to try to get
> rid of the artifacts, but that also introduced other artifacts. So now
> it is a combined artifact... and to be frank, all the processing
> techniques looked the same to me on the images, because they are nearly
> the same, but in terms of PSNR they differ a lot... so I did trial and
> error and found one with both good PSNR and a good look (but still ugly
> though)...
Okay, now you are providing the type of information needed. So the task
is to remove the blocking artifacts from jpegs. Here are some things you
may want to consider:

Jpegs have different levels of compression, resulting in different levels
of blocking artifacts. The compression techniques used in jpeg involve
analysis of the frequency content of the image, and the amount of
blocking artifacts is somewhat related to the frequency content of the
original input. Jpeg is designed for natural images like photographs;
artificial images like line drawings tend to have far more objectionable
artifacts.

And most important, the compression in jpegs is done in part by
quantizing the frequency levels of each block. This is done by formulae
(derived by experiment) to retain the part of the signal that is the most
perceptually important. This explains why you can't see the difference
even though your calculations show a big difference in the amount of
noise removed.

Another thing is that you are not limited to a 3x3 filter. A common
filter I've seen applied to this problem is [-1 0 5 8 5 0 -1]/16. It can
be applied very efficiently with adds and shifts by iterating through the
columns and then the rows of the image, resulting in what is effectively
a 7x7 filter. I don't know if speed is an issue you are interested in.

-jim
> But I cannot do trial-and-error search for every input, right? Because
> after this "training" stage, I don't have source images available any
> more, right? My filter should work adaptively itself...
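P.S. In Python/NumPy terms (floats here just to show the structure; the
adds-and-shifts version would of course be integer code), the two 1-D
passes with that kernel look like this:

  import numpy as np
  from scipy.ndimage import convolve1d

  # The 7 taps sum to 16, so dividing by 16 keeps the DC gain at 1.
  k = np.array([-1, 0, 5, 8, 5, 0, -1], dtype=float) / 16.0

  def deblock(img):
      # One pass down the columns, then one across the rows; applying a
      # 1-D kernel separably like this equals a single 7x7 2-D filter.
      tmp = convolve1d(img.astype(float), k, axis=0, mode='reflect')
      return convolve1d(tmp, k, axis=1, mode='reflect')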
Reply by Ben Pope October 19, 2003
walala wrote:
> "Ben Pope" <spam@hotmail.com> wrote in message > news:bmuqba$qdnh9$1@ID-191149.news.uni-berlin.de... >> walala wrote: >>> Are there any guidelines on how to design good filters that applicable >>> to most(if not all) images? Discussion should be in two cases: a) with >>> source image and can get PSNR to compare results; b) with no source >>> image and only have the reconstructed images, so no PSNR can be >>> computed, how to design a "good" filter fit into this case? >> >> >> You need to first work out your metric... you say you use PSNR, but >> without a source image... so how can you possibly measure PSNR? >> > > Bob, thanks for your posting,
Who is Bob?
> That's the problem. The filter is supposed to work on all images. In
> this "design" stage, I have a few source images to test PSNR... later
> on, the filter is supposed to adapt to the input and do the filtering
> blindly... at that time, no PSNR can be measured.
Right.
> So what I want to know is exactly how to design a filter that can
> adaptively work on images to achieve a satisfactory PSNR (maybe not the
> best, but at least working for most cases) after the initial training
> stage.
Well, if you're trying to clear up JPEG artifacts, then the best you can
do is some kind of low-pass filter. Wavelets can be used to great success
as well, but that's a whole new story :-)

PSNR is not a great measure of perceived image quality anyway. We are
more sensitive to brightness than to colour, by a factor of about 4, yet
PSNR does not model that. Additionally, the edges that you see with JPEG
artifacts are also not well described by PSNR; human vision is designed
to pick out edges, and we're very good at it, so although the error might
be quite small in terms of PSNR, it looks ugly to us.

Go back to my original reply... start by defining your problem: are you
trying to remove JPEG artifacts from images? The biggest issue I have
with JPEG images is the discontinuities... so low-pass filter them, which
is what you have been doing.

In terms of the specific best filter... you're fighting a losing battle
without a huge training set. If your images are primarily photographic,
then just use a Gaussian. Otherwise, do a search for papers on JPEG
artifact removal - I suspect there are a considerable number of them out
there, with a wide range of techniques.

You can remove many types of noise from an image quite effectively with a
simple blur (a low-pass filter; Gaussian is very popular, along with the
unsharp mask) and increase PSNR by a good 10dB or so.

Ben
--
I'm not just a number. To many, I'm known as a String...
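P.S. If a concrete starting point helps: a Gaussian blur is a one-liner
in most environments (fspecial('gaussian') plus imfilter if you have
Matlab's Image Processing Toolbox, or as below in Python/NumPy). The
sigma is a knob you will have to tune; 0.8 is just a guess.

  import numpy as np
  from scipy.ndimage import gaussian_filter

  # x stands in for a decoded JPEG as a float array. A larger sigma
  # kills more blocking, but also more genuine detail.
  rng = np.random.default_rng(1)
  x = rng.random((64, 64))
  y = gaussian_filter(x, sigma=0.8)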
Reply by walala October 19, 2003
"Ben Pope" <spam@hotmail.com> wrote in message
news:bmuqba$qdnh9$1@ID-191149.news.uni-berlin.de...
> walala wrote:
> > Are there any guidelines on how to design good filters that are
> > applicable to most (if not all) images? Discussion should cover two
> > cases: a) with a source image, so PSNR can be computed to compare
> > results; b) with no source image, only the reconstructed images, so
> > no PSNR can be computed; how to design a "good" filter for this case?
>
> You need to first work out your metric... you say you use PSNR, but
> without a source image... so how can you possibly measure PSNR?
Bob, thanks for your posting,

That's the problem. The filter is supposed to work on all images. In
this "design" stage, I have a few source images to test PSNR... later
on, the filter is supposed to adapt to the input and do the filtering
blindly... at that time, no PSNR can be measured.

So what I want to know is exactly how to design a filter that can
adaptively work on images to achieve a satisfactory PSNR (maybe not the
best, but at least working for most cases) after the initial training
stage.
Reply by Rune Allnor October 19, 2003
"walala" <mizhael@yahoo.com> wrote in message news:<bmuae2$pbe$1@mozo.cc.purdue.edu>...
> You mean all experts do filter design by ad hoc search and trial and error?
Eh... I can only answer for myself and the people I have worked with...
but yes, quite a lot is trial and error, sometimes completely ad hoc,
other times based on experience or educated guesses. When presented with
a problem, there is usually more than one way of doing things. In my
opinion, one part of the job is to know as many such tools and tricks and
methods as possible; the other part is to evaluate which methods work how
well, and when.

If you look in the only book I know of that teaches data processing,

  Yilmaz: Seismic Data Processing. SEG, 1987

you will find a lot of more or less standard flow charts for doing
various standard processing tasks. But in a seismic processing lab the
analysts spend days and weeks trying to fine-tune the processing
parameters. Some of these guys are truly artists.

Actually, someone I worked with ran a test where several different
processing teams got the same data set and were assigned the same
processing tasks. When the results came back, you almost had to know in
advance that the images were supposed to show the same geological
features. They almost looked like data from different oil fields. Data
processing is a very subjective discipline.
> What I need is pointers/guidelines on systematically designing a filter
> based on the input. Although the source is unknown, the reconstructed
> input still has some statistical properties. If some analytical
> mathematical model can be built from parameters extracted from the
> input, then a filter can be adapted to those parameters for that
> particular input.
Hmmm... this depends on the application. Be aware that by choosing this
analytical model you put constraints on the resulting images. If you
somehow get an image that does not fit the model, you may be in trouble.

If, for instance, you work with medical images and search for soft-tissue
injuries, a processing algorithm could use known information about e.g.
the skeleton, and use deviations from the "expected" image to assess
tissue damage. But what if there is severe damage to the bones as well?
If the processing algorithm assumes an undamaged skeleton, the automatic
processing routine will get lost.

But then, if your images are sufficiently regular, as could be expected
in certain quality-control tasks on a production line, such variations
need not be a problem at all.
> Since it has already turned out that the "best" filter for one source
> is "near Gaussian" and small, 3x3, I believe the "best" filter for
> another source would also be "near Gaussian" but with adjusted
> numbers... So are there any pointers/guidelines on mathematical
> modeling for deriving such a filter?
Wiener filters come to mind... again, there are plenty of filters and
tricks available for image processing. Whether you can do stuff
completely automatic or need a human operator in the loop depends on the
application.

Rune
Reply by walala October 19, 2003
> Ok, so the filter must be symmetrical - this is not exactly the same as
> Gaussian. What you haven't yet explained is how you are determining
> that one filter works better than another.
The filter is supposed to work on the input images and enhance them. I
have several images, but not all of them. So I tried the few images I
have at hand, and the "Gaussian-like" filter (the one below) has a much
better PSNR than "averaging" or other filters (e.g. 39dB vs. 10dB)...
since I have the source images, I can compute PSNR and compare, right?
> > I finally got:
> >
> >   -0.0496    0.1408   -0.0496
> >    0.1408    0.6355    0.1408
> >   -0.0496    0.1408   -0.0496
>
> It's low pass. It's not really Gaussian with the negative values.
> Somewhere halfway between an unsharp mask and Gaussian.
Ok, I see.
> > I believe my filter should be adaptive; hopefully the numbers in this
> > filter can change according to some underlying properties of the
> > input source. How to do that?
>
> Well, it sounds like you are already doing it by brute force. Maybe if
> you explain what the input source is and how you determine that the
> filter "works", then someone might suggest ways to optimize your
> search.
The input images are some jpeg images with block artifacts. As you know,
JPEG images have block artifacts at low bit rates. Someone else in my
group did some experiments pre-filtering the images to try to get rid of
the artifacts, but that also introduced other artifacts. So now it is a
combined artifact... and to be frank, all the processing techniques
looked the same to me on the images, because they are nearly the same,
but in terms of PSNR they differ a lot... so I did trial and error and
found one with both good PSNR and a good look (but still ugly though)...

But I cannot do trial-and-error search for every input, right? Because
after this "training" stage, I don't have source images available any
more, right? My filter should work adaptively itself...
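P.S. For completeness, this is how the 3x3 kernel quoted above gets
applied (sketched in Python/NumPy for illustration; I actually do the
equivalent in Matlab). The coefficients sum to roughly 1, so flat areas
pass through unchanged:

  import numpy as np
  from scipy.ndimage import convolve

  # The kernel found by trial and error; the negative corners give it
  # the mild sharpening component mentioned above.
  k = np.array([[-0.0496, 0.1408, -0.0496],
                [ 0.1408, 0.6355,  0.1408],
                [-0.0496, 0.1408, -0.0496]])

  def enhance(img):
      return convolve(img.astype(float), k, mode='reflect')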