DSPRelated.com
Forums

any good model for quantization error? how to design my quantization matrix?

Started by walala November 7, 2003
Dear all,

I am studying the quantization error of a custom-designed quantization
matrix for JPEG (the quantization matrix was designed by me, so it is worse
than standard JPEG in terms of quality). I am trying to design a filter to
remove the error due to quantization. Currently the only model I can use in
my prediction is "white noise", with variance
(quantization_step_size)^2/12...
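
As a quick sanity check of that model (a minimal sketch, not part of my
codec; the step size q and the input signal are arbitrary stand-ins), the
error of a uniform quantizer does come out close to q^2/12:

# Minimal sketch: empirically checking the "white noise" model, i.e. that
# uniform quantization error has variance q^2 / 12. The step size and the
# input signal below are arbitrary stand-ins, not values from the codec.
import numpy as np

rng = np.random.default_rng(0)
q = 16.0                                    # hypothetical quantization step size
x = rng.uniform(-128, 128, size=1_000_000)  # stand-in for DCT coefficients
x_hat = q * np.round(x / q)                 # uniform quantize + dequantize
err = x - x_hat

print("measured error variance:", err.var())
print("model  q^2 / 12        :", q**2 / 12)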

But at best this method still leaves a 1.5 dB PSNR degradation
(perfect/standard DCT quantization got 40 dB, my design got 38.5 dB)...

How can I improve? I guess I need to find a better model for that...

Thanks a lot,

-Walala


Hi,

> I am studying the quantization error of a custom-designed quantization
> matrix for JPEG (the quantization matrix was designed by me, so it is worse
> than standard JPEG in terms of quality). I am trying to design a filter to
> remove the error due to quantization. Currently the only model I can use in
> my prediction is "white noise", with variance
> (quantization_step_size)^2/12...
>
> But at best this method still leaves a 1.5 dB PSNR degradation
> (perfect/standard DCT quantization got 40 dB, my design got 38.5 dB)...
>
> How can I improve? I guess I need to find a better model for that...
One thing for sure: you cannot remove errors due to quantization. The
purpose of quantization is to drop information, and once the information is
lost, it is gone forever. What you can do is try to minimize the errors with
respect to a certain class of images, that is, given a certain model of what
"an image" is.

The question now is, what are your "free variables": are you able/willing to
tune the quantizer? If not, then you can only work on the reconstruction
points of the dequantizer (which is not too much).

Second question: what are you going to optimize, visual quality or PSNR?
They are not identical; the "standard" tables are tuned for optimal visual
quality, found out in various experiments.

For quantization in general, you might want to read:

Robert M. Gray, David L. Neuhoff: "Quantization", IEEE Transactions on
Information Theory, Vol. 44, No. 6, October 1998.

So long,
Thomas
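
A toy sketch of that point about working only on the reconstruction points
(my own illustration, not Thomas's code; the Laplacian source, the step size
and the bias value are arbitrary assumptions): with the encoder's quantizer
fixed, the decoder is still free to place its reconstruction points away
from the bucket centres, e.g. pulled toward zero for a zero-peaked
coefficient distribution.

# Toy sketch of the "only the dequantizer is free" case: the encoder's
# quantizer is fixed, but the decoder may place its reconstruction points
# anywhere inside each bucket. The bias value is an arbitrary illustration.
import numpy as np

def quantize(x, q):
    return np.round(x / q).astype(int)      # fixed encoder: mid-tread quantizer

def dequantize_mid(k, q):
    return q * k                            # standard mid-bucket reconstruction

def dequantize_biased(k, q, bias=0.15):
    # Pull nonzero coefficients toward zero, as one might for a
    # zero-peaked (e.g. Laplacian) coefficient distribution.
    return q * (k - bias * np.sign(k))

rng = np.random.default_rng(1)
x = rng.laplace(scale=8.0, size=200_000)    # synthetic DCT-like coefficients
k = quantize(x, 16.0)

for name, x_hat in [("mid-bucket", dequantize_mid(k, 16.0)),
                    ("biased    ", dequantize_biased(k, 16.0))]:
    print(name, "MSE:", np.mean((x - x_hat) ** 2))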
"walala" <mizhael@yahoo.com>

> How can I improve? I guess I need to find a better model for that...
In the DCT there is usually an assumption of a Laplacian distribution of the
coefficients, which tells you something about the quantization errors. There
are a variety of papers on this; use Google.

For the spatial domain you should read:
http://eeweb.poly.edu/~onur/publish/sub.pdf

I don't get what exactly you are trying to do, though... are you trying to
simultaneously optimize the matrix together with post-processing?

Marco
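
A minimal sketch of that modelling assumption (my own illustration, not from
either post; the "image" below is random noise purely as a placeholder, and
it is natural images that give the sharply peaked, Laplacian-like AC
statistics): for a zero-mean Laplacian, the maximum-likelihood scale is
simply the mean absolute coefficient value.

# Minimal sketch: estimating a Laplacian scale for each 8x8 DCT coefficient
# position from the blocks of an image.
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis; C @ block @ C.T gives the 2-D DCT of a block.
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= np.sqrt(1 / n)
    C[1:] *= np.sqrt(2 / n)
    return C

C = dct_matrix()
img = np.random.default_rng(2).integers(0, 256, (256, 256)).astype(float)

# Collect the DCT coefficients of every 8x8 block.
blocks = [C @ img[i:i + 8, j:j + 8] @ C.T
          for i in range(0, 256, 8) for j in range(0, 256, 8)]
coeffs = np.stack(blocks)                    # shape (num_blocks, 8, 8)

# ML estimate of the Laplacian scale b for each coefficient position.
b_hat = np.mean(np.abs(coeffs), axis=0)
print(np.round(b_hat, 1))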
"Thomas Richter" <thor@cleopatra.math.tu-berlin.de> wrote in message
news:bonn00$7oo$3@mamenchi.zrz.TU-Berlin.DE...
> Hi,
>
> > I am studying the quantization error of a custom-designed quantization
> > matrix for JPEG (the quantization matrix was designed by me, so it is
> > worse than standard JPEG in terms of quality). I am trying to design a
> > filter to remove the error due to quantization. Currently the only model
> > I can use in my prediction is "white noise", with variance
> > (quantization_step_size)^2/12...
> >
> > But at best this method still leaves a 1.5 dB PSNR degradation
> > (perfect/standard DCT quantization got 40 dB, my design got 38.5 dB)...
> >
> > How can I improve? I guess I need to find a better model for that...
>
> One thing for sure: you cannot remove errors due to quantization. The
> purpose of quantization is to drop information, and once the information
> is lost, it is gone forever. What you can do is try to minimize the errors
> with respect to a certain class of images, that is, given a certain model
> of what "an image" is.
I guess you are right. Traditionally quantization error is regarded as white
noise... but I have two parts of error: one part is the custom-designed DCT,
compared with the perfect DCT, I guess it has an error, and I guess I should
focus on minimizing this error; another part is that even for the perfect
DCT, there is still quantization error, which I guess I won't be able to get
rid of, as you've pointed out.
> The question now is, what are your "free variables": are you able/willing
> to tune the quantizer? If not, then you can only work on the reconstruction
> points of the dequantizer (which is not too much).
I am not very sure that I understand these points. Can you explain them in a
little more detail?
> Second question: what are you going to optimize, visual quality or PSNR?
> They are not identical; the "standard" tables are tuned for optimal visual
> quality, found out in various experiments.
I guess "BOTH"...
> For quantization in general, you might want to read:
>
> Robert M. Gray, David L. Neuhoff: "Quantization", IEEE Transactions on
> Information Theory, Vol. 44, No. 6, October 1998.
Thanks a lot for your pointer. I am going to read that journal paper.
> So long,
> Thomas
See you,
Walala
"Marco Al" <m.f.al@student.utwente.nl> wrote in message
news:bonnll$26e$1@netlx020.civ.utwente.nl...
> "walala" <mizhael@yahoo.com> > > > How can I improve? I guess I need to find a better model for that... > > In the DCT there is usually an assumption of a laplacian distribution of
the
> coefficients, which tells you something about the quantization errors.
There
> are a variety of papers on this, use google.
I saw those papers on Laplacian reconstruction of DCT coefficients... I feel
they are not very effective, right? Many papers only show less than a
0.1-0.2 dB improvement, which may actually be largely impacted by
rounding-off or something else in their simulations...
> For the spatial domain you should read:
> http://eeweb.poly.edu/~onur/publish/sub.pdf
Thanks a lot for your pointer. I read that paper but actually did not
understand it very well... maybe I need to digest it some more...
> I don't get what exactly you are trying to do, though... are you trying to
> simultaneously optimize the matrix together with post-processing?
>
> Marco
I am quite interested in how that quantization matrix was arrived at. In my
custom design, since I have my own transform matrices, I tried to change the
quantization matrix by scaling it a little to accommodate my custom-designed
transform matrix, but unfortunately all my results were worse than the
standard one. So I am curious about how that quantization matrix was derived.

You also mention simultaneously optimizing the matrix together with
post-processing; are there any pointers for that?

Thank you very much for the discussion,

-Walala
Hi,

>> One thing for sure: you cannot remove errors due to quantization. The
>> purpose of quantization is to drop information, and once the information
>> is lost, it is gone forever. What you can do is try to minimize the errors
>> with respect to a certain class of images, that is, given a certain model
>> of what "an image" is.
> I guess you are right. Traditionally quantization error is regarded as
> white noise... but I have two parts of error: one part is the
> custom-designed DCT, compared with the perfect DCT,
A DCT by itself doesn't cause "error" because it is a mathematically
invertible operation. Thus, you could mean two different things here:

i) They implemented a transformation that is close to, but not identical to,
the DCT. That wouldn't be much of a loss if the operation itself remains
invertible, i.e. the "error" is also in the decompressor.

ii) They introduce ad-hoc round-off errors in a DCT implementation, making
it lossy.
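
A minimal sketch of case i) (my own illustration, not Thomas's; the matrix T
below is the exact orthonormal DCT purely as a placeholder, where a custom
transform matrix would otherwise go): as long as the decoder applies the
exact inverse of whatever the encoder applied, the transform round trip is
lossless up to floating-point precision.

# Minimal sketch: checking that a transform is inverted exactly by the
# decoder. T is a placeholder; a custom matrix is checked the same way.
import numpy as np

def dct_matrix(n=8):
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= np.sqrt(1 / n)
    C[1:] *= np.sqrt(2 / n)
    return C

T = dct_matrix()           # encoder transform (placeholder for a custom matrix)
T_inv = np.linalg.inv(T)   # decoder transform; for the DCT this is just T.T

block = np.random.default_rng(3).uniform(-128, 128, (8, 8))
roundtrip = T_inv @ (T @ block @ T.T) @ T_inv.T

print("max round-trip error:", np.abs(roundtrip - block).max())  # ~1e-13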
> I guess it has an error, and I guess I
> should focus on minimizing this error;
Is it feasible to fix this error? (Simplest possible approach).
> another part is that even for the perfect
> DCT, there is still quantization error, which I guess I won't be able to
> get rid of, as you've pointed out.
You cannot avoid it, but you can make it smaller at the cost of making the compression weaker (finer quantization, longer codestream, better quality).
>> The question now is, what are your "free variables": are you able/willing
>> to tune the quantizer? If not, then you can only work on the reconstruction
>> points of the dequantizer (which is not too much).
> I am not very sure that I understand these points. Can you explain them in
> a little more detail?
Well, a (scalar) quantizer takes an input signal x (a "real number") and
generates an integer from that, indexing a set of intervals that cover R.
The easiest splitting of R into intervals (thus, the easiest quantizer)
would be to write R as [0,1) U [1,2) U [2,3) ... and so on. Here, the
quantization is simply performed by rounding the number down to the nearest
integer. These intervals are called "buckets".

On the decompressor, the bucket index is replaced by a value from that
interval, called the "reconstruction point". Both are (usually) free
variables of the quantizer: you might choose them within some limits, i.e.
choose the intervals and choose the reconstruction points. Typical
applications usually split the real axis into intervals of equal size, with
one possible exception, namely the bucket containing zero (often, but not
always, twice as large as the remaining buckets). Reconstruction points are
usually picked in the middle of the interval (though this is not optimal).

What can be proven is the following: given a signal with a given statistics
p(x) (p is the probability density of the signal x), then

o) the boundaries of the quantization buckets must be mid-way between the
reconstruction points, regardless of the signal,

o) the reconstruction points must be at the "center of mass" of the buckets
with respect to the probability density. That is:

rec_point = \int_{bucket_lo}^{bucket_hi} x p(x) dx
            / \int_{bucket_lo}^{bucket_hi} p(x) dx
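
A minimal numerical sketch of that centroid condition (my own illustration,
not Thomas's code; the Laplacian density and the step size are arbitrary
assumptions): for each bucket, the reconstruction point is the conditional
mean of the density over that bucket, which for a zero-peaked density lies
closer to zero than the bucket midpoint.

# Minimal sketch: computing centroid ("center of mass") reconstruction
# points for given bucket edges and probability density p(x), and comparing
# them with plain mid-bucket reconstruction.
import numpy as np

def centroid_reconstruction(edges, p, samples=10_000):
    recs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        x = np.linspace(lo, hi, samples)
        w = p(x)
        # discrete version of \int x p(x) dx / \int p(x) dx over the bucket
        recs.append(np.sum(x * w) / np.sum(w))
    return np.array(recs)

b = 8.0                                      # assumed Laplacian scale
p = lambda x: np.exp(-np.abs(x) / b) / (2 * b)

q = 16.0                                     # assumed quantization step size
edges = np.arange(-4.5, 5.0) * q             # uniform buckets of width q
mid = (edges[:-1] + edges[1:]) / 2           # mid-bucket reconstruction points
cen = centroid_reconstruction(edges, p)

for m, c in zip(mid, cen):
    print(f"bucket midpoint {m:7.1f}  ->  centroid {c:7.2f}")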
>> Second question: what are you going to optimize, visual quality or PSNR?
>> They are not identical; the "standard" tables are tuned for optimal visual
>> quality, found out in various experiments.
> I guess "BOTH"...
You cannot optimize both at once; you have to make compromises. PSNR and
visual quality are not in conflict, but PSNR says less about visual quality
than one might expect.

So long,
Thomas
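
For reference, a minimal sketch of how PSNR itself is computed (my addition,
using the standard definition with an 8-bit peak of 255; the test image and
noise level are arbitrary): it is purely a function of mean squared error,
which is exactly why it can disagree with perceived visual quality.

# Minimal sketch: PSNR as a pure mean-squared-error measure.
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    diff = np.asarray(original, float) - np.asarray(reconstructed, float)
    return 10 * np.log10(peak ** 2 / np.mean(diff ** 2))

rng = np.random.default_rng(4)
img = rng.integers(0, 256, (64, 64))
noisy = np.clip(img + rng.normal(0, 2.5, img.shape), 0, 255)
print(f"PSNR: {psnr(img, noisy):.1f} dB")    # roughly 40 dB for sigma ~ 2.5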