
Low-pass filter image data

Started by Tim Wescott 4 years ago · 11 replies · latest reply 4 years ago · 294 views

I have some image data that I want to smooth out, but I want it to smooth out nicely to the edges.

Fourier transforms wouldn't do this without windowing -- if you tried, you'd just get weird effects where the left edge would bleed into the right, and the top into the bottom.

Are there any known techniques for smoothing a whole image that don't get tripped up by the edges?

Reply by ombz, December 22, 2020

There is no information from outside the image, so a smoother cannot guess that information and operate perfectly along the edges. (The same holds for filtering 1-D signals.) Standard approaches are to use cyclic/mirror boundary conditions to preserve continuity along the edges. If using the DFT/FFT, simply extend the image with zeroes symmetrically (usually, for the DFT, double the size of the input image); the circular convolution and "perfect" interpolation properties of the DFT will take care of the rest. Then extract the central part of the output image, at the size of the input image.

For filtering in the image/"time" domain directly (convolution, or nonlinear filters such as median, erosion, dilation), extend the image with zeroes, mirror images (basically the same as the DFT approach above), or repetition of the edge pixels: you'll get different results, so choose what you like most or what your application can use best. There is no single "correct" way of doing it.
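
A minimal numpy sketch of the zero-padded FFT approach described above (the helper name fft_smooth is made up, and it pads to the full linear-convolution size rather than strictly doubling the image):

    import numpy as np

    def fft_smooth(img, kernel):
        # Pad both arrays so the DFT's circular convolution becomes a linear one,
        # then crop the central, input-sized part of the result.
        rows, cols = img.shape
        kr, kc = kernel.shape
        shape = (rows + kr - 1, cols + kc - 1)
        out = np.fft.irfft2(np.fft.rfft2(img, shape) * np.fft.rfft2(kernel, shape), shape)
        r0, c0 = (kr - 1) // 2, (kc - 1) // 2
        return out[r0:r0 + rows, c0:c0 + cols]

scipy.signal.fftconvolve(img, kernel, mode='same') does essentially the same thing in one call.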

Reply by rbj, December 22, 2020

i really dunno diddley, Tim, but isn't the DCT (there are four or five forms) essentially a DFT of 4 times the size (twice in X and twice in Y) where the added image is a mirror image of the original?  so there is continuity in the zeroth derivative.

i thought that was to deal with edges and periodicity.

or is it the LPF filtering that mushes up the edges?



Reply by Tim Wescott, December 22, 2020

I've been going off of this, which is nice but isn't long on exposition on how to use it: http://epubs.siam.org/doi/pdf/10.1137/S00361445983...

Reply by kaz, December 22, 2020

On a different note, I just noticed that the DCT was invented by a fellow called Nasir Ahmed in 1972, and to be fair it should have been named after him.

Just like Costas loop, Thevenin equivalent, Hilbert transform, Fourier Transform, Farrow filter, Goertzel algorithm, Zadoff-Chu sequence and the list goes on.

Reply by Tim Wescott, December 22, 2020

Another post I need to make is about good books that include information on the DCT! Something that's not over-the-top mathematically, yet is still rigorous enough that I can figure out when to use what type.

I'm actually experimenting with using the DCT for this, with good results:

  • Take DCT of the image
  • Multiply the upper left corner of the DCT with 0.5 * (1 + cos(n/N * pi)) for N points followed by zeros
  • Take IDCT of the result

It's not exactly speedy, but it's going to be used in an off-line calibration operation for an optical instrument (I want to separate the effects of pixel-by-pixel processes from those of processes that essentially "tilt and wiggle" the image across the sensor).
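
In case it helps anyone, here is a rough scipy sketch of that recipe (dct_lowpass and n_keep are my own names, and the separable raised-cosine taper along each axis is an assumption about how the corner is windowed):

    import numpy as np
    from scipy.fft import dctn, idctn

    def dct_lowpass(img, n_keep):
        # DCT of the image, raised-cosine taper over the first n_keep
        # coefficients along each axis (zeros beyond that), then inverse DCT.
        def taper(length, n):
            w = np.zeros(length)
            k = np.arange(min(n, length))
            w[:k.size] = 0.5 * (1.0 + np.cos(np.pi * k / n))
            return w

        C = dctn(img, norm='ortho')
        C *= taper(img.shape[0], n_keep)[:, None]
        C *= taper(img.shape[1], n_keep)[None, :]
        return idctn(C, norm='ortho')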

I ran across an interesting paper on the DCT that starts by saying that the FFT is what you get by assuming sinusoids and repeating boundary conditions (i.e., x[n + N] = x[n]), while the various forms of the DCT assume either the Dirichlet boundary condition or the -- um -- other one (I gotta brush up on my math). So for the "jpg DCT" x[-1] = x[N] = 0 (Dirichlet).

Reply by rbj, December 22, 2020
does this work or did dsprelated just eat my reply to Tim?
Reply by Tim Wescott, December 22, 2020

It seems to work, and if there's some other reply from you, it's gone.

I'm looking for more info because I dislike having something that works dandy in practice but that I don't have a theoretical basis for.

(Which is better than having something that works in theory but not in practice -- but still.)

Reply by rbj, December 22, 2020

this is what i meant to say, but when i hit "Submit Reply", the response was something like "You must be logged in to reply" and it totally lost about 5 or 6 paragraphs.

as best as i can tell (just from reading Wikipedia), the DCT (in one dimension) takes this finite set of data of N samples:

x[0], x[1], x[2], ... x[N-2], x[N-1]

and it extends it to 

x[0], x[1], ... x[N-2], x[N-1], x[N-1], x[N-2], ... x[1], x[0]

and runs a length 2N DFT on that or it extends it as

x[0], x[1], ... x[N-2], x[N-1], x[N-2], x[N-3], ... x[2], x[1]

and runs a length 2N-2 DFT on that data and in both cases removes the redundancy (due to even symmetry) in the X[k] domain.  in the former case, the input sequence is symmetrical about a line 1/2 sample to the left and that offset and linear phase shift is removed in the DCT.  in the latter case, it is already symmetrical about x[0], there is no offset, and only the cosine terms exist.

in both cases the data is made continuous in the zeroth derivative.
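
(a small numpy/scipy check of the first, length-2N case, in case it's useful -- the linear-phase factor exp(j*pi*k/(2N)) below is the half-sample offset i mentioned, and scipy's dct() uses the unnormalized DCT-II convention)

    import numpy as np
    from scipy.fft import dct, fft

    N = 8
    x = np.random.default_rng(0).standard_normal(N)

    y = np.concatenate([x, x[::-1]])   # length-2N mirrored extension
    Y = fft(y)                         # DFT of the extended sequence
    X = dct(x, type=2)                 # DCT-II of the original N samples

    k = np.arange(N)
    # DFT of the mirrored data equals the DCT-II up to the half-sample linear phase.
    assert np.allclose(Y[:N], np.exp(1j * np.pi * k / (2 * N)) * X)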

this time i am selecting all, and copying text, before hitting Submit Reply.



Reply by aa1ww, December 22, 2020

Forgive my ignorance but is there any trick that can be done with interpolation that would create extra data (n-1 zeroes) at the edges and in between each actual sample and then maybe you could decimate afterwards?

Reply by JohnEhlers, December 22, 2020

Here is a super simple way to get the smoothing you want. Just smooth the pixels from left to right with an EMA. Then take that result and smooth it from right to left with the same EMA. You will get double smoothing, and the lags cancel, leaving the edges intact. Then smooth from top to bottom with the same EMA. Finally, smooth that result from bottom to top with the same EMA. Of course, you will have to format your pixels into rows and columns.
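
A quick numpy sketch of that forward/backward EMA idea (the function names and the alpha value are just placeholders):

    import numpy as np

    def ema(x, alpha):
        # single-pole exponential moving average along a 1-D array
        y = np.empty(len(x))
        acc = float(x[0])
        for i, v in enumerate(x):
            acc += alpha * (v - acc)
            y[i] = acc
        return y

    def ema_both_ways(x, alpha):
        # forward pass, then a backward pass over the result: the lags cancel
        return ema(ema(x, alpha)[::-1], alpha)[::-1]

    def smooth_image(img, alpha=0.3):
        out = img.astype(float)
        out = np.apply_along_axis(ema_both_ways, 1, out, alpha)  # rows: left/right
        out = np.apply_along_axis(ema_both_ways, 0, out, alpha)  # columns: up/down
        return out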

Reply by ombz, December 22, 2020

While this might work in practice and produce results that are acceptable for your application, it's not common to use an EMA (exponential moving averager?) for image processing because it's non-FIR and non-separable. This means that the outcome will depend on the direction of processing, and it will also depend on whether you choose to smooth columns or rows first.

But yes, separable filtering using a short separable FIR filter is by far the fastest image filtering method. The most common smoother for that is the Gaussian kernel, which is perfectly separable. You won't need bidirectional filtering with an FIR filter because you can simply store the "delayed" output directly. And again, for dealing with the edges: extend the input using zeroes, repetition, or mirror copies of the edges -- whatever suits you. The necessary size of the extension on every edge is usually half the filter width. For filter widths greater than about 8..16 samples (especially for a non-separable 2D kernel) it's usually more common to use the FFT, which, by the way, deals with the edges more gracefully because of ideal interpolation (in the sense of Shannon). But when using the FFT you typically double the input size (or more: round up to the next integer power of 2) and fill it with zeros in order to avoid issues with circular convolution.
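
For example (a minimal OpenCV sketch; the filename, kernel size and sigma are placeholders):

    import cv2
    import numpy as np

    img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

    # Separable Gaussian smoothing; borderType selects how the edges are extended.
    mirror = cv2.GaussianBlur(img, (9, 9), 2.0, borderType=cv2.BORDER_REFLECT)    # mirror copies
    repeat = cv2.GaussianBlur(img, (9, 9), 2.0, borderType=cv2.BORDER_REPLICATE)  # repeat edge pixels
    zeros  = cv2.GaussianBlur(img, (9, 9), 2.0, borderType=cv2.BORDER_CONSTANT)   # zero padding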

To Tim: Simply have a look at OpenCV:

https://docs.opencv.org/master/d4/d13/tutorial_py_...

and

https://docs.opencv.org/master/d4/d86/group__imgpr...

-Andy