DSPRelated.com

Evaluating the "smoothness" of a vector

Started by Luca85 June 21, 2012
Hello everybody.
I'm doing this thing in matlab and some people there told me to ask here,
where there are many experts in the field.
I have a vector of values of a certain parameter at different positions.
I want to evaluate how intense the changes of this parameter are on a
given scale.
I'm not very well versed in signal processing.
An example of what I need is the following: I want an operation that, for
each point in the vector, gives me zero if the vector was originally
convolved with a gaussian kernel of width W.

How can I do something like that?
I think that in signal processing theory there would be plenty of
operations like this one.

I was guessing that I could convolve my vector with a kernel of this
shape: [1 2 -6 2 1]. That way I would get zero if my point has the same
value as the distance-weighted mean of its neighbourhood.
Is that correct?
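For concreteness, the proposed kernel can be tried in a few lines. Here is a NumPy sketch (the thread uses MATLAB, but `conv` behaves the same way); the test signal is hypothetical. Note that [1 2 -6 2 1] gives exactly zero not only on constants but on any locally linear stretch, since the neighbour weights are symmetric:

```python
import numpy as np

# Hypothetical test signal: a linear ramp with one outlier stuck in.
v = np.arange(30, dtype=float)
v[15] += 4.0

# The kernel proposed above: neighbour weights (1, 2, 2, 1) sum to 6,
# matching the centre tap, so the response is zero wherever a point
# equals the distance-weighted mean of its four neighbours.
kernel = np.array([1.0, 2.0, -6.0, 2.0, 1.0])
response = np.convolve(v, kernel, mode='same')

# response is (numerically) zero on the smooth interior and spikes
# around the outlier; |response| is a pointwise "unsmoothness" score.
```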

Can you suggest other ways?
I feel that in some way I should be using cross-correlation, but I don't
know how or why.
On Thu, 21 Jun 2012 06:06:14 -0700, Luca85 wrote:

> Hello everybody.
> I'm doing this thing in matlab and some people there told to ask here
> where there are many experts in the field.
> [... full original post snipped ...]
You're not getting many responses because what you're asking for in the
title isn't what you're asking for in the text.

To evaluate the smoothness of a vector (perhaps a better term would be a
time sequence?) you would want to define "smoothness" (probably as the
deviation of the signal from the local average), then figure out how to
measure that. It's an inexact term, so don't expect that there's any one
right way -- just ones that may be good for your particular project. This
is kind of what your suggested method does.

To evaluate the amount that your vector deviates from one that could have
been generated by a Gaussian -- oi, that's different. The transfer
function would certainly have a null space, and with enough mathematical
manipulation you could certainly test the vector to see what content it
has that lies within that null space -- but that's about all I can say
without hitting the books to figure out how to actually do the
calculation. Ultimately one would end up convolving the signal with some
filter, but how easy it would be to find that filter, and how long it
would have to be, is not something I could say without doing the work.

-- 
My liberal friends think I'm a conservative kook.
My conservative friends think I'm a liberal kook.
Why am I not happy that they have found common ground?

Tim Wescott, Communications, Control, Circuits & Software
http://www.wescottdesign.com
On 22 Giu, 01:35, Tim Wescott <t...@seemywebsite.com> wrote:

> You're not getting many responses because what you're asking for in the
> title isn't what you're asking for in the text.
Gosh, probably it's because I'm not an expert in the field. I feel that
what I want to do is a very basic thing, so there should be a predefined
operation.
> To evaluate the smoothness of a vector (perhaps a better term would be a
> time sequence?)
Well... It's not a time sequence. It's just a sequence of points.
Actually they represent the _estimated_ distance of the wall of an object
from a measuring center in different directions. Pretend you're inside an
object that looks like a circle but has some oblong bulges. You are in
the center and you measure the distance from the center to the wall every
degree, while deviations from perfect circularity happen on a ~15°
scale.

These distances are calculated by an algorithm that fits a signal from a
sensor. Currently I fit each measured distance independently. Then I
prune points where the fit failed, and maybe smooth the remaining
distances by convolving the vector of points with a gaussian kernel.

To improve the results I want to fit all the distances in a single common
fit and impose a penalty on solutions where points are too different from
their close neighbours.

That's my full problem. So I'm looking for an operator that computes the
local degree of "unsmoothness". A derivative could do it, but it's too
local.
>you would want to define "smoothness" (probably as the > deviation of the signal from the local average),
Exactly.
> then figure out how to
> measure that. It's an inexact term, so don't expect that there's any one
> right way -- just ones that may be good for your particular project.
What would be the correct term for what I mean?
> This is kind of what your suggested method does.
Yep! For now I'm going with it. But I invented that idea in half an hour
with extremely limited knowledge of the topic. So I thought, if I ask
someone expert in the field they would probably have a ready-made,
well-defined and well-behaved mathematical operator.
> To evaluate the amount that your vector deviates from one that could have > been generated by a Gaussian -- oi, that's different.
Maybe I expressed my concept poorly. I do not want to evaluate the
difference between my vector and a vector of points shaped as a gaussian.
Here's an example of what I meant. Take a vector V. Convolve it with a
gaussian kernel of width "w" and get V_sm in output. I would define V_sm
"smooth" on my scale w. The ideal operator I'm looking for gives me 0 at
each point when evaluated on V_sm, and gives an increasing value the more
my points deviate from it.

Is it well defined now? Or maybe this definition has some flaws? Is there
some predefined, very common operator used in signal processing to do
this? Does cross-correlation have anything to do with it?

Otherwise, if it's too complicated, I will stick to my made-up
convolution with a kernel that looks like a gaussian with the central
point inverted, and just see if it works well enough in the practical
application!
On Friday, June 22, 2012 4:35:54 AM UTC-5, Luca85 wrote:
> Take a vector V. Convolve it with a gaussian kernel of width "w" and get
> V_sm in output. I would define V_sm "smooth" on my scale w.
> The ideal operator I'm looking for gives me 0 at each point when
> evaluated on V_sm.
> [... snipped ...]
I'm going to go out on a limb here, and ask Tim for a bit of help.

Tim, does it sound like Luca85 gets a _value_ based on measured distance,
then wants to filter that value (convolve) with a filter that has a
_gaussian shape_, then measure the Euclidean distance between the
original and the filtered versions?

Maurice
On 22 Giu, 15:05, maury <maury...@core.com> wrote:

> Tim, does it sound like Luca85 gets a _value_ based on measured
> distance, then wants to filter that value (convolve) with a filter that
> has a _gaussian shape_, then measure the Euclidean distance between the
> original and the filtered versions?
Interesting! I didn't set out to do exactly this -- I was thinking,
naively, of other kinds of operators. Nonetheless I think what you
describe could be a very nice way to evaluate what I call "unsmoothness",
until someone tells me the right term (if there is one). I could compute
the distance between my vector and a smoothed version of itself (with the
smoothing kernel providing the scale of variations that I do not want to
accept). Whether to use Euclidean distance or another definition is a
matter of taste (though, of course, Euclidean would be my first try).
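This idea -- smooth with a Gaussian of the chosen width, then look at how far each point moved -- is easy to prototype. A NumPy sketch (in Python rather than the thread's MATLAB; the width `w` and the 3-sigma truncation are illustrative choices, not anything fixed by the discussion):

```python
import numpy as np

def unsmoothness(v, w):
    """Pointwise deviation of v from a Gaussian-smoothed copy of itself.

    w is the kernel's standard deviation in samples; the kernel is
    truncated at 3 sigma and normalized to unit DC gain (taps sum to 1).
    """
    n = int(np.ceil(3 * w))
    x = np.arange(-n, n + 1)
    g = np.exp(-0.5 * (x / w) ** 2)
    g /= g.sum()                          # unit DC gain
    v_sm = np.convolve(v, g, mode='same')  # the smoothed reference
    return v - v_sm   # take e.g. np.linalg.norm(...) for a single score
```

One caveat: near the ends of the vector the truncated kernel hangs off the data, so the first and last few deviations are unreliable. For the every-degree wall measurements a circular (wrapped) convolution would avoid this, since the angle axis is periodic.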
On Fri, 22 Jun 2012 06:05:47 -0700, maury wrote:

> [... snipped ...]
>
> I'm going to go out on a limb here, and ask Tim for a bit of help.
>
> Tim, does it sound like Luca85 gets a _value_ based on measured
> distance, then wants to filter that value (convolve) with a filter that
> has a _gaussian shape_, then measure the Euclidean distance between the
> original and the filtered versions?
>
> Maurice
I don't think that's what he articulated the first time out, or even what
he wants to do at root. It's what he's doing now, however, and if that
works...
On Fri, 22 Jun 2012 02:35:54 -0700, Luca85 wrote:

> On 22 Giu, 01:35, Tim Wescott <t...@seemywebsite.com> wrote:
>>> snip <<<
>> then figure out how to
>> measure that. It's an inexact term, so don't expect that there's any
>> one right way -- just ones that may be good for your particular
>> project.
>
> Which one would be a correct term to indicate what I mean?
Actually, I think smoothness is about the best word you're going to find
-- just be sure that when you talk about it, you specify just what you
mean. There isn't an exact term yet in our finite language for everything
in our infinite world.
On Fri, 22 Jun 2012 02:35:54 -0700, Luca85 wrote:

> [... snipped ...]
>
> That's my full problem.
> So I'm looking for an operator that computes the local degree of
> "unsmoothness".
> Derivative could do it but it's too local.
(I'm breaking up this long question-response a bit, to try to keep the
individual posts short.)

What you are doing now with your filter is -- more or less -- taking the
response of a symmetric low-pass filter and subtracting out the DC
response at the center point. This is probably as good as you're going to
get with a linear filter. The only thing that you're not doing, which I
think you should, is to normalize the low-pass filter portion to a DC
gain of one, which you do by making sure the sum of the taps is 1.

So, for instance, if you want to do this with a 5-point moving average
(which isn't what I recommend, I'm just choosing something with easy
math), you'd use:

Low pass:  {0.2  0.2  0.2  0.2  0.2}
High pass: {0.2  0.2 -0.8  0.2  0.2}

So, there's a structured way to do what you want. There's a wealth of
parent low-pass filters that you could use: if you stick with this
general technique you probably want to choose one that does the best job
of following what you think is the "real" signal.

What you are trying to do with your _data_ is to identify and reject the
outliers. That process is inherently nonlinear (which is why I stressed
that the above filters are linear). I think that the method you're trying
now is intuitively sound, but do note that as soon as you toss an outlier
you've made the overall filter nonlinear. If you toss your outliers, then
interpolate replacement values from their neighbors, then sweep the whole
thing again with your low-pass filter to remove plain old random noise,
you may be OK.

You may also want to Google the term "median filter". I haven't used one
in anger, but I've heard them suggested for just this sort of thing. As
far as I know they're most highly developed for removing "speckle" from
images -- but speckle in an image is just pixels with outlying values, so
a median filter may be a well-developed way to do the job.
If it works, it also has the advantage that when you tell someone what
you're doing you don't have to go through a long song and dance to
convince them -- you can just toss the term "median filter" at them, and
they'll either know what it is and agree; know what it is, disagree, and
maybe give you good advice on fixing it; or just know the term and be
intimidated out of criticizing your choice of filter.
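Both suggestions above are easy to try side by side. A NumPy/SciPy sketch (again in Python rather than MATLAB, with toy data, and `scipy.signal.medfilt` standing in for whatever median-filter routine one's environment provides):

```python
import numpy as np
from scipy.signal import medfilt

# Toy data: a slow ramp with two isolated outliers (hypothetical).
v = np.linspace(0.0, 1.0, 50)
v[[12, 30]] += 5.0

# The normalized high-pass from the post: a 5-point moving average (taps
# sum to 1, i.e. unit DC gain) with the centre sample subtracted out.
hp = np.array([0.2, 0.2, -0.8, 0.2, 0.2])
deviation = np.convolve(v, hp, mode='same')  # ~0 where the data is smooth

# Median filter of order 5: each sample is replaced by the median of its
# 5-sample neighbourhood, which rejects isolated outliers outright.
cleaned = medfilt(v, kernel_size=5)
```

On the ramp the high-pass deviation is numerically zero (the symmetric taps cancel any linear trend) and spikes only near the outliers, while the median filter simply removes them; the choice of order 5 is arbitrary here, echoing the caution below about picking the filter order.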
On Friday, June 22, 2012 11:24:53 AM UTC-5, Tim Wescott wrote:
> You may also want to Google on the term "median filter". I haven't used
> one in anger, but I've heard them suggested for just this sort of thing.
> [... snipped ...]
Luca85, if you are trying to rid yourself of outliers, then Tim is
absolutely correct: you should look at the median filter. I've used them
in the update feedback of adaptive filters to eliminate disruptive
outlying signals. The one thing to be careful about is the order of the
filter, i.e., the number of elements you include in the filter.
On 22 Giu, 18:24, Tim Wescott <t...@seemywebsite.com> wrote:

> The only thing that you're not doing, which
> I think you should, is to normalize the low-pass filter portion to a DC
> gain of one, which you do by making sure the sum of the taps is 1.
You're right. I can do that.
> What you are trying to do with your _data_ is to identify and reject the > outliers.
I'm not trying to do that. Not anymore :p

What I want to do is to fit all the points together, adding a penalty to
the objective function so that none of the points starts to behave
wildly. That way either all the points go the wrong way, or I won't get
any outliers (hopefully). And a linear filter can do that very well.

Another thing I could do, thinking naively, is to smooth all my
parameters after each iteration of my minimizer. That sounds very clever,
but sadly it turns out that it does not work well and affects the
convergence in unpredictable ways (it either doesn't do anything or does
too much).

I know median filters a bit. They work great for removing plain outliers!

What about what I said in reply to "maury"? I went with the idea of
smoothing my vector with the kernel I want and then computing the
distance between the two vectors. This quantity necessarily tends to zero
if my original vector was already smooth and increases as its
"unsmoothness" increases. This way I can also shape my kernel much more
easily, with any shape I want!

Thanks very much for your help!!
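The penalized joint fit described here can be sketched in a few lines. Assuming (purely for illustration -- the real data term would come from the sensor fit, not plain least squares) a quadratic data term and a second-difference roughness penalty, the problem min ||d - m||^2 + lam*||D2 d||^2 has a closed-form solution via the normal equations:

```python
import numpy as np

def penalized_fit(measured, lam):
    """Smooth a distance vector by solving
    min_d ||d - measured||^2 + lam * ||D2 @ d||^2,
    where D2 takes second differences (the roughness penalty)."""
    n = len(measured)
    # Second-difference operator, one row per interior point.
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    # Normal equations: (I + lam * D2^T D2) d = measured.
    A = np.eye(n) + lam * D2.T @ D2
    return np.linalg.solve(A, np.asarray(measured, dtype=float))
```

With lam = 0 the data comes back untouched; raising lam pulls wild points toward their neighbours without discarding anyone outright, which is the behaviour described above.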