On 3 Sep, 18:12, Tim Wescott <t...@seemywebsite.com> wrote:
> I'm not sure what restrictions Rune is placing by saying "not quite
> DSP". If it's a signal, and you're processing it, and you're doing it
> numerically, then it's DSP to me.
What we usually think of as DSP is only a subset of the
methods available for data analysis. Kalman filters may
be borderline, but tools like ARIMA analysis, median
filters and so on are not covered by DSP.
You don't need to go further than the 'academic sibling'
of DSP, image processing, to see that one uses methods
like morphology that have no counterpart in DSP.
As for the data, DSP usually covers anthropogenic data:
The devices listen for signals that somebody deliberately
emitted, be it for radar/sonar type of remote sensing,
or for communication purposes. There are exceptions, like
SETI or earthquake monitoring, but even these types of
applications listen for the same *types* of data; data
that propagate as waves into a world where they may or
may not be expected to exist.
To see what I mean, try to use, say, a Butterworth filter
to clean up navigation data, the noisy position estimates
from some vehicle that follows a meandering track. These
types of data have fundamentally different characteristics
from comm data or remote sensing data, so the methods
designed for comm or remote sensing applications will fail.
The OP has data that are sufficiently different from the
data one usually deals with in DSP, so he should seek
advice from people who deal with data with the same
characteristics as his.
Rune
Reply by Tim Wescott●September 4, 2009
On Thu, 03 Sep 2009 15:25:45 -0500, chrah wrote:
>>chrah wrote:
>>> Hi,
>>> I need to downsample a bunch of signals, all of which have very
>>> different properties (the Nyquist criterion will not be fulfilled
>>> after the downsampling). My question is how to proceed in the best
>>> possible way. All processing is off-line but has to be fairly fast.
>>>
>>> Case 1: An analogue signal (continuous amplitude and time) has been
>>> sampled and needs to be downsampled. I have no problems here, just
>>> apply an antialias filter and resample properly.
>>>
>>> Case 2: An analogue signal with discontinuous jumps. Ripples
>>> introduced by the antialias filter make the downsampled signal
>>> useless. Please help.
>>>
>>> Case 3: An analogue signal which contains constant segments.
>>> Antialias filters introduce ripples at the edge of each constant
>>> segment. Can this be avoided in a clever way?
>>>
>>> Case 4: An enum signal (a few discrete amplitude levels and
>>> continuous time). I guess the best approach here is to just pick the
>>> sample which is closest to the new sampling time (nearest-neighbour
>>> interpolation).
>>>
>>> Case 5: Noisy enum signal. Using a linear filter will introduce lots
>>> of ripple since there are discontinuous jumps every time the signal
>>> changes from one amplitude state to another. My approach would be to
>>> use a median filter followed by the case 2 approach, but I guess
>>> there must be a better way?
>>>
>>> I hope you can help
>>>
>>> Best regards
>>> Christer
>>>
>>>
>>Christer,
>>
>>Let's try to break this down into a few fundamental things:
>>
>>By "downsample" we generally mean bringing a passband signal down to
>>baseband with quadrature samples. This usually involves reducing the
>>sample rate at the same time.
>>
>>By "decimate" or "sample rate reduction" we generally mean reducing the
>>sample rate but leaving the signal spectrum location the same.
>>
>>I think you mean to decimate here / to reduce the sample rate.
>>
>>When there are discontinuities in the analog signal then you "should"
>>filter before sampling so as to meet the Nyquist criterion.
>>
>>When there are discontinuities in a sampled signal then you "should"
>>filter before sample rate reduction.
>>[I say "should" because the degree, etc. becomes subjective to a
>>degree.]
>>
>>Ripple at the discontinuities is caused by using a rectangular spectral
>>window as in a "perfect" brick wall lowpass filter. Applying any filter
>>causes convolution in the time domain. A brick wall filter has a sinc as
>>its temporal response. The convolution with a sinc causes the ripples at
>>transient edges.
>>
>>The solution to the ripples is to use a filter whose transition from
>>passband to stopband is more gradual. There are even optimum filters
>>that have monotonic temporal transitions - with *no* ripple and still
>>have minimum rise time. This is as good as it gets.
>>
>>So, you pick a lowpass filter that has "nice" characteristics for your
>>application and use that as a pre-decimation filter. You may want to
>>decimate in stages using half-band filters - that's one option.
>>
>>With any lowpass filter you're going to have a transient in the impulse
>>/ unit sample response as the filter "fills up" with the next change in
>>the signal. So, you may be motivated to ignore the ends of a filtered
>>record and perhaps even to ignore the information around a step change -
>>although that really isn't necessary. The signal, post-decimation, is
>>what it is, transients and all.
>>
>>Obviously you can't lowpass filter and *then* expect to identify exactly
>>where the step changes have occurred ... at least not more accurately
>>than the lower bandwidth allows. In general, the temporal resolution
>>for this sort of thing is the reciprocal of the filter bandwidth. You'll
>>be stuck with that.
>>
>>This ignores any fancy nonlinear processing one might do.
>>
>>Fred
>>
>>
>
> Please tell me more about the fancy nonlinear processing. As I already
> mentioned, bilateral filtering works pretty good in the case where it is
> important to preserve the edges of a jump in the signal. Weighted median
> filters should also be able to do the trick fairly well. Any other
> suggestions?
> /Christer
For the case of finding a discontinuity you are faced with deciding
where an event of a known nature happened.
Contrast this with the general anti-alias -> decimate -> upsample ->
reconstruct case. In that case, you are reconstructing a signal of
unknown nature that is known to have been bandlimited. The _only_ thing
you can say _in general_ is that information has been tossed out -- but
you have no clue what that information might be.
On the other hand, say that you know you have nothing but straight-line
data with discontinuities. For the sake of making the ASCII art easy,
say that you've filtered it with a simple running average filter. So
you'll get something like this:
O O O
O
O O O
Because it's a simple average filter, it's easy to fill in the output of
the running average filter:
O---O---O.
\
\
O
`--O---O---O---
From _that_ it's easy to see that the actual discontinuity happened early
in a sample interval, letting the running average filter run out for a
while before the sample happened. Do a bit of math, and you can name the
exact time.
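The "bit of math" Tim alludes to can be sketched like this (my own illustration, not from the thread): a length-W running average turns a step at time t0 into a linear ramp between the old and new levels over [t0, t0 + W], so any output sample caught on the ramp pins down t0 exactly.

```python
# Sketch: recover the time of a step discontinuity from the output of
# a running-average (boxcar) filter. A sample y on the ramp satisfies
#   y = lo + ((t_sample - t0) / W) * (hi - lo)
# which can be solved for t0.

def step_time_from_ramp_sample(y, t_sample, lo, hi, W):
    """Solve the boxcar-ramp equation for the step time t0."""
    frac = (y - lo) / (hi - lo)   # how far the averaging window has filled
    return t_sample - frac * W

# Toy check: step from 0 to 1 at t0 = 2.3, window W = 1.0.
# A sample at t = 3.0 sees the window 70% filled, so y = 0.7.
t0 = step_time_from_ramp_sample(0.7, 3.0, 0.0, 1.0, 1.0)
print(t0)   # recovers the step time, ~2.3
```

This only works, of course, once you have decided which samples lie on the ramp and what the flat levels on either side are.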
--
www.wescottdesign.com
Reply by Mark●September 3, 2009
On Sep 3, 8:10 am, Jason <cincy...@gmail.com> wrote:
> On Sep 3, 8:04 am, "chrah" <christer.ahlst...@vti.se> wrote:
>
> > >If the downsampled data are useless, there is no point of
> > >downsampling.
>
> > Ok, useless was perhaps a strong word, sorry. The DSP related question
> > would be: Is there a good way of doing 1D edge-preserving filtering (or
> > rather smoothing) to avoid ripples around discontinuities as much as
> > possible? Bilateral filtering could be a useful approach, but something
> > less CPU needy would be preferable.
>
> If you're wanting to use a linear filter to bandlimit the signal
> before you decimate (or for that matter, an antialiasing filter before
> you sample), then you are going to have ripples near discontinuities;
> nothing you can do about that. Google for "Gibbs phenomenon." You can
> compress the ripples in time by widening your filter bandwidth, but
> their amplitude won't decrease.
>
> Jason
Jason,
that is true only if the filter is "brick wall", i.e. the high
frequencies are truncated suddenly. If instead the filter has a
more gradual rolloff in frequency, the Gibbs ringing can be
reduced or even eliminated.
Mark
Reply by Fred Marshall●September 3, 2009
chrah wrote:
>> chrah wrote:
>
> Please tell me more about the fancy nonlinear processing. As I already
> mentioned, bilateral filtering works pretty good in the case where it is
> important to preserve the edges of a jump in the signal. Weighted median
> filters should also be able to do the trick fairly well. Any other
> suggestions?
> /Christer
I'm not sure I know what to suggest... not my area of expertise.
Fred
Reply by jim●September 3, 2009
chrah wrote:
>
> [earlier messages quoted in full -- snipped]
>
> Please tell me more about the fancy nonlinear processing. As I already
> mentioned, bilateral filtering works pretty good in the case where it is
> important to preserve the edges of a jump in the signal. Weighted median
> filters should also be able to do the trick fairly well. Any other
> suggestions?
Maybe you just need data compression. Is your task to store as much of
the original data as possible in a small amount of space?
-jim
> /Christer
Reply by chrah●September 3, 2009
>[Fred Marshall's reply quoted in full -- snipped]
>
Please tell me more about the fancy nonlinear processing. As I already
mentioned, bilateral filtering works pretty well when it is
important to preserve the edges of a jump in the signal. Weighted median
filters should also be able to do the trick fairly well. Any other
suggestions?
/Christer
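The median-filter half of Christer's own suggestion is simple enough to sketch (my illustration; window length is a tuning choice). Unlike a linear lowpass, a sliding median removes isolated noise spikes while leaving a clean state transition perfectly sharp, which is exactly what an enum signal needs before resampling.

```python
import statistics

def median_filter(x, k=5):
    """Sliding-window median of x; k should be odd. Edges use a
    shrunken window rather than padding."""
    h = k // 2
    return [statistics.median(x[max(0, i - h):i + h + 1])
            for i in range(len(x))]

# Noisy two-level "enum" signal: one spike (7), one dropout (0).
x = [0, 0, 0, 7, 0, 0, 5, 5, 5, 5, 0, 5, 5]
print(median_filter(x, 3))   # spike and dropout removed, edge preserved
```

A weighted median, which Christer also mentions, is the same idea with each in-window sample replicated according to its weight before taking the median.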
Reply by Fred Marshall●September 3, 2009
chrah wrote:
> Hi,
> I need to downsample a bunch of signals, all of which have very different
> properties (the Nyquist criterion will not be fulfilled after the
> downsampling). My question is how to proceed in the best possible way. All
> processing is off-line but has to be fairly fast.
>
> Case 1: An analogue signal (continuous amplitude and time) has been
> sampled and needs to be downsampled. I have no problems here, just apply an
> antialias filter and resample properly.
>
> Case 2: An analogue signal with discontinuous jumps. Ripples introduced by
> the antialias filter makes the downsampled signal useless. Please help.
>
> Case 3: An analogue signal which contains constant segments. Antialias
> filters introduce ripples at the edge of each constant segment. Can this be
> avoided in a clever way?
>
> Case 4: An enum signal (a few discrete amplitude levels and continuous
> time). I guess the best approach here is to, kind of, just pick the sample
> which is closest to the new sampling time (nearest neighbour
> interpolation).
>
> Case 5: Noisy enum signal. Using a linear filter will introduce lots of
> ripple since there are discontinuous jumps every time the signal changes
> from one amplitude state to another. My approach would be to use a median
> filter followed by the case 2 approach, but I guess there must be a better
> way?
>
> I hope you can help
>
> Best regards
> Christer
>
Christer,
Let's try to break this down into a few fundamental things:
By "downsample" we generally mean bringing a passband signal down to
baseband with quadrature samples. This usually involves reducing the
sample rate at the same time.
By "decimate" or "sample rate reduction" we generally mean reducing the
sample rate but leaving the signal spectrum location the same.
I think you mean to decimate here / to reduce the sample rate.
When there are discontinuities in the analog signal then you "should"
filter before sampling so as to meet the Nyquist criterion.
When there are discontinuities in a sampled signal then you "should"
filter before sample rate reduction.
[I say "should" because the degree, etc. becomes subjective to a degree.]
Ripple at the discontinuities is caused by using a rectangular spectral
window as in a "perfect" brick wall lowpass filter.
Applying any filter causes convolution in the time domain.
A brick wall filter has a sinc as its temporal response.
The convolution with a sinc causes the ripples at transient edges.
The solution to the ripples is to use a filter whose transition from
passband to stopband is more gradual. There are even optimum filters
that have monotonic temporal transitions - with *no* ripple and still
have minimum rise time. This is as good as it gets.
So, you pick a lowpass filter that has "nice" characteristics for your
application and use that as a pre-decimation filter. You may want to
decimate in stages using half-band filters - that's one option.
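Fred's filter-then-decimate recipe looks like this in outline (my own sketch, not Fred's code; the crude moving average stands in for whatever "nice" lowpass you actually choose):

```python
def decimate(x, R, taps):
    """Lowpass-filter x with the FIR `taps`, then keep every R-th sample."""
    filtered = []
    for n in range(len(x)):
        acc = 0.0
        for k, h in enumerate(taps):
            if 0 <= n - k < len(x):     # direct-form FIR convolution
                acc += h * x[n - k]
        filtered.append(acc)
    return filtered[::R]                # sample-rate reduction by R

# Crude pre-decimation filter: an R-point moving average.
R = 4
taps = [1.0 / R] * R
x = [float(i % 8) for i in range(32)]   # toy input
y = decimate(x, R, taps)
```

For large reduction factors, doing this in stages (e.g. repeated decimation by 2 with half-band filters, as Fred notes) is usually cheaper than one long filter.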
With any lowpass filter you're going to have a transient in the impulse
/ unit sample response as the filter "fills up" with the next change in
the signal. So, you may be motivated to ignore the ends of a filtered
record and perhaps even to ignore the information around a step change -
although that really isn't necessary. The signal, post-decimation, is
what it is, transients and all.
Obviously you can't lowpass filter and *then* expect to identify exactly
where the step changes have occurred ... at least not more accurately
than the lower bandwidth allows. In general, the temporal resolution
for this sort of thing is the reciprocal of the filter bandwidth.
You'll be stuck with that.
This ignores any fancy nonlinear processing one might do.
Fred
Reply by chrah●September 3, 2009
Rob,
>Christer, it sounds like what you want isn't downsampling
As Tim commented, there is a lot of context missing in my post. To
start with, this is a large project where most of the decisions are made
way over my head. Data from many different sources with different sample
rates, some even irregularly sampled, will eventually end up in an SQL
database where everything is synchronized and "pretty", so that queries
such as 'select x from y where z>q and...' can be run. My task is basically to put the
data into the database with as little data loss as possible. I can do
nothing about the data acquisition phase and nothing about the database,
all I can do is to insert the time series with a predetermined (even)
sample rate.
Tim,
>I also assume that the anti-aliasing filter you mention is in the analog
Well, not really. Even though many of these signals are analogue to start
with, they are already digital when fetched from the CAN bus. I am talking
about anti-aliasing filters that have to be used in the resampling process
to avoid introducing (additional) aliasing. In a previous project, the "old
me" just picked out every tenth sample and that was the resampling process.
You can imagine what the output looked like.
All,
thanks for your answers, I have learned a few new "DSP" tricks :o)
Reply by Tim Wescott●September 3, 2009
On Thu, 03 Sep 2009 04:59:31 -0500, chrah wrote:
> Hi,
> I need to downsample a bunch of signals, all of which have very
> different properties (the Nyquist criterion will not be fulfilled after
> the downsampling). My question is how to proceed in the best possible
> way. All processing is off-line but has to be fairly fast.
>
> Case 1: An analogue signal (continuous amplitude and time) has been
> sampled and needs to be downsampled. I have no problems here, just apply
> an antialias filter and resample properly.
>
> Case 2: An analogue signal with discontinuous jumps. Ripples introduced
> by the antialias filter makes the downsampled signal useless. Please
> help.
>
> Case 3: An analogue signal which contains constant segments. Antialias
> filters introduce ripples at the edge of each constant segment. Can this
> be avoided in a clever way?
>
> Case 4: An enum signal (a few discrete amplitude levels and continuous
> time). I guess the best approach here is to, kind of, just pick the
> sample which is closest to the new sampling time (nearest neighbour
> interpolation).
>
> Case 5: Noisy enum signal. Using a linear filter will introduce lots of
> ripple since there are discontinuous jumps every time the signal changes
> from one amplitude state to another. My approach would be to use a
> median filter followed by the case 2 approach, but I guess there must be
> a better way?
>
> I hope you can help
>
> Best regards
> Christer
How are cases 2 through 4 different, really -- I don't mean conceptually,
I mean if you just look at a picture on a scope?
I assume that this data is given, i.e. that you don't have influence over
its acquisition? If you do, then some other form of compression than
simple sub-sampling is probably vastly better.
I also assume that the anti-aliasing filter you mention is in the analog
domain, before the original sampling? If you're bound and determined to
use the sub-sampled data then this will actually help you to find the
times of the discontinuities better, but it's kind of a nutty way to do
it if you don't have to.
For a signal that is just straight lines and discontinuities, then with
sufficient time between the discontinuities you should be able not
only to determine the level of the straight parts but also to infer
the timing of the discontinuity with pretty fair accuracy. It
won't be with a linear filter, though -- you'll need to decide that an
edge has happened, then deduce when it happened.
For a signal that's straight lines and discontinuities _and_ noise, then
you should be able to extend your above result AS LONG AS the noise is
moderate compared to the discontinuity size -- basically your
discontinuity detector will have to be tuned to discriminate between
plain old noise and real actual discontinuities.
If case 3 is an analog signal with _some_ straight lines and _some_ curvy
ones, then an "onset of straightness" and an "onset of curviness"
detector could be used similar to the discontinuity detector I describe
above, although its operation will be much more smoky.
But for cases 2, 4 and 5, it looks like you could get most of your data
back for all cases where the discontinuities are spaced sufficiently far
apart and are sufficiently large in comparison to the noise.
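The detector Tim describes can be as simple as a threshold on the first difference, tuned to sit well above the noise but well below the discontinuity size, followed by averaging each between-edges segment to recover its level (my own sketch; names and threshold are illustrative):

```python
def find_steps(x, thresh):
    """Flag indices where the sample-to-sample jump exceeds thresh."""
    return [n for n in range(1, len(x)) if abs(x[n] - x[n - 1]) > thresh]

def segment_levels(x, steps):
    """Average each between-steps segment to estimate its level."""
    bounds = [0] + steps + [len(x)]
    return [sum(x[a:b]) / (b - a) for a, b in zip(bounds, bounds[1:])]

x = [0.1, -0.1, 0.0, 5.1, 4.9, 5.0, 5.1]   # step near n = 3, mild noise
steps = find_steps(x, thresh=2.0)           # threshold >> noise, << step
levels = segment_levels(x, steps)
```

With the step index in hand, the ramp arithmetic from the filtered-output discussion earlier in the thread refines the estimate to sub-sample timing; the longer the clean segments, the better the level estimates average down the noise.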
--
www.wescottdesign.com
Reply by Tim Wescott●September 3, 2009
On Thu, 03 Sep 2009 07:04:35 -0500, chrah wrote:
> First of all, my apologies for posting a question that is only partly
> related to DSP. However, I know that many of you are very good at
> tweaking and fiddling with signals, not only in a strict DSP sense.
>
(I wish you'd leave the context in. This is USENET, no matter how hard
Google Groups tries to make it look like another web forum, and with
USENET it's nice to leave context in.)
I'm not sure what restrictions Rune is placing by saying "not quite
DSP". If it's a signal, and you're processing it, and you're doing it
numerically, then it's DSP to me.
--
www.wescottdesign.com