
Best resampling approach for different types of data?

Started by chrah September 3, 2009
Hi,
I need to downsample a bunch of signals, all of which have very different
properties (the Nyquist criterion will not be fulfilled after the
downsampling). My question is how to proceed in the best possible way. All
processing is off-line but has to be fairly fast.

Case 1: An analogue signal (continuous amplitude and time) has been
sampled and needs to be downsampled. I have no problems here, just apply an
antialias filter and resample properly.

Case 2: An analogue signal with discontinuous jumps. Ripples introduced by
the antialias filter make the downsampled signal useless. Please help.

Case 3: An analogue signal which contains constant segments. Antialias
filters introduce ripples at the edge of each constant segment. Can this be
avoided in a clever way?

Case 4: An enum signal (a few discrete amplitude levels and continuous
time). I guess the best approach here is to, kind of, just pick the sample
which is closest to the new sampling time (nearest neighbour
interpolation). 
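Something like this rough Python sketch is what I have in mind for the
nearest-neighbour pick (function name and rates are mine, just for
illustration):

```python
def nearest_neighbour_resample(signal, old_rate, new_rate):
    """Downsample by picking, for each new sample time, the original
    sample closest in time.  No filtering is involved, so an enum
    signal keeps its exact levels and gets no ripple."""
    n_new = int(len(signal) * new_rate / old_rate)
    out = []
    for k in range(n_new):
        t = k / new_rate                  # new sample time in seconds
        idx = round(t * old_rate)         # nearest original sample index
        out.append(signal[min(idx, len(signal) - 1)])
    return out
```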

Case 5: Noisy enum signal. Using a linear filter will introduce lots of
ripple since there are discontinuous jumps every time the signal changes
from one amplitude state to another. My approach would be to use a median
filter followed by the case 2 approach, but I guess there must be a better
way?
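In code, the median-filter step I have in mind would be something like this
sketch (window width picked arbitrarily):

```python
from statistics import median

def median_filter(signal, width=5):
    """Sliding-window median.  Removes impulsive noise on a few-level
    (enum) signal without smearing the level transitions the way a
    linear lowpass filter would."""
    half = width // 2
    out = []
    for i in range(len(signal)):
        lo = max(0, i - half)
        hi = min(len(signal), i + half + 1)
        out.append(median(signal[lo:hi]))
    return out
```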

I hope you can help

Best regards
Christer

On 3 Sep, 11:59, "chrah" <christer.ahlst...@vti.se> wrote:
> Hi,
> I need to downsample a bunch of signals, all of which have very different
> properties (the Nyquist criterion will not be fulfilled after the
> downsampling).
A *very* bad start, as far as DSP is concerned. On the other hand, DSP might not be the method of choice for working with these data.
> My question is how to proceed in the best possible way. All
> processing is off-line but has to be fairly fast.
>
> Case 1: An analogue signal (continuous amplitude and time) has been
> sampled and needs to be downsampled. I have no problems here, just apply an
> antialias filter and resample properly.
>
> Case 2: An analogue signal with discontinuous jumps. Ripples introduced by
> the antialias filter make the downsampled signal useless. Please help.
Then you will have to decide what means more to you: the (presumably useful) original data, or the storage space. If the downsampled data are useless, there is no point in downsampling.

Or you will have to come up with a very detailed description of both the process that generates your data and the purpose of your processing. There might be non-DSP methods that can be used, but making such a judgement requires very detailed insight into the problem.
> Case 3: An analogue signal which contains constant segments. Antialias
> filters introduce ripples at the edge of each constant segment. Can this be
> avoided in a clever way?
Nope. DSP filters work by convolution with waveforms. This inevitably produces ripples near transients. But again, non-DSP methods might be relevant, provided one has the proper insights etc.
> Case 4: An enum signal (a few discrete amplitude levels and continuous
> time). I guess the best approach here is to, kind of, just pick the sample
> which is closest to the new sampling time (nearest neighbour
> interpolation).
This kind of thing can be represented as (start_time, level) pairs, which is outside the realm of DSP.
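For what it's worth, that representation is just a run-length style encoding; a minimal Python sketch (names are illustrative):

```python
def to_runs(samples, rate):
    """Encode a piecewise-constant (enum) signal as (start_time, level)
    pairs.  This is exact, and usually far smaller than the raw
    uniformly sampled record."""
    runs = []
    for i, level in enumerate(samples):
        if not runs or runs[-1][1] != level:
            runs.append((i / rate, level))   # time of first sample at this level
    return runs
```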
> Case 5: Noisy enum signal. Using a linear filter will introduce lots of
> ripple since there are discontinuous jumps every time the signal changes
> from one amplitude state to another. My approach would be to use a median
> filter followed by the case 2 approach, but I guess there must be a better
> way?
If you *know* that the signal is supposed to be a number of constant segments, you could try to detect the transitions (might be possible if the noise amplitudes are small compared to the level changes) and estimate the level in each segment.

The nature of your signals seems rather different from the types of signals one usually encounters in DSP. You might want to consult fora where general time-series analysis problems are discussed. And as you understand by now, you should also be prepared to come up with very detailed descriptions of your data and of what kinds of information you want to extract from them.

Rune
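A minimal sketch of that detect-then-estimate idea, assuming the jump threshold is set well above the noise (all names and values are illustrative):

```python
from statistics import mean

def segment_levels(x, jump_threshold):
    """Split the record wherever consecutive samples differ by more
    than jump_threshold (assumes noise amplitude << level changes),
    then estimate each segment's level as the mean of its samples.
    Returns (start_index, level) pairs."""
    starts = [0] + [i for i in range(1, len(x))
                    if abs(x[i] - x[i - 1]) > jump_threshold]
    starts.append(len(x))                 # sentinel: end of record
    return [(starts[k], mean(x[starts[k]:starts[k + 1]]))
            for k in range(len(starts) - 1)]
```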
First of all, my apologies for posting a question that is only partly
related to DSP. However, I know that many of you are very good at tweaking
and fiddling with signals, not only in a strict DSP sense.

>A *very* bad start, as far as DSP is concerned. On the other
>hand, DSP might not be the method of choice for working with
>these data.
Yep, but it is a huge dataset of time series, so there is not much I can do about that. Anyhow, the majority of the information in the signals is of course left untouched.
>If the downsampled data are useless, there is no point in
>downsampling.
Ok, "useless" was perhaps too strong a word, sorry. The DSP-related question would be: is there a good way of doing 1D edge-preserving filtering (or rather smoothing) that avoids ripples around discontinuities as much as possible? Bilateral filtering could be a useful approach, but something less CPU-hungry would be preferable.
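For concreteness, a brute-force 1D bilateral filter is only a few lines. This Python sketch (all parameter values arbitrary) weights neighbours by both time distance and amplitude difference, so samples on the far side of a jump barely contribute and the jump is preserved:

```python
import math

def bilateral_1d(x, half_width=3, sigma_s=1.5, sigma_r=0.5):
    """Edge-preserving smoothing: each output sample is a weighted mean
    of its neighbours, where the weight falls off with distance in time
    (sigma_s, in samples) and with difference in amplitude (sigma_r,
    in signal units)."""
    out = []
    for i in range(len(x)):
        num = den = 0.0
        for j in range(max(0, i - half_width),
                       min(len(x), i + half_width + 1)):
            w = (math.exp(-((j - i) ** 2) / (2 * sigma_s ** 2))      # spatial
                 * math.exp(-((x[j] - x[i]) ** 2) / (2 * sigma_r ** 2)))  # range
            num += w * x[j]
            den += w
        out.append(num / den)   # den >= weight of x[i] itself, never zero
    return out
```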
On Sep 3, 8:04 am, "chrah" <christer.ahlst...@vti.se> wrote:
> >If the downsampled data are useless, there is no point in
> >downsampling.
>
> Ok, "useless" was perhaps too strong a word, sorry. The DSP-related
> question would be: is there a good way of doing 1D edge-preserving
> filtering (or rather smoothing) that avoids ripples around discontinuities
> as much as possible? Bilateral filtering could be a useful approach, but
> something less CPU-hungry would be preferable.
If you're wanting to use a linear filter to bandlimit the signal before you decimate (or for that matter, an antialiasing filter before you sample), then you are going to have ripples near discontinuities; nothing you can do about that. Google for "Gibbs phenomenon." You can compress the ripples in time by widening your filter bandwidth, but their amplitude won't decrease.

Jason
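The effect is easy to demonstrate: lowpass filtering a step with a truncated-sinc FIR overshoots by roughly 9% (the Gibbs overshoot) regardless of the tap count. A small Python sketch, with arbitrary cutoff and tap count:

```python
import math

def sinc_lowpass_taps(cutoff, n_taps):
    """Truncated ideal-lowpass (sinc) FIR taps; cutoff in
    cycles/sample, normalised to unity DC gain."""
    mid = (n_taps - 1) / 2
    taps = []
    for n in range(n_taps):
        t = n - mid
        taps.append(2 * cutoff if t == 0 else
                    math.sin(2 * math.pi * cutoff * t) / (math.pi * t))
    s = sum(taps)
    return [h / s for h in taps]

def convolve_same(x, h):
    """'Same'-length convolution of x with symmetric taps h,
    holding the edge values to pad."""
    pad = len(h) // 2
    xp = [x[0]] * pad + list(x) + [x[-1]] * pad
    return [sum(h[k] * xp[i + k] for k in range(len(h)))
            for i in range(len(x))]

# Filtering a 0 -> 1 step shows the ripple: the output exceeds 1
# above the step and dips below 0 beneath it, whatever the cutoff.
```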
On 3 Sep, 14:04, "chrah" <christer.ahlst...@vti.se> wrote:
> First of all, my apologies for posting a question that is only partly
> related to DSP. However, I know that many of you are very good at tweaking
> and fiddling with signals, not only in a strict DSP sense.
>
> >A *very* bad start, as far as DSP is concerned. On the other
> >hand, DSP might not be the method of choice for working with
> >these data.
>
> Yep, but it is a huge dataset of time series, so there is not much I can
> do about that.
It's not about whether the data are time-domain or not. When I talk about 'DSP data' and 'non-DSP data', I mean that the signals dealt with in DSP are of the types encountered in either radar/sonar-type applications or in communications applications. Such data have certain properties in common, which is why DSP techniques might be useful in several such applications.

The properties of your data (and problem statement) seem to be quite different from the problem domains where DSP is known to be useful, which is why I am a bit sceptical about using typical DSP techniques. Again, you might get more useful responses if you can describe the data and the analysis in more detail.

Rune
>Again, you might get more useful responses if you can
>describe the data and the analysis in more detail.
Most of the signals come from the CAN bus of a car. These are perhaps not the most intriguing examples, but they are rather illustrative:

Case 1: Acceleration.
Case 2: Lateral position on the road. Every time you cross a line you get a jump in the signal.
Case 3: Velocity where cruise control is suddenly activated.
Case 4: The gear that is currently used.
On 3 Sep, 15:04, "chrah" <christer.ahlst...@vti.se> wrote:
> >Again, you might get more useful responses if you can
> >describe the data and the analysis in more detail.
>
> Most of the signals come from the CAN bus of a car. These are perhaps not
> the most intriguing examples, but they are rather illustrative:
>
> Case 1: Acceleration.
> Case 2: Lateral position on the road. Every time you cross a line you get
> a jump in the signal.
> Case 3: Velocity where cruise control is suddenly activated.
> Case 4: The gear that is currently used.
Hmmm.... classical DSP methods might not be very useful with such types of data. I would suggest you ask somebody who knows their way around Kalman filters for help. Check with the maths department at your local university.

Rune
>Hmmm.... classical DSP methods might not be very useful
>with such types of data. I would suggest you ask somebody
>who knows their way around Kalman filters for help.
Kalman filters are either too slow around the discontinuities or not smooth enough. They work nicely when complemented with a change detector (a fast filter when there is a jump, a slow one when nothing special is happening), but that is a little too computationally intensive.
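For reference, the fast/slow scheme can be sketched as a scalar random-walk Kalman filter with an innovation gate: when the innovation is improbably large, the state is simply restarted at the measurement (the "fast" behaviour); otherwise the normal slow update runs. All tuning values here are invented:

```python
def kalman_with_jump_detect(z, q=1e-4, r=0.1, gate=4.0):
    """Scalar random-walk Kalman filter over measurements z.
    q = process noise variance, r = measurement noise variance,
    gate = jump threshold in innovation standard deviations."""
    x, p = z[0], r                    # state estimate and its variance
    out = []
    for zk in z:
        p += q                        # predict (random-walk model)
        s = p + r                     # innovation variance
        innov = zk - x
        if innov * innov > gate * gate * s:
            x, p = zk, r              # jump detected: restart at measurement
        else:
            k = p / s                 # Kalman gain
            x += k * innov
            p *= 1 - k
        out.append(x)
    return out
```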
On 3 Sep, 15:53, "chrah" <christer.ahlst...@vti.se> wrote:
> >Hmmm.... classical DSP methods might not be very useful
> >with such types of data. I would suggest you ask somebody
> >who knows their way around Kalman filters for help.
>
> Kalman filters are either too slow around the discontinuities or not
> smooth enough. They work nicely when complemented with a change detector
> (a fast filter when there is a jump, a slow one when nothing special is
> happening), but that is a little too computationally intensive.
I think this is a case where the choice is between an expensive Kalman filter that actually works and simpler filters that don't. You might be better off spending the effort on speeding up the Kalman filter.

Depending on what software your KF is implemented in, it might be possible to speed things up by a couple of orders of magnitude. If you use one of the R&D packages (Matlab, LabVIEW, Mathematica), you (well, somebody who knows both their Kalman filters and their C/C++) might be able to speed things up by a factor of 10-100.

Rune

chrah wrote:
> Hi,
> I need to downsample a bunch of signals, all of which have very different
> properties (the Nyquist criterion will not be fulfilled after the
> downsampling). My question is how to proceed in the best possible way. All
> processing is off-line but has to be fairly fast.
[...]

This problem is not about downsampling, but about the extraction of the "important" part of the data, so it is similar to lossy compression. You have to decide what is "important" and what is not; that entirely depends on your problem. There could be many options. Just for example:

1. Non-uniform sampling: transmit only the "important" samples and the time of sampling.
2. Some type of regression (Kalman, polynomial, LPC...). Transmit only the coefficients and the prediction error.
3. Some type of transform (KLT, FFT, wavelets, Walsh-Hadamard, filterbank...). Transmit only the important coefficients.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
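Option 1, for instance, can be as simple as this Python sketch (tolerance and names invented): keep a sample, with its time, only when it moves more than a tolerance away from the last kept value; the receiver reconstructs with a zero-order hold.

```python
def important_samples(x, rate, tol):
    """Non-uniform sampling sketch: return (time, value) pairs for the
    samples that differ from the last kept value by more than tol."""
    kept = [(0.0, x[0])]              # always keep the first sample
    for i in range(1, len(x)):
        if abs(x[i] - kept[-1][1]) > tol:
            kept.append((i / rate, x[i]))
    return kept
```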