On Sun, 28 Oct 2012 16:19:23 -0500, WaveRider wrote:
>>On Sun, 28 Oct 2012 10:59:02 -0500, WaveRider wrote:
>>> Hi everyone,
>>> I'm new to the arena of DSP and have arrived at a bit of a
>>> sampling-based conundrum. I'm wondering if anyone here can help me out
>>> with possible solutions, or can simply set my understanding straight.
>>> I have an application where my signal of interest has a bandwidth of
>>> 500 Hz. At Nyquist, I'd be sampling at 1 kHz. Of course, I could
>>> oversample this by some factor and filter it down in digital, but at
>>> the end of the day I have a bandwidth of 500 Hz.
>>> I have a constraint on my system: I must produce a result from my DSP
>>> within 300 ms of a change to the input. At worst case, then, I assume
>>> that it must be within 300ms of getting a new sample. However, if I
>>> sample and subsequently clock the DSP system at 1 kHz, my clock period
>>> is 1 ms. If I were to then, say, collect 1024 samples for an algorithm,
>>> I've already taken over a second! Now, I could pipeline my samples
>>> through my DSP path. However, I still pay the price in latency: there
>>> will still be a delay from the change in signal to the final output
>>> that would be unacceptable.
>>> So I suppose my question is, if I were to massively oversample my
>>> signal, would I be able to avoid this latency cost or am I ultimately
>>> bounded by the bandwidth of the signal of interest? I.e., will my
>>> oversampled processing be "useless"? Could I sample at Nyquist and run
>>> the actual processing at a faster rate?
>>> One thing to consider is that part of my DSP path is the discrete
>>> wavelet transform, which itself deals with rate conversion. Do my
>>> samples have to be at Nyquist to take advantage of the wavelet
>>> transform? It would seem so.
>>> I'd appreciate any insight into this. Thanks!
>>First, as SteveU pointed out, you're conflating sampling rate and
>>bandwidth. They _are_ related, they _are not_ the same thing.
>>Second, you're making a common error about Nyquist rates, which is going
>>to lead you to severe aliasing if you don't change your ways: the Nyquist
>>rate is a theoretical floor that you must exceed, not a rate to aim for.
>>Sampling a 500Hz-wide signal at exactly 1kHz leaves no room for an
>>anti-aliasing filter to roll off; plan on sampling comfortably faster.
>>Third, a bandwidth of 500Hz suggests that all the interesting stuff can
>>be settled and done with in 2ms on a good day. So what are you doing
>>that's going to take 300ms?
>>Fourth, I would expect that your algorithm is going to take the sum of
>>whatever real-world time it was going to take anyway, plus one or two
>>sample times. If you need 1000 samples to do something meaningful at a
>>sampling rate of 1kHz, that's because you need one second's worth of
>>signal -- not because 1000 samples taken at a sampling rate of 1MHz would
>>be any substitute.
>>I deduce that your problem is one of detection and estimation. Perhaps if
>>you told us more about your _actual issue_ we could help you. I can
>>already tell you what it's going to boil down to: either your signal is
>>going to give you worthwhile information within 300ms of whatever event
>>you're trying to detect, or it won't. In the first case you have a
>>chance of doing something useful. In the second case, you're out of luck.
>>Since you're not letting us know what you're _actually_ trying to do,
>>there's not much help that I (or, I suspect, the rest of us) can give
>>unless you give more info.
>>Control system and signal processing consulting www.wescottdesign.com
> Hi Tim,
> I'm trying to do some simple myoelectric-based control.
> To restore the spectral content of the signal that has been attenuated
> by human skin, I'd like to implement a whitening filter based on Burg AR
> estimation. To converge to a better estimate of the AR process affecting
> my signal, this typically needs more samples to work on, not fewer. This
> module will then update a pre-whitening FIR's coefficients with its
> estimates. If I do it in a periodic "burst" fashion I suspect I may be
> able to get around this timing problem of 1 second, especially since I
> don't expect the estimate to vary greatly.
I don't see why this has to be bursty, or why it -- by itself -- would
cause problems with delaying your signal.
Just run multiple threads: one process estimates the whitening filter,
another one applies the current whitening filter and does the estimation.
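That two-rate structure can be sketched in a few lines. This is a Python/NumPy stand-in for the poster's Verilog, with a toy Burg routine; the AR(2) "coloring" process, the model order, and the 4096-sample estimation block are all made-up illustration numbers, not anything from the original post.

```python
import numpy as np

def burg_ar(x, order):
    """Burg's method: AR coefficients a[1..p] for x[n] = -sum(a[i]*x[n-i]) + e[n]."""
    x = np.asarray(x, dtype=float)
    f, b = x[1:].copy(), x[:-1].copy()           # forward/backward prediction errors
    a = np.zeros(0)
    for _ in range(order):
        k = -2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))
        a = np.concatenate((a + k * a[::-1], [k]))  # Levinson-style update
        f, b = f[1:] + k * b[1:], b[:-1] + k * f[:-1]
    return a

rng = np.random.default_rng(0)
a_true = np.array([-1.5, 0.8])                   # synthetic "skin" coloring, AR(2)
e = rng.standard_normal(20_000)                  # white driving noise
x = np.zeros_like(e)
for n in range(len(e)):                          # color the noise with the AR model
    for i, ai in enumerate(a_true):
        if n - 1 - i >= 0:
            x[n] -= ai * x[n - 1 - i]
    x[n] += e[n]

# Slow thread: periodically re-estimate the whitener from a recent block.
a_hat = burg_ar(x[-4096:], order=2)
whitener = np.concatenate(([1.0], a_hat))        # pre-whitening FIR coefficients

# Fast thread: apply whichever coefficients are current, sample by sample
# (vectorized here for brevity).
white = np.convolve(x, whitener, mode="valid")
```

The point is that the filtering path never waits on the estimator; it just picks up new coefficients whenever the slow path publishes a set.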
> On the other hand, I'll also be trying to detect features using the
> Discrete Wavelet Transform and 256-pt FFT followed by several estimator
> modules that work on windows of samples. I'm doing this all in hardware
> (Verilog) and so I can leverage true parallelism.
Something isn't matching up here. I alluded to this earlier, but you're
not speaking to it at all.
You are measuring a real signal, and trying to pull out some real
feature. But you speak of using a fixed-size FFT with an as-yet
undetermined sampling rate. How can this be? Shouldn't you be talking
about an FFT over some defined real time interval, with as many points as
it needs to have?
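To put numbers on that: the bin spacing of an N-point FFT is fs/N, so fixing N at 256 while leaving fs open means both the real-time window and the frequency resolution float. The sampling rates below are arbitrary examples, not the poster's:

```python
N = 256                                  # the fixed FFT size in question
for fs in (1_000, 8_000, 64_000):        # candidate sampling rates, Hz
    window_ms = 1e3 * N / fs             # real time the FFT actually spans
    bin_hz = fs / N                      # frequency resolution
    print(f"fs = {fs:6d} Hz -> window {window_ms:7.2f} ms, bins {bin_hz:6.2f} Hz wide")
```

At 1 kHz the 256 points cover 256 ms -- most of the 300 ms budget -- while at 64 kHz they cover 4 ms of signal with 250 Hz-wide bins. Neither N nor fs can be chosen independently of the time interval the feature of interest lives in.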
> So I get that my sampling rate can be much, much lower than my
> processing rate. However, that means anything working on the samples
> themselves will do no meaningful work until the next sample arrives,
> right? In that case I'll need to come up with an enabling scheme I
> suppose so that the various elements in the chain power-down once
> they've done their number crunching until the next valid sample arrives.
Possibly. I'm not up on the most current technologies, but back when I
was paying attention it wasn't always obvious how to get low power from
an FPGA -- basically, if you clocked the thing at all, you lit up a whole
lot of logic that then consumed power.
If you're re-using hardware (which is a wise thing to do at such low
sampling rates), then the way to save power with an FPGA may well be to
just throttle the clock down until you're finishing with sample n-1 right
about the time that sample n comes in the door.
Better yet, you may want to use a processor to do this.
> The wavelet transform somewhat troubles me here, however. If I
> oversample this 500 Hz bandwidth signal by some factor, and then filter
> it down in digital, my initial detail coefficient levels will be junk,
> right? Unless the low-pass/high-pass filters for the first iteration
> aren't half-band but instead are designed to segment my real signal band
> within the much larger oversampled band in half. Am I getting this
> right? This is probably my most important question.
I'm not really up on all this new-fangled wavelet stuff (it doesn't help
much with control systems). So your question is a bit jargon-rich.
I wouldn't call sampling faster than 1000Hz "oversampling" in any moral
sense, as in "you shouldn't be doing that" or "that's extravagant". I'd
call it "adequate sampling" instead.
Yes: the faster you sample, the more the higher-frequency bins of your FFT
(or whatever passes for higher-frequency wavelets) will tell you about
noise rather than about signal. Given sufficient processing power (and
assuming that you're using a windowed FFT and calling it "wavelets"), I
would be inclined to just do a longer FFT and discard the bins above the
band of interest.
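A minimal sketch of that "longer FFT, keep only the band" idea. The 8 kHz rate and 2048-point length are arbitrary, chosen so the bins are the same 3.9 Hz width a 256-point FFT would give at 1 kHz:

```python
import numpy as np

fs, N = 8_000, 2048                     # 8x the rate, 8x the length: same bin width
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 200 * t)         # a 200 Hz tone inside the 500 Hz band
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(N, d=1/fs)
keep = freqs < 500.0                    # discard the bins above the signal band
X_band, f_band = X[keep], freqs[keep]
peak_hz = f_band[np.argmax(np.abs(X_band))]   # peak lands near 200 Hz
```

The discarded bins cost FFT cycles but nothing else; the kept bins carry essentially the same information the 1 kHz analysis would have, with far less aliasing risk.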
If you absolutely positively feel that you must limit your sampling rate
to 1000Hz at some point, then sampling generously fast in the analog
domain and going through some sort of decimation process in the digital
domain is probably a good idea. It's easy to make well-behaved filters
in the digital domain that are practically impossible to make in the
analog domain. Were I doing this, I would start by investigating whether
a CIC filter was adequate.
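For reference, a CIC decimator is just N integrators at the input rate, a downsampler, and N combs at the output rate. Here's a toy floating-point sketch; a real FPGA version would use wraparound two's-complement integrators, and the R and N values are arbitrary:

```python
import numpy as np

def cic_decimate(x, R=8, N=3):
    """N-stage CIC decimator, differential delay 1, decimation factor R."""
    y = np.asarray(x, dtype=float)
    for _ in range(N):               # cascaded integrators at the fast rate
        y = np.cumsum(y)
    y = y[R - 1::R]                  # decimate by R
    for _ in range(N):               # cascaded combs at the slow rate
        y = np.diff(y, prepend=0.0)
    return y / R**N                  # remove the (R*M)**N DC gain

out = cic_decimate(np.ones(64))      # DC in -> DC out once the filter fills up
```

The passband droop means a CIC is usually followed by a short compensating FIR, but as a first decimation stage it costs nothing but adders and registers.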
One thing that concerns me, however, is that you seem to be falling into
the same trap with the wavelet transform that I see done with Kalman
filtering and fuzzy logic: namely that your statements make almost as
much sense with the word "magic" substituted for "wavelet". If you
really know what you're doing with the wavelet transform, then many of
these questions should answer themselves. If you're under the impression
that you're wielding a _magic_ transform, then progress is going to be
slow and -- at best -- random.
My liberal friends think I'm a conservative kook.
My conservative friends think I'm a liberal kook.
Why am I not happy that they have found common ground?
Tim Wescott, Communications, Control, Circuits & Software