DWT anti-causality
Started by ●August 8, 2006

Hi,

I have been playing around with the discrete wavelet transform (DWT) as
a preprocessor for an adaptive system (for system identification and
time-series prediction). The problem, however, is that the DWT is
anti-causal and suffers from boundary-value problems (due to the signal
extension).

I've tried the first thing that I could think of - a DWT with an
expanding window where I only keep the last sample from each transform.
The results were utter rubbish (due to the boundary distortions, I
suspect).

Does anybody know an approach that would work - to enforce causality so
that the DWT doesn't use future samples?

Any pointers would be appreciated,
Denoir
Reply by ●August 9, 2006
lucas.denoir@gmail.com wrote:
> [...]
> Does anybody know an approach that would work - to enforce causality so
> that the DWT doesn't use future samples?

In the case of a DWT with FIR filters, causality can be ensured by
appropriately delaying the signal.

--
Jani Huhtanen
Tampere University of Technology, Pori
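A minimal sketch of this delay idea for one analysis level, assuming
NumPy, SciPy, and PyWavelets (pywt) are available; the db2 wavelet, the
signal, and the decimation phase are illustrative choices, not anything
from the thread:

    # One causal analysis level of a dyadic DWT. lfilter only ever looks
    # at the current and past samples, so the branch outputs are causal;
    # the cost is a lag relative to the zero-phase (centered) result.
    import numpy as np
    import pywt
    from scipy.signal import lfilter

    x = np.random.randn(256)                  # assumed example stream
    w = pywt.Wavelet('db2')                   # assumed example wavelet
    lo, hi = np.array(w.dec_lo), np.array(w.dec_hi)

    approx = lfilter(lo, [1.0], x)[1::2]      # lowpass branch, decimated
    detail = lfilter(hi, [1.0], x)[1::2]      # highpass branch, decimated

    # The price of causality at this level's rate:
    print("per-level group delay (samples):", (len(lo) - 1) / 2)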
Reply by ●August 9, 2006
> In the case of a DWT with FIR filters, causality can be ensured by
> appropriately delaying the signal.

Yes, but more complex signals may require a fairly high number of
decomposition levels, so that isn't really an option. As the signal is
downsampled in each pass, the non-causal region doubles with each level.
That limits its usefulness in practice.
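For concreteness, a back-of-the-envelope sketch of that growth, assuming
the usual dyadic DWT tree; the filter length L = 4 (db2) is an assumed
example. Each level runs at half the previous rate, so a per-level
filter delay of L - 1 costs (L - 1) * 2**(j - 1) samples at the input
rate, and the sum over J levels is the geometric series below:

    L = 4                                     # assumed filter length (db2)
    for J in range(1, 8):
        delay = (L - 1) * (2**J - 1)          # sum of (L-1)*2**(j-1), j=1..J
        print(f"levels={J}: input-rate delay ~ {delay} samples")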
Reply by ●August 9, 2006
lucas.denoir@gmail.com wrote:
>> In the case of a DWT with FIR filters, causality can be ensured by
>> appropriately delaying the signal.
>
> Yes, but more complex signals may require a fairly high number of
> decomposition levels, so that isn't really an option. As the signal is
> downsampled in each pass, the non-causal region doubles with each level.
> That limits its usefulness in practice.

So you're running the wavelet transform for a real-time stream? As is
the case with pretty much every transform, you have to settle for the
delay. Use low-order filters, decompose as little as possible, and
compensate for the anti-causality with the delay. I don't think there is
much choice. (Well, there is, but every alternative effectively adds
delay to the signal.) Whether or not this is useful in practice depends
on your requirements.

--
Jani Huhtanen
Tampere University of Technology, Pori
Reply by ●August 9, 2006
lucas.denoir@gmail.com wrote:
>> In the case of a DWT with FIR filters, causality can be ensured by
>> appropriately delaying the signal.
>
> Yes, but more complex signals may require a fairly high number of
> decomposition levels, so that isn't really an option. As the signal is
> downsampled in each pass, the non-causal region doubles with each level.
> That limits its usefulness in practice.

That's true of all filters, digital or analog. It's why we can't extend
the bandwidth of a Hilbert transformer down to DC. The delay of a
symmetric FIR with N taps is (N - 1)/2 samples.

Jerry
--
Engineering is the art of making what you want from things you can get.
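A quick numerical check of that (N - 1)/2 figure, assuming SciPy is
available; the tap count and cutoff are arbitrary:

    # The group delay of a symmetric (linear-phase) FIR is flat and
    # equals (N - 1)/2 samples at every frequency.
    from scipy.signal import group_delay, firwin

    N = 21                                    # assumed example tap count
    taps = firwin(N, 0.3)                     # symmetric lowpass FIR
    w, gd = group_delay((taps, [1.0]))
    print("measured:", gd[0], "expected:", (N - 1) / 2)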
Reply by ●August 9, 2006
Jani Huhtanen wrote:
> So you're running the wavelet transform for a real-time stream?

Well, sort of. I have all the past data of the signal, but at T = t_n I
am trying to model T = t_n + d. So it needs to be causal.

Now, the problem as I see it is that the DWT uses downsampling so that
it can reuse the same FIRs for each pass while capturing other
frequencies. This is terrible in terms of causality, as the required
delay increases exponentially with the decomposition level.

I'm thinking of something along the lines of changing the filter for
each pass rather than downsampling the data. Sure, there would still be
a delay, but it would increase linearly with the decomposition level
rather than exponentially. Or am I missing something?
Reply by ●August 9, 2006
lucas.denoir@gmail.com wrote:
> Jani Huhtanen wrote:
>> So you're running the wavelet transform for a real-time stream?
>
> Well, sort of. I have all the past data of the signal, but at T = t_n I
> am trying to model T = t_n + d. So it needs to be causal.
> [...]
> I'm thinking of something along the lines of changing the filter for
> each pass rather than downsampling the data. Sure, there would still be
> a delay, but it would increase linearly with the decomposition level
> rather than exponentially. Or am I missing something?

The filter delay is built into the problem; low frequencies at high
sample rates make for long filters. Downsampling the data is simply an
efficient way to downsize the filter. Using a different filter that
enables you to avoid downsampling won't decrease your delay.

Put differently, you can either downsample the data or increase the
filter length for each pass. The latter increases memory use but doesn't
change the delay.

Jerry
--
Engineering is the art of making what you want from things you can get.
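One standard way to realize "change the filter instead of downsampling"
is the undecimated (a trous) scheme, where the level-j filter is the
base filter with zeros inserted between its taps. A sketch, assuming
NumPy and PyWavelets (db2 is an arbitrary choice); the printed spans
illustrate the point above - the filter's reach, and hence the delay,
still doubles per level:

    import numpy as np
    import pywt

    lo = np.array(pywt.Wavelet('db2').dec_lo)
    for j in range(1, 5):
        step = 2 ** (j - 1)                   # zero-insertion factor
        dilated = np.zeros((len(lo) - 1) * step + 1)
        dilated[::step] = lo                  # taps spread out, zeros between
        print(f"level {j}: filter span = {len(dilated)} taps")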
Reply by ●August 9, 2006
Jerry Avins wrote:
> The filter delay is built into the problem; low frequencies at high
> sample rates make for long filters. Downsampling the data is simply an
> efficient way to downsize the filter. Using a different filter that
> enables you to avoid downsampling won't decrease your delay.
>
> Put differently, you can either downsample the data or increase the
> filter length for each pass. The latter increases memory use but
> doesn't change the delay.

Right - I'll end up doubling the filter length for each level and be in
the same situation. OK, scratch that idea.

How about my initial one, which didn't work - simply using an expanding
window? For each new sample you transform it together with the previous
samples, then keep only the last sample from each transform. I know it
is computationally atrocious, but still, it should eliminate the delay,
at least explicitly. It didn't work for me (I suspect) because of the
boundary problems due to padding, but I'm thinking it's possible to work
around that by choosing window lengths such that no padding is
necessary. In that situation there will be a delay, but it will be
constant across all decomposition levels.
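One way the no-padding windowed variant might look, assuming NumPy,
SciPy, and PyWavelets; the window length, level count, and the choice of
which output counts as the "last" coefficient are illustrative
conventions, not anything established in the thread:

    # Causal filtering of a finite window: lfilter never looks past the
    # newest sample, so no right-edge extension is needed. The window
    # must be long enough that the left-edge startup transient no longer
    # reaches the newest coefficient at the deepest level.
    import numpy as np
    import pywt
    from scipy.signal import lfilter

    def last_coeffs(window, wavelet='db2', levels=3):
        """Newest detail value per level, plus the newest approximation."""
        w = pywt.Wavelet(wavelet)
        lo, hi = np.array(w.dec_lo), np.array(w.dec_hi)
        out = []
        a = np.asarray(window, dtype=float)
        for _ in range(levels):
            d = lfilter(hi, [1.0], a)         # causal highpass
            a = lfilter(lo, [1.0], a)[::2]    # causal lowpass, decimate
            out.append(d[-1])                 # keep only the newest value
        out.append(a[-1])
        return out

    x = np.random.randn(1024)
    print(last_coeffs(x[-64:]))               # recompute per new sample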
Reply by ●August 10, 2006
lucas.denoir@gmail.com wrote:
> Jerry Avins wrote:
> [...]
>
> How about my initial one, which didn't work - simply using an expanding
> window? For each new sample you transform it together with the previous
> samples, then keep only the last sample from each transform. I know it
> is computationally atrocious, but still, it should eliminate the delay,
> at least explicitly. It didn't work for me (I suspect) because of the
> boundary problems due to padding, but I'm thinking it's possible to
> work around that by choosing window lengths such that no padding is
> necessary. In that situation there will be a delay, but it will be
> constant across all decomposition levels.

Low frequencies at high data rates demand long filters, and long
transforms are needed to get to low frequencies with the proposed
scheme. It's true that, aside from startup and ending transients, there
is a sample out for each sample in (self-evident, really), but you need
to ask how long it takes before an input sample significantly affects
the output. If your transforms have a few kilosamples of history in
them, and every sample matters, there's your delay. FFT or FIR, the
computations are different but equivalent; the delays are precisely the
same.

BTW, a sliding FFT, which is what you just described, needn't be
inefficient. Google it.

Jerry
--
Engineering is the art of making what you want from things you can get.
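For reference, a minimal single-bin sliding-DFT sketch - the building
block behind the sliding FFT mentioned above - assuming NumPy; the
window length and bin index are arbitrary. Each incoming sample updates
the bin in O(1) via X_k[n] = (X_k[n-1] + x[n] - x[n-N]) * exp(j*2*pi*k/N):

    import numpy as np

    N, k = 64, 5                              # assumed window and bin
    tw = np.exp(2j * np.pi * k / N)           # per-sample twiddle
    x = np.random.randn(1000)

    Xk, buf = 0.0, np.zeros(N)                # buf holds the last N samples
    for n, sample in enumerate(x):
        Xk = (Xk + sample - buf[n % N]) * tw  # slide the window by one
        buf[n % N] = sample

    # Cross-check against a direct DFT of the final window:
    m = np.arange(N)
    direct = np.sum(x[-N:] * np.exp(-2j * np.pi * k * m / N))
    print(np.allclose(Xk, direct))            # True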
Reply by ●August 10, 2006
Denoir wrote:
> I have been playing around with the discrete wavelet transform (DWT) as
> a preprocessor for an adaptive system (for system identification and
> time-series prediction). The problem, however, is that the DWT is
> anti-causal and suffers from boundary-value problems (due to the signal
> extension).
> [...]
> Does anybody know an approach that would work - to enforce causality so
> that the DWT doesn't use future samples?

Perhaps your problem is really how to transform a linear-phase filter
into a minimum-phase filter? I can't quite tell what your problem is, or
why you feel that the DWT is a solution. Time-series prediction using
linear methods does not require any bandsplitting: you can directly
compute the linear predictor for any subband you wish in the frequency
domain.

Just some thoughts.

Regards,
Andor
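A sketch of that direct approach - fitting a d-step-ahead linear
predictor on the raw samples by least squares, with no bandsplitting.
Assumes NumPy only; the model order p, horizon d, and test signal are
illustrative:

    import numpy as np

    def fit_predictor(x, p=16, d=1):
        """Least-squares weights w so that x[n+d] ~ w . x[n-p+1 .. n]."""
        rows = [x[n - p + 1:n + 1] for n in range(p - 1, len(x) - d)]
        A = np.array(rows)                    # one window per row
        y = x[p - 1 + d:]                     # the sample d steps ahead
        w, *_ = np.linalg.lstsq(A, y, rcond=None)
        return w

    x = np.sin(0.1 * np.arange(2000)) + 0.01 * np.random.randn(2000)
    w = fit_predictor(x, p=16, d=5)
    print(x[-16:] @ w)                        # predict 5 steps past the end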