I would like to present to you a note on an algorithm for time-reversed (truncated) IIR #Filtering. The method described therein yields filters with lower latency than block processing schemes, and unlike the IIR tail cancellation method, it does not introduce unstable modes. Since I have not found this concept mentioned in the literature, I was wondering whether it is actually something new. I have successfully implemented linear-phase Linkwitz-Riley crossovers and a #Hilbert filter, and although CPU load is somewhat higher than for straight #IIR implementations, the concept is fairly simple and numerical conditioning is not an issue. It would be really surprising if this had not been discovered before.
I've never seen that - interesting. The complexity sits nicely between TIIR and plain FIR.
Any interest in writing a blog post about this?
Hello Martin. I'd like to understand the details of your filtering scheme. (Looking briefly at your block diagrams, your scheme is reminiscent of a cascaded integrator-comb (CIC) filter implementation that contains no integrators!)
In any case, before I dive into your z-domain equations I have a dumb question for you. Is your filtering scheme solving the same problem (using efficient IIR filters while eliminating their nonlinear phase) that’s solved by the off-line (block) processing shown in the following diagram?
Yes, the scheme in my note provides a stable implementation of time-reversed IIR filters for a continuous data stream. If applied in sequence with the straight (not time-reversed) filter, you get linear phase and a squared magnitude transfer function.
To my knowledge there are two other methods that can do that: block processing and tail cancellation.
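For readers unfamiliar with the block-processing baseline mentioned above, here is a minimal offline sketch (using SciPy; this illustrates the forward/time-reversed cascade on a finite block, not the streaming scheme from the note): running an IIR filter forward and then time-reversed over the same data cancels the phase and squares the magnitude response.

```python
import numpy as np
from scipy.signal import butter, lfilter

# Example prototype: 2nd-order Butterworth lowpass (arbitrary choice).
b, a = butter(2, 0.2)

# Centered unit impulse on a finite block, so the offline
# forward/backward trick applies directly.
x = np.zeros(257)
x[128] = 1.0

y = lfilter(b, a, x)               # forward (causal) pass
y = lfilter(b, a, y[::-1])[::-1]   # time-reversed (anticausal) pass

# The combined impulse response is h(n) * h(-n): symmetric about the
# input impulse, i.e. the overall filter has zero (linear) phase,
# and its magnitude response is |H(e^jw)|^2.
print(np.allclose(y, y[::-1], atol=1e-9))
```

The catch, for a continuous stream, is that the time-reversed pass needs future samples, which is exactly what block processing (with its latency) and tail cancellation (with its unstable modes) work around.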