Artifacts in Time Varying IIR Filters
Started 5 years ago · 19 replies · 762 views
I'm currently playing with some audio processing using LFO driven filters. Right now I am using direct form I biquad filters as my basis and calculating the coefficients as a function of the output of the LFO. I 'feel' like there are artifacts as a result of modulating the filter coefficients, but I haven't tested it (outside the target) to find out for sure. I am planning to work on this more this weekend and in the following week, but I figured it couldn't hurt to post a couple of questions ahead of time to lean on the expertise on this forum.
First off, is it reasonable to expect artifacts as a result of varying the coefficients in DF-I? The closest I could find was that 'certain' topologies have issues with stability and artifacts, but I couldn't find anything about DF-I specifically. I could certainly see how that would be the case, since the output delay line is dependent on the coefficients.
Second, has anyone tried doing something like this with an FIR implementation? If I used the coefs to find an impulse response, I could use that as a time varying FIR filter. This seems like the most robust implementation, as by definition there would be no artifacts and the filter would always be stable. However, it seems like this will end up being too computationally expensive to be a reasonable implementation.
Third, is there any advantage to using a state-variable filter (SVF) in this application? I found a paper that proposed using an SVF in this manner with good results, but it didn't make intuitive sense to me (at least not at first glance).
Finally, if anyone has any other input or references I could look at for this application, I would greatly appreciate it. Thanks in advance!
The bi-quadratic filter, contrary to its popular usage, is not a good candidate for very narrow bandwidth filters. By narrow bandwidth, I mean a large ratio of sample rate to BW. Under this condition, it is very sensitive to finite arithmetic... are you implementing the filter in fixed point? The problem is, the inner loop, the a1 coefficient, is close to -2 and places one root outside the unit circle and leaves the other at the origin. The outer loop, the a2 coefficient, brings the two roots towards each other.. they meet (coalesce) just inside the circle and separate. When the roots are very close together, as they are for a low bandwidth, their position is very sensitive to small changes in the coefficients. A sensitivity analysis shows that the change in root position (due to a change in coefficient value) is inversely proportional to the distance between them.
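That sensitivity point is easy to check numerically. Here's an illustrative sketch (my own arbitrary pole positions, not Fred's analysis): perturb the a2 coefficient by the same small amount for a nearly coalesced narrow-band pole pair and for a well-separated pair, and compare how far the roots move.

```python
# Illustrative only: how far do biquad poles move for a small a2 perturbation?
import numpy as np

def pole_shift(r, theta, da2=1e-4):
    """Perturb a2 by da2 and measure the largest root displacement."""
    a1 = -2.0 * r * np.cos(theta)   # inner-loop coefficient
    a2 = r * r                      # outer-loop coefficient
    p0 = np.sort_complex(np.roots([1.0, a1, a2]))
    p1 = np.sort_complex(np.roots([1.0, a1, a2 + da2]))
    return np.max(np.abs(p1 - p0))

# Narrow band: small pole angle, a1 close to -2, roots nearly coalesced.
narrow = pole_shift(r=0.999, theta=0.01)
# Wider band: well-separated conjugate roots.
wide = pole_shift(r=0.9, theta=1.0)
print(narrow, wide)  # the narrow-band roots move far more for the same da2
```

The same 1e-4 coefficient change moves the nearly coalesced roots orders of magnitude farther than the well-separated ones, consistent with the inverse-distance sensitivity described above.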
There are other, less sensitive architectures; the standard state space version is not one of them! One option is called the normal filter. The version I particularly like uses a partial fraction expansion of the 2nd-order filter to form a pair of first order filters with complex conjugate roots and complex conjugate residues... their sum is real, or simply twice the real part, so we only have to build a single 1-tap filter with complex weight and residue. This filter has the lowest sensitivity to coefficient changes. I can send you some code I've written that does this...
I also suggest you look up two of my papers.
"An Improved Architecture for Implementing Narrow Bandwidth Low Pass Recursive Filters"
"Implementing Recursive Filters with Large Ratio of Sample Rate to Bandwidth"
best regards and happy new year
Am I correct that your complex 1st-order filter is similar to (or the same as) a 2nd-order state-space with a state transition matrix of something like [b * sin th, b * cos th; -b * cos th, b * sin th]?
I've kind of intuitively cooked up something like that, but only used it once or twice, for implementing notch filters in control loops.
I (and others for sure) would love to have the example code where you decompose a biquad into the 1 tap filter.
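For what it's worth, the matrix form above does check out numerically: iterating that 2x2 state transition matrix is the same as multiplying a single complex state by a pole of magnitude b. A quick sketch (my own reading of the correspondence, not Fred's code):

```python
# Sketch: the 2x2 transition matrix [b sin th, b cos th; -b cos th, b sin th]
# is complex multiplication by p = b*sin(th) - j*b*cos(th), written out in reals.
import numpy as np

b, th = 0.95, 0.3
M = np.array([[ b*np.sin(th),  b*np.cos(th)],
              [-b*np.cos(th),  b*np.sin(th)]])
p = b*np.sin(th) - 1j*b*np.cos(th)   # equivalent complex pole, |p| = b

s = np.array([0.7, -0.2])            # real state vector [u, v]
z = s[0] + 1j*s[1]                   # the same state as one complex number
for _ in range(50):
    s = M @ s                        # real 2x2 update
    z = p * z                        # complex 1st-order update

print(np.allclose([z.real, z.imag], s))  # they track to machine precision
```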
You might like to refer to the AES paper "The Time-Varying Bilinear Transform" by Abel & Berners.
I've never used this technique, so let me see if I understand the approach... I might be all wrong. The idea is to use a low frequency signal to adjust some parameter of the filter, such as its bandwidth. The filter is continually processing the signal, and every so often you update the filter parameters, correct? On each update you measure the amplitude of the low frequency signal, convert that into the desired bandwidth of the filter, calculate what coefficients are needed for the new bandwidth, and then load the new coefficients into the constantly running filter, correct? Fred gave some comments on narrow band filters... is that what you are using, or just lowpass and highpass?
Thanks everyone for the responses so far, it is greatly appreciated.
Unfortunately, I do not have an AES membership, so I’m still a bit on the fence about the cost of the download, but I’ll probably bite the bullet eventually. Thanks Probbie!
Fred, the filter is implemented using single precision floating-point data. I couldn’t find links to your papers, but will keep looking, I’d love to read them.
Steve, you got it. The idea is pretty common in guitar modulation effects, going all the way back to the UniVibe used by Hendrix, which, as near as I could tell, was a type of phaser effect, where you use time varying all pass filters to create a time varying phase response that creates an interference pattern when summed with the input signal.
Right now, the effect is still in flux, and I’m trying all kinds of variations of parameters, so I can’t really say that it’s low or high bandwidth. I think I’ve mostly been staying with a Q anywhere from 1/8 to 8. Additionally, I’ve been using several different filter types (lpf, notch, etc), just playing around to see what I like.
Nice... Hendrix was decades ahead of his time. Here's a couple of links on some filters you might like to try.
On your question about IIR versus FIR... I think you are exactly correct, either would work but FIR may have a problem with computational power. Say you are operating at 44,000 samples per second, and want your filter to extend down to about 100 Hz. That's going to need an FIR filter kernel of about 1,000 points. So the processing load is in the range of 88 MIPS. Something like a 600 MHz DSP will do it, but I don't know the platform you are working with.
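A back-of-envelope version of that cost estimate, using the same numbers (the tap-count rule of thumb here is mine, roughly two cycles of the lowest frequency, so treat it as approximate):

```python
# Rough cost check for the FIR alternative.
fs = 44_000                      # samples per second
f_low = 100                      # lowest frequency the kernel must resolve, Hz
taps = round(2 * fs / f_low)     # kernel spanning ~2 cycles of f_low (rule of thumb)
macs_per_sec = taps * fs         # one multiply-accumulate per tap per sample
print(taps, macs_per_sec / 1e6)  # tap count, and millions of MACs per second
```

This lands near the ~1,000-point kernel estimated above, and tens of millions of multiply-accumulates per second, which is why the IIR route is attractive on small platforms.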
The other question about artifacts is more interesting... and difficult to answer. Mathematically there is a discontinuity in the output signal each time you update the coefficients... what occurred before the change is statistically different than what comes after. But the more I think about it, either for FIR or IIR, any transients produced by this seem insignificant, assuming the coefficient changes are slight. It would be interesting to me, and probably the group, if you find an example where the transient is noticeable.
The DSP Guide was actually the document that got me interested in DSP. I've read it cover to cover a couple of times, it's great.
The whole notion of artifacts is fascinating. No doubt there could be a discontinuity if you change the filter kernel, at least if the in-band phase/amplitude characteristics change. The best definition of an artifact I've come up with is the difference between the actual output following the change and the output of a system that had always been in the new state. I'd have to run some tests to figure out how much difference there actually is.
I'd be happy to make some recordings (to the best of my ability) of the target so you all can hear what I'm talking about, or at least what I think I'm talking about.
Dan, send me your e-mail address (to fjharris [at] eng.ucsd.edu) and I'll send you the papers.
"Fred, the filter is implemented using single precision floating-point data. I couldn’t find links to your papers, but will keep looking, I’d love to read them."
"Q anywhere from 1/8 to 8"
Hmm. Maybe enough precision for 8-bit data. Assuming 44kHz-ish sample rates, 5kHz-ish filter frequencies, and Q = 8, you need about 12 bits of precision in the filter above and beyond the precision in the input data, plus some overhead (2 bits is good, more is better) so that your filter isn't the worst contributor to quantization noise.
My my. You've received some high-powered advice! Both Fred harris and Steve Smith are holders of "Black Belts" in the field of DSP.
Rick (Dr Lyons?), indeed, I always appreciate the input on this forum. I actually came across your Streamlining book while searching the topic. Looks like it’s full of good wisdom.
Hi dszabo. I am the Editor of, and a contributor to, that "Streamlining DSP" book. It's a collection of articles, by many authors, that appeared in the "DSP Tips & Tricks" column of the IEEE Signal Processing Magazine over a period of roughly nine years. And yes, there's some interesting DSP material in that book.
You have received so much great theoretical advice, I'm sure you will get it working fine.
I would like to add some advice on the practical side. Many years ago we did this for some Hypersignal customer-specific applications, and we found that using Direct Form II biquads, if we changed the coefficients in several small steps, artifacts were negligible. For example, if you have coefficient set n in place and due to some condition in your system you now need to replace it with set n+1: if for each coefficient you divide the total change by 10 and apply that over the next 10 samples, we could not detect noticeable output glitches.
We did it that way as our options were limited. Because Hypersignal generated optimized asm language source code for several DSP devices of the day (TI C5x, Analog Devices 21xx, Motorola 56xxx, AT&T DSP32C, etc), we couldn't change the biquad form or otherwise make changes that would affect the generated source code.
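A minimal sketch of that stepping scheme (the coefficient sets are made up for the demo, and this is not the Hypersignal code): linearly interpolate each coefficient to its new value over 10 samples while the filter keeps running.

```python
# Illustrative coefficient stepping on a Direct Form II biquad.
import numpy as np

def df2_step(x, w, b, a):
    """One Direct Form II biquad sample. w = [w1, w2] is the delay state."""
    w0 = x - a[1]*w[0] - a[2]*w[1]          # recursive summing junction
    y = b[0]*w0 + b[1]*w[0] + b[2]*w[1]     # feed-forward taps
    w[1], w[0] = w[0], w0                   # shift the delay line
    return y

# Arbitrary stable (b, a) sets standing in for "set n" and "set n+1".
old = (np.array([0.2, 0.4, 0.2]), np.array([1.0, -0.5, 0.2]))
new = (np.array([0.1, 0.2, 0.1]), np.array([1.0, -0.9, 0.4]))

x = np.sin(2*np.pi*440/44100*np.arange(200))
w = [0.0, 0.0]
y = np.empty_like(x)
for n, xn in enumerate(x):
    t = min(max(n - 100, 0), 10) / 10.0     # ramp runs over samples 100..110
    b = (1-t)*old[0] + t*new[0]             # per-sample coefficient interpolation
    a = (1-t)*old[1] + t*new[1]
    y[n] = df2_step(xn, w, b, a)
```

One caveat: linearly interpolated coefficients are not guaranteed to stay stable for every pair of endpoint filters, so it's worth checking the intermediate pole positions for your particular coefficient ranges.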
PS. For anyone interested, legacy Hypersignal-Macro software is available at:
There are basically two things that you're dealing with here: first, the suitability of biquads for IIR filters in general; and second, how you might make a time-varying filter to do what you want.
IIR Filter Topology:
I think this one has been treated fairly exhaustively here, although there's probably someone out there who could write an entire monograph on the subject and still not beat it entirely to death. Fortunately, it's also a common problem so there's a lot of literature floating around on the subject. I know that I touch on it in my book on control systems design, and I'm as sure as I can be without actually looking that it's in Rick Lyons's book on DSP techniques. Doing web searches that combine the phrase "IIR filter" with either "quantization" or "precision" should get you a lot of material, from rule-of-thumb guidelines like the one I gave, to good methods of doing the analysis.
(Just as a side note -- my preferred technique, because I come from control systems, uses block diagram analysis. Make a block diagram of your filter, be it Direct Form 2, State Space, or whatever. Then calculate the transfer function from each summing junction in the block diagram to the output. The maximum gain will tell you how much overhead you need to bake into your filter to deal with quantization noise.)
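To make that side note concrete: for roundoff noise injected at the recursive summing junction of a direct form biquad, the noise-to-output transfer function is 1/A(z), so the peak of |1/A| on the unit circle tells you roughly how many extra bits that node needs. A sketch with an arbitrary resonant denominator (example values, not from the thread):

```python
# Peak noise gain from a biquad's recursive summing junction to the output.
import numpy as np

a = np.array([1.0, -1.6, 0.81])      # example resonant denominator, poles at radius 0.9

w = np.linspace(0, np.pi, 4096)      # frequency grid on the unit circle
A = np.polyval(a, np.exp(1j * w))    # A(z) evaluated on the unit circle
peak = np.max(np.abs(1.0 / A))       # worst-case gain, noise injection to output
extra_bits = int(np.ceil(np.log2(peak)))
print(peak, extra_bits)              # extra bits of precision that node wants
```

The sharper the resonance (poles closer to the unit circle), the larger this peak grows, which is the quantitative version of the rule-of-thumb bit counts mentioned earlier in the thread.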
Time-Varying Filters:
This has not gotten much attention here -- probably rightly so, because IIR filters are common, but time-varying ones aren't.
As long as your parameters vary slowly enough, you don't need to do anything special. However, as the frequency of the parameter variation gets within an order of magnitude or so of the cutoff frequencies of your filter, weird things will start happening, and the filter topology will start to matter.
The way I think of this (what little thinking I've done) is that as you change the parameter values, the states (or maybe the states corresponding to modes) either do or don't get "squeezed", and as a result do or don't immediately affect the output. A quick (and probably inaccurate) way of seeing this: if you have a filter topology where the output comes directly from a state variable, or some linear combination of state variables, and the gains from state variables to output don't change, then the output cannot jump with a jump in the filter parameter. The converse is, obviously, true. I think of this as the states getting "squeezed", but I highly doubt that's an acceptable technical term.
I can't write intelligently on this subject beyond that statement, because I haven't been there. However, what I would do in your shoes would be, first, to see if I could get my hands on those papers, even joining that society if I could afford it. Second, if there are actual electronic circuits whose sound you want to replicate, or to use as inspiration, try to get your hands on them, and make filters that simulate their action, right down to representing their internal states (probably all capacitor voltages, but maybe some inductor currents) in the states of your filter. That way, when the states get "squeezed" by the parameter values changing, it'll be the same way as the prototype.
Thank you everyone for the replies. While I was planning on doing some tests this weekend, I have instead been going back to paper designs to establish what it is that I am even trying to achieve. There was so much valuable input, it's going to take me a while to sort through it all.
Tim, I can't thank you enough for the level of detail you put into your reply, it got me thinking about the problem on a different level than I otherwise would have. To be honest, I have historically dealt with fixed point implementations, and was taking my floating point unit for granted, not even considering the effect its precision would have on filter performance. That's a big enough topic that I don't think this thread merits getting deeper into it, but thanks all the same.
Regarding your thoughts on time-varying filters, that really got me thinking. As a core value, I have been trying to explicitly avoid recreating existing signal processors (after all, what's the point in trying to compete directly with something you can already get?). That being said, my strategy has been based around creating implementations of various DSP and software development techniques I've learned over the years. Where I landed on the subject was, as I said, essentially trying to recreate a time varying FIR filter with an IIR topology.
I (sort of) took your advice and created some discrete time models of simple electronic circuits. That got me rethinking what it was that I was trying to achieve. If you were to have an RC low-pass filter, and half way through charging following a step input you changed the resistor, you wouldn't expect the voltage at the output to change instantaneously, just that the rate at which it is charging would change. If I instead changed the impulse response, the output voltage would change instantly. I'm not trying to say that one or the other is "correct", but I think it is an important distinction.
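A toy numerical version of that RC thought experiment (forward Euler, made-up component values): change the "resistor" halfway through the charge, and the state-based output stays continuous with only a change in slope, exactly as described above.

```python
# One-pole RC lowpass charging toward a 1 V step; R changes mid-charge.
import numpy as np

fs = 10_000.0                               # integration rate, Hz
v = 0.0                                     # capacitor voltage (the state)
out = []
for n in range(400):
    rc = 0.005 if n < 200 else 0.02         # time constant jumps at sample 200
    alpha = 1.0 / (fs * rc)                 # forward-Euler step coefficient
    v += alpha * (1.0 - v)                  # charge toward the 1 V step input
    out.append(v)
out = np.array(out)

# The output never jumps: the largest sample-to-sample step stays tiny,
# even though the coefficient changed abruptly at sample 200.
print(np.max(np.abs(np.diff(out))))
```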
Thanks again! -Dan
There is also Vadim Zavalishin's book on ZDF filters, which have become quite popular in music DSP applications. This topology specifically addresses modulation aspects, i.e. time varying filters, as opposed to the much better known LTI systems.
NI just posts books like this on their website? How did I not know that!? You are my hero, thank you thank you thank you
I do lots of work on compressors/expanders and NR systems, so get involved with filters that are either time varying or are in situations where the signals are modulated somehow (similar in a general sense.)
I have some warnings for you -- be careful anytime that you modulate (multiply) an audio signal with a signal spectrum that includes the near audio and audio frequency ranges. There are ways to mitigate some of the problems, but almost always (including fast attack/decay compressor/expanders) your signals are going to manifest intermodulation products that you don't intuitively expect. I am just trying to make you aware of the matter.
Here is an example -- a fast attack limiter. The attack times chosen are sometimes instantaneous or sometimes in the usec or msec ranges. When you do a gain control with such attack (or decay) characteristics -- you'll get intermod products. A perfectly simple experiment that demonstrates the effect is to test with a relatively fast attack/decay compressor/expander, and simply supply the device with a quickly varying, say, 100 Hz signal. Now, on the output of that compressor/limiter, filter out the frequency range of your expected signal... Even though you WOULD expect noise glitches for the basic signal -- the necessary intermodulation, more than likely you'll notice even worse intermodulation effects when using a time varying device like a compressor/limiter. So, instead of a bit of splatter -- you'll get even more splatter. This splatter doesn't know much about Nyquist, so will happily produce products that wrap around and become even more ugly.
*A hint about 'limiters', is that if you can -- sometimes instantaneous attack is better than the naturally smooth 1msec or so attack -- because it concentrates the ugliness within a short time frame, and just MIGHT be less audible (it all depends.) ALL OF THIS IS MAGIC, and really does require that one put their 'thinking cap' on.
All of the time varying stuff along with the incoming signals, and yes -- even the sampling can/will produce sidebands/intermodulation/modulation products that you don't expect.
My best suggestion is to be aware that the effect exists. Every time you take a signal and apply a nonlinear operation to it (including multiplication for gain control or parameter variation), your friend 'distortion products' starts appearing with his/her evil intent :-). The idea of the spectrum wrapping (Nyquist) makes the problem even more troublesome.
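A minimal numerical illustration of that sideband warning (plain amplitude modulation standing in for fast gain control; a real compressor's gain law would spray many more products than this):

```python
# Multiplying a 1 kHz tone by a 100 Hz gain envelope creates 900/1100 Hz sidebands.
import numpy as np

fs, N = 8000, 8000                       # 1 second, integer cycles of both tones
t = np.arange(N) / fs
carrier = np.sin(2*np.pi*1000*t)         # the "audio" signal
gain = 1.0 + 0.5*np.sin(2*np.pi*100*t)   # fast time-varying gain
y = gain * carrier

spec = np.abs(np.fft.rfft(y)) / N        # bin k corresponds to k Hz here
# Energy appears at 900 and 1100 Hz -- frequencies present in neither input.
print(spec[900], spec[1000], spec[1100])
```

Push the modulation faster or make the gain law more abrupt (hard limiting) and the product spectrum spreads much further, eventually wrapping at Nyquist as described above.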
(I hope that this is the general subject that you were asking about -- if it isn't we can ask that this post be removed... Don't want to waste peoples time.)