Multiband dynamic processing

Started by Badoumba 6 years ago · 12 replies · latest reply 5 years ago · 1244 views

Hi everyone,

This is my first post in this community.

Interested in acoustics in general, I am now experimenting with C++, implementing different filters and sound processes. I am currently focusing on multiband dynamics and wonder how to build one. I imagine that for a 3-band processor, I have to duplicate my signal 3 times, run a low-pass, band-pass, and high-pass filter in parallel, apply compression on each channel, and then sum the 3 processed signals at the end.

My questions are simple:

- Is this approach correct?

- What kind of filter should I use for this (second order, biquad, ...)? There seem to be many different ones, but I have no idea about their respective uses.

Thanks for any tip!


Reply by dszabo, April 14, 2017

I've not designed a multi-band compressor, but I have used them quite a bit, so take this with a certain level of skepticism.

The kind of filter you use is entirely up to you as the designer.  In addition, a biquad filter can be a second order filter.  That being said, you probably want your filters to be complementary, such that the sum of the outputs, given unity gain (no compression/makeup gain), matches the input.

One criterion you would most likely want to impose: a biquad band-pass filter has 20 dB/decade roll-off above and below the center frequency, so to stay complementary you would probably want to use first-order low/high-pass filters.

I've given some thought to this in the past, and here is what I've considered doing: take the input stream and make a copy of it, low-pass filter one copy, then subtract it from the original to create the second band (NOTE: this could be done with proper filter coefficients using a high-pass filter, but I believe subtraction is more efficient and guarantees that the sum of the two outputs equals the input, especially for higher-order low-pass filters).  This process could then be repeated with increasing cutoff frequencies to create as many subsequent bands as needed.  From there, as you described, run your dynamics control on the different audio bands to get the desired effect.
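The split-and-subtract idea above can be sketched in C++ like this. This is a minimal illustration, not dszabo's actual code; the one-pole low-pass and the names OnePoleLP/splitBands are assumptions made here for the example:

```cpp
#include <cmath>
#include <vector>

// Hypothetical one-pole low-pass; fc and fs are in Hz.
struct OnePoleLP {
    float a;
    float z = 0.0f;
    OnePoleLP(float fc, float fs)
        : a(1.0f - std::exp(-2.0f * 3.14159265f * fc / fs)) {}
    float process(float x) { z += a * (x - z); return z; }
};

// Split 'in' into low and high bands; high = in - low, so the two
// bands sum back to the input exactly, whatever the filter does.
void splitBands(const std::vector<float>& in,
                std::vector<float>& low, std::vector<float>& high,
                float fc, float fs)
{
    OnePoleLP lp(fc, fs);
    low.resize(in.size());
    high.resize(in.size());
    for (std::size_t i = 0; i < in.size(); ++i) {
        low[i]  = lp.process(in[i]);
        high[i] = in[i] - low[i];
    }
}
```

Because the high band is formed by subtraction, the two outputs reconstruct the input exactly by construction, regardless of the low-pass filter's order or phase behavior.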

That's my 0.02 anyway, best of luck!

Reply by Badoumba, April 15, 2017

Thanks dszabo!

I found everything I needed for the filters you mentioned here:

I put 2 filters in cascade to get a -24 dB/octave slope, as suggested.

I didn't think of the subtraction trick, but it might be exactly what I need. I am still trying to implement it, as I am working with vectors and the std::transform function reports some errors. But I'll get through it soon.

Thanks again.

Reply by Badoumba, April 15, 2017

It looks like just subtracting the low-pass buffer from the initial track is not working. This was, however, the method suggested in this post:

Reply by dszabo, April 17, 2017

I'm certain you could get satisfactory results simply by using the source code library you linked to and setting up the parallel filters with the correct parameters.

The whole subtraction idea is provided without warranty, and I don't have a ton of time to try it on my end right now, but I bet you can manage without it for now.  If you start to run out of CPU, you may want to reconsider it, but until then it would probably be wise to keep things as simple as possible.  Especially since you haven't even gotten to code your dynamics control yet -- that'll be the fun part!

Are you implementing this as a VST plug-in, or something like that?

Reply by Badoumba, April 17, 2017

I am just developing an offline processing exe for now, to add some effects on top of an input track. I intended to use CLAM or Marsyas, but the documentation is rough for me, with very few examples (I am not a C++ programmer), and I found it very difficult to build a skeleton app :(

I am not sure I can go far on my own like this, but at least I am learning something along the way!

Reply by xaqmusic, August 23, 2018

I stumbled on this thread while looking for ways to improve a multiband compressor my team and I are building...

Regarding the subtraction technique: it works.  However, you have to be satisfied with the phase distortion where the high and low edges of the band-pass signal you are subtracting interact with the original signal.  The effect is not unpleasant to the ear and can yield some very cool tonal effects.  I liked using it for de-essing and for controlling voice inside a mix during the mastering process.

I prototyped the concept in SynthEdit.  You can download a VST (Windows) here if you want to hear it for yourself (the Xaq Dynamic Eq):


Reply by dszabo, August 25, 2018

Right on!  I’m glad you were able to make something out of it.  I’d love to hear a demo

Reply by probbie, April 18, 2017

If you use a parallel filter bank, the biggest issue will be the phase relationship between the bands when you sum them together again. If you use linear-phase FIR filters (with equal latency values) then all will be well. Equally, you can use biquads if you select the correct filter 'shape'. Filters used for loudspeaker crossovers have similar requirements. Check out 4th-order (24 dB/octave) Linkwitz-Riley.
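For reference, one side of a 4th-order Linkwitz-Riley crossover can be built as two cascaded 2nd-order Butterworth (Q = 1/sqrt(2)) sections. The sketch below uses the widely known RBJ "Audio EQ Cookbook" biquad coefficients; the Biquad/LR4 names are invented for the example:

```cpp
#include <cmath>

// One RBJ-cookbook biquad, fixed at Butterworth Q = 1/sqrt(2).
struct Biquad {
    float b0, b1, b2, a1, a2;
    float x1 = 0, x2 = 0, y1 = 0, y2 = 0;
    Biquad(float fc, float fs, bool highpass) {
        float w0 = 2.0f * 3.14159265f * fc / fs;
        float c = std::cos(w0);
        float alpha = std::sin(w0) / (2.0f * 0.70710678f);
        float a0 = 1.0f + alpha;
        if (highpass) { b0 = (1 + c) / 2; b1 = -(1 + c); b2 = (1 + c) / 2; }
        else          { b0 = (1 - c) / 2; b1 =  (1 - c); b2 = (1 - c) / 2; }
        b0 /= a0; b1 /= a0; b2 /= a0;
        a1 = -2.0f * c / a0; a2 = (1.0f - alpha) / a0;
    }
    float process(float x) {
        float y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
        x2 = x1; x1 = x; y2 = y1; y1 = y;
        return y;
    }
};

// One crossover side (LP or HP): two identical Butterworth sections
// in cascade give the 24 dB/octave Linkwitz-Riley slope.
struct LR4 {
    Biquad s1, s2;
    LR4(float fc, float fs, bool hp) : s1(fc, fs, hp), s2(fc, fs, hp) {}
    float process(float x) { return s2.process(s1.process(x)); }
};
```

The matching LP and HP sides of an LR4 crossover are in phase at the crossover point and sum to an all-pass response (flat magnitude), which is why this shape is popular for crossovers; note the summed output is phase-shifted, not bit-identical to the input.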

Reply by Badoumba, April 18, 2017


Thanks for the filter tip.

In a DAW, we very often split an initial signal into 2 sub-branches and apply different filters, and I never had any phase issue with that process. Is this different, or maybe the signal is not split this way behind the scenes?

As for CLAM or Marsyas, does anybody have any experience/feedback?


Reply by dszabo, April 18, 2017

With respect to using a DAW, an engineer is typically going to be looking at time delay and inversion on a waveform.  Similarly, the latency through your signal routing and plug-ins can create a relative delay between a split audio signal.  This can create very distinct phasing effects that you would usually try to avoid.

In DSP, filters are typically analyzed in terms of their z-transform, which has both a magnitude and a phase response as a function of frequency.  A graphical EQ will (or at least should) give you the magnitude response as part of the UI, but it does not give you the phase response.  Because the phase response of a given filter in the DAW is not necessarily known, you would want to exercise caution in assuming that, just because the magnitude plots should sum to unity, they actually do.

That being said, I personally haven't encountered issues with this.  For example, if I use the Channel Strip plug-in on a pair of busses in Pro Tools, set one to be an LP filter at 1 kHz and the other an HP filter at 1 kHz, both with the same roll-off, I know from experience that they will sum to unity.  The point is just that this isn't guaranteed to be the case.

The wisdom would be to make sure to have some verification that the filters you design sum to unity, either by deriving the transfer function analytically, or by testing it with an impulse.
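The impulse-response check suggested above is easy to automate. Here is one hedged sketch (the sumsToUnity helper is invented for the example): push a unit impulse through a set of band filters and verify that the summed outputs reproduce the impulse:

```cpp
#include <cmath>
#include <functional>
#include <vector>

// Feed a unit impulse through each band filter and check that the
// summed impulse response is itself a unit impulse, i.e. the filter
// bank sums to unity. Each band is a stateful sample-in/sample-out
// callable.
bool sumsToUnity(std::vector<std::function<float(float)>> bands,
                 int n = 256, float tol = 1e-5f)
{
    for (int i = 0; i < n; ++i) {
        float x = (i == 0) ? 1.0f : 0.0f;   // unit impulse
        float sum = 0.0f;
        for (auto& band : bands) sum += band(x);
        if (std::fabs(sum - x) > tol) return false;
    }
    return true;
}
```

A complementary pair (low-pass plus "input minus an identical low-pass") passes this test trivially; two independently designed filters may not, which is exactly the caution raised above.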

Reply by johndyson10, April 25, 2018

Well -- I know that my reply is late/old, but I think that I have a lot of info/experience to help you with.  My current project is a DolbyA decoder, my current on-hold project is a multiband compressor/limiter complex, and I also have an in-progress expander project.  Each of these projects is working pretty well, and I might be able to help you with some of your potential approaches.

I am going to restrict my comments to compressor/limiter matters -- expanders (and the DolbyA decoder) are similar in some ways, but the design tradeoffs are different.

For a multi-band compressor -- I have a few hints that will help, and also I can talk about some of the decisions that you might need to make.

The two most important (initial) choices are the frequency band count/choices and how to detect the audio.  The detection scheme choice is incredibly important, and can almost make/break the project.

For the audio bands: since you are talking multi-band, do you want to choose 2, 3, 5-6, or more bands?  I have some suggestions from experience; let's just focus on 2 or 3 bands for now.  If you go much beyond that, the choices have different criteria.

For 2 or 3 bands, the most important thing is to split the bass from the treble.  So, where to split the bass?  My suggestion is to make the transition between 160 and 320 Hz.  This will cover most of the traditional 'bass' sound and help keep it from modulating the mids and highs.  For three bands, it is good to keep the speech frequencies together, so I wouldn't split the treble below 2.5 kHz, and would rather choose 3-5 kHz.  If the speech frequencies are chopped up, it can be noticeable without careful management.  So, the decision about the highest band has been made: 3-5 kHz up to 20.5 kHz.  Note that I put a top on the highest frequencies -- there is a reason for that, but we might not get into it here.  Suffice it to say that constraining the spectrum is a good thing.  Let's choose our bands now -- I am going to make the decision for my new 3-band: 20-240 Hz, 240 Hz-3 kHz, and 3 kHz-20.5 kHz.  For a 2-band: 20-240 Hz and 240 Hz-20.5 kHz.

How to detect the audio?  There are two general schemes: an RMS-like detector or a linear-like detector.  For smooth-sounding general-purpose compressors, I suggest an RMS-like detector.  That means: signal -> square -> avg/peak-detect-with-averaging -> filter -> square root -> raw signal level value.
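As a concrete illustration of that chain, here is a minimal RMS-style detector. The RmsDetector name, and collapsing the averaging stage into a single one-pole filter, are simplifications assumed for the example:

```cpp
#include <cmath>

// Minimal RMS-style detector following the chain described above:
// signal -> square -> averaging filter -> square root.
// 'coeff' is a smoothing coefficient in (0, 1]; smaller = slower.
struct RmsDetector {
    float avg = 0.0f;
    float coeff;
    explicit RmsDetector(float c) : coeff(c) {}
    float next(float x) {
        float sq = x * x;              // square
        avg += coeff * (sq - avg);     // one-pole average of signal^2
        return std::sqrt(avg);         // back to a linear level value
    }
};
```

Because the averaging happens in the squared domain, brief peaks are weighted by their energy rather than their amplitude, which is part of what gives RMS-style detection its smoother character.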

For a linear detector (not recommended except for special purposes or slow gain control), one filters in the linear domain instead of the 'squared' domain, like this: signal -> avg/peak-detect-with-averaging -> filter -> raw signal level value.

Linear gain control makes for very responsive gain following, but it is too responsive, and higher compression levels tend to give a sense of level depression -- it is really odd, and should be used with care or for compander applications.

So -- RMS detection is only part of the detector choice.  The simplest option is an averaging scheme for the filter; the simplest filter is a straight RC time constant (or its digital equivalent), followed by further nonlinear filtering.  I use a really sophisticated (and difficult to explain ad hoc) detector scheme that helps mitigate intermodulation effects.  Offline, or in the future, I can explain the whole gamut of my detector and gain control C++ classes (always changing and always being improved), but a good first cut would be a 1 msec attack time and a 100 msec decay time in the detection filter above.  After that, add a subsequent dynamic filter whose decay time varies depending upon the dwell time of the signal.  The dynamic filter can be both (or either) in the linear or dB domain.  If you use the dB domain for the larger-scale gain changes, the compressor will sound very similar at 3 dB or 30 dB of compression.  If you use subsequent linear filtering only, then the sound will become more compressed-sounding as you increase the compression depth.
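A first cut at the detection filter described above, with separate attack and decay, might look like this. This is a sketch; the 1 - exp(-1/(tau*fs)) coefficient mapping is a common convention assumed here, not something taken from the post:

```cpp
#include <cmath>

// One-pole follower with separate attack and decay time constants,
// per the suggested 1 ms attack / 100 ms decay starting point.
// atkSec/dcySec are time constants in seconds; fs is the sample rate.
struct AttackRelease {
    float y = 0.0f;
    float aAtk, aDcy;
    AttackRelease(float atkSec, float dcySec, float fs)
        : aAtk(1.0f - std::exp(-1.0f / (atkSec * fs))),
          aDcy(1.0f - std::exp(-1.0f / (dcySec * fs))) {}
    float next(float x) {
        float a = (x > y) ? aAtk : aDcy;   // fast upward, slow downward
        y += a * (x - y);
        return y;
    }
};
```

This would sit after the detector's squaring stage (or operate on dB values); the dynamic, dwell-time-dependent decay the post describes would then replace the fixed aDcy with one that adapts.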

Next choice: compression ratio.  Compression ratio is usually best specified as a unitless dB/dB ratio.  It can be calculated in slightly different ways depending on the kind of flexibility that you want...  In the most general case, I use either the C/C++ pow function or a multiply in the dB domain, depending on where I do the calculation.  If you never go into the dB domain, you can just use the pow function, or explicit sqrt-type operations, to effect the compression ratio.

But how do you actually 'wire in' the compression ratio?  Well, assuming feedforward gain control (much easier on computers than trying to do feedback gain control), you simply take the processed level, apply the nonlinear transfer function that does the compression ratio calculation, then multiply this dynamic result with the signal itself.  For inf:1 compression, you take the level calculated/processed above, mathematically invert it, and then multiply that by the audio signal.  (I am not saying 'divide' for historical reasons; I always multiply.)  If you only want 2:1 compression, you take that inverted 'level', sqrt it, and then multiply by the signal.  Basically, in the general case, the signal multiplier (gain) can be calculated by: gainval = pow(invlevel, crpower);, where crpower = (1.0 - (1.0 / cr));.  Or, if you don't want to do the inversion, you can make the pow function do all of the compression ratio work: gainval = pow(level, -1.0 * crpower);.  The -1.0 multiplier in the power function does the inversion for you.
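That pow() recipe can be captured in a few lines (compGain is a name invented for the example; 'level' is the detected linear level relative to the threshold):

```cpp
#include <cmath>

// Feedforward compressor gain from the detected level, per the
// pow() recipe above: crpower = 1 - 1/cr, gain = pow(level, -crpower).
// cr is the dB/dB compression ratio (2.0 means 2:1).
float compGain(float level, float cr) {
    float crpower = 1.0f - 1.0f / cr;
    return std::pow(level, -crpower);
}
```

For example, with cr = 2 and a level 12 dB over threshold (level = 4), the gain is 4^-0.5 = 0.5, bringing the output to 6 dB over: exactly 2:1. As cr grows large, crpower approaches 1 and the gain approaches 1/level, which is the inf:1 (limiting) case described above.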

One thing that I didn't mention is that this 'measurement' and 'gain' is done individually for each audio band.

After the signal * gain calculation is done for each band, one can (often) just sum the results together.  There are times when a simple sum isn't the best choice, and that can be a subject for future discussion.  Basically, think of this (put your EE hat on): whenever you multiply a signal by a gain and that gain is changing, what do you get?  Sidebands!!!  Because the gain change is nonlinear, the sidebands are not nice and clean, and fast gain changes can cause lots of splatter all over the place...  So, in some cases, if done carefully, some post-filtering can be used to improve the sound quality of a basic compressor/expander, and in fact be much better than a REAL FET compressor.  (Opto compressors have a big advantage in that their gain changes are slow, which causes less intense far-away sidebands/distortion.  That is one reason why opto compressors are so desirable.)

Let's not worry about 2nd-order matters like 'optimum quality' and just get a good/plausible compressor working -- so the post-filtering can be deferred for now.

Oh well -- I hope that this helps to get you started...  I have LOTS more info and experience on these conceptually simple things (which are infinitely more subtle than one might initially guess!!)


Reply by johndyson10, September 26, 2018

I have some C++ code that could be relatively easily adapted to be multiband.  This compression is pretty clean (a few problems with sampling, which means that a little more careful crafting might be in order).  However, the dynamics and sound of this compression are both pretty good (other than the HF effects which I mentioned).  It might need an LPF on the gain control signal.  I have written much more complex compressors based upon this general technique -- it approximately models the dbx RMS approach.  Some support subroutines might need to be filled in, like 'Vmax', which in my library is a 'max' routine that works with everything, including my long-vector package variables.  Likewise, dB2lin and lin2dB are similar -- they work with vector & scalar operations -- however, for the purposes of this code, just normal subroutines would work fine.

Again -- not 100% complete, but the code runs as-is in my C++/audio development environment and support routines (my working environment, which I also use for my DolbyA-compatible decoder and various compressors/expanders, also supports multithreading, message passing across threads, very wide-ranging vector math support, automatic use of the fancy vector instructions on the CPU, etc.).  I simplified the code to work without vectors, and the 'Asample' class is simply a floating-point left and right signal value (item0 and item1).  It is roughly equivalent to using --encoding=floating-point --bits=32 in sox.

dBfilterSIMPLE is pretty much a raw implementation of dbx-style signal detection.  The two parameters used in the declaration of the class are the dB/sec values for attack/decay.  (Technically, it isn't quite dB/sec, but dB/sec per dB of signal level change.)  In some cases, a fixed decay rate vs. the calculated decay might sound better.

(I am making available a compilable/runnable version of the demo also -- the technique used sounds very good for such simple code.  PLEASE IGNORE THE SIZE OF THE INFRASTRUCTURE -- the compressor itself is only perhaps 100 lines of code, with the support stuff being much larger.  With my vector library, it IS very easy to extend to multiple bands; all you need are the appropriate front-end band-pass filters, and to learn to use the vector math lib.  For my DolbyA decoder, I use carefully crafted IIR filters, with some dynamic characteristics that model the feedback loop.  For the compressor, a few simple IIR filters will suffice -- I suggest using a simple 2nd-order HPF, and then add/subtract to get the bands you want.  You can also use more formal crossover filters, but band-pass leakage for this compressor would not be fatal.)

Look for the file "" in the repo below.  I could not attach the file, so unless I can figure out how, the repository has the program and source code.  It also has an ancient and broken version of the DolbyA-'compatible' decoder -- it is far, far from being accurate enough for pro use, but it can improve normal consumer DolbyA material.  The more accurate one (REALLY GOOD, IN FACT) will be mentioned at AES, in a talk about tape restoration.


(SORRY FOR THE SOURCE NOT BEING EASIER TO READ/BETTER DOCUMENTED!!!  I can help interpret it if someone is really all that interested.)

Using the compressor: --thresh=9 and --cratio=0.50 is a good start.  --cratio=0.50 is 2:1, --cratio=0.66 is 3:1, etc.  Use the program like this:

comp-win --cratio=0.50 --thresh=9 <infile.wav >outfile.wav

The program also works in realtime with the sox tools, like the sox command and the sox-based 'play' command.

The runnable version of the compressor is a Windows .exe, but it will build under Linux with proper Makefile changes (I do all development on Linux) -- there is a HUGE amount of unnecessary infrastructure for the demo, because I cobbled the source together from my DolbyA decoder (removing all of the proprietary parts) and then added the simple compressor.  Note that the actual compressor class is proccomp, starting at about line 642 in the audioproc.cpp source file.  The call to the class is in the "THinput" thread (which really was part of a long string of threads in the DolbyA decoder, but here the other threads are nulled out).  Simply, the thread calls the class in a loop, processing the samples one by one.

For the 'RMS' filter (which is a misnomer, but this technique is an extension of what THATcorp and dbx call 'RMS') -- it is the class 'VfilterdB', and it really does all of the work.  I wrote a simple scalar version of the filter called 'dBfilterSIMPLE', which does a simpler version of what the more sophisticated VfilterdB does.  dBfilterSIMPLE is much closer to the original 'RMS' technique.

I have a rather large header file containing all kinds of attack/release and detector techniques for audio processing programs, and I left a few examples in the audioproc.cpp and smfilt.h files.  Most of the support class files have been minimized for this application.  Whenever possible, my programs use the SIMD instruction set -- refer to the top of the audioproc.cpp file for a description; the real work of abstracting the SIMD instructions is done in varray.h.  The transcendental math work is done by a library in the vecmathlib directory.  The vecmathlib licensing allows relatively free use; refer to the documentation in that file for how the library works.  My varray.h and vecmath.h headers do the adaptations to be able to use vecmathlib relatively easily in my code.

I have posted this code example for easier reading without downloads/etc., but I more strongly suggest trying out the binary executable in the file "" from the repository.  This code example doesn't really need much support.  For best quality, remember that an LPF on the gain signal would help tremendously.

static constexpr int dBNCHANS = 2;

struct dBfilterVAL {
    float v[dBNCHANS];
};

class dBfilterSIMPLE {
public:
    dBfilterVAL curval;

    float atk, dcy;

    dBfilterSIMPLE(float atkA, float dcyA) :
        curval(), atk(atkA), dcy(dcyA)
    {
    }

    const dBfilterVAL&
    next(const dBfilterVAL& in)
    {
        static const float deadzone = 0.0f;
//      static const float deadzone = 1.0f;
        for (int i = 0; i < dBNCHANS; i++)
        {
            float ldiff = in.v[i] - curval.v[i];
            if (ldiff > deadzone)
                curval.v[i] += ldiff * atk;
            else if (ldiff < -1.0f * deadzone)
                curval.v[i] += ldiff * dcy;
        }
        return curval;
    }
};

class proccomp {
public:
    dBfilterSIMPLE sfilt;

    proccomp(void) :
        sfilt(20.0f / aspec.inSRATE, 20.0f / aspec.inSRATE)
    {
    }

    Asample
    next(const Asample& src)
    {
// Calculate the signal^2
        float lvlL = src.item0() * src.item0();
        float lvlR = src.item1() * src.item1();

// Hard threshold at sqrt(0.003) of full scale --
// this threshold establishes the max gain;
// the lower the threshold, the more maximum gain.
        lvlL = Vmax(lvlL, 0.003f);
        lvlR = Vmax(lvlR, 0.003f);
        float dBlvlL = lin2dB(lvlL);
        float dBlvlR = lin2dB(lvlR);

// Calculate instantaneous dB level
// using the trivial dBfilterSIMPLE class and
// its IN/OUT value type 'dBfilterVAL'
        dBfilterVAL intmp, outtmp;

// dB version of SQRT of signal^2
        dBlvlL *= 0.50f;
        dBlvlR *= 0.50f;

// Add in pivot location (3.0 will work fine)
        dBlvlL += 0.0f;
        dBlvlR += 0.0f;
        dBlvlL += ac.thresholdlval;
        dBlvlR += ac.thresholdrval;
        intmp.v[0] = dBlvlL;
        intmp.v[1] = dBlvlR;

// Do the actual filter operation
        outtmp =;

// Make sure that we choose the loudest channel --
// we don't want the channel gains to be different.
        if (outtmp.v[0] > outtmp.v[1])
            outtmp.v[1] = outtmp.v[0];

// Change the signal level into gain
// and multiply by the compression ratio.
// (ac.cratio, set from the command line, is used below;
// a fixed local like 'static constexpr float cratio = 0.50f;'
// would also work.)
        float dBgain[dBNCHANS];
        for (int i = 0; i < dBNCHANS; i++)
        {
            dBgain[i] = outtmp.v[i] * -1.0f;
            dBgain[i] *= ac.cratio;
        }

        float LINgain[dBNCHANS];
        for (int i = 0; i < dBNCHANS; i++)
            LINgain[i] = dB2lin(dBgain[i]);

        Asample rtval {src.item0() * LINgain[0], src.item1() * LINgain[1]};

// Return the gain-controlled signal.
        return rtval;
    }
};