I recently read (somewhere) that you can get a better cutoff response by running your data through a filter multiple times. I am using cascaded Butterworth filters so that I have a maximally flat response. What I am currently attempting to achieve is a 3000 Hz low pass and a 300 Hz high pass that are as close to a brick wall as possible. What I am doing seems close, but I still have minute residual audio below 300 Hz, even when using a 20+ order filter.
I forgot to add that I actually need the separate low and high pass filters for different purposes.
Does running the data through the filter multiple times filter more effectively without having to resort to higher-order filters? I also thought I read that you have to run the data through the filter backwards then forwards to ensure there isn't a phase shift. I am speaking out of naivete, but does this matter, and more to the point, does it matter for speech audio?
I just began reading about windowed-sinc filters and their ability to produce sharp cutoffs using a Hamming or Blackman window. Am I better off using a windowed-sinc filter if I can accept the speed penalty? I've only read about the performance on paper, so I don't know what its usage is like in reality.
You may be getting some rounding errors that keep you from getting the brick-wall response you are seeking. Try adding a simple two-sample average, which will absolutely put a zero in the transfer function at Nyquist with a smaller chance of rounding errors.
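To illustrate the point (a sketch assuming Python with NumPy/SciPy, since the original platform isn't stated): the two-sample average has transfer function H(z) = (1 + z⁻¹)/2, and its zero at z = -1 lands exactly at Nyquist no matter how the coefficients are rounded.

```python
import numpy as np
from scipy.signal import freqz

# Two-sample average: y[n] = (x[n] + x[n-1]) / 2
# Transfer function H(z) = (1 + z^-1) / 2, which is exactly zero at z = -1,
# i.e. at the Nyquist frequency, regardless of coefficient rounding.
b = [0.5, 0.5]

# Evaluate the response at DC (w = 0) and at Nyquist (w = pi)
w, h = freqz(b, worN=np.array([0.0, np.pi]))
gain_at_dc = np.abs(h[0])        # unity gain at DC
gain_at_nyquist = np.abs(h[1])   # numerically zero at Nyquist
```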
The rounding errors are a strong possibility but unfortunately, I am constrained to single precision floats. I will look into adding a two sample average.
In my opinion phase shift (or phase separation of various frequency components) is not very critical when it comes to voice audio quality.
Either FIR or IIR can get you any arbitrary pass/rejection criteria you want; however, IIR will deliver that with a lower-order filter (typically fewer computations per sample). Running data through a filter multiple times just applies the filter's frequency transfer function (attenuation at a particular frequency etc.) that number of times. So if a filter has 6 dB attenuation at 100 Hz, running your 100 Hz audio data through it twice will attenuate it by 12 dB.
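A quick numerical check of that claim (a sketch assuming Python/SciPy; the filter order and frequencies are arbitrary illustration choices):

```python
import numpy as np
from scipy.signal import butter, freqz

fs = 48000
# A hypothetical 4th-order Butterworth low pass at 3000 Hz
b, a = butter(4, 3000, fs=fs)

# Evaluate the response one octave above the cutoff
w, h = freqz(b, a, worN=np.array([6000.0]), fs=fs)
att_once_db = -20 * np.log10(np.abs(h[0]))      # one pass
att_twice_db = -20 * np.log10(np.abs(h[0])**2)  # two passes multiply |H|,
                                                # so the dB figure doubles
```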
Running data through a filter forwards then backwards cancels phase separation of different frequencies (only matters if you are using IIR), but I don't think that's at all important for voice audio.
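A sketch of the forward-backward idea, assuming Python/SciPy, where it is packaged as `filtfilt` (the tone frequency and filter are arbitrary choices):

```python
import numpy as np
from scipy.signal import butter, lfilter, filtfilt

fs = 8000
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 100 * t)   # 100 Hz tone, well inside the passband

b, a = butter(4, 1000, fs=fs)     # hypothetical 1 kHz low pass
y_fwd = lfilter(b, a, x)          # single forward pass: phase-shifted output
y_zero = filtfilt(b, a, x)        # forward-backward pass: zero phase

# Compare against the input away from the edges (to skip startup transients)
mid = slice(len(x) // 4, 3 * len(x) // 4)
err_fwd = np.max(np.abs(y_fwd[mid] - x[mid]))    # noticeable: phase delay
err_zero = np.max(np.abs(y_zero[mid] - x[mid]))  # tiny: phase cancelled out
```

Note that the forward-backward pass also squares the magnitude response, so the effective attenuation doubles in dB as well.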
Instead of running your data through the same filter multiple times, I would recommend designing a single filter (probably IIR) of high enough order to meet your passband/stopband requirements with a single pass through. If that filter consumes too many computation resources then you have to get fancier.
Keep in mind that a mathematically pure filter will not completely suppress an undesired frequency (unless you put a transfer function zero right at that frequency), it will only attenuate it by a specified amount. Of course if the resulting attenuation drops it below your resolution floor then you will probably not see it.
One of the issues I encountered is that a single filter was insufficient due to limited precision. I was fortunate enough to read Nigel's blog, which explained how to obtain a sharp corner and still remain maximally flat using Butterworth filters. This is done by cascading filters with very specific Q values.
After getting excellent results, I still needed a bit more, so I added multiple passes. The attenuation was far superior after passing the data through multiple times. It now seems that I just need to reduce my number of cascades to eliminate the overkill. The cascading combined with the multiple passes is doing what I need, but I could reduce my computational load to only what is required to get the job done.
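For reference, the cascade-of-biquads decomposition being described can be sketched like this (assuming Python/SciPy; SciPy's `output='sos'` builds the same cascade of second-order sections directly, and the order and cutoff here are arbitrary):

```python
import numpy as np
from scipy.signal import butter, sosfreqz

fs = 48000
order = 8  # arbitrary even order for illustration

# The Q of each Butterworth biquad follows from the pole angles:
#   Q_k = 1 / (2 * cos(theta_k)),  theta_k = (2k + 1) * pi / (2 * order)
qs = [1 / (2 * np.cos((2 * k + 1) * np.pi / (2 * order)))
      for k in range(order // 2)]

# SciPy produces the equivalent cascade of second-order sections directly,
# which is far better numerically than one flattened high-order polynomial
sos = butter(order, 3000, fs=fs, output='sos')
w, h = sosfreqz(sos, worN=np.array([3000.0]), fs=fs)
gain_at_cutoff_db = 20 * np.log10(np.abs(h[0]))  # Butterworth: -3.01 dB
```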
Think in terms of FFT:
The real spectrum of your audio is infinite, but sampling (let's say 48kS/s) will limit it to Fs/2 (-> 0..24kHz).
FFT transforms the linear range into equally wide bins, depending on your transformation:
N=160 would give 160 bins of 300Hz width.
So you might use bins (1..10) and their mirror images (150..159, because the spectrum of a real signal is conjugate-symmetric)
and zero out all other bins.
Transforming back would then give your brick-wall filter result.
However, consider that all bins are leaking, it's like smeared contents between the bins. To get more abrupt filters (as you want), you have to increase the number of bins. You can do that, as long as you have enough calculation power (in terms of precision and time).
Essentially, FFT requires that you have the whole signal present at the same time.
It's the same if you want to send data forward and backward through a filter.
If it comes to streaming, you have to split your stream in sequential chunks and deal with the problems at the connections between the chunks.
One point to consider: zeroing out some bins means that you need not calculate portions of the FFT algorithm (therefore the butterfly approach). Good software would do that for you.
You'll probably choose filters instead of FFT, but you are basically doing the same thing.
So if you want less content from below 300Hz, you have to increase your calculation efforts. As a rule of thumb: IIR require (much) more precision, FIR require a (much) bigger number of calculations at a time.
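The FFT scheme above can be sketched in a few lines (assuming Python/NumPy; the test tones are deliberately chosen to sit exactly on bin centers, so there is no leakage in this toy example - real audio will leak, as noted):

```python
import numpy as np

fs = 48000
N = 160                      # 160 bins of 48000 / 160 = 300 Hz each
t = np.arange(N) / fs

# Test signal: DC offset (below the band), 600 Hz (inside), 6000 Hz (above)
x = 0.5 + np.sin(2 * np.pi * 600 * t) + np.sin(2 * np.pi * 6000 * t)

X = np.fft.rfft(x)           # bins 0..80 for a real-valued signal
Y = np.zeros_like(X)
Y[1:11] = X[1:11]            # keep bins 1..10 -> roughly 300..3000 Hz
y = np.fft.irfft(Y, n=N)     # inverse transform: only 600 Hz survives

residual = np.max(np.abs(y - np.sin(2 * np.pi * 600 * t)))
```

Using `rfft`/`irfft` handles the conjugate-symmetric half automatically, so only the positive-frequency bins need to be zeroed.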
- I'd propose that you start by contemplating the FFT scheme, because it probably gives you the best understanding of what is achievable and what it costs.
- Then, as a next step, try to put numbers on your problem. Define the transition region, the minimum stop-band attenuation, and the maximum pass-band ripple that is acceptable. Don't define what you would like to have, but what is really necessary - it makes the difference.
- Be aware that the stability of a high pass filter is not trivial, while a low pass filter is usually behaving well.
If you speak of 20th order (IIR) high pass filter, I suspect that you did not really check its behavior.
Or you have a very high precision processing engine.
And: the higher the order of the IIR filter, the more the group delay / phase issues will affect you.
Problems arise at certain phase conditions since IIR filters have a feedback. FIR filters are safer.
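One way to check this behavior (a sketch assuming Python/SciPy): kept as cascaded second-order sections, a 20th-order high pass at 300 Hz stays stable in double precision, and each biquad's poles can be inspected directly - whereas flattening it into a single 20th-order polynomial is exactly where the fragility shows up.

```python
import numpy as np
from scipy.signal import butter, sosfreqz

fs = 48000
# 20th-order Butterworth high pass at 300 Hz, as second-order sections
sos = butter(20, 300, btype='highpass', fs=fs, output='sos')

# Each section's denominator is [1, a1, a2]; its poles must stay inside
# the unit circle for the cascade to be stable
pole_radii = [np.max(np.abs(np.roots(sec[3:]))) for sec in sos]
max_pole_radius = max(pole_radii)

# Sanity-check the response: deep rejection below, unity gain above cutoff
w, h = sosfreqz(sos, worN=np.array([50.0, 1000.0]), fs=fs)
rejection_50hz_db = -20 * np.log10(np.abs(h[0]))
gain_1khz = np.abs(h[1])
```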
>>Be aware that the stability of a high pass filter is not trivial, while a low pass filter is usually behaving well.
I've been learning about the high pass stability the hard way.
However, I am using the above method to achieve the precision without any visible negative effects.
I plan to implement an FIR filter to see the differences in outcomes versus computational requirements. I will also do more research on how to apply FFT.
If you want to stick with IIR, I would encourage you to look at Lattice Wave Digital Filters that have very good stability properties, and very narrow transition band.
If FIR filtering is more appealing to you, I think a multi-filter approach is better: one filter with a large transition band and very deep rejection, plus an interpolated FIR for the sharp transition and good rejection (at low processing power).
You can also downsample and upsample to reduce processing requirements by having good rejection filter at a low rate, and upsample with shorter filters (large transition band).
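A sketch of that multirate idea (assuming Python/SciPy; the rates and test tones are illustrative): decimate with a wide-transition anti-aliasing filter, do the sharp filtering cheaply at the low rate, then interpolate back up.

```python
import numpy as np
from scipy.signal import resample_poly, butter, sosfilt

fs = 48000
t = np.arange(fs) / fs  # 1 second of audio
x = np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 10000 * t)

# 48 kHz -> 8 kHz (factor 6); resample_poly applies its own anti-aliasing
# FIR, which stays short because its transition band can be wide
x_lo = resample_poly(x, up=1, down=6)

# Do the sharp, expensive filtering at the low rate, where it is cheap
# (a hypothetical 8th-order low pass at 3000 Hz, running at 8 kS/s)
sos = butter(8, 3000, fs=8000, output='sos')
y_lo = sosfilt(sos, x_lo)

# Back up to 48 kHz through another wide-transition interpolation filter
y = resample_poly(y_lo, up=6, down=1)

spec = np.abs(np.fft.rfft(y))          # 1 Hz bins over a 1 s record
amp_1khz, amp_10khz = spec[1000], spec[10000]
```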
I am looking forward to checking out Lattice Wave Digital Filters. I just skimmed some information and am ready to look deeper.
In my experience with telecom, where we often use similar filters for telephone bandwidth speech, we typically use a set of elliptic IIR filters. One HP and one LP.
The high pass is usually a 3rd-order high-pass notch, with the notch at 60 Hz. We try to avoid leaving any 60 Hz components in the output, for obvious reasons. This 3rd-order filter puts a pair of nulls at 60 Hz and one at DC. Typically we can achieve at least 30 - 40 dB rejection. However, our pass band for the HP usually starts at about 180 - 200 Hz. Since you want a pass band starting at 300 Hz, you could probably do better.
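One way such a 3rd-order high-pass notch could be sketched (assuming Python/SciPy; the pole radius `r` here is a made-up tuning value, not a telecom standard): put a zero pair exactly on the unit circle at 60 Hz plus a zero at DC, and place the poles just inside the circle at the same angles so the response recovers quickly above the notch.

```python
import numpy as np
from scipy.signal import zpk2sos, sosfreqz

fs = 8000
w60 = 2 * np.pi * 60 / fs

# Zeros exactly on the unit circle: infinite rejection at DC and at 60 Hz
z = np.array([1.0, np.exp(1j * w60), np.exp(-1j * w60)])
# Poles just inside the circle at the same angles; r (hypothetical value)
# controls how quickly the response comes back up above the notch
r = 0.95
p = r * z

# Normalize for unity gain at Nyquist (z = -1)
k = np.abs(np.prod(-1 - p) / np.prod(-1 - z))
sos = zpk2sos(z, p, k)

w, h = sosfreqz(sos, worN=np.array([0.0, 60.0, 1000.0]), fs=fs)
```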
Of course, I don't know your application, so this might not be useful for you.