I am looking at different ways to implement frequency analysis of various speaker systems. I can take a log sine sweep, extract an impulse response, and perform a DFT to extract frequency and phase information.
However, I am weighing the trade-offs between resolution, computational speed, and the ability to window the impulse.
I would like to achieve constant Q throughout the frequency display, so I am thinking that filter banks are the way to go. I can achieve this by pre-synthesising the filters and multiplying in the frequency domain, but it is computationally expensive. Whilst I am not making realtime measurements, I would like to display processed results without too much lag.
However, what I would really like is to time-window the impulse, so that I can filter out some of the room response from the speaker measurement. I can window my impulse response, but performing an FFT on it then gives a high-passed frequency response, with the cutoff dependent on the window length. What I would like instead is to apply two or three different window lengths to the impulse and 'glue' the results together into one smooth spectrum. However, I do not know how to do this.
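To show what I mean by 'gluing', here is a rough NumPy sketch of the idea. The impulse response, window lengths, and split frequency below are all arbitrary placeholders of my own, not from any tool: compute the spectrum with a short and a long half-Hann window on the same zero-padded grid, then crossfade the two complex spectra around a chosen split frequency:

```python
import numpy as np

fs = 48000
n = fs  # 1 s capture

# Placeholder impulse response: decaying noise standing in for a measurement.
rng = np.random.default_rng(0)
ir = rng.standard_normal(n) * np.exp(-np.arange(n) / (0.05 * fs))

def windowed_spectrum(ir, win_len, nfft):
    """Apply a half-Hann fade-out over the first win_len samples, then FFT."""
    w = np.zeros(len(ir))
    w[:win_len] = np.hanning(2 * win_len)[win_len:]  # fade-out half
    return np.fft.rfft(ir * w, nfft)

nfft = 2 * n  # common zero-padded grid so both spectra share the same bins
short = windowed_spectrum(ir, int(0.005 * fs), nfft)  # 5 ms: trust at HF
long_ = windowed_spectrum(ir, int(0.200 * fs), nfft)  # 200 ms: keeps LF detail

# Crossfade ("glue") the two spectra around an assumed split frequency.
f = np.fft.rfftfreq(nfft, 1 / fs)
f_split, width = 500.0, 300.0  # Hz; placeholder values
blend = np.clip((f - (f_split - width)) / (2 * width), 0.0, 1.0)
glued = (1 - blend) * long_ + blend * short
```

Crossfading the complex values like this assumes the two spectra have similar phase in the overlap region, which is exactly where I worry about errors.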
This has led me towards sub-band analysis, where the signal is split into several bands, with downsampling acting as a low-pass and the window length changed per band, although I would need to be careful to retain the correct amplitude of the signal here. Computationally this seems like the better way to achieve what I want, but I am still unsure whether I can view magnitude and phase without a large error in the overlap between bands.
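On the amplitude point, a quick NumPy/SciPy check (the tone frequency and decimation factor are arbitrary choices of mine) suggests that downsampling with a proper anti-alias filter does preserve the in-band amplitude:

```python
import numpy as np
from scipy.signal import decimate

fs = 8000
t = np.arange(2 * fs) / fs
x = np.sin(2 * np.pi * 100 * t)  # 100 Hz tone, amplitude 1.0, exactly on-bin

# Anti-alias FIR lowpass + downsample by 4 (new rate 2 kHz).
y = decimate(x, 4, ftype='fir', zero_phase=True)

# One-sided sine amplitude from the FFT peak, normalised by record length.
amp_x = 2 * np.abs(np.fft.rfft(x)).max() / len(x)
amp_y = 2 * np.abs(np.fft.rfft(y)).max() / len(y)
# Both should come out close to 1.0: amplitude survives the rate change.
```

So the amplitude bookkeeping looks manageable; the band-overlap phase question is what still worries me.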
Can anyone shed some light on which avenue best suits my needs? I have come across this great document, which has given me some great ideas.
For your application, a DFT is a 1D-to-1D operation. Sub-band coding and filter banks, as I understand them, are 1D-to-2D operations: they give you a time-frequency analysis, where the DFT would give just frequency. I have seen tools not unlike what you are describing that provide some kind of time-frequency representation of an impulse response. But you should understand that these two things are totally different ways of analyzing a signal.
You could try interpolating the DFT results by padding the impulse response with zeroes at the end. That's probably as fair as anything.
This sounds like where I could be going wrong. The original signal is a good 700 ms long, so long enough to contain the full audio spectrum I want to analyse, and performing a DFT on all of it gives enough resolution at low frequencies. When I perform a DFT on the windowed impulse, I still keep the entire length of the original signal (it just contains a lot more zeros now), so I have the resolution I need in the high end, but not a long enough window for the lower frequencies.
I have managed to implement wavelet analysis, which is great for time-frequency analysis, but it is not an easy view to read when tuning an audio system. (Well, it is, but more for the acoustic properties of the room than of the system.) I would like to window out the room as much as possible whilst retaining the low-end frequency data, which means I do need different window lengths.
I think it sounds like I do need to look at either sub-band analysis or filter banks. I hadn't thought about a DFT being one-dimensional, but it makes sense, as it is just over a fixed window. My issue is: if I divide my original signal into several sub-bands, how do I put the results back together to view both phase and magnitude without a broken spectrum? (i.e. unless it's a rectangular window where I could choose sharp edges, I would need to overlap the band results.)
If your sample length didn't change, then the resolution of the DFT didn't change either. Windowing in the time domain is mathematically equivalent to convolution (smearing) in the frequency domain. That is most likely causing your loss of resolution, but I couldn't say without seeing some plots or something like that. If that's the case, there is no more low-frequency information there beyond what you already see.
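A quick way to see the smearing (toy sinusoid, arbitrary numbers on my part): truncating the record widens the spectral peak even though the FFT length stays the same.

```python
import numpy as np

fs = 1000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 50 * t)  # 1 s of a 50 Hz tone

def mainlobe_width_bins(x, nfft):
    """Count bins within 6 dB of the peak: a crude mainlobe-width measure."""
    mag = np.abs(np.fft.rfft(x, nfft))
    return int(np.sum(mag > mag.max() / 2))

nfft = 8 * fs  # common zero-padded grid so the widths are comparable
full = mainlobe_width_bins(x, nfft)        # full 1 s record
short = mainlobe_width_bins(x[:100], nfft) # first 100 ms only

# short > full: chopping the record in time smears the peak in frequency.
```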
If I look at filter banks for audio analysis:
I want to use two bands, so I create two versions of my impulse response: one windowed with a short time window, and one with a longer time window for the lower frequencies.
The time window has the effect of high-passing the signal; the sampling rate has the effect of low-passing it. However, I have read a bit more on filter banks with an analysis filter and a synthesis filter for reconstruction. (Whilst I do not want to reconstruct the signal, I do want to make sure there is minimal attenuation to my analysis signal.)
If my understanding is correct, I then run each windowed impulse through the relevant filter bank, and then perform a DFT on the result.
I could do this through a series of downsampling stages to filter the signal; however, using uniform filters I can keep the sample rate the same, which would surely mean I could just add up the outputs of each DFT bin.
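To check the "just add up the bins" idea, here is a toy two-band sketch (the split frequency and tap count are placeholders I picked): a linear-phase lowpass plus its complementary "delta minus lowpass" highpass sum to a pure delay, so at a common sample rate the band spectra add back with no seam:

```python
import numpy as np
from scipy.signal import firwin

fs = 48000
rng = np.random.default_rng(2)
x = rng.standard_normal(4096)  # stand-in windowed impulse response

# Complementary pair: linear-phase lowpass, and a delta at its group delay
# minus that lowpass. Elementwise, lp + hp is a pure delay of ntaps//2.
ntaps = 255
lp = firwin(ntaps, 500, fs=fs)  # assumed 500 Hz split
hp = -lp.copy()
hp[ntaps // 2] += 1.0

y_lo = np.convolve(x, lp)
y_hi = np.convolve(x, hp)

X_sum = np.fft.rfft(y_lo) + np.fft.rfft(y_hi)

# The sum equals the spectrum of the delayed input: no seam between bands.
delayed = np.concatenate([np.zeros(ntaps // 2), x, np.zeros(ntaps // 2)])
```

The catch, as I understand it, is that this only works so cleanly because both bands keep the full rate and the filters are exactly complementary; once each band gets its own window length, the overlap question comes back.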
I could do this the other way around: run the filter banks on the entire signal and then window the relevant band. However, as the section of time is so small relative to the potential input signal (i.e. I capture 1 second of audio, but the actual impulse from the audio source may last 300 ms), it seems like a lot of wasted calculation to filter all of the data only to then window most samples to zero.
I am voicing these thoughts now, but am also doing a bit of trial and error in Matlab to see if I can properly analyse the signal. Using a second of generated pink noise, I want to see if I can create the ideal impulse, then window the signal, put it through the filter banks, perform a DFT, and view the results to check whether I have the correct method.
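As a first sanity check along these lines (shown as a NumPy sketch; the window length is an arbitrary placeholder): an ideal impulse should come out with a perfectly flat magnitude whatever window I apply, since the delta is the only nonzero sample:

```python
import numpy as np

fs = 48000
ir = np.zeros(fs)
ir[0] = 1.0  # ideal impulse: the method should report a flat response

L = int(0.005 * fs)              # 5 ms analysis window (assumed length)
win = np.zeros(fs)
win[:L] = np.hanning(2 * L)[L:]  # half-Hann fade-out starting at the impulse

mag = np.abs(np.fft.rfft(ir * win))
# mag should be flat: windowing a true delta changes nothing but the gain.
```

Any deviation from flat in a result like this would point at the analysis chain rather than the device under test.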