Hey guys, first time poster.
My colleague showed me a technique in which we perform an FFT on a signal, X(f) = fft(x(t)) (e.g. fs = 20 MHz), extract a region of interest (e.g. 2-7 MHz) by simply cropping X(f), and then perform an inverse FFT on this new data to give y(t). We then say that y(t) has a new sample rate of 5 MHz and has been base-banded about 0 Hz in the frequency domain. The amplitude is diminished, but we can adjust for that easily enough, and when we do this it seems to actually work.
Is this actually mathematically valid? Does this method go by another name? I cannot find any mention of anybody else doing this. And if it is not a valid way to downconvert a signal, what issues might it introduce?
From what I can understand from your description, nothing has been violated mathematically. As a caveat, I've never done this in any of my own work. Since you go from 20 to 5 MHz, you now only use 1/4 of the coefficients, those around the region of 2-7 MHz (a 5 MHz span). You could then use an IFFT that is 1/4 the length of the original, and it should work OK, assuming you don't have a lot of bleed-through within that range and the noise present is "strictly" Gaussian. That resembles some of the "exercises" I've seen in books. It should be no different than extracting one bin and doing an IFFT on that one bin (again, depending on spectral purity and how well the frequency is centered in the bin, due to "Gibbs energy"). It's obviously easy to "simulate" that to verify. Harder is to see how much deviation to expect if there are issues with band/bin alignment and non-Gaussian noise (ignoring quantization artifacts).
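The "how well the frequency is centered in the bin" caveat is easy to see numerically. A small numpy sketch (the lengths and frequencies here are my own choices, not anything from the thread) comparing a tone that sits exactly on a bin against one halfway between bins:

```python
import numpy as np

N = 1024
fs = 20e6
t = np.arange(N) / fs
bin_width = fs / N            # ~19.5 kHz per FFT bin

# Tone exactly on a bin centre: all energy lands in one coefficient
# (plus its negative-frequency image).
f_on = 256 * bin_width
X_on = np.abs(np.fft.fft(np.sin(2 * np.pi * f_on * t)))

# Tone halfway between bins: energy leaks across the whole spectrum.
f_off = 256.5 * bin_width
X_off = np.abs(np.fft.fft(np.sin(2 * np.pi * f_off * t)))

# Fraction of total energy captured by the single strongest bin.
# On-bin this is exactly 0.5 (the other half is in the -f bin);
# off-bin the peak bin holds noticeably less, the rest is leakage.
on_frac = X_on.max()**2 / np.sum(X_on**2)
off_frac = X_off.max()**2 / np.sum(X_off**2)
print(on_frac, off_frac)
```

The leaked energy from an off-bin tone falls partly outside any cropped region, which is one source of the deviation mentioned above.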
Thanks for your response!
Indeed, simulating it was easy, and so far it appears to have been a success in the situations where I have used it. Verification has been my struggle, but I'm glad that it is at least mathematically valid.
(Apologies to @artmez: when I started writing this there were no other replies and the thread was 2 days old. We must have both posted at the same time.) I was hoping one of the experts might kick in on this, as I know I would flub a correct answer, partially because fred harris' new (2nd edition) multirate signal processing book is still in the "to be read" pile. I'm 99% sure your answer is in that book.
My *guess* is that for your application, going to the frequency domain might not be as computationally efficient as one of the alternative filter/down-convert methods covered in fred's book.
As general guidance for frequency-domain processing: when doing this type of processing on continuous data, you have to account for the finite length of the FFT. The reference materials always show in -> fft() -> ifft() -> out (same as in), but if you start doing processing in the middle this isn't always true, as the FFT assumes the samples repeat every block.
The solution is one of the overlap methods: you "overlap" the data (usually 50%, but it depends on what accuracy you really need), window the data, fft(), process, ifft(), and then reconstruct from the overlapped blocks. There are a number of references here and on the 'net; search for overlap-add FFT.
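The window/overlap/reconstruct chain described above can be sketched in a few lines of numpy. This is a minimal version with a 50% overlap and a periodic Hann window (my choices; the processing step is left as the identity so the perfect-reconstruction property is visible):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)     # stand-in for the continuous input

N = 256                           # block length
hop = N // 2                      # 50% overlap
# Periodic Hann window: copies shifted by hop sum to exactly 1 (COLA)
w = 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(N) / N)

y = np.zeros_like(x)
for start in range(0, len(x) - N + 1, hop):
    block = x[start:start + N] * w
    X = np.fft.fft(block)
    # ... frequency-domain processing would go here (identity for this demo) ...
    y[start:start + N] += np.fft.ifft(X).real

# Away from the ramp-up/ramp-down edges, the overlapped windows sum to 1
# and the input is reconstructed exactly.
err = np.max(np.abs(y[N:-N] - x[N:-N]))
print(err)
```

Real processing (e.g. zeroing bins) would go where the comment sits, and then the overlap is what hides the block-boundary effects.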
What you proposed in the frequency domain would be a brick-wall filter, which has time-domain implications that could affect your results.
Thank you for this, I will have a look into the overlap-add FFT methods.
You are definitely right about the computational efficiency: doing a big FFT and then an IFFT chews time and resources, but the method described is very simple to understand, which is probably why my coworker showed it to me.
The spectral product you describe with a brick-wall filter has an interesting effect on the time-domain versions of the spectrum: the two time functions are circularly convolved. One of the time functions is the sinc function, which is the IFFT of the ideal rectangle you formed when you did the spectral product. What you are trying to accomplish is linear convolution. You didn't quite get there.
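The circular-vs-linear distinction is easy to see on a toy example (sequences of my own choosing). Multiplying two same-length FFTs and inverting gives circular convolution, which wraps the tail of the signal back onto its head:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, 1.0, 0.0, 0.0])   # a 2-tap filter, zero-padded to len(x)

# IFFT of the product of same-length FFTs = CIRCULAR convolution
circ = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real

# Linear convolution is longer and differs where the circular one wraps
lin = np.convolve(x, h[:2])

print(circ)  # [5. 3. 5. 7.]  -- first sample picks up x[-1] wrapped around
print(lin)   # [1. 3. 5. 7. 4.]
```

Zero-padding both sequences to at least len(x) + taps - 1 before the FFTs makes the two agree, which is the trick fast convolution relies on.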
An important rule in DSP is: never abruptly turn things on and off, in the time domain or in the frequency domain. Think of the extra bandwidth we use in a Nyquist pulse to make the pulse be finite duration.
We use the overlap-and-add or the overlap-and-discard techniques to perform what you are doing: the spectral product in the frequency domain results in circular convolution in time, and we convert the circular convolution to linear convolution by overlapping the successive time series.
You can do the same thing but start with the impulse response of the desired filter, which will have a desired time-domain length and an acceptable transition BW in the frequency domain. The overlap in the fast-convolution process will wipe out the boundary effects in successive processing blocks.
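A minimal sketch of that idea, assuming a windowed-sinc low-pass as the "desired filter" (the length and cutoff are my choices): the impulse response has finite time-domain extent, so its transition band is finite rather than brick-wall, and zero-padding before the FFT product turns the circular convolution into linear convolution:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(1000)

# Hypothetical design: a 63-tap windowed-sinc low-pass. Finite length in
# time => finite (non-brick-wall) transition bandwidth in frequency.
M = 63
fc = 0.2                                  # cutoff as a fraction of fs
n = np.arange(M) - (M - 1) / 2
h = 2 * fc * np.sinc(2 * fc * n) * np.hamming(M)

# Fast convolution: zero-pad both to >= len(x) + M - 1 so the circular
# convolution implied by the FFT product equals linear convolution.
L = len(x) + M - 1
y_fast = np.fft.ifft(np.fft.fft(x, L) * np.fft.fft(h, L)).real

# Check against direct time-domain (linear) convolution
y_direct = np.convolve(x, h)
err = np.max(np.abs(y_fast - y_direct))
print(err)
```

In a streaming setting you would do this block by block with overlap-add or overlap-save, exactly as described above; this single-shot version just shows the equivalence.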
A second approach is to use a polyphase filter bank with a single set of phase rotators that will extract the particular Nyquist zone from the down-sampled and aliased spectrum. See the development in the attached PDF.
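I can't reproduce the channelizer development from the PDF here, but the polyphase core it builds on is small enough to sketch (prototype filter and lengths are my choices): the prototype low-pass is split into M sub-filters, each branch runs at the low rate, and the branch outputs sum to the same result as filtering at the high rate and then discarding samples, at roughly 1/M the work. The phase rotators in the filter-bank version then select which Nyquist zone lands at baseband.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(1024)
M = 4                                 # decimation factor (e.g. 20 -> 5 Msps)

# Hypothetical prototype low-pass FIR (windowed sinc, cutoff ~ fs / (2M))
L = 64
n = np.arange(L) - (L - 1) / 2
h = (1 / M) * np.sinc(n / M) * np.hamming(L)

# Reference: filter at the high rate, then keep every M-th output sample
y_ref = np.convolve(x, h)[::M]

# Polyphase: split h into M sub-filters, feed each the matching input
# phase, filter AT THE LOW RATE, and sum the branches.
out_len = len(y_ref)
y_poly = np.zeros(out_len)
for p in range(M):
    hp = h[p::M]                                   # sub-filter p
    if p == 0:
        xp = x[::M]                                # input phase 0
    else:
        xp = np.concatenate(([0.0], x[M - p::M]))  # delayed input phases
    branch = np.convolve(hp, xp)
    y_poly[:min(out_len, len(branch))] += branch[:out_len]

err = np.max(np.abs(y_poly - y_ref))
print(err)
```

This only shows the decimation identity, not the full bank-of-rotators structure, but it is the piece that makes the filter-bank approach cheap.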
Very interesting, I've not heard of circular convolution.
I have heard of polyphase filters before, and I was secretly hoping that the method I described would be as good as this, though I knew it was too simple to be. Perhaps that is the approach I should take instead.
I think you are looking for linear convolution and overlap. If you need to do "nonlinear" spectral processing, look for "weighted overlap-add", like the MP3 sound-processing and noise-reduction algorithms use.
You can model that and see directly.
So you've got x(t) sampled at 20 Msps. You do an FFT to get the frequency-domain data X(f), then you discard all bins except those of 2-7 MHz. You then apply an IFFT and get those bins in the time domain. If you then centre it on DC you will get ±2.5 MHz (Fs still 20 Msps), then you can downsample it directly by 4 to 5 Msps, with the frequency range still ±2.5 MHz.
As such it works, and I don't expect aliasing issues from the direct downsampling, as the FFT/IFFT acts as the filter.
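For anyone who wants to see the original crop-and-IFFT trick run end to end, here is a toy model (the tone frequencies and lengths are my choices, and I take the equivalent shortcut of a quarter-length IFFT on the cropped positive-frequency bins, which directly yields the complex baseband signal at the new rate):

```python
import numpy as np

fs = 20e6
N = 4000                      # chosen so both test tones land exactly on bins
t = np.arange(N) / fs
# One tone inside the 2-7 MHz band, one outside it
x = np.sin(2 * np.pi * 4.0e6 * t) + np.sin(2 * np.pi * 8.5e6 * t)

X = np.fft.fft(x)
f = np.fft.fftfreq(N, 1 / fs)

# Keep only the positive-frequency bins covering 2-7 MHz (a 5 MHz span)
keep = (f >= 2e6) & (f < 7e6)
Y = X[keep]                   # exactly N/4 bins

# Quarter-length IFFT: complex baseband at the new sample rate, with the
# lower band edge (2 MHz) shifted down to DC
y = np.fft.ifft(Y)
fs_new = fs * len(Y) / N      # 5 Msps

# The 4 MHz tone should now sit at 4 - 2 = 2 MHz; the 8.5 MHz tone is gone
fy = np.fft.fftfreq(len(Y), 1 / fs_new)
peak_freq = fy[np.argmax(np.abs(np.fft.fft(y)))]
print(fs_new, peak_freq)
```

With on-bin tones this comes out clean; the earlier posts in the thread explain what changes once real, continuous, off-bin data forces you into the overlap methods.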