I am writing software to collect data from several different software-defined radios. Typically they provide a packet of data that contains real and imaginary (IQ) samples. Some of them provide data in packets of 16384 data points, some in 8192, and some in 4096.
Now, typically I perform an FFT of order 14 (16384 points) on the 16384-sample packets. For the other packet sizes, what would be the best solution for the FFT? Should I collect enough packets (or data) until I get to 16384 samples, or would it be better to pad the packets with zeros out to 16384 samples?
As I said, a very basic question, I'm still learning.
Thanks, Tom
If the packets are contiguous samples, i.e., there's no time gap between the last sample of one packet and the first sample of the next, then you can combine them until you have sufficient samples for whatever FFT size you want. The main barrier would be if the packets aren't contiguous for some reason.
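A minimal sketch of that idea, assuming contiguous packets and numpy; the names (on_packet, FFT_SIZE) are mine, not from any SDR API. It just appends incoming IQ packets to a buffer and runs one 2^14-point FFT whenever a full block is available.

import numpy as np

FFT_SIZE = 16384          # 2**14, the transform length discussed above
_buffer = np.empty(0, dtype=np.complex64)

def on_packet(iq_packet):
    """Append one packet of complex IQ samples; return a spectrum when a block is ready."""
    global _buffer
    _buffer = np.concatenate((_buffer, np.asarray(iq_packet, dtype=np.complex64)))
    if _buffer.size >= FFT_SIZE:
        block, _buffer = _buffer[:FFT_SIZE], _buffer[FFT_SIZE:]
        return np.fft.fft(block)          # full two-sided spectrum of the block
    return None                           # not enough samples yet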
I like where you're going with 'contiguous', and I'd add that a deep ring buffer can fill from one end and empty from the other. For analysis, I get the greatest benefit from a start/window/slide processing architecture.
The window is the number of samples that get windowed and transformed, and the slide is allowed to be shorter than the FFT length, giving you a Short-Time Fourier Transform (STFT) in the time-frequency sense.
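A rough sketch of the start/window/slide mechanics, assuming a deque as the ring buffer; WINDOW, SLIDE, and the Hann window are illustrative choices, not requirements. Packets fill one end, each transform consumes WINDOW samples, and only SLIDE samples are discarded afterward, so successive transforms overlap when SLIDE < WINDOW.

from collections import deque
import numpy as np

WINDOW = 16384            # samples per transform
SLIDE = 4096              # hop between transforms (< WINDOW gives overlap)
ring = deque()            # unbounded here; bound it in real code

def push_packet(iq_packet):
    ring.extend(iq_packet)

def pop_spectra():
    """Yield windowed spectra for every full window currently buffered."""
    w = np.hanning(WINDOW)
    while len(ring) >= WINDOW:
        block = np.array([ring[i] for i in range(WINDOW)], dtype=np.complex64)
        yield np.fft.fft(w * block)
        for _ in range(SLIDE):            # slide: drop only SLIDE samples
            ring.popleft()

(Indexing a deque like this is slow for large windows; a preallocated numpy ring buffer would be the efficient version, but this shows the fill-one-end, empty-the-other idea.)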
As Robert Wolfe said, it seems application dependent.
I am assuming you are applying a window before your FFT.
You might also consider using overlapping windows, e.g., 50% overlap for Hamming, Hanning (Hann), etc. This would depend on the samples being contiguous, as mentioned by Slartibartfast.
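For what it's worth, here is a hedged example of that 50%-overlap idea on one contiguous record: Hann-window each 16384-sample segment, hop half a window, and average the magnitude-squared spectra. The segment length, overlap, and function name are my assumptions.

import numpy as np

def averaged_spectrum(iq, nfft=16384, overlap=0.5):
    iq = np.asarray(iq)
    w = np.hanning(nfft)
    hop = int(nfft * (1.0 - overlap))     # 8192 samples for 50% overlap
    psd = np.zeros(nfft)
    count = 0
    for start in range(0, len(iq) - nfft + 1, hop):
        seg = iq[start:start + nfft] * w
        psd += np.abs(np.fft.fft(seg)) ** 2
        count += 1
    return psd / max(count, 1)            # unnormalized average; scale as needed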
Seems application dependent. But if you're going to do the 16384-point FFT, wouldn't it be best with actual data (collect enough packets)? Otherwise you're processing the input data at a reduced rate.
Thanks everyone. I will combine them since the data is contiguous.
Tomb18, I'm trying to understand what you wrote. You say you are writing some software but that some unspecified "they" provide packets of data of various sizes.
Then you say that you typically perform FFTs. But you don't say why. There are a number of different reasons to compute DFTs. One reason is as an aid to compute convolutions rapidly. For that, padding with zeros is necessary. Another possible reason for FFT computation is to estimate the power spectrum of a set of such packets, a statistical process. For example, you might want to know a signal-to-noise ratio. That's a complicated subject, but in common with all such methods, the resolution is degraded by a "Gibbs phenomenon" which introduces significant artifacts when the edges of the packet are not multiplied by a window function. A third reason for FFT computation is that certain kinds of recognition are easier to do when the data is converted to the frequency domain, such as speech recognition.
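To illustrate the first case (where zero padding really is required): for an FFT-based convolution to equal the linear convolution rather than a circular one, both sequences must be zero padded to at least len(x) + len(h) - 1 points. A didactic sketch, with names of my own choosing:

import numpy as np

def fft_convolve(x, h):
    n = len(x) + len(h) - 1               # minimum size for linear convolution
    nfft = 1 << (n - 1).bit_length()      # next power of two, for FFT speed
    X = np.fft.fft(x, nfft)               # np.fft.fft zero-pads the input to nfft
    H = np.fft.fft(h, nfft)
    return np.fft.ifft(X * H)[:n]         # keep only the n valid points

# Agrees with direct convolution up to round-off:
# np.allclose(fft_convolve(a, b), np.convolve(a, b))  ->  True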
As a side note, you will get more attention here if you use a better title (= more targeted) for your question. "FFT batch size tradeoffs" is much better than "A basic DSP question" --- the notification system of this site only sends the title, without any content.