I measured the SNR of an input signal with a full-precision FFT and with a 16-bit fixed-point FFT in MATLAB.
The full-precision version gives 69 dB SNR, but the fixed-point version gives 61 dB. Is this expected?
I maintain 32-bit results for the internal multipliers and 33 bits for the adders. At the end of every stage I scale the output by 2 before rounding back to 16 bits.
What could be going wrong here?
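For reference, here is a minimal Python sketch of the per-stage requantization described above (the function name, bit positions, round-half-up rule, and saturation behavior are assumptions, not the poster's actual code):

```python
def stage_round(acc, shift):
    """Scale a wide butterfly accumulator down by 2**shift (the per-stage
    divide-by-2 plus removal of the extra fractional bits), round to
    nearest, and saturate to the 16-bit output range. Hypothetical sketch."""
    half = 1 << (shift - 1)
    q = (acc + half) >> shift          # round half up (toward +infinity)
    return max(-32768, min(32767, q))  # saturate to int16
```

If the SNR gap is coming from the datapath, this rounding step is where most of the quantization noise is injected, so it is the natural place to instrument.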
I'm sure MATLAB is OK, but you really haven't provided enough information. What was the level of the "16-bit signal"? How many points in your FFT? Was the signal at the FFT's fundamental frequency? Was the signal coherently sampled (i.e. a true subharmonic of the sampling rate)? If you did a simple sine lookup to generate a sine, that generally has a bias, especially for integer values. It's best to do a 0-45 deg mini table and repeat it in segments (in the right sequence, of course) to make a perfectly symmetric sine. And for a full-scale sine, base it on a 2^15 - 1 magnitude to make sure you don't run into the 2's complement asymmetry.
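To illustrate the symmetric-table idea, here is a quarter-wave variant sketched in Python (the table size, names, and 32767 full scale are assumptions). Because every quadrant is an exact mirror of the first, the generated period sums to exactly zero, i.e. no bias:

```python
import math

N = 1024  # samples per period (hypothetical)
# Only tabulate 0..90 degrees; mirror it for the other three quadrants.
quarter = [round(32767 * math.sin(math.pi / 2 * i / (N // 4)))
           for i in range(N // 4 + 1)]

def sine_lut(i):
    i %= N
    if i < N // 4:
        return quarter[i]
    if i < N // 2:
        return quarter[N // 2 - i]
    if i < 3 * N // 4:
        return -quarter[i - N // 2]
    return -quarter[N - i]
```

Note the peak values are +32767 and -32767, never -32768, which sidesteps the two's-complement asymmetry.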
Single-precision floating point has a 23-bit mantissa, so its dynamic range is about (23 * 6) + 12 = 150 dB. The maximum dynamic range from a 16-bit ADC is 15 bits, or (15 * 6) + 12 = 102 dB. Since your floating-point version's SNR is only 69 dB, you are most likely not coherently sampling. Windowing is then needed, and it has the effect of smearing the signal's energy across bins.
Ideally you should use a single coherently sampled sine wave at the FFT's fundamental; that will show you that the SNR limit here is probably set by the original 16-bit data (presumably unclipped!). If it's NOT coherently sampled, that is a BIG problem. If it's clipped, that is a big problem.
Thanks for the reply. My input data is the output of an ADC, including the thermal noise. Also, the tone is placed exactly on a bin, so the SNR here is not really SQNR; it is dominated by the thermal noise floor.
Regarding the amplitude, my input tone is approximately 7 dB below full scale, and I am using a 128-point FFT.
Also, when I say the full-precision FFT gives 69 dB, only the internal FFT datapath is full precision; the input is still the same (16-bit quantized, with thermal noise).
Thanks a lot again Artmez. Waiting for your comments.
Then I suspect that either your input is corrupted and/or the ADC ENOB is much less than 16 at the frequency you are using it. However, if your noise has lots of large spikes, especially harmonically related to the signal or any of its aliases, then there could be clipping (do an IFFT and plot to see), or you have power supply noise (PSN) and/or sample clock jitter. If it's PSN, then it would follow what a scope probe on the supply would reveal (including aliases). If it's clock jitter, then it depends on the jitter's characteristics: if it's just random, then the noise spikes in the data should move around and change in amplitude randomly. Some types of clock noise may present themselves in the data as a rise in the noise floor. What happens when you remove the signal? Does the noise remain or does it diminish?
Anecdotally: On my first DSP project back in 1989, I was trying to use an Intel uC's timer to generate my sample clock. Everything was "by the book" but my data was badly smeared (jittered). I double checked my sample clock with a frequency meter and it was rock steady and perfect, but when I scoped the sample clock, I saw why -- the clock was dithering to generate the sample frequency. Nothing in the datasheet would indicate this, so a call to Intel's rep and a couple weeks later I got my answer: the datasheet was wrong and I was right. The processor had to "average" a counter internally to get my sample rate. That also explained why the frequency meter was rock steady and perfect.
I expect the ADC ENOB to be ~12 bits. To model the thermal noise, I have just used the rand function with the DC offset removed.
I am concerned about why the fixed-point FFT shows a lower SNR than MATLAB's fft function. Is a difference of ~8 dB expected?
I'm not sure what you meant by removing the DC offset of the RAND function. It should be zero mean if constructed correctly.
The reduction can be close to that, as each bit is 6 dB. If the noise is truly Gaussian (which includes zero mean), then on average the noise washes out to some extent, especially if you average successive values (like a moving-window average if you want to track the signal's changes). A DFT or FFT is inherently an averaging process and so tends to be transparent to Gaussian noise (in fact, the FFT of a time series of Gaussian noise is Gaussian noise, and vice versa).
Also, use zero-mean rounding: when the fraction is exactly 0.5, round to the nearest even value (or odd, as long as it's consistent). This "convergent" rounding will reduce the noise floor due to calculation noise a little. Note that with Gaussian noise, on average, the SNR of the signal of interest improves by about 3 dB for each doubling of the averaging window length.
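For what it's worth, Python's built-in round() already implements that tie-to-even ("banker's") rule, so the bias cancellation is easy to demonstrate (a toy sketch, not the FFT datapath):

```python
# Exact half-way cases: round-half-up pushes every one of them upward,
# while tie-to-even alternates up/down so the average error cancels.
half_up = lambda x: int(x + 0.5)
ties = [0.5, 1.5, 2.5, 3.5, 4.5, 5.5, 6.5, 7.5]

bias_half_up = sum(half_up(t) - t for t in ties) / len(ties)  # +0.5 bias
bias_to_even = sum(round(t) - t for t in ties) / len(ties)    # 0.0 bias
```

Across an FFT's many butterfly roundings, that per-operation bias is what accumulates into a DC-like artifact and a raised noise floor.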
The rand() function in MATLAB returns only positive values in (0,1), right? Or am I missing some other function that could generate zero-mean random noise?
So, do you suggest changing the rounding mode between the stages to see if the SNR improves? I will try that.
Use "randn" instead of "rand" to get zero mean. By the way, do you mean the MATLAB fft function when you say full precision, and your own script when you say fixed point?
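The same distinction exists outside MATLAB. A quick illustration with Python's standard random module (sample count and seed are arbitrary): random.random() behaves like rand (uniform on (0,1), mean 0.5), while random.gauss(0, 1) behaves like randn (zero-mean Gaussian):

```python
import random

random.seed(0)  # arbitrary seed, for repeatability
n = 100_000
uniform_mean = sum(random.random() for _ in range(n)) / n   # ~0.5, like rand
gauss_mean = sum(random.gauss(0, 1) for _ in range(n)) / n  # ~0.0, like randn
```

Also note that subtracting the mean from rand output only removes the DC; the result is still uniformly distributed, not Gaussian, so it is a poor model of thermal noise.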
Yes, that's correct.
Can you share your fixed-point code? I am interested because I am just migrating an Altera FPGA design from 32-bit (single-precision) floating point to fixed point with 18-bit inputs/outputs and 16-bit twiddles. With these settings the outputs correlate well, but I haven't quantified the difference.
Altera does provide floating-point and fixed-point models of their implementation, but they are in MATLAB MEX format.
What are the internal bit-formats?
I have no idea how the FFT IP core works internally.
I just finished reading through the thread, and I had a few questions. First, just to be clear, your input data is from an ADC, so I'm assuming it's already quantized, as in no additional processing has been done on the sampled data. Is this correct?
Second, you said you are using a 128-point FFT. Does this seem awfully small to anyone else? Maybe it's fine for your application, but that seems like a small data set for trying to characterize noise.
Third, have you plotted the data and inspected the spectra visually? Any clues there?
Fourth, have you done any verification or testing of the fixed-point implementation? In terms of what's expected, I would certainly assume that using 16-bit math in an FFT introduces quantization noise, but non-linear effects are hard to analyze. Have you tried feeding in known data sets (sine waves) and comparing the actual result to the expected result?
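As an example of that kind of sanity check (a hypothetical Python sketch, not the poster's model): put a complex tone exactly on a bin, quantize it to 16 bits, run it through a reference full-precision DFT, and compare the measured SNR against the roughly 6.02*16 + 1.76 ~ 98 dB expected from 16-bit quantization alone:

```python
import cmath
import math

N, k0 = 128, 5  # FFT length and tone bin (assumed values)
tone = [cmath.exp(2j * math.pi * k0 * n / N) for n in range(N)]

def q16(v):
    """Quantize one real value to 16-bit fixed point (full scale = 32767)."""
    return round(v * 32767) / 32767

xq = [complex(q16(s.real), q16(s.imag)) for s in tone]

def dft(x):
    """Reference full-precision DFT."""
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

X = dft(xq)
sig = abs(X[k0]) ** 2
noise = sum(abs(v) ** 2 for k, v in enumerate(X) if k != k0)
snr_db = 10 * math.log10(sig / noise)  # should land near the 16-bit limit
```

If the fixed-point model comes in far below this kind of reference on a clean tone, the loss is in the datapath, not in the input data.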
Thanks for your reply.
Yes, the input data is from an ADC. I think 128 points should be sufficient to find the SNR, unless you want to find some spurs, in which case I believe we might have to do a lot of averaging over multiple FFTs.
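That averaging idea can be sketched quickly (toy Python with randn-style noise; the segment count and length are arbitrary): averaging M periodograms leaves the mean noise floor where it is but shrinks its bin-to-bin variance by roughly M, which is what makes small spurs stand out:

```python
import cmath
import math
import random

random.seed(1)
N, M = 128, 16  # FFT length and number of averaged segments (arbitrary)

def periodogram(x):
    """|DFT|^2 / N of one segment."""
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) ** 2 / N for k in range(N)]

avg = [0.0] * N
for _ in range(M):
    seg = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]
    avg = [a + p / M for a, p in zip(avg, periodogram(seg))]

floor = sum(avg) / N  # expectation is 2.0 (unit variance per I/Q component)
```

A single 128-point FFT gives a very noisy floor estimate; with 16 averages each bin is within a fraction of a dB of the true level.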
I did plot the data from fixed-point FFT model and the full-precision model. I didn't check in detail if there are any clues in it.
Regarding the fourth question, I am actually not sure what to expect from the fixed point. I have the result, but I'm not sure whether that is the best the fixed-point FFT can give me or whether there is something better I could be doing.
Waiting for your inputs.
It wouldn't be so much that you wanted to find spurs. A short length will result in more variation in your SNR measurement. Regardless, if you are using the same input data for both of your cases, the sample length is irrelevant.
With regards to what to expect from the fixed point implementation, it's not a trick question. Have you tested the fixed point implementation with trivial data sets to see if it actually works?
I didn't get what you meant by "Regardless, if you are using the same input data for both of your cases, the sample length is irrelevant."
Also, what is trivial data? I am using a pure sine tone.
As mentioned in the earlier posts, with full-precision FFT computation the SNR is ~8 dB higher than with my fixed-point computation.