Hi all,
I am working on an application that involves averaging waveform records in an
FPGA.
Averaging of waveforms can be done by:
1) Storing an intermediate presum and finally dividing it by N (the total number
of waveforms added).
For example, if we were to average 4 waveforms and 0.1, 0.3, 0.1, 0.4 are the
values of the first point in each waveform, then for the first point across all
waveforms:
Presum = 0.1 + 0.3 + 0.1 + 0.4 = 0.9
Average = 0.9/4 = 0.225
In this approach, the bit width of the intermediate presum grows by ceil(log2(N))
bits when we use fixed-point arithmetic, which is a problem in terms of storage
(see the first sketch below).
2) Keeping a running average and truncating it after each update.
In the example above, truncating to one decimal place for illustration:
Running average after acquiring waveform 2 = trunc((0.1 + 0.3)/2) = 0.2
Running average after acquiring waveform 3 = trunc((0.2*2 + 0.1)/3) = trunc(0.1666...) = 0.1
Running average after acquiring waveform 4 = trunc((0.1*3 + 0.4)/4) = trunc(0.175) = 0.1
which already differs from the 0.225 of approach 1.
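
To make approach 1 concrete, here is a small Python model (not HDL) of the
presum-and-divide scheme. The 12-bit sample width, the scaling of 0.1/0.3/0.1/0.4
to the integer codes 100/300/100/400, and N = 4 are placeholder assumptions for
illustration only:

```python
# Approach 1 sketch: wide integer presum, one division (truncation) at the end.
SAMPLE_BITS = 12                               # assumed ADC sample width
N = 4                                          # number of waveform records

records = [100, 300, 100, 400]                 # first point of each record (0.1 etc. scaled)

acc_bits = SAMPLE_BITS + (N - 1).bit_length()  # presum width grows by ceil(log2(N)) bits
presum = sum(records)                          # intermediate presum, needs acc_bits of storage
average = presum // N                          # single truncation at the end
print(acc_bits, presum, average)               # -> 14 900 225
```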
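And the same placeholder numbers run through approach 2, where only the
sample-width running average is stored and the result is truncated on every
update:

```python
# Approach 2 sketch: running average truncated after every record.
records = [100, 300, 100, 400]                 # same placeholder codes as above

avg = records[0]
for k in range(2, len(records) + 1):
    avg = (avg * (k - 1) + records[k - 1]) // k   # truncation on every step
    print(k, avg)
# k = 2: (100*1 + 300) // 2 = 200
# k = 3: (200*2 + 100) // 3 = 166
# k = 4: (166*3 + 400) // 4 = 224   (approach 1 gave 225)
```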
I was wondering: what would be the effects of truncating the intermediate average
in approach 2, and how much does the result vary from approach 1?
How do we estimate and prevent rounding errors in approach 2? What type of
handling should be chosen for the result (truncate, wrap on overflow, or saturate)?
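
For what it's worth, this is the kind of quick brute-force comparison I have been
running in Python to get a feel for how far the two approaches drift apart; the
12-bit width, N = 64, and the uniform random data are arbitrary assumptions, not
my real signal:

```python
import random

# Compare approach 1 (presum, divide once) against approach 2 (truncate each
# update) over many random records and report the worst-case difference seen.
SAMPLE_BITS = 12     # assumed sample width
N = 64               # assumed number of records per average
TRIALS = 1000

worst = 0
for _ in range(TRIALS):
    records = [random.randrange(1 << SAMPLE_BITS) for _ in range(N)]

    exact = sum(records) // N            # approach 1: one truncation at the end

    running = records[0]                 # approach 2: truncate on every update
    for k in range(2, N + 1):
        running = (running * (k - 1) + records[k - 1]) // k

    worst = max(worst, abs(exact - running))

print("worst-case difference over", TRIALS, "trials:", worst)
```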
Any help is appreciated. I would be happy to answer any questions.