Forums

averaging two stream frames in time domain versus frequency domain

Started by kaz, 5 years ago · 3 replies · latest reply 5 years ago · 93 views

In 3GPP LTE, the uplink random-access channel (PRACH) sent by the handset has to be processed at baseband. The PRACH signal is extracted from its assigned location and an FFT (e.g. 2K points) is applied to obtain 839 subcarriers spaced 1.25 kHz apart. Without going into details, in some formats the signal is inserted across two or three contiguous subframes. The actual intention is to repeat it twice for better signal quality. This repetition is a bit confusing to me: what is the best way to handle it?

1) I have seen one implementation decimating the two 2K signal frames into one 2K frame in the time domain, then applying a 2K FFT.

2) For my current design I prefer not to decimate, but to apply an FFT to each frame and then average the results.

The question is: which way is better for signal quality? Are they in any way equivalent?

Option 1 is very hard to implement if multiple formats are to be processed with one FFT engine, while option 2 just needs extra memory and basic averaging.
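For concreteness, option 2 might look like the following NumPy sketch. The frame data, the subcarrier extraction offset, and the variable names are placeholders, not the actual PRACH subcarrier mapping; only the FFT size (2048) and the 839-carrier count come from the post above.

```python
import numpy as np

N_FFT = 2048  # FFT size from the post ("e.g. 2K")
N_SC = 839    # occupied PRACH subcarriers

rng = np.random.default_rng(0)
# Placeholder baseband captures of the two repetitions of the preamble
rep1 = rng.standard_normal(N_FFT) + 1j * rng.standard_normal(N_FFT)
rep2 = rng.standard_normal(N_FFT) + 1j * rng.standard_normal(N_FFT)

# Option 2: transform each repetition separately, then average bin by bin
bins1 = np.fft.fft(rep1)
bins2 = np.fft.fft(rep2)
avg_bins = 0.5 * (bins1 + bins2)

# Extract the 839 occupied subcarriers (offset 0 is a placeholder; the
# real mapping depends on where the PRACH sits in frequency)
carriers = avg_bins[:N_SC]
```

As the post says, the only extra cost over a single-frame path is buffering one frame of bins and one complex add/shift per bin.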

Thanks



Reply by Slartibartfast, July 14, 2018

That's something that you should be able to evaluate in a simulation pretty easily, I'd think. Synchronization errors might affect one more than the other, and it might affect other parts of the system if one requires better/faster synchronization.

Otherwise, since the FFT is a linear transform, the order wouldn't be expected to matter much, but details like synchronization, precision, etc. can always get in the way.
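The linearity point is easy to check numerically: averaging two time-aligned frames and taking one FFT gives the same bins as taking the FFT of each frame and averaging. A minimal sketch with random placeholder frames:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2048
a = rng.standard_normal(N) + 1j * rng.standard_normal(N)
b = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Average sample by sample in time, then one FFT
time_then_fft = np.fft.fft(0.5 * (a + b))

# FFT each frame, then average the bins
fft_then_avg = 0.5 * (np.fft.fft(a) + np.fft.fft(b))

# Linearity of the DFT: identical up to floating-point rounding
err = np.max(np.abs(time_then_fft - fft_then_avg))
```

This equivalence holds only when the "time-domain combining" really is sample-by-sample averaging of aligned frames, which is exactly the caveat raised later in the thread.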



Reply by kaz, July 14, 2018

Thanks Slartibartfast,

No, simulation won't be available until late in project development.

What I specifically wanted to know is the difference between the two cases:

1) Decimation, then one frame's FFT. Decimating the two frames means (to me) that the two frames remain separate but are thinned out, so it is not exactly averaging in the time domain. When the FFT follows, I assume both frames contribute to every FFT bin, which is a kind of averaging.

2) FFT of the two frames independently, then basic averaging of the bins.

The order does not matter, fair enough, but these two processes are different, not merely swapped around.

Reply by Slartibartfast, July 14, 2018

Yes, if the decimator isn't simply averaging the two frames in time, sample by sample, then the processes won't be equivalent. Decimation implies a filter is being applied, and the characteristics of that filter will matter, obviously.
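Why the filter matters can be made precise with the standard folding identity: downsampling a 2N-sample record by 2 with no filter makes each output bin the average of two bins of the full 2N-point spectrum, i.e. bin k + N aliases onto bin k. A short check on placeholder data:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 2048
y = rng.standard_normal(2 * N) + 1j * rng.standard_normal(2 * N)  # two frames back to back

# Downsample by 2 with no anti-alias filter, then N-point FFT
D = np.fft.fft(y[::2])

# Folding identity: D[k] = (Y[k] + Y[k + N]) / 2, where Y is the
# 2N-point spectrum of the original record
Y = np.fft.fft(y)
k = np.arange(N)
folded = 0.5 * (Y[k] + Y[k + N])

err = np.max(np.abs(D - folded))
```

A decimating filter exists precisely to suppress the Y[k + N] term before it folds in; its stopband attenuation and passband ripple therefore show up directly in the recovered PRACH bins.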

Since it is part of a defined standard, simulating it should come early in product development rather than late, I'd think; otherwise you don't know what to build.

Disclaimer: I'm not familiar with the details of the LTE PRACH, but I have worked with standards a lot.