## Simulating multipath fading channel in Python

Hello!

I'm trying to figure out how to simulate a multipath fading channel (I'd like to have both a Rayleigh and a Rician channel, no Doppler to simplify things) in Python.

Below is my understanding of the topic so far:

- In a single-path, flat-fading channel, I just sample the channel tap from the corresponding distribution and that's my channel response. I keep the tap value for the duration of the coherence time; therefore, if the coherence time is exactly one sample period, I need a vector of N taps (where N is the number of samples of the signal I want to alter), and I multiply the signal element-by-element by the channel response. (At least, this is what I've seen in the few, more or less buggy, codes I've been able to find on the net.)
- If I want to include the effect of multipath, I have to take the power delay profile into account (say I have P paths impinging on my receiver), so I sample P values from the desired distribution and weight them according to the power delay profile. This channel response is then valid only for the samples that fall within the coherence time of the channel; i.e., if that is exactly one sample period, I have to repeat the process for each sample and multiply the sub-block (of size P) of samples neighboring the current one by the newly sampled, weighted channel coefficients (which, to my eyes, looks feasible only with sparse matrices, otherwise I would exhaust the memory and computational capabilities at my disposal).
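The first bullet can be sketched as follows. This is a minimal illustration, not a validated channel model: the coherence length, signal, and the Rician K-factor are arbitrary assumptions, and the taps are normalized to unit average power.

```python
import numpy as np

rng = np.random.default_rng(0)

def rayleigh_taps(n):
    """n i.i.d. complex Gaussian taps with E[|h|^2] = 1 (Rayleigh envelope)."""
    return (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

def rician_taps(n, k_factor=4.0):
    """Rician taps: a fixed LOS component plus a diffuse Rayleigh part.
    k_factor is an assumed value, chosen here for illustration only."""
    los = np.sqrt(k_factor / (k_factor + 1))            # deterministic part
    nlos = rayleigh_taps(n) * np.sqrt(1 / (k_factor + 1))
    return los + nlos

# Flat fading: one tap per coherence interval, held constant over the
# samples it stays valid for, then applied element-by-element to the signal.
x = np.exp(2j * np.pi * 0.01 * np.arange(1000))        # toy transmit signal
coherence_samples = 100                                # assumed coherence time
taps = rayleigh_taps(len(x) // coherence_samples)
h = np.repeat(taps, coherence_samples)                 # hold each tap
y = h * x                                              # received (noiseless) signal
```

Setting `coherence_samples = 1` recovers the fully sample-by-sample case described above, at the cost of drawing one tap per sample.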
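For the multipath case, the per-sample "multiply a sub-block of size P" operation is exactly what a discrete convolution does, so one common way to implement it (for a channel held constant over a block) is as a tapped delay line, with no sparse matrices needed. The power delay profile below is an arbitrary assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed power delay profile (linear power per path), normalized to unit
# total power so the channel neither amplifies nor attenuates on average.
pdp = np.array([1.0, 0.5, 0.25])
pdp = pdp / pdp.sum()

# One complex Gaussian tap per path, scaled so E[|h_p|^2] matches the PDP.
taps = np.sqrt(pdp / 2) * (rng.standard_normal(len(pdp))
                           + 1j * rng.standard_normal(len(pdp)))

x = rng.standard_normal(1000) + 1j * rng.standard_normal(1000)  # toy signal
y = np.convolve(x, taps)    # tapped-delay-line channel, length N + P - 1
```

If the channel must change every coherence interval, the signal can be processed block by block, drawing fresh taps for each block.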

Is my interpretation correct? (I am sorry if this looks trivially wrong, but I've spent countless hours searching the web, and I've just been able to find extremely complex academic articles on the topic, whose practical implementation looks totally impossible to me...)

Thanks for your help and have a nice day!

Best,

Rob

It sounds like you have the basics right. If you want to simulate a channel for longer than the coherence time, don't create a step discontinuity between channel instances. In other words, the transitions from one realization to the next should be smooth, and how smooth is dictated by the coherence time constant.
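One simple way to get such smooth transitions is to evolve each tap as a first-order Gauss-Markov (AR(1)) process, whose correlation coefficient sets the effective coherence time. This is only an illustrative smoothing model, not the classical Jakes/Doppler fading spectrum; `rho` below is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 5000
rho = 0.999                                 # closer to 1 => longer coherence time
w = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

h = np.empty(n, dtype=complex)
h[0] = w[0]
for i in range(1, n):
    # Innovation scaled so E[|h[i]|^2] stays at 1 for every i.
    h[i] = rho * h[i - 1] + np.sqrt(1 - rho ** 2) * w[i]
```

Each sample of `h` is still Rayleigh-distributed, but neighboring samples are strongly correlated, so the channel drifts instead of jumping between independent draws.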

Also, there are observations that can be made about how channels evolve in the real world. E.g., with a two-ray model the taps aren't random; they move in a way that reflects how the reflectors may change over time. A channel notch moves up or down in frequency; it doesn't randomly appear in one place or another.
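The two-ray point can be checked numerically: as the relative phase of the second ray drifts slowly, the frequency notch slides smoothly across the band rather than jumping. The delay, amplitude, and phase sweep below are arbitrary illustrative values.

```python
import numpy as np

n_fft = 256
delay = 8                                   # second ray arrives 8 samples later
amp = 0.9                                   # second-ray amplitude (assumed)

notch_bins = []
for phi in np.linspace(0, np.pi, 5):        # slow drift of the reflector phase
    h = np.zeros(n_fft, dtype=complex)
    h[0] = 1.0                              # direct ray
    h[delay] = amp * np.exp(1j * phi)       # delayed, phase-shifted ray
    H = np.fft.fft(h)
    notch_bins.append(int(np.argmin(np.abs(H))))
```

With these values the notch pattern repeats every `n_fft / delay = 32` bins, and each phase step of pi/4 shifts the notch by 4 bins, so `notch_bins` advances in equal steps (modulo 32).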

Cool, thanks a lot! :)