Extracting narrow-band ZigBee signals from a wide-band WiFi signal?

Started by cogwsn 6 years ago · 10 replies · latest reply 6 years ago · 227 views


I am planning the following.

First sample the 20 MHz WiFi channel (WiFi channel-1 in figure).

Put band-pass filters (5 MHz wide) around each of the ZigBee center frequencies (channels 11, 12, 13, 14).

Re-sample the chunks to 4 MHz.
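The steps above can be sketched in Python/NumPy. This is a minimal illustration, not a tested receiver front end: it assumes a complex-baseband capture centered on WiFi channel 1 (2412 MHz), so ZigBee channels 11–14 (2405/2410/2415/2420 MHz) sit at −7, −2, +3, and +8 MHz offsets, and the helper names are mine:

```python
import numpy as np

FS = 20e6  # capture rate, complex baseband centered on WiFi channel 1 (2412 MHz)

# ZigBee channels 11..14 (2405, 2410, 2415, 2420 MHz) as offsets from 2412 MHz
OFFSETS_HZ = {11: -7e6, 12: -2e6, 13: 3e6, 14: 8e6}

def lowpass_taps(num_taps, cutoff_hz, fs):
    """Windowed-sinc (Hamming) FIR low-pass, unit DC gain."""
    m = np.arange(num_taps) - (num_taps - 1) / 2
    h = np.sinc(2 * cutoff_hz / fs * m) * np.hamming(num_taps)
    return h / h.sum()

def extract_zigbee(x, offset_hz, fs=FS, out_fs=4e6):
    """Shift one ZigBee channel to 0 Hz, low-pass filter, and decimate."""
    n = np.arange(len(x))
    shifted = x * np.exp(-2j * np.pi * offset_hz / fs * n)   # mix channel to DC
    taps = lowpass_taps(129, 2e6, fs)      # ~4 MHz-wide passband (2 MHz cutoff)
    filtered = np.convolve(shifted, taps, mode="same")
    return filtered[::int(fs // out_fs)]   # 20 MHz -> 4 MHz: keep every 5th sample
```

Mixing each channel to DC and then low-pass filtering is equivalent to the band-pass-then-shift ordering described above, and the same filter doubles as the anti-alias filter for the decimation.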

Is there anything wrong with this approach?


Reply by Tim Wescott, January 17, 2017

Presumably you're trying to do a dual WiFi/Zigbee receiver?

I don't see anything wrong with it in principle; the devil is always in the details, however.

Reply by cogwsn, January 17, 2017

Yes, I am trying to do ZigBee and WiFi simultaneously in real-time. 

So far I have found successive interference cancellation to be the most suitable approach. 

Reply by rt45aylor, January 17, 2017

There's nothing "wrong" with it, though what you're asking is fairly vague. If you want to keep the WiFi data for future processing, then there would be some differences in the design. If you're purely after the ZigBee channels, then something like a polyphase filter (think channelizer) may, given your sampling, introduce aliasing of the WiFi into the ZigBee channels, which you'll have to account for.

Again, if you're purely keeping the ZigBee channels, couldn't you narrow your filters at collection time and then collect each channel separately? I don't believe ZigBee spreads its data across multiple channels.
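The aliasing concern is easy to demonstrate with NumPy (hypothetical numbers): a tone 9 MHz from the channel center, naively decimated from 20 MS/s to 4 MS/s with no anti-alias filter, folds to +1 MHz, squarely inside a 2 MHz-wide ZigBee channel that has been mixed to 0 Hz:

```python
import numpy as np

fs = 20e6
n = np.arange(1000)
tone = np.exp(2j * np.pi * 9e6 / fs * n)  # WiFi energy at +9 MHz, out of band
decimated = tone[::5]                     # naive 20 MS/s -> 4 MS/s, no filter

# At 4 MS/s the tone reappears at 9 MHz mod 4 MHz = +1 MHz, in-band
peak_hz = np.fft.fftfreq(len(decimated), d=1 / 4e6)[
    np.argmax(np.abs(np.fft.fft(decimated)))
]
```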

Reply by cogwsn, January 17, 2017

No actually I have to decode both WiFi and ZigBee at the same time. 

So during interference, assuming the WiFi will be the stronger signal most of the time, I can detect and decode the WiFi first. 

Now, from the decoded WiFi data (bits) and the channel estimates, I can recreate the estimated WiFi signal and subtract it from the mixed signal (ZigBee + WiFi). 

This will give me a signal that is relatively clean of WiFi interference, making the ZigBee easy to detect/decode. 
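A stripped-down sketch of that subtraction step (assumptions: `wifi_ref` is the waveform regenerated from the decoded bits, already time-aligned, and the channel is modeled as a single complex gain; `cancel_wifi` is a name I made up):

```python
import numpy as np

def cancel_wifi(mixed, wifi_ref):
    """Subtract a regenerated WiFi waveform from the mixed capture.

    Toy version: estimates only a single complex gain (flat-channel
    assumption) by least squares.  A real receiver would also have to
    track timing, CFO, and a multi-tap channel estimate.
    """
    g = np.vdot(wifi_ref, mixed) / np.vdot(wifi_ref, wifi_ref)  # LS gain
    return mixed - g * wifi_ref   # residual: ZigBee + estimation error
```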

Could you please elaborate on the following statement:

"If you're wanting to keep the WiFi data for future processing then there would be some differences in design."

Reply by Tim Wescott, January 17, 2017

Wow.  10 years ago I'd say "that's wacky enough to be a good thesis topic", but I have this sad feeling that technology has progressed to the point where it may actually be practical in the real world.

Even if it isn't practical, there should be some papers or even a thesis in there someplace.

Reply by cogwsn, January 17, 2017
Hahaha, I agree with you Tim. 

But this is more than that :D (Devil is in the details :D)

Following this, I have to generalize it to other standards that share reusable blocks in the transmit and receive chains (WiFi + LTE). 

The goal is the development of an architecture for simultaneous operation of different wireless standards using a GPU-based SDR. 

The difficulty is in the RF part, where the same RF front end has to be used for all the standards :) 

Reply by Tim Wescott, January 17, 2017

If you're not there already I'd say to start with just making it work independently.  Anywhere that you can set a WiFi receiver next to a totally separate ZigBee receiver and have them both work, your SDR should (assuming you have enough processing power) work, too.  My gut feel is that your receive, regenerate and subtract scheme will, at the very best, give you about a 20dB reduction in the interfering WiFi signal -- in radio terms, that's enough to write home about, but it's not enough for a parade.

And yes, at least a good part of the challenge will be sampling the RF cleanly enough and fast enough.

Reply by cogwsn, January 17, 2017

As of now, my WiFi and ZigBee Rx are working independently in real time. 

Yes, I will also try your suggestion of making them work independently in the vicinity of each other. 

Yes, the sampling and AGC requirements are still a challenge, and we are working on them. 

Just for info, I am developing with OpenAirInterface and a USRP B210. 

Reply by dgshaw6, January 23, 2017

Hi Tim,

I realize that you used the term gut feeling, but I would like to know why your gut believes that 20 dB is the best you could get?


Reply by Tim Wescott, January 23, 2017

I can't even put it into words!  25 years of doing this stuff for pay, dodging bullets, and having to clean up after other people who said "OK, boss!", or worse "sure, pay me up front and I'll do it!"

In general, it's a problem that involves saying that \( y = x - \hat{x} \).  It's only going to work as well as the estimate \( \hat{x} \).  To make it work you must generate \( \hat{x} \) from the message, the carrier frequency, the carrier phase, the bit timing of the message, and the amplitude of the message.  I count five different ways that it can go horribly wrong right there.  
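That sensitivity is easy to quantify for the simplest case, a single complex-gain error on \( \hat{x} \): the achievable cancellation is \( -20\log_{10}\lvert 1 - (\hat{a}/a)\,e^{j\Delta\phi} \rvert \). A quick calculator (my numbers, not Tim's):

```python
import numpy as np

def cancellation_db(amp_err_frac, phase_err_deg):
    """Cancellation depth when the estimate of the interferer has a
    fractional amplitude error and a phase error (flat-channel case)."""
    residual = abs(1.0 - (1.0 + amp_err_frac) * np.exp(1j * np.deg2rad(phase_err_deg)))
    return -20.0 * np.log10(residual)
```

Even a 1% amplitude error combined with a 1-degree phase error caps the cancellation at roughly 34 dB, and a 10%/5-degree error at under 18 dB, which lands in the neighborhood of the gut numbers quoted in this thread.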

And, the more time you spend finding the message, the longer your delay is on the ZigBee.  I don't know if that's an injury, but it's sure an insult.

The rule of thumb that I was given by an old gray-haired guy, and which has proven to be more or less correct, is that if it's easy and you're reasonably diligent you can do one of these "a - b" things and get about 40dB of cancellation.  If it's easy and you sweat bullets over it, you can maybe get 60dB of cancellation.  If it's not easy -- 20dB, and sweating over it will just get your lab bench salty.  This doesn't look easy to me.

To add to my low estimate, it's always a lot easier to make this sort of thing work well in the lab than it is in the field.  I understand that this is all going to happen inside of an FPGA, and that makes it easier, but stories abound in the high-tech world of people who build "lab queens" that only work if they stay out of the sun and have had a good long soak at one temperature before they're turned on -- in my experience, a good way to make a lab queen is to depend too heavily on one of these "a - b" cancellations.