I read an extract from the Zigbee specification in which 4-bit symbols are spread into 32-chip codes. I've also read that other waveforms use similar ratios. I would like to understand the benefit of using multiple Walsh codes when there is no requirement to carry multiple data streams (as there is in #CDMA), rather than just using a single Walsh spreading code.
Let's assume 4-bit symbols spread with a 16-chip code, i.e. each 4-bit value is associated with a fixed code.
I would pre-compute all 16 combinations of bit pattern and chipping code, assuming the 4-bit value corresponds to the row index in the #Hadamard matrix. I would then have a table of 16 spread patterns, 0000 to 1111.
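For concreteness, the 16-entry table could be built like this (a sketch assuming a Sylvester-constructed Hadamard matrix, with the 4-bit symbol value used as row index; `hadamard` is my own helper, not a library call):

```python
def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-c for c in row] for row in H]
    return H

H16 = hadamard(16)
# spread_table[s] is the 16-chip (+1/-1) sequence for 4-bit symbol s
spread_table = {s: H16[s] for s in range(16)}
```

The rows are mutually orthogonal, which is what makes the correlation receiver below work.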
On the transmit side:
- Substitute each 4-bit group to be sent with its pre-spread code.
- Expand the spread code into samples and send them.
On the receive side, assuming I have already synchronised:
- Correlate the incoming samples with all 16 spread patterns.
- Pick the biggest correlation peak and output the associated bit pattern.
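The transmit and receive steps above can be sketched as follows (assuming, as before, a 16x16 Sylvester-Hadamard matrix as the code table, indexed by symbol value):

```python
def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    H = [[1]]
    while len(H) < n:
        H = [r + r for r in H] + [r + [-c for c in r] for r in H]
    return H

TABLE = hadamard(16)  # TABLE[s] is the 16-chip (+1/-1) code for symbol s

def spread(symbols):
    """Transmit side: substitute each 4-bit symbol with its chip code."""
    chips = []
    for s in symbols:
        chips.extend(TABLE[s])
    return chips

def despread(chips):
    """Receive side: correlate each 16-chip block against all 16 codes
    and output the symbol with the biggest correlation peak."""
    out = []
    for i in range(0, len(chips), 16):
        block = chips[i:i + 16]
        corr = [sum(c * t for c, t in zip(block, row)) for row in TABLE]
        out.append(max(range(16), key=lambda s: corr[s]))
    return out

# Noiseless round trip:
assert despread(spread([0b1010, 0b0001, 0b1111])) == [0b1010, 0b0001, 0b1111]
```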
I wrote a quick simulation (I'll add it when I get back home at the weekend) but found that with the above method I couldn't hit the same performance as when just spreading and despreading with a single code.
I presume I'm missing something...?
Edited to add: I would guess that when a single code is used to spread, chip errors propagate through as bit errors but, unlike when multiple codes are used, they don't cause a whole incorrect pattern to be substituted. Obviously the probability of this happening falls as the code length grows (which I observed when using a #Hadamard matrix of size 8 vs 32),
i.e. the 8-chip code does not perform well.
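A rough Monte-Carlo sketch of that length effect, assuming Sylvester-Hadamard codes, additive Gaussian noise on each chip, and my own choice of alphabet sizes (8 codes for the 8-chip table, 16 for the 32-chip one):

```python
import random

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    H = [[1]]
    while len(H) < n:
        H = [r + r for r in H] + [r + [-c for c in r] for r in H]
    return H

def symbol_error_rate(n_chips, n_bits, noise_sigma, trials=2000, seed=1):
    """Monte-Carlo symbol error rate for Walsh-coded symbols in Gaussian noise."""
    random.seed(seed)
    H = hadamard(n_chips)
    M = 1 << n_bits          # number of codes in use
    errors = 0
    for _ in range(trials):
        s = random.randrange(M)
        rx = [c + random.gauss(0.0, noise_sigma) for c in H[s]]
        corr = [sum(r * t for r, t in zip(rx, H[m])) for m in range(M)]
        if max(range(M), key=lambda m: corr[m]) != s:
            errors += 1
    return errors / trials

# Longer codes gather more energy per symbol at the same per-chip SNR,
# so the 32-chip table should show fewer symbol errors than the 8-chip one.
ser8 = symbol_error_rate(8, 3, noise_sigma=1.5)
ser32 = symbol_error_rate(32, 4, noise_sigma=1.5)
```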
Any hints on techniques that are used to determine the data more accurately?
I took a really quick look at the ZigBee protocol, and it appears that you are misinterpreting what you're seeing. Based on the diagram in the section "2450 Phy Layer" on this page: http://www.rfwireless-world.com/Tutorials/Zigbee-p..., each 4-bit symbol is NOT being turned into a 5-bit Walsh code -- it's being turned into a 32-bit Walsh code. I'm not a ZigBee maven in the least, but it appears that this is combining spread spectrum with massive forward error correction.
I'd have to spend study time to tell you more...
Let me ping @Tim Wescott and @Slartibartfast in case they can help.
Regarding your problem, I was wondering: are you comparing apples to apples? If you modulate a 4-bit value using 16 ORTHOGONAL 16-chip codes, the comparison should be to a single bit spread with a single 4-chip code. The total chip rate is the same, so the same noise can be used to compare the results.
If instead you compare your 4-bit value spread with one of 16 codes against a single bit spread with a full 16-chip code, there should be about a 6 dB difference, since your data rate is 4 times higher.
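The ~6 dB figure is just the data-rate ratio expressed in decibels (a quick check, assuming the 4x rate difference described above):

```python
import math

# 4 bits per symbol period vs 1 bit per symbol period: a 4x rate difference
rate_ratio = 4
delta_db = 10 * math.log10(rate_ratio)
print(round(delta_db, 2))  # 6.02
```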