Reply by Vladimir Vassilevsky February 4, 2007

Randy Yates wrote:


> If you look at the datasheet for the part howy mentioned,
First, this part does not look like a good choice for a high-performance radio link. I'd say there is no point in bothering about multipath, FEC and such.
> you'll see that their main concern is that the PLL bandwidth isn't so
> high that it starts to track your data.
The PLL on the transmit side acts as a second-order HPF on the modulating signal. That's why the DC content is going to be lost.
> If you set the bandwidth low enough, then a string of 20 zeros
> shouldn't bother it. If a string of greater than 20 zeros happens once
> a month, then so fricking what?
Not so fast, Randy. On a random bitstream decoded by a comparator, the highpassing is equivalent to a loss of SNR. If you want to keep that loss under 3 dB, then the cutoff frequency of the PLL should be ~1000 times lower than the bitrate. Therefore you may want a specially designed signal with the spectrum limited on the low side, rather than just a random signal. BTW, this topic is closely related to data storage systems and the RLL coding they use.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
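To get a feel for the numbers, here is a minimal Python sketch of the effect described above: a random ±1 NRZ stream is run through a simple first-order highpass (a stand-in for the PLL's response, which is described above as second-order) and sampled at the bit centers. The oversampling ratio, stream length, and cutoff-to-bitrate ratios are arbitrary choices; the point is only that the baseline wander at the slicer grows as the cutoff approaches the bit rate.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(1)
osr = 8                                    # samples per bit (arbitrary choice)
nbits = 20_000
nrz = np.repeat(rng.integers(0, 2, nbits) * 2.0 - 1.0, osr)   # +/-1 NRZ stream

def one_pole_hpf(x, fc_over_fs):
    """First-order RC-style highpass: y[n] = a*(y[n-1] + x[n] - x[n-1])."""
    a = 1.0 / (1.0 + 2.0 * np.pi * fc_over_fs)
    return lfilter([a, -a], [1.0, -a], x)

for div in (10, 100, 1000):                # cutoff = bitrate / div
    y = one_pole_hpf(nrz, (1.0 / div) / osr)
    centers = y[osr // 2::osr]             # sample each bit at its center
    wander = centers - nrz[osr // 2::osr]  # deviation from the ideal +/-1 level
    print(f"fc = bitrate/{div}: RMS baseline wander = "
          f"{np.sqrt(np.mean(wander**2)):.3f}")
```

The RMS wander relative to the ±1 signal level is the extra "noise" the comparator sees; pushing the cutoff far below the bit rate is what shrinks it.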
Reply by Randy Yates February 3, 2007
Vladimir Vassilevsky <antispam_bogus@hotmail.com> writes:

> Randy Yates wrote:
>
>>> I said that your suggestion is freaking nonsense and you'd better
>>> think before saying anything that you don't have any clue about. Is
>>> that clear?
>>
>> Do you show your asshole in public as much as you do here?
>
> But what I said is true, isn't it?
What is nonsense about it? If you look at the datasheet for the part howy mentioned, you'll see that their main concern is that the PLL bandwidth isn't so high that it starts to track your data. If you set the bandwidth low enough, then a string of 20 zeros shouldn't bother it. If a string of greater than 20 zeros happens once a month, then so fricking what?

What I'm assuming is that an occasional bit error is acceptable. If it isn't, then this won't work. The fact that howy mentioned multipath implies that either the occasional bit error isn't a problem, or there's going to be some sort of FEC.

Tell me where you think this reasoning is flawed.
--
%  Randy Yates                  % "I met someone who looks alot like you,
%% Fuquay-Varina, NC            %  she does the things you do,
%%% 919-577-9882                %  but she is an IBM."
%%%% <yates@ieee.org>           % 'Yours Truly, 2095', *Time*, ELO
http://home.earthlink.net/~yatescr
Reply by Vladimir Vassilevsky February 3, 2007

Randy Yates wrote:


>> I said that your suggestion is freaking nonsense and you'd better
>> think before saying anything that you don't have any clue about. Is
>> that clear?
>
> Do you show your asshole in public as much as you do here?
But what I said is true, isn't it? VLV
Reply by Randy Yates February 3, 2007
Vladimir Vassilevsky <antispam_bogus@hotmail.com> writes:

> Randy Yates wrote:
>
>>>>> Any kind of probabilistic compression does not guarantee against the
>>>>> DC bias as well as all zero or all one sequences at the output. It can
>>>>> only reduce the probability of that.
>>>>
>>>> Correct, and if the probability is, say, 0.000001?
>>>
>>> This would be too impractical to achieve.
>>
>> A binary source with maximum entropy has a uniform pdf. Thus the
>> probability of a string of 20 zeros (or ones) is about 0.000001. Is
>> a string of 20 zeros "too much" DC?
>
> How about the strings of 19 zeroes? Or 18 zeroes?
If 20 zeros is not too much, then neither should 19 or 18 be too much.
>>> What does matter is the probability that the DC is going to exceed X
>>> over the window size of N bits. The probabilistic coder produces a
>>> random stream of bits, and the distribution of the DC can be assumed
>>> to be Gaussian with the dispersion of ~ sqrt(N). Now run the numbers.
>>
>> I have no idea what you just said. It looks like English, but it
>> doesn't pass my interpreter.
>
> I said that your suggestion is freaking nonsense and you'd better
> think before saying anything that you don't have any clue about. Is
> that clear?
Do you show your asshole in public as much as you do here?
--
%  Randy Yates                  % "With time with what you've learned,
%% Fuquay-Varina, NC            %  they'll kiss the ground you walk
%%% 919-577-9882                %  upon."
%%%% <yates@ieee.org>           % '21st Century Man', *Time*, ELO
http://home.earthlink.net/~yatescr
Reply by Vladimir Vassilevsky February 3, 2007

Randy Yates wrote:

>>>> Any kind of probabilistic compression does not guarantee against the
>>>> DC bias as well as all zero or all one sequences at the output. It can
>>>> only reduce the probability of that.
>>>
>>> Correct, and if the probability is, say, 0.000001?
>>
>> This would be too impractical to achieve.
>
> A binary source with maximum entropy has a uniform pdf. Thus the
> probability of a string of 20 zeros (or ones) is about 0.000001.
>
> Is a string of 20 zeros "too much" DC?
How about the strings of 19 zeroes? Or 18 zeroes?
>> What does matter is the probability that the DC is going to exceed X
>> over the window size of N bits. The probabilistic coder produces a
>> random stream of bits, and the distribution of the DC can be assumed
>> to be Gaussian with the dispersion of ~ sqrt(N). Now run the numbers.
>
> I have no idea what you just said. It looks like English, but it
> doesn't pass my interpreter.
I said that your suggestion is freaking nonsense and you'd better think before saying anything that you don't have any clue about. Is that clear? VLV
Reply by Randy Yates February 3, 2007
Vladimir Vassilevsky <antispam_bogus@hotmail.com> writes:

> Randy Yates wrote:
>
>>>>> I am transmitting FSK using a MICRF505 transceiver chip. The FSK
>>>>> modulator in this chip requires a bit encoding scheme to reduce the DC
>>>>> content of the bit stream to a manageable level.
>>>>
>>>> It sounds like you have a binary source with an entropy of less than 1
>>>> bit. If you know the probabilistic model of the source, you can
>>>> simultaneously remove the DC and _REDUCE_ the bitrate by
>>>> source-encoding, e.g., by using a Huffman code.
>>>
>>> Any kind of probabilistic compression does not guarantee against the
>>> DC bias as well as all zero or all one sequences at the output. It can
>>> only reduce the probability of that.
>>
>> Correct, and if the probability is, say, 0.000001?
>
> This would be too impractical to achieve.
A binary source with maximum entropy has a uniform pdf. Thus the probability of a string of 20 zeros (or ones) is about 0.000001. Is a string of 20 zeros "too much" DC?
> What does matter is the probability that the DC is going to exceed X
> over the window size of N bits. The probabilistic coder produces a
> random stream of bits, and the distribution of the DC can be assumed
> to be Gaussian with the dispersion of ~ sqrt(N). Now run the numbers.
I have no idea what you just said. It looks like English, but it doesn't pass my interpreter.
--
%  Randy Yates                  % "Though you ride on the wheels of tomorrow,
%% Fuquay-Varina, NC            %  you still wander the fields of your
%%% 919-577-9882                %  sorrow."
%%%% <yates@ieee.org>           % '21st Century Man', *Time*, ELO
http://home.earthlink.net/~yatescr
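For reference, the 0.000001 figure quoted above for a string of 20 zeros is just 2^-20, the chance that one particular 20-bit window of equiprobable random bits is all zeros. A tiny Python check (the stream lengths below are arbitrary, and run-overlap corrections are ignored):

```python
p_window = 0.5 ** 20                 # one particular 20-bit window is all zeros
print(p_window)                      # ~9.54e-07, roughly the 0.000001 cited

# Back-of-envelope: the expected number of all-zero 20-bit windows in a
# stream of M random bits is about M * p_window.
for M in (1_000_000, 100_000_000):
    print(f"{M:>11} bits -> ~{M * p_window:.1f} such windows on average")
```

So how often such a run actually occurs depends directly on the bit rate and how long the link runs, which is what the "once a month" argument above hinges on.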
Reply by Jerry Avins February 3, 2007
Vladimir Vassilevsky wrote:

   ...

> then map the 6-bit code symbols into 8 bit bytes.
_Bytes?_

Jerry
--
Engineering is the art of making what you want from things you can get.
Reply by Vladimir Vassilevsky February 3, 2007

howy wrote:

> Although I think the main point may have been missed here. Stated in a
> different way:
> If I /have/ to double the bitrate in order to guarantee no DC (not
> actually true since some DC is tolerated), what's the "best possible"
> way to do it?
>
> For example maybe I can remap the Golay codes by adding 10 to 20% more
> dummy bits to insure certain low frequency response characteristics.
> Then, when decoding, I essentially unmap the "bloated" Golay code back
> to the original code. This way I may actually have gained some RF
> performance rather than degrading it.
That way you can gain some performance, or just as easily lose it. It depends on your channel error rate and the desired error rate after the decoder. It is difficult to get much coding gain on compressed audio, because the audio itself can tolerate error rates as high as 1%. A Golay code is too weak to beat the overhead due to the increased bit rate.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
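A rough Python sketch of "running the numbers" for this trade-off, under assumptions that are not from the thread itself: binary non-coherent FSK over AWGN (raw BER = 0.5·exp(-Eb/2N0)), hard-decision decoding of the extended (24,12) Golay code (corrects up to 3 errors per block), and a 3 dB per-bit energy penalty for the rate-1/2 overhead. The Eb/N0 points are arbitrary.

```python
import math

def fsk_ber(ebn0_db):
    """Raw bit error rate for binary non-coherent FSK in AWGN."""
    ebn0 = 10 ** (ebn0_db / 10)
    return 0.5 * math.exp(-ebn0 / 2)

def golay_word_error(p, n=24, t=3):
    """P(more than t of the n channel bits are wrong), i.e. decoder failure."""
    ok = sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t + 1))
    return 1 - ok

for ebn0_db in (8, 10, 12):
    p_raw = fsk_ber(ebn0_db)
    uncoded_word = 1 - (1 - p_raw) ** 12              # 12 info bits sent uncoded
    p_chan = fsk_ber(ebn0_db - 10 * math.log10(2))    # rate-1/2 energy penalty
    coded_word = golay_word_error(p_chan)
    print(f"Eb/N0 = {ebn0_db:2d} dB: uncoded 12-bit word error ~ {uncoded_word:.2e}, "
          f"Golay(24,12) word error ~ {coded_word:.2e}")
```

In this toy model the coded and uncoded word-error rates come out comparable at the low end and the code only pulls clearly ahead at higher Eb/N0; at an operating point chosen so the raw error rate is already near what compressed audio tolerates, the overhead largely eats the gain.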
Reply by howy February 3, 2007
> It sounds like you have a binary source with an entropy of less than 1
> bit. If you know the probabilistic model of the source, you can
The data is already severely compressed audio.

Although I think the main point may have been missed here. Stated in a different way: If I /have/ to double the bitrate in order to guarantee no DC (not actually true since some DC is tolerated), what's the "best possible" way to do it?

For example maybe I can remap the Golay codes by adding 10 to 20% more dummy bits to insure certain low frequency response characteristics. Then, when decoding, I essentially unmap the "bloated" Golay code back to the original code. This way I may actually have gained some RF performance rather than degrading it.

-howy
Reply by Vladimir Vassilevsky February 3, 2007

Randy Yates wrote:

>>>> I am transmitting FSK using a MICRF505 transceiver chip. The FSK
>>>> modulator in this chip requires a bit encoding scheme to reduce the DC
>>>> content of the bit stream to a manageable level.
>>>
>>> It sounds like you have a binary source with an entropy of less than 1
>>> bit. If you know the probabilistic model of the source, you can
>>> simultaneously remove the DC and _REDUCE_ the bitrate by
>>> source-encoding, e.g., by using a Huffman code.
>>
>> Any kind of probabilistic compression does not guarantee against the
>> DC bias as well as all zero or all one sequences at the output. It can
>> only reduce the probability of that.
>
> Correct, and if the probability is, say, 0.000001?
This would be too impractical to achieve.

What does matter is the probability that the DC is going to exceed X over the window size of N bits. The probabilistic coder produces a random stream of bits, and the distribution of the DC can be assumed to be Gaussian with the dispersion of ~ sqrt(N). Now run the numbers.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
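A small Monte Carlo sketch of the statement above, assuming i.i.d. equiprobable ±1 bits: the DC accumulated over a window of N bits is approximately Gaussian with standard deviation about sqrt(N), so the probability of exceeding a threshold X follows the Gaussian tail. The window size and trial count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256                          # window size in bits (arbitrary choice)
trials = 100_000

# i.i.d. equiprobable +/-1 symbols, one window of N bits per trial
bits = rng.integers(0, 2, size=(trials, N), dtype=np.int8) * 2 - 1
dc = bits.sum(axis=1)            # accumulated disparity ("DC") over each window

print("empirical std of DC:", dc.std())   # close to sqrt(N) = 16
print("sqrt(N):            ", np.sqrt(N))
for X in (2 * np.sqrt(N), 3 * np.sqrt(N)):
    print(f"P(|DC| > {X:.0f}) ~ {np.mean(np.abs(dc) > X):.4f}")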