## Single-bit long-range low-power communication

Suppose I want to send a single bit (like an SOS) from some remote location using a low-power RF transmitter. The transmitter emits a pure tone at some frequency, and let's assume state-of-the-art frequency stability. The protocol is simple---if a tone is detectable at the predefined frequency, then the bit is 1, otherwise 0.

The receiver now has the challenge of detecting the bit. What are the limitations on detection?

In the spherical cow model, the receiver can keep increasing the integration time around the bit frequency (i.e. longer and longer FFTs) until they can be confident a peak would rise above the noise if the bit were 1; i.e. an infinitely long sinusoid is an infinitely tall spike in frequency space.
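In that idealized model, the gain from longer integration is easy to see numerically: lengthening the FFT concentrates the tone's energy into a single bin while the per-bin noise power stays put, so the bin SNR grows linearly with N. A quick sketch (all parameter values here are illustrative, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000.0      # sample rate, Hz (illustrative)
f0 = 125.0       # tone frequency, Hz, chosen to land exactly on an FFT bin
snr_time = 0.01  # tone power / noise power in the time domain (-20 dB)

ratios_db = []
for n in (1024, 16384, 262144):
    t = np.arange(n) / fs
    tone = np.sqrt(2 * snr_time) * np.cos(2 * np.pi * f0 * t)
    spec = np.abs(np.fft.rfft(tone + rng.standard_normal(n))) ** 2
    peak = spec[int(f0 * n / fs)]   # power in the tone's bin
    floor = np.median(spec)         # robust noise-floor estimate
    ratios_db.append(10 * np.log10(peak / floor))
    print(f"N={n:6d}: peak-to-floor = {ratios_db[-1]:5.1f} dB")
```

Each 16x increase in integration length buys roughly 12 dB of peak-to-floor margin in this toy model; the replies below are about what stops that in practice.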

So where does the spherical cow model break down first?

Eventually Planck and Boltzmann weigh in and the noise overcomes the signal. You can keep making the integration time longer or increasing the transmit power, but at some point practical limits will be reached.

Time-varying channels due to Doppler and multipath will be a big limitation.

If there are no other temporal variations (such as Doppler from movement or channel variations), then Tx and Rx phase noise from the local-oscillator references will be the limitation. Phase noise is non-stationary, so over a long enough detection integration the receiver will drift off and no longer be able to correlate with the incoming bit. If "state-of-the-art" frequency stability means atomic clocks in both the transmitter and receiver, then this integration time can be quite long, up to the flicker floor of the atomic clock (for example, the Microchip 5071A takes about 6 months to reach its flicker floor) with a fractional frequency accuracy of 5E-15.
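To put a rough number on what an untracked frequency error alone would do (my arithmetic, with an assumed carrier; the 5E-15 figure is from the post): a common rule of thumb is that coherent integration holds until the accumulated phase error reaches about a radian. A residual offset this small could in principle be calibrated out or tracked, so this is a floor for the naive untracked case, not a contradiction of the much longer flicker-floor timescale:

```python
import math

f_carrier = 10e9   # assumed carrier frequency, Hz (not from the post)
frac = 5e-15       # fractional frequency accuracy quoted for the 5071A
df = frac * f_carrier             # absolute frequency error, Hz
t_coh = 1 / (2 * math.pi * df)    # time to accumulate ~1 rad of phase error
print(f"frequency error {df:.1e} Hz -> rough coherence limit {t_coh:.0f} s")
```

So even a 50 microhertz untracked offset on a 10 GHz carrier would decohere a naive integration in under an hour; longer integrations require tracking or removing that offset.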

Ultimately, even with the use of the atomic clock, the power spectral density of the transmitted signal as received at the receiver will need to sufficiently overcome the thermal noise plus the noise figure of the receiver, and so will be limited by Tx power, antenna pattern, and channel characteristics. This can be significantly degraded by other 1/f noise sources if the detection is attempted at baseband.
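As a back-of-envelope for that thermal-noise requirement (the integration time, noise figure, and detection SNR below are assumed for illustration): the noise power in the detection band is kTB plus the noise figure, and a one-hour coherent integration narrows B to about 0.3 mHz:

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K
temp = 290.0       # reference temperature, K
nf_db = 5.0        # assumed receiver noise figure, dB
t_int = 3600.0     # assumed coherent integration time, s
b = 1.0 / t_int    # effective detection bandwidth, Hz

noise_dbm = 10 * math.log10(k * temp * 1e3) + 10 * math.log10(b) + nf_db
snr_det_db = 13.0  # assumed post-integration SNR for reliable detection
p_rx_dbm = noise_dbm + snr_det_db
print(f"noise in band: {noise_dbm:.1f} dBm, required received power: {p_rx_dbm:.1f} dBm")
```

The familiar -174 dBm/Hz floor drops by another ~36 dB for the milli-hertz bandwidth, which is what makes very weak tones detectable at all, if the oscillators and channel cooperate for that long.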

Thank you for this detailed breakdown. That Microchip 5071A stability...whoa.

Doesn't the PSD of the transmitted signal keep increasing the longer the transmitter is left on? Even with some finite phase noise, doppler, and/or multipath effects, there are still bounds on the frequency drift, correct? If so, won't the band within these bounds continue to accumulate energy on the receiving end? (I'm imagining a receiver that integrates the incoming signal in a band, not at only the bit frequency)

No, the power spectral density will remain constant if the transmitter sends the same waveform at constant power (the PSD is just that power spread over frequency).

There is no bound on frequency drift: consider that frequency is the derivative of phase, so at the limits of thermal noise (a white noise process, whose phase-modulation (PM) and amplitude-modulation (AM) components are also white), the integral of an independent noise process is a random walk, which drifts without bound. So for that reason, unless we have the ability to track it exactly, we can't accumulate energy without bound.
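That random-walk argument can be seen numerically: mix a unit-amplitude tone down to DC, impose a random-walk phase, and the coherent sum stops growing linearly once the phase has wandered a radian or so (the step size here is an arbitrary assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
step = 1e-2   # assumed phase random-walk step, rad/sample
phase = np.cumsum(step * rng.standard_normal(n))
coherent = np.abs(np.cumsum(np.exp(1j * phase)))  # running coherent sum

# A perfectly stable tone would give |sum| = k after k samples;
# the random-walk phase makes the sum fall far short of that.
print(f"after {n} samples: ideal {n}, with phase random walk {coherent[-1]:.0f}")
```

Early on the sum tracks the ideal linear growth, but past the coherence time the contributions decorrelate and further integration accumulates almost nothing.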

Further, integration itself is a low-pass filtering process, which can be heterodyned to any frequency as a band-pass filtering process; however, the longer you integrate, the tighter the bandwidth. We can integrate over wide bandwidths when we de-spread waveforms, as we do with GPS, but ultimately that is translating a wideband process to a narrowband process, and the limit on that integration time is the 50 Hz data rate (unless we know the transmitted message, in which case we can do data-aided integration for even longer times, up to the limits of the other factors I mentioned).

Often this requirement can be satisfied with synchronous detection methods, but as I presume the source and detector are not synchronously related, a possible solution is to use a GPS time reference at both ends and integrate at the detector for a specified period at known time points when the transmitter sends. Boxcar integrators come to mind for this.
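A sketch of that GPS-timed boxcar idea (all parameters invented for illustration): correlate each agreed one-second window against the expected tone, then combine the window results coherently. The signal term adds linearly with the number of windows while the noise term grows only as its square root:

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 1000.0        # sample rate, Hz
f0 = 125.0         # agreed tone frequency, Hz (epoch-periodic at this fs)
n_epoch = 1000     # samples per agreed transmission window (assumed 1 s)
n_epochs = 400     # number of GPS-timed windows combined

t = np.arange(n_epoch) / fs
ref = np.exp(-2j * np.pi * f0 * t)   # local reference for one window
amp = 0.05                           # weak tone, well below unit-variance noise

acc = 0.0 + 0.0j
for _ in range(n_epochs):
    x = amp * np.cos(2 * np.pi * f0 * t) + rng.standard_normal(n_epoch)
    acc += ref @ x                   # one boxcar integral per epoch
stat = abs(acc) / n_epochs
print(f"averaged correlation magnitude: {stat:.2f}")
```

This assumes the shared time reference keeps the windows aligned and the tone phase repeatable from window to window; without that, the per-window results would have to be combined non-coherently, with less gain.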

Practical implementations of weak-signal processing can be found here; they also handle the different kinds of noise and attenuation: https://wsjt.sourceforge.io/index.html

The real question is "what is the maximum useful carrier energy" to communicate a single bit in noise, noting that energy is power x time. The comments on stability and motion are the limiting factors, and the points made are all valid. Integration is just bandwidth reduction, and the limits to it are practical. Aside from stability and motion, you have to state your acceptable probability of false alarm and your required probability of detection. The theory covering this was first worked out by J.I. Marcum in the famous "Rand Memo" produced during the war and long ago declassified. Marcum dealt with a non-fluctuating signal, that is, a constant sinusoid in white noise. Basically it is a statistical hypothesis test (a Neyman-Pearson test). Later, Peter Swerling extended the ideas to fluctuating targets. These ideas are treated in most academic books on radar design. Automotive radar interest has moved a lot of radar detection theory into a MATLAB toolbox.

Longer integration time implies narrower filtering. If you excite a narrow filter with white noise, its output looks like a narrowband sinusoid with a slowly varying envelope, so the detection threshold has to be set high enough to control the false alarm probability. Pfa determines the detection threshold; for a given Pfa, Pd depends only on SNR, and for SNR=0, Pd=Pfa. Hence the noise density determines the required signal power for a given SNR, and the practical time of integration is limited by the stability factors discussed in earlier responses.
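For the non-fluctuating (Marcum) case this can be sketched directly: the square-law statistic in a single bin is exponentially distributed under noise alone, so the threshold for a chosen Pfa has a closed form, and Pd can be checked by simulation (the 10 dB post-integration SNR and the Pfa here are arbitrary choices, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials = 200_000
snr_lin = 10 ** (10.0 / 10)   # assumed 10 dB post-integration SNR

# Square-law statistic in a single frequency bin:
# H0: unit-power complex Gaussian noise only; H1: noise plus tone.
noise = (rng.standard_normal(n_trials) + 1j * rng.standard_normal(n_trials)) / np.sqrt(2)
h0 = np.abs(noise) ** 2                        # unit-mean exponential under H0
h1 = np.abs(np.sqrt(snr_lin) + noise) ** 2     # tone of power snr_lin plus noise

pfa_target = 1e-3
thresh = -np.log(pfa_target)   # exact threshold for a unit-mean exponential
pfa = np.mean(h0 > thresh)
pd = np.mean(h1 > thresh)
print(f"threshold={thresh:.2f}, Pfa~{pfa:.4f}, Pd~{pd:.3f}")
```

This is the Neyman-Pearson setup in miniature: fix Pfa, derive the threshold, and read off Pd as a function of SNR; the Swerling cases replace the constant tone amplitude with a fluctuating one.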