Randy Yates wrote:
> Bernhard Holzmayer <holzmayer.bernhard@deadspam.com> writes:
>
>> Randy Yates wrote:
>>
...
>
>> Still without considering noise, the best approach would probably
>> be to compare the signal with the levels '0' and '1' and pick the
>> nearer one.
>>
>> With this in mind, I'd say that the same approach should work with
>> noise applied.
>> Averaging would help to reduce the noise influence, but as soon as
>> we average across t0, it makes things worse.
>>
>> Averaging from the left (until close to t0) and comparing the
>> signal with '0' should show high correlation.
>> Averaging from the right (until close to t0) and comparing the
>> signal with '1' should show high correlation.
>
> Two things: 1) we don't know where t0 is!,
Obvious, but it prompts another idea ...
Why not just assume a step at a certain point tx, subtract it from
the signal, and repeat this with different values of tx until we find
the best fit?
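
As a rough sketch of that brute-force fit (Python; the step height of
1, the noise level, and the test signal are assumptions of mine):

import numpy as np

# Made-up test signal: unit step at sample 60 plus white noise.
rng = np.random.default_rng(0)
n_samples = 128
signal = (np.arange(n_samples) >= 60).astype(float)
signal += 0.3 * rng.standard_normal(n_samples)

def residual_energy(tx):
    # Subtract an assumed step at tx; what is left should be pure noise.
    step = (np.arange(n_samples) >= tx).astype(float)
    return float(np.sum((signal - step) ** 2))

# Try every candidate tx and keep the best fit (least residual energy).
best_tx = min(range(n_samples), key=residual_energy)
print("estimated t0:", best_tx)
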
> and 2) I'm presuming a
> causal system, so we can't really "average from the right" until
> we get to that "right" point in time.
Not really, agreed.
However, there is implicit knowledge about the signal (namely, that
it is a step).
Using this knowledge we can still "average from the right", because
a) the average value of white noise is zero, and
b) the average value of the step signal, seen from the right, is '1'
   as long as t>t0.
So, just assume an average value of '1' and keep averaging against
that assumption.
At the same time, keep averaging with the real 'old' values in
parallel.
As soon as averaging against the assumed '1' gives (picking up
Martin's idea) a smaller standard deviation than averaging against
the other values, we have t>t0.
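
As a sketch (reading "standard deviation" here as the RMS deviation
about the assumed level, since a plain standard deviation would not
change when we subtract a constant; window size and noise level are
invented):

import numpy as np

def rms_about(window, level):
    # RMS deviation of the samples from an assumed constant level
    return np.sqrt(np.mean((window - level) ** 2))

# Hypothetical window of recent samples, pretending we are past t0:
rng = np.random.default_rng(4)
window = 1.0 + 0.2 * rng.standard_normal(50)

if rms_about(window, 1.0) < rms_about(window, 0.0):
    print("window fits level '1' better: t > t0")
else:
    print("window still fits level '0' better: t < t0")
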
Now, let's do this from both sides, starting with constant average
values, which results in four averaging processes
(N is an arbitrary number of samples, n is the current sample):
1) A = average(samples n-N+1, n-N+2, ..., n)
2) B = average(samples n-N+1, n-N+2, ..., n) - 1
3) C = average(samples n-2N+1, n-2N+2, ..., n-N)
4) D = average(samples n-2N+1, n-2N+2, ..., n-N) - 1
N must be big enough to reduce the noise to a level sufficient for
the level detection, yet small enough for the required precision of
t0. If such an N exists, the task is solvable.
As soon as that sample n arrives, we know that t0 = t(n-N).
We know that it has arrived when |B| and |C| reach their minimum and
|A| and |D| their maximum.
(Evaluating B and C should be sufficient.)
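
In Python this might look like the following (window length, noise
level, and the trigger threshold are all invented numbers, just to
show the mechanics):

import numpy as np

rng = np.random.default_rng(1)
n_samples = 400
t0_true = 250
x = (np.arange(n_samples) >= t0_true).astype(float)
x += 0.2 * rng.standard_normal(n_samples)

N = 40            # window length: large enough to beat the noise,
                  # small enough for the wanted t0 precision
threshold = 0.1   # how close to zero |B| and |C| must get

for n in range(2 * N - 1, n_samples):
    new = x[n - N + 1 : n + 1]          # newest N samples
    old = x[n - 2 * N + 1 : n - N + 1]  # the N samples before those
    A = new.mean()
    B = A - 1.0        # newest window compared against level '1'
    C = old.mean()     # older window compared against level '0'
    D = C - 1.0
    # Detection: newest window sits on '1', older window on '0'.
    if abs(B) < threshold and abs(C) < threshold:
        print("step detected at n =", n, "-> t0 ~", n - N)
        break
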
>> Then there's the t0-zone, where both evaluations reveal bad
>> correlation, with worst conditions at the point t0.
Assuming no noise, A is 0 if t<t0 and 1 if t>t0,
and D, in magnitude, is 1 if t<t0 and 0 if t>t0.
While the averaging window straddles t0, A passes through 1/2, and so
does |D|.
So the product A*|D| is 0*1=0 or 1*0=0 everywhere except in the
region of t0, where it comes close to 1/4.
That's what I meant. The same should hold with noise, though it is
not so easily visible there.
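
To illustrate the product idea (my reading: both evaluations taken
over the same sliding window, so the product becomes A*|A-1|; the
numbers are again invented):

import numpy as np

rng = np.random.default_rng(2)
n_samples = 400
t0_true = 250
x = (np.arange(n_samples) >= t0_true).astype(float)
x += 0.2 * rng.standard_normal(n_samples)

N = 40
score = np.zeros(n_samples)
for n in range(N - 1, n_samples):
    A = x[n - N + 1 : n + 1].mean()  # window mean: ~0 before, ~1 after t0
    score[n] = A * abs(A - 1.0)      # ~0 away from t0, ~1/4 straddling it

# The product peaks when half the window is past the step.
n_peak = int(np.argmax(score))
print("peak at n =", n_peak, "-> t0 ~", n_peak - N // 2)
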
A couple of years ago, I built a discriminator based on a DSP which
tried to retrieve a digitally coded signal (DCF, phase modulation of
+/-5 Hz) from a noisy IF signal (voice modulation == noise).
I gathered the samples, placed them in a cyclic buffer, and compared
the expected carrier signal with the measured one.
Because I knew that steps could occur only at certain moments
(exactly one second apart) and with distinct phase jumps, this
semantic knowledge made it easy to detect the code.
A very naive approach, yet it worked.
That's where my idea comes from.
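
Very roughly, the comparison went like this (a sketch; the sample
rate, carrier frequency, and phase-jump size here are made up, not
the original figures):

import numpy as np

fs = 1000                          # assumed sample rate
f_c = 50.0                         # assumed carrier frequency
t = np.arange(fs) / fs             # one second held in the cyclic buffer

ref_same = np.sin(2 * np.pi * f_c * t)          # carrier, no phase jump
ref_jump = np.sin(2 * np.pi * f_c * t + np.pi)  # carrier with a phase jump

rng = np.random.default_rng(3)
measured = ref_jump + 1.0 * rng.standard_normal(fs)  # noisy buffer contents

# Steps can occur only at known instants (once per second), so it is
# enough to correlate the buffered second against the two hypotheses
# and pick the better match.
score_same = np.dot(measured, ref_same)
score_jump = np.dot(measured, ref_jump)
print("code bit:", int(score_jump > score_same))
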