Thread started by Randy Yates, April 5, 2004
```
This has been a question I've had for a long time. I have
simplified it to its basic components.

You have a signal x(t) composed as

x(t) = u(t-t0) + n(t),

where u(t) is the unit step function, t0 is unknown, and n(t) is
stationary white Gaussian noise. You need to be able to detect the
signal "reliably" (I'll leave the definition of that open for now)
within t1 seconds of t0.

Now the obvious thing to do to increase reliability is to average.
However, if you average, the unit step becomes a ramp. So in order to
detect the averaged signal within the same time constraint, t0+t1, the
threshold of the detector must be lower. But a lower threshold is
more easily crossed by the noise alone, causing false alarms.

So, is it worth it? Is the increase in reliability due to averaging
more than the decrease in reliability due to the reduced threshold?
--
Randy Yates
Sony Ericsson Mobile Communications
Research Triangle Park, NC, USA
randy.yates@sonyericsson.com, 919-472-1124
```
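Randy's model is straightforward to play with numerically. Below is a minimal discrete-time sketch; the values of t0, the sample count, and the noise level are arbitrary illustrative choices (none come from the post), paired with the simplest possible detector, a fixed 0.5 threshold:

```python
import random

def simulate(t0=200, n=400, sigma=0.5, seed=1):
    """Discrete-time samples of x(t) = u(t - t0) + n(t).
    t0, n, sigma, and seed are arbitrary illustrative choices."""
    rng = random.Random(seed)
    return [(1.0 if t >= t0 else 0.0) + rng.gauss(0.0, sigma)
            for t in range(n)]

def first_crossing(x, threshold=0.5):
    """Instantaneous detector: index of the first sample above the
    threshold, or None if it never crosses."""
    for t, v in enumerate(x):
        if v > threshold:
            return t
    return None

print(first_crossing(simulate()))
```

With heavy noise the first crossing routinely lands well before t0, which is exactly the false-alarm tradeoff the thread goes on to debate.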
```
Randy Yates <randy.yates@sonyericsson.com> writes:

> This has been a question I've had for a long time. I have
> simplified it to its basic components.
>
> You have a signal x(t) composed as
>
>   x(t) = u(t-t0) + n(t),
>
> where u(t) is the unit step function, t0 is unknown, and n(t) is
> stationary white Gaussian noise. You need to be able to detect the
> signal "reliably" (I'll leave the definition of that open for now)
> within t1 seconds of t0.
>
> Now the obvious thing to do to increase reliability is to average.
> However, if you average, the unit step becomes a ramp. So in order to
> detect the averaged signal within the same time constraint, t0+t1, the
> threshold of the detector must be lower. But a lower threshold is
> more easily crossed by the noise alone, causing false alarms.
>
> So, is it worth it? Is the increase in reliability due to averaging
> more than the decrease in reliability due to the reduced threshold?

Hi Randy,

There is a book by Basseville and Nikiforov called "Detection of
Abrupt Changes" that is available in PDF form:

http://www.irisa.fr/sigma2/kniga/

The algorithm you're suggesting (averaging) looks like the CUSUM
algorithm discussed in the early part of the book.  Unfortunately, I
don't have time to decipher it, but section 5.2 of the book
("CUSUM-type Algorithms") appears to have some optimality results on
the stopping time for such an algorithm.

Ciao,

Peter K.

--
Peter J. Kootsookos

"I will ignore all ideas for new works [..], the invention of which
has reached its limits and for whose improvement I see no further
hope."

- Julius Frontinus, c. AD 84
```
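For readers who follow Peter's reference, here is a minimal one-sided CUSUM sketch. The drift and threshold values are illustrative guesses, not settings from the book:

```python
def cusum(samples, drift=0.5, threshold=4.0):
    """One-sided CUSUM for an upward mean shift: accumulate
    (x - drift), clipped at zero, and raise an alarm when the sum
    exceeds `threshold`. Returns the alarm index, or None if no
    alarm fires. The drift/threshold values are arbitrary guesses."""
    g = 0.0
    for t, x in enumerate(samples):
        g = max(0.0, g + x - drift)
        if g > threshold:
            return t
    return None

# Noiseless step at sample 10: the alarm fires once enough
# post-step evidence (0.5 per sample here) has accumulated.
print(cusum([0.0] * 10 + [1.0] * 20))
```

The clipping at zero is what distinguishes this from plain averaging: noise before the step cannot build up a persistent bias.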
```
"Randy Yates" <randy.yates@sonyericsson.com> wrote in message
news:xxp1xn2fh53.fsf@usrts005.corpusers.net...
> This has been a question I've had for a long time. I have
> simplified it to its basic components.
>
> You have a signal x(t) composed as
>
>   x(t) = u(t-t0) + n(t),
>
> where u(t) is the unit step function, t0 is unknown, and n(t) is
> stationary white Gaussian noise. You need to be able to detect the
> signal "reliably" (I'll leave the definition of that open for now)
> within t1 seconds of t0.
>
> Now the obvious thing to do to increase reliability is to average.
> However, if you average, the unit step becomes a ramp. So in order to
> detect the averaged signal within the same time constraint, t0+t1, the
> threshold of the detector must be lower. But a lower threshold is
> more easily crossed by the noise alone, causing false alarms.
>
> So, is it worth it? Is the increase in reliability due to averaging
> more than the decrease in reliability due to the reduced threshold?

Randy,

You did say "unit" step function - so the amplitude of u(t) is known.  Let's
call it 1.0 volt just to assign some dimensions.

I should think that one key criterion is where to set the detection
threshold.  I believe you'll find that the optimum detection threshold for a
unit step is 0.5 volts.  But that might depend on how long you have to
detect and with what method.
For example, if you have only an instant to "decide", then 0.5 volts would
be the right threshold.  If you have t1 seconds to decide, then you might
require that the threshold be exceeded "most of the time" for 0.5*t1 seconds
or 0.9*t1 seconds or .....  I don't know what the optimum is but it isn't
zero and it's not t1.
This is all for detecting a unit step.

At some noise amplitude level, there will be no reliable detection.  So, you
need a receiver operating characteristic (ROC) curve to yield probability of detection as a
function of SNR - whatever SNR definition you wish to use.  Let's say
1.0/nrms so the SNR is zero dB when the rms noise is 1.0.  This notion can
be modified to include the time t1 - "what is the probability of detection
within t1 at various SNRs".  There will be a different curve for each
different "receiver" - or filtering method.

Another curve can be developed that measures the false detection rate.  As a
function of SNR, what is the probability that a unit step is detected
erroneously?  A different curve for each "receiver".  This would be part of
"reliability".

As you know, averaging is just another type of filter.  In your description,
you defined an averager for all time such that it yields a ramp output to a
step input.  That's an integrator.

With unit gain, the integral of u(t) is t, a unit ramp - so the output is
1.0 after 1 second.  If t1 >> 1 second, then I suppose you could keep the
threshold at 0.5 - but that seems unlikely.  There's no reason to relate the
threshold of one receiver to the threshold of another - that I can think of
anyway.  The question is, what is the optimum threshold for this new
receiver?

I venture to guess that the optimum threshold for the new receiver is
something slightly less than t1 volts.  Or maybe it's 0.5*t1 volts being
exceeded for 0.4*t1 seconds...... pick your parameters here.

These are sufficiently complex that it's not obvious that integrating
improves anything.  I'd model it.  A lot depends on t1 and the noise level.

Is this a homework problem?  I'll bet there's a pat answer.

Fred

```
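Fred's suggestion to model it can be sketched as a small Monte Carlo experiment: for one particular "receiver" (the instantaneous 0.5-volt threshold), estimate the probability that the first threshold crossing lands inside [t0, t0+t1]. All parameter values below are arbitrary illustrative choices:

```python
import random

def p_detect_within(t0, t1, sigma, threshold=0.5, trials=2000, seed=2):
    """Monte Carlo estimate of the probability that the instantaneous
    detector's first threshold crossing lands inside [t0, t0 + t1].
    Trials that cross before t0 count as failures (false alarms).
    Parameter values are illustrative, not from Fred's post."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        for t in range(t0 + t1):
            x = (1.0 if t >= t0 else 0.0) + rng.gauss(0.0, sigma)
            if x > threshold:
                if t >= t0:
                    hits += 1
                break
    return hits / trials

# Sweeping sigma traces out one point per SNR on the kind of curve
# Fred describes, for this particular receiver.
print(p_detect_within(t0=50, t1=10, sigma=0.1))
print(p_detect_within(t0=50, t1=10, sigma=1.0))
```

Repeating the sweep for each candidate filter (boxcar, integrator, CUSUM, ...) gives the per-receiver curves Fred has in mind.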
```
Hi Randy,
Average over t1 seconds. Threshold at 0.5, i.e. halfway between the bottom
and top of the unit step. That way the noise is just as likely to make you
go wrong before the step as after. 0.5 gives the biggest headroom before and
after the step.
Probably!
Cheers, Syms.
p.s. Here's some Perl I experimented with!

use strict;
use warnings;

# 5-tap boxcar: sum of the last five input samples
my @correlate = (1, 1, 1, 1, 1);
my @stored_samps = (0, 0, 0, 0, 0);
my ($vin, $tot);

sub filter
{
    unshift @stored_samps, $vin;   # newest sample in front
    pop @stored_samps;             # oldest sample out
    $tot = 0;
    for my $i (0 .. $#correlate)
    {
        $tot += $stored_samps[$i] * $correlate[$i];
    }
}

for my $t (1 .. 40)
{
    # unit step at t = 20, plus uniform noise in [-1, 1)
    my $sig = ($t > 20) ? 1 : 0;
    $vin = $sig + rand(2) - 1;
    filter();
    print "$t  $tot\n";
}

```
```
Randy Yates wrote:

> This has been a question I've had for a long time. I have
> simplified it to its basic components.
>
> You have a signal x(t) composed as
>
>   x(t) = u(t-t0) + n(t),
>
> where u(t) is the unit step function, t0 is unknown, and n(t) is
> stationary white Gaussian noise. You need to be able to detect the
> signal "reliably" (I'll leave the definition of that open for now)
> within t1 seconds of t0.
>
> Now the obvious thing to do to increase reliability is to average.

Hi Randy,
just let me throw in an idea that comes to my mind.

If you know the signal is a unit step, then you know that there
exist two distinct amplitude levels. (Leave out n(t) for a moment.)
If you average, you end up sliding from one exact level to the
other, thus smearing the signal.
In my opinion, averaging would make things worse!

Still without considering noise, the best solution would probably be
to compare the signal with the levels '0' and '1' and pick the nearer.

With this in mind, I'd say that the same approach should work with
noise applied.
Averaging would help to reduce the influence of the noise, but as
soon as we average across t0, it makes things worse.

Averaging from the left (up to just before t0) and comparing the
signal with '0' should show high correlation.
Averaging from the right (back to just after t0) and comparing the
signal with '1' should show high correlation.
Then there's the t0 zone, where both evaluations show poor
correlation, with the worst conditions at the point t0.

Got it?

I'm sure you're better able to make some maths out of this than I
am, if it seems worthwhile.

Bernhard

```
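One causal way to approximate Bernhard's left/right comparison is to slide two adjacent windows along the data and compare their means: the older window should sit near '0' and the recent window near '1' once the step has arrived. The window length and decision threshold here are my own illustrative guesses:

```python
def window_mean_detector(x, w=5, threshold=0.5):
    """Compare the mean of the last w samples ("right of the step,
    hopefully") with the mean of the w samples before that ("left of
    the step"). Flags the first index at which the recent mean
    exceeds the past mean by `threshold`, or returns None.
    w and threshold are arbitrary illustrative choices."""
    for t in range(2 * w, len(x) + 1):
        past = sum(x[t - 2 * w:t - w]) / w
        recent = sum(x[t - w:t]) / w
        if recent - past > threshold:
            return t - 1  # index of the newest sample used
    return None

# Noiseless step at index 10: flagged a few samples later, once the
# recent window is mostly past the step.
print(window_mean_detector([0.0] * 10 + [1.0] * 10))
```

The detection lag of roughly w/2 samples is the price of the averaging, which is precisely the tradeoff Randy raises in his reply.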
```
"Bernhard Holzmayer" <holzmayer.bernhard@deadspam.com>
schrieb im Newsbeitrag
> Randy Yates wrote:
>
> > This has been a question I've had for a long time. I have
> > simplified it to its basic components.
> >
> > You have a signal x(t) composed as
> >
> >   x(t) = u(t-t0) + n(t),
> >
> > where u(t) is the unit step function, t0 is unknown,
> > and n(t) is stationary white Gaussian noise.
> > You need to be able to detect the signal "reliably"
> > (I'll leave the definition of that open for now)
> > within t1 seconds of t0.
> >
> Hi Randy,
> just let me throw in an idea that comes to my mind.
>
> If you know, signal is a unit step, then you know that
> there exist two distinct levels of amplitude. (Leave out n
> (t) for a moment).
> If you average, you end up with sliding from one exact
> level to the other, thus messing the signal.
> To my opinion, averaging would make things worse!
>
This gave me an idea:
How about computing the standard deviation for [t,t+dt] and
for [t,t+2*dt]? In the absence of the unit step, both standard
deviations should be "similar", no? (*)
If the unit step occurs between t and t+2*dt, the
standard deviation should be much greater for s[t,t+2*dt]
than for s[t,t+dt] or s[t+dt,t+2*dt], although s[t,t+dt]
should be similar to s[t+dt,t+2*dt]. You could combine this
with an average calculation:
avg[t,t+dt] <= avg[t,t+2*dt] <= avg[t+dt,t+2*dt]
The trick is now to determine the optimum dt to use. It
should be
- as big as possible to detect differences
- as small as possible to detect it as soon as possible
and to pinpoint the instant t0 as exactly as possible
If you want to detect a step change in t1 seconds, I'd go
for a dt of t1/2 to t1/3.

In order to compute t0 as exactly as possible, you can
correlate x(t) with
e(t) = ue(t-tau)+ne
where
ue(t-tau) = the estimated level of amplitude change
= avg[t+dt,t+2*dt] - avg[t,t+dt]
ne = the estimated noise level
= avg[t,t+dt]
and look for the greatest correlation at time tau, giving
the estimate for t0.

Martin

(*) if the noise is the same during the observation time.
You can precompute the difference to be expected as a
function of S/N (and as function of where in [t,t+dt], resp
[t+dt,t+2*dt] the step change occurs).
On second thought you might want to compare [t,t+dt] with
[t+2*dt,t+3*dt] in order to exclude the step itself.

```
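Martin's two-window statistics are easy to compute directly. In the sketch below, the step position, the start time t, and dt are arbitrary illustrative values:

```python
from math import sqrt

def std(xs):
    """Population standard deviation of a list."""
    m = sum(xs) / len(xs)
    return sqrt(sum((v - m) ** 2 for v in xs) / len(xs))

def window_stds(x, t, dt):
    """Martin's two statistics: s[t, t+dt] and s[t, t+2*dt]."""
    return std(x[t:t + dt]), std(x[t:t + 2 * dt])

# Noiseless step at index 10, window starting at t=5 with dt=5:
# the step lies in the doubled window only, so the wide standard
# deviation jumps while the narrow one stays at zero.
x = [0.0] * 10 + [1.0] * 10
narrow, wide = window_stds(x, t=5, dt=5)
print(narrow, wide)
```

With noise present, one would compare the ratio of the two statistics against a threshold precomputed from the expected S/N, as Martin's footnote suggests.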
```
Randy Yates wrote:
> This has been a question I've had for a long time. I have
> simplified it to its basic components.
>
> You have a signal x(t) composed as
>
>   x(t) = u(t-t0) + n(t),
>
> where u(t) is the unit step function, t0 is unknown, and n(t) is
> stationary white Gaussian noise. You need to be able to detect the
> signal "reliably" (I'll leave the definition of that open for now)
> within t1 seconds of t0.
>
> Now the obvious thing to do to increase reliability is to average.
> However, if you average, the unit step becomes a ramp. So in order to
> detect the averaged signal within the same time constraint, t0+t1, the
> threshold of the detector must be lower. But a lower threshold is
> more easily crossed by the noise alone, causing false alarms.
>
> So, is it worth it? Is the increase in reliability due to averaging
> more than the decrease in reliability due to the reduced threshold?

Yes.  You've described the problem in terms that make averaging the
solution: the likelihood of a correct detection is maximized by
averaging.  If only the real world were always that accommodating.

You can go further and calculate the probability of an error for a
given problem.

Note that if your step is attenuated by an unknown amount your choice of
threshold is blown out of the water...

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

```
```
Bernhard Holzmayer <holzmayer.bernhard@deadspam.com> writes:

> Randy Yates wrote:
>
> > This has been a question I've had for a long time. I have
> > simplified it to its basic components.
> >
> > You have a signal x(t) composed as
> >
> >   x(t) = u(t-t0) + n(t),
> >
> > where u(t) is the unit step function, t0 is unknown, and n(t) is
> > stationary white Gaussian noise. You need to be able to detect the
> > signal "reliably" (I'll leave the definition of that open for now)
> > within t1 seconds of t0.
> >
> > Now the obvious thing to do to increase reliability is to average.
>
> Hi Randy,

Hello Bernhard,

> just let me throw in an idea that comes to my mind.

Thanks for responding!

> If you know, signal is a unit step, then you know that there exist
> two distinct levels of amplitude.

Right.

> (Leave out n(t) for a moment).
> If you average, you end up with sliding from one exact level to the
> other, thus messing the signal.
> To my opinion, averaging would make things worse!

Yes, in a sense, but it reduces the noise too, which makes things
better. That's the quandary!

> Still without considering noise, the best solution would probably
> compare the signal with levels '0' and '1' and find the nearer.
>
> This in mind, I'd say, that the same approach should work with noise
> applied.
> Averaging would help to reduce noise influence, but as soon as we
> average over t0, it makes things worse.
>
> Using averaging from the left (until close to t0), comparing signal
> with '0' should show high correlation.
> Averaging from the right (until close to t0), comparing signal with
> '1' should show high correlation.

Two things: 1) we don't know where t0 is, and 2) I'm presuming a
causal system, so we can't really "average from the right" until
we get to that "right" point in time. Without knowing t0, I
don't see how we're going to get away from a sliding average, which
will cause the ramping function.

> Then there's the t0-zone, where both evaluations reveal bad
> correlation, with worst conditions at the point t0.

That I don't see - this would be the optimal point in terms
of the quickest detection, AND both would still be good correlations:
the data to the left of exactly t0 being 0 + n(t) and that to the
right being 1 + n(t).
--
Randy Yates
Sony Ericsson Mobile Communications
Research Triangle Park, NC, USA
randy.yates@sonyericsson.com, 919-472-1124
```
```
p.kootsookos@remove.ieee.org (Peter J. Kootsookos) writes:

> Randy Yates <randy.yates@sonyericsson.com> writes:
>
> > This has been a question I've had for a long time. I have
> > simplified it to its basic components.
> >
> > You have a signal x(t) composed as
> >
> >   x(t) = u(t-t0) + n(t),
> >
> > where u(t) is the unit step function, t0 is unknown, and n(t) is
> > stationary white Gaussian noise. You need to be able to detect the
> > signal "reliably" (I'll leave the definition of that open for now)
> > within t1 seconds of t0.
> >
> > Now the obvious thing to do to increase reliability is to average.
> > However, if you average, the unit step becomes a ramp. So in order to
> > detect the averaged signal within the same time constraint, t0+t1, the
> > threshold of the detector must be lower. But a lower threshold is
> > more easily crossed by the noise alone, causing false alarms.
> >
> > So, is it worth it? Is the increase in reliability due to averaging
> > more than the decrease in reliability due to the reduced threshold?
>
> Hi Randy,
>
> There is a book by Basseville and Nikiforov called "Detection of
> Abrupt Changes" that is available in PDF form:
>
> http://www.irisa.fr/sigma2/kniga/
>
> The algorithm you're suggesting (averaging) looks like the CUSUM
> algorithm discussed in the early part of the book.  Unfortunately, I
> don't have time to decipher it, but section 5.2 of the book
> ("CUSUM-type Algorithms") appears to have some optimality results on
> the stopping time for such an algorithm.

Hi Peter,

Thanks for the reference. I took a look at it last night, but it
will take me a while to digest any significant portion of it. Thanks
again!
--
Randy Yates
Sony Ericsson Mobile Communications
Research Triangle Park, NC, USA
randy.yates@sonyericsson.com, 919-472-1124
```
```
"Fred Marshall" <fmarshallx@remove_the_x.acm.org> writes:

> "Randy Yates" <randy.yates@sonyericsson.com> wrote in message
> news:xxp1xn2fh53.fsf@usrts005.corpusers.net...
> > This has been a question I've had for a long time. I have
> > simplified it to its basic components.
> >
> > You have a signal x(t) composed as
> >
> >   x(t) = u(t-t0) + n(t),
> >
> > where u(t) is the unit step function, t0 is unknown, and n(t) is
> > stationary white Gaussian noise. You need to be able to detect the
> > signal "reliably" (I'll leave the definition of that open for now)
> > within t1 seconds of t0.
> >
> > Now the obvious thing to do to increase reliability is to average.
> > However, if you average, the unit step becomes a ramp. So in order to
> > detect the averaged signal within the same time constraint, t0+t1, the
> > threshold of the detector must be lower. But a lower threshold is
> > more easily crossed by the noise alone, causing false alarms.
> >
> > So, is it worth it? Is the increase in reliability due to averaging
> > more than the decrease in reliability due to the reduced threshold?
>
> Randy,

Hi Fred,

> You did say "unit" step function - so the amplitude of u(t) is known.  Let's
> call it 1.0 volt just to assign some dimensions.

Right. OK.

> I should think that one key criterion is where to set the detection
> threshold.  I believe you'll find that the optimum detection threshold for a
> unit step is 0.5 volts.  But that might depend on how long you have to
> detect and with what method.
> For example, if you have only an instant to "decide", then 0.5 volts would
> be the right threshold.  If you have t1 seconds to decide, then you might
> require that the threshold be exceeded "most of the time" for 0.5*t1 seconds
> or 0.9*t1 seconds or .....  I don't know what the optimum is but it isn't
> zero and it's not t1.
> This is all for detecting a unit step.
>
> At some noise amplitude level, there will be no reliable detection.  So, you
> need a receiver operating characteristic (ROC) curve to yield probability of detection as a
> function of SNR - whatever SNR definition you wish to use.  Let's say
> 1.0/nrms so the SNR is zero dB when the rms noise is 1.0.  This notion can
> be modified to include the time t1 - "what is the probability of detection
> within t1 at various SNRs".  There will be a different curve for each
> different "receiver" - or filtering method.
>
> Another curve can be developed that measures the false detection rate.  As a
> function of SNR, what is the probability that a unit step is detected
> erroneously?  A different curve for each "receiver".  This would be part of
> "reliability".

Yes. These are great definitions.

> As you know, averaging is just another type of filter.  In your description,
> you defined an averager for all time such that it yields a ramp output to a
> step input.  That's an integrator.

Ah - I see how you got that. What I meant was a ramp for some finite
period of time, flattening out eventually. Practically, it would
probably be a simple low-pass FIR - maybe even a boxcar averager.

> With unit gain, the integral of u(t) is t, a unit ramp - so the output is
> 1.0 after 1 second.  If t1 >> 1 second, then I suppose you could keep the
> threshold at 0.5 - but that seems unlikely.  There's no reason to relate the
> threshold of one receiver to the threshold of another - that I can think of
> anyway.  The question is, what is the optimum threshold for this new
> receiver?
>
> I venture to guess that the optimum threshold for the new receiver is
> something slightly less than t1 volts.  Or maybe it's 0.5*t1 volts being
> exceeded for 0.4*t1 seconds...... pick your parameters here.
>
> These are sufficiently complex that it's not obvious that integrating
> improves anything.  I'd model it.  A lot depends on t1 and the noise level.

I'm glad to know that this isn't a trivial problem in which I've
overlooked something obvious.

> Is this a homework problem?

No, absolutely not. It is motivated by some algorithms I've had to do here
at Ericsson/Sony Ericsson. Besides, my only class this semester is
"Coding and Modulation" and we wouldn't have something like this there.

> I'll bet there's a pat answer.

Maybe not!?!
--
Randy Yates
Sony Ericsson Mobile Communications
Research Triangle Park, NC, USA
randy.yates@sonyericsson.com, 919-472-1124
```