Hey guys,

I have been trying to figure out a similar problem to the one Rany posted just a while ago. Hope you guys can provide some feedback.

Basically I have a "template signal" s(t) and a recorded signal x(t). The recorded signal x(t) can be thought of as a truncated and shifted version of s(t) plus noise:

x(t) = s(t - C) + n(t)   for t1 < t < t2
x(t) = n(t)              else

where n(t) is zero-mean white noise. We can assume t1 is known. The problem is to find the best way to estimate C and t2.

%% What I have done

I have looked into the unbiased cross-correlation of x(t) and z(t), where z(t) is a windowed version of s(t):

R(tau) = E[x(t) z(t - tau)]

where

z(t) = s(t)   for t1 < t < theta1
z(t) = 0      else

In theory my estimate of t2 should then be the theta1 that yields the biggest correlation peak in R(tau) after dividing by the windowed template length:

t2_hat = argmax over theta1 of  max_tau R(tau) / (theta1 - t1)

and my estimate of C is simply the lag at which that peak occurs:

C_hat = argmax over tau of Ropt(tau)

(where Ropt(tau) is R(tau) computed with z(t) windowed to t1 < t < t2_hat).

I hope this is clear. This seems to work in my simulations so far, but only for a very specific range of t2, and I was wondering if there is any way to make it more robust. Any suggestions, feedback, or other ideas on how to approach this problem would be greatly appreciated.
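[Editor's sketch: as I understand the scheme described above, a discrete-time version might look like this. Times (t1, theta1, lags) are in samples, and all function and variable names here are illustrative, not from any real library.]

```python
import numpy as np

def estimate_c_t2(x, s, t1, theta_grid):
    """For each candidate window end theta1, correlate x against the
    template windowed to [t1, theta1), normalize the correlation peak
    by the window length, and keep the best theta1 and its peak lag."""
    best = (-np.inf, None, None)              # (score, theta1, lag)
    for theta1 in theta_grid:
        z = np.zeros_like(s)
        z[t1:theta1] = s[t1:theta1]           # windowed template z(t)
        r = np.correlate(x, z, mode="full")   # cross-correlation, all lags
        lag = int(r.argmax()) - (len(z) - 1)  # lag of the peak
        score = r.max() / (theta1 - t1)       # peak per unit window length
        if score > best[0]:
            best = (score, theta1, lag)
    _, t2_hat, c_hat = best
    return c_hat, t2_hat
```

One thing worth checking: with this windowing convention (z cut from s on [t1, theta1) while x contains s shifted by C), the winning theta1 tends to track t2 - C rather than t2 itself, which may be related to the "specific range of t2" issue mentioned above.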
Detecting a waveform at arbitrary positions (II)
Started by ●February 20, 2009
Reply by ●February 20, 2009
> I have been trying to figure out a similar problem to what Rany posted

Sorry, I meant to say Randy.
Reply by ●February 20, 2009
On Feb 20, 9:37 pm, Ikaro <ikarosi...@hotmail.com> wrote:

> Basically I have a "template signal" s(t) and a recorded signal x(t).
> Now the recorded signal x(t) can be thought of as a truncated and
> shifted version of s(t) plus noise:
>
> x(t) = s(t - C) + n(t)   for t1 < t < t2
> x(t) = n(t)              else
>
> Where n(t) is white zero mean noise. We can assume t1 is known. The
> problem is to find out the best way to estimate C and t2.
> [...]

Here is my take on it. You want to infer C and t2 given x, s, t1, and the variance of the noise (set this to 1 for now). That is, you are interested in the posterior probability distribution

P(C, t2 | x, s, t1) = P(x | s, t1, t2, C) P(C, t2 | s, t1) / Z

by Bayes' theorem, where Z is a normalizing constant. Assuming that C and t2 are independent of s and t1 (apart from the condition t2 >= t1) and taking a uniform prior distribution, we can calculate a MAP estimate (essentially maximum likelihood) by minimizing the exponent of the Gaussian distribution P(x | s, t1, t2, C):

\int_{-\infty}^{t1} x^{2}(t) + \int_{t1}^{t2} [x(t) - s(t - C)]^{2} + \int_{t2}^{\infty} x^{2}(t) .
Expanding out the square bracket shows that the only part of this that depends on t2 and C is

E(t2, C) = \int_{t1}^{t2} [s^{2}(t - C) - 2 x(t) s(t - C)] .

This is what needs minimizing. Since it is only a function of two real variables, t2 and C, you can compute it for some reasonable set of discretized values and find the minimum. This is similar to what you are suggesting, but I am not sure if it is the same. Note that maximizing \int_{t1}^{t2} x(t) s(t - C) alone is not quite right, as the first term in s^{2} also has to be taken into account.

Taking the derivatives with respect to t2 and C gives:

s(t2 - C) = 2 x(t2) ,

and

\int_{t1}^{t2} x(t) s'(t - C) = \int_{t1}^{t2} s(t - C) s'(t - C)
                              = (1/2)[s^{2}(t2 - C) - s^{2}(t1 - C)]
                              = (1/2)[4 x^{2}(t2) - s^{2}(t1 - C)] .

The first just says: as long as the integrand is negative, go on increasing t2. There may be several solutions, though, for which the values of E have to be compared, or no solutions, in which case t2 will lie at the end of the signal.

illywhacker;
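[Editor's sketch: the two-variable grid search suggested here is easy to write down in discrete time. This is a sample-index version of the integral; the function and variable names are mine, not from any library.]

```python
import numpy as np

def map_grid_search(x, s, t1, t2_grid, c_grid):
    """Evaluate E(t2, C) = sum_{t=t1}^{t2-1} [s(t-C)^2 - 2 x(t) s(t-C)]
    on a grid of (t2, C) pairs and return the pair minimizing it."""
    best = (np.inf, None, None)               # (E, t2, C)
    t = np.arange(t1, max(t2_grid))
    for c in c_grid:
        idx = t - c
        ok = (idx >= 0) & (idx < len(s))      # s is zero outside its support
        sz = np.where(ok, s[np.clip(idx, 0, len(s) - 1)], 0.0)
        run = np.cumsum(sz**2 - 2 * x[t] * sz)  # running value of E
        for t2 in t2_grid:
            e = run[t2 - t1 - 1]              # E accumulated over [t1, t2)
            if e < best[0]:
                best = (e, t2, c)
    _, t2_hat, c_hat = best
    return c_hat, t2_hat
```

In a noiseless test this recovers the true (C, t2) exactly: at the true shift, every wrong t2 either drops matched signal energy or accumulates unmatched template energy.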
Reply by ●February 20, 2009
On 20 Feb, 21:37, Ikaro <ikarosi...@hotmail.com> wrote:

> Basically I have a "template signal" s(t) and a recorded signal x(t).
> Now the recorded signal x(t) can be thought of as a truncated and
> shifted version of s(t) plus noise:
>
> x(t) = s(t - C) + n(t)   for t1 < t < t2
> x(t) = n(t)              else
>
> Where n(t) is white zero mean noise. We can assume t1 is known. The
> problem is to find out the best way to estimate C and t2.

What's the context of this problem? I'm not able to sort out what you attempt to do.

1) You displace the reference signal s by some unknown amount C *before* inserting it into x. So apparently you know that x will contain *some* section of s, but not which one.

2) The segment of s lasts for t2 - t1 seconds, meaning that you are looking for a segment

   s(tau),  C <= tau <= C + t2 - t1

These two issues mess things seriously up; remove those two constraints and the problem is at least at first glance not all that different from a regular matched filter problem. Which makes me ask if the statement of the problem might be somewhat off?

Rune
Reply by ●February 20, 2009
On Fri, 20 Feb 2009 14:47:27 -0800 (PST), Rune Allnor <allnor@tele.ntnu.no> wrote:

> These two issues mess things seriously up; remove those
> two constraints and the problem is at least at first glance
> not all that different from a regular matched filter problem.

That's my take on it as well. If that's really the case and s(t) has reasonable autocorrelation properties, then a matched filter via cross-correlation looks like a good candidate. That depends largely on the nature of s(t), though, the resolution required in estimating t2, etc., etc.

> Which makes me ask if the statement of the problem might
> be somewhat off?
>
> Rune

Eric Jacobsen
Minister of Algorithms
Abineau Communications
http://www.ericjacobsen.org
Blog: http://www.dsprelated.com/blogs-1/hf/Eric_Jacobsen.php
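[Editor's sketch: the plain matched-filter baseline being discussed is just the lag of the full-template cross-correlation peak. A minimal sample-index version, with illustrative names:]

```python
import numpy as np

def matched_filter_delay(x, s):
    """Delay estimate from a matched filter implemented as a
    cross-correlation of the recording x with the full template s.
    Ignores the truncation to [t1, t2] discussed in the thread."""
    r = np.correlate(x, s, mode="full")     # correlation at every lag
    return int(r.argmax()) - (len(s) - 1)   # lag of the peak
```

This works well when the whole template is present in x; the thread's difficulty is precisely that only an unknown-length piece of s appears.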
Reply by ●February 21, 2009
> > s(tau),  C <= tau <= C + t2 - t1
>
> These two issues mess things seriously up; remove those
> two constraints and the problem is at least at first glance
> not all that different from a regular matched filter problem.

That's exactly right. Unfortunately I can't remove these constraints.

But shouldn't the maximum amount of correlation (on average) be related to t2, assuming white noise? What I mean is that the peak of the cross-correlation between z(t) and x(t) should be largest when the windowed z(t) is between t1 and t2, and smaller for any other window size?
Reply by ●February 21, 2009
On Feb 21, 4:58 pm, Neu <ikarosi...@hotmail.com> wrote:

> But shouldn't (on average), the maximum amount of correlation be
> related to t2 (assuming white noise)?
> What I mean is that the peak of the cross-correlation between z(t) and
> x(t) should be largest when the windowed z(t) is between t1 and t2
> and less for any other window size?

Not quite. Imagine both t1 and t2 are fixed. The best estimate of C is the value that minimizes the squared error between s(t - C) and x(t) in the interval [t1, t2]. This is not when the cross-correlation is maximum, but when

\int_{t1}^{t2} [x(t) s(t - C) - (1/2) s^{2}(t - C)]

is maximum. They are not the same because the second term depends on C, which it would not do if the interval was from -\infty to \infty. So the value of theta that gives the largest peak in this quantity is what you want, and the position of the peak is then your estimate for C.

illywhacker;
Reply by ●February 21, 2009
On Feb 21, 8:03 pm, illywhacker <illywac...@gmail.com> wrote:

> Not quite. Imagine both t1 and t2 are fixed. The best estimate of C is
> the value that minimizes the squared error between s(t - C) and x(t)
> in the interval [t1, t2].

Sorry - this is poorly expressed. The probability of the signal must be maximized, i.e. the quantity from my first post must be minimized over theta2 and C:

\int_{-\infty}^{t1} x^{2}(t) + \int_{t1}^{theta2} [x(t) - s(t - C)]^{2} + \int_{theta2}^{\infty} x^{2}(t) .

But the terms in x^{2} drop out as independent of theta2 and C, leaving this to be maximized over theta2 and C:

\int_{t1}^{theta2} [x(t) s(t - C) - (1/2) s^{2}(t - C)] .

illywhacker;
Reply by ●February 25, 2009
> Not quite. Imagine both t1 and t2 are fixed. The best estimate of C is
> the value that minimizes the squared error between s(t - C) and x(t)
> in the interval [t1, t2]. This is not when the cross correlation is
> maximum, but when
>
> int_{t1}^{t2} [x(t)s(t - C) - (1/2) s^{2}(t - C)]
>
> is maximum. [...] And the position of the peak is then your estimate
> for C.

This is interesting. So the MAP estimator also yields the least-squares estimate (I guess that's related to the Gaussian assumption). And the equation

\int_{t1}^{t2} [x(t) s(t - C) - (1/2) s^{2}(t - C)]

seems to make sense if we think of the last term as a correction factor, because the "full version" of the signal s(t) might not be present in x(t).

I will see if I can implement a quick example and see how it compares with the pure correlation method.

Thanks!
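[Editor's sketch: a quick comparison along those lines might look like this. Noiseless synthetic data, search over C only with [t1, t2] held fixed; all names and values are illustrative.]

```python
import numpy as np

rng = np.random.default_rng(1)
s = rng.standard_normal(200)            # template with sharp autocorrelation
t1, t2, c_true = 50, 130, 20
x = np.zeros(300)
x[t1:t2] = s[t1 - c_true:t2 - c_true]   # truncated, shifted copy of s

# (a) pure correlation method: lag of the full-template correlation peak
r = np.correlate(x, s, mode="full")
c_corr = int(r.argmax()) - (len(s) - 1)

# (b) corrected criterion: int_{t1}^{t2} [x(t)s(t-C) - (1/2)s^2(t-C)]
def crit(c):
    sz = s[np.arange(t1, t2) - c]       # s(t - C) on the interval
    return np.sum(x[t1:t2] * sz - 0.5 * sz**2)

c_map = max(range(0, 41), key=crit)
print(c_corr, c_map)                    # compare the two estimates
```

With noise added, or with templates whose partial segments correlate strongly with other parts of s, the two estimates can diverge; that is where the -(1/2)s^2 term earns its keep.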
Reply by ●February 25, 2009
On Feb 25, 5:21 pm, Neu <ikarosi...@hotmail.com> wrote:

> This is interesting. So then the MAP estimator also yields the least
> square error estimate (i guess thats related to the gaussian
> assumption).

Exactly. Coupled with a uniform prior.

> And the equation:
>
> int_{t1}^{t2} [x(t)s(t - C) - (1/2) s^{2}(t - C)]
>
> Seems to make sense, if we think of the last term as some sort of
> correction factor because the "full version" of the signal s(t) might
> not be present in x(t).

Exactly.

> I will see if I can implement a quick example and see how it compares
> with the pure correlation method.
>
> Thanks!

No problem.

illywhacker;






