Optimal sampling
Started by ●December 26, 2008

If one has a noisy signal

s(t) = A . sin( w.t ) . exp( -k.t ) + noise(t)

and wants to estimate the unknowns A, w and k, then is there any way to define the number N of samples and the times [t0 t1 ... tN] that minimize the variance of the estimates? I'm really interested in the big picture of how to pose the problem as an optimization rather than a specific answer to this special case. We can assume we have reasonable initial estimates for A, w and k and that the noise is Gaussian.
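A quick way to experiment with the question is to simulate the model directly. The sketch below (plain Python; the parameter values and noise level are arbitrary choices for illustration, not from the thread) draws N noisy samples at chosen times:

```python
import math
import random

def model(t, A=1.0, w=2.0 * math.pi * 5.0, k=3.0):
    """Noise-free model: s(t) = A * sin(w*t) * exp(-k*t)."""
    return A * math.sin(w * t) * math.exp(-k * t)

def sample(times, sigma=0.05, seed=0):
    """Noisy measurements s(t) + noise(t), with noise ~ N(0, sigma^2)."""
    rng = random.Random(seed)
    return [model(t) + rng.gauss(0.0, sigma) for t in times]

# N = 100 uniform samples spanning the first few decay constants
N, k = 100, 3.0
times = [5.0 / k * i / (N - 1) for i in range(N)]
samples = sample(times)
```

Any candidate sampling schedule can then be scored by fitting the model to such simulated data many times and looking at the spread of the estimates.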
Reply by ●December 27, 2008
spasmous wrote:
> If one has a noisy signal
>
> s(t) = A . sin( w.t ) . exp( -k.t ) + noise(t)
>
> and wants to estimate the unknowns A, w and k, then is there any way
> to define the number N of samples and the times [t0 t1 ... tN] that
> minimize the variance of the estimates? I'm really interested in the
> big picture of how to pose the problem as an optimization rather than
> a specific answer to this special case. We can assume we have
> reasonable initial estimates for A, w and k and the noise is Gaussian
> random.

There must be more constraints. Otherwise, the more, the merrier.

Jerry
--
Engineering is the art of making what you want from things you can get.
Reply by ●December 27, 2008
On Fri, 26 Dec 2008 19:40:07 -0800, spasmous wrote:
> If one has a noisy signal
>
> s(t) = A . sin( w.t ) . exp( -k.t ) + noise(t)
>
> and wants to estimate the unknowns A, w and k, then is there any way to
> define the number N of samples and the times [t0 t1 ... tN] that
> minimize the variance of the estimates? I'm really interested in the big
> picture of how to pose the problem as an optimization rather than a
> specific answer to this special case. We can assume we have reasonable
> initial estimates for A, w and k and the noise is Gaussian random.

As Jerry said, without adding further constraints the more information you have the better you can do your estimation. Because your 'desired' signal is decaying exponentially, the value that you can squeeze out of later samples decays exponentially as well, but because an exponential decay never goes to zero you'll never have a sample that -- in a purist sense -- you can ignore.

A better way to frame the question may be "at what point should I truncate the series", but for that to be framed as an optimization question you need to find a way to assign a cost to the length of the series that you collect.

--
Tim Wescott
Control systems and communications consulting
http://www.wescottdesign.com

Need to learn how to apply control theory in your embedded system?
"Applied Control Theory for Embedded Systems" by Tim Wescott
Elsevier/Newnes, http://www.wescottdesign.com/actfes/actfes.html
Reply by ●December 27, 2008
On Dec 27, 4:18 pm, Tim Wescott <t...@seemywebsite.com> wrote:
> As Jerry said, without adding further constraints the more information
> you have the better you can do your estimation. Because your 'desired'
> signal is decaying exponentially the value that you can squeeze out of
> later samples decays exponentially as well, but because an exponential
> decay never goes to zero you'll never have a sample that -- in a purist
> sense -- you can ignore.

I agree with the above except that, since the OP already knows the form of the signal, and is only trying to estimate the unknown coefficients, it is not necessary to continue sampling forever.

As others have mentioned, without any other constraints there is no optimization problem (assuming an ideal ADC).

One possible constraint that could be imposed is a fixed total number of samples (the timing of which could be anywhere). Since the SNR decays as time increases (due to the negative exponential term - assuming k is positive!), it would seem the samples should be concentrated at small values of t. Whether they should be regularly or irregularly sampled, and what the sample spacing should be, makes an interesting optimization problem (for which I do not know the solution). However, it does have some similarities with the problem of optimizing pilot positions for channel estimation in OFDM.

-T
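One concrete way to pose the fixed-N placement question above as an optimization (this is a sketch of the standard D-optimal design idea, not something from the thread, and the parameter values are assumed) is to score a candidate set of sample times by the determinant of the Fisher information matrix for (A, w, k), then search over schedules:

```python
import math

def model(t, A, w, k):
    return A * math.sin(w * t) * math.exp(-k * t)

def grad(t, theta, h=1e-6):
    """Central-difference gradient of model w.r.t. theta = (A, w, k)."""
    g = []
    for i in range(3):
        up = list(theta); up[i] += h
        dn = list(theta); dn[i] -= h
        g.append((model(t, *up) - model(t, *dn)) / (2.0 * h))
    return g

def fisher(times, theta, sigma):
    """J = (1/sigma^2) * sum over t of grad(t) grad(t)^T (3x3)."""
    J = [[0.0] * 3 for _ in range(3)]
    for t in times:
        g = grad(t, theta)
        for i in range(3):
            for j in range(3):
                J[i][j] += g[i] * g[j] / sigma ** 2
    return J

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

# Rough initial estimates (which the OP says are available)
theta0 = (1.0, 2.0 * math.pi * 5.0, 3.0)
sigma, N, T = 0.05, 30, 1.0
uniform = [T * i / (N - 1) for i in range(N)]
frontloaded = [T * (i / (N - 1)) ** 2 for i in range(N)]  # denser near t = 0
score_u = det3(fisher(uniform, theta0, sigma))
score_f = det3(fisher(frontloaded, theta0, sigma))
```

Whichever schedule gives the larger determinant is the better one under this criterion; which one wins depends on the parameters, and the point here is only that the objective is computable.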
Reply by ●December 27, 2008
On Dec 26, 10:40 pm, spasmous <spasm...@gmail.com> wrote:
> If one has a noisy signal
>
> s(t) = A . sin( w.t ) . exp( -k.t ) + noise(t)
>
> and wants to estimate the unknowns A, w and k, then is there any way
> to define the number N of samples and the times [t0 t1 ... tN] that
> minimize the variance of the estimates? I'm really interested in the
> big picture of how to pose the problem as an optimization rather than
> a specific answer to this special case. We can assume we have
> reasonable initial estimates for A, w and k and the noise is Gaussian
> random.

Did the results of a search on "estimation of damped sinusoid parameters" turn up anything useful? The topic has been covered here before; see for example:

http://www.dsprelated.com/showmessage/57062/1.php

John
Reply by ●December 27, 2008
On Dec 27, 2:50 am, Tom <tom.der...@gmail.com> wrote:
> I agree with the above except that, since the OP already knows the
> form of the signal, and is only trying to estimate the unknown
> coefficients, it is not necessary to continue sampling forever.
>
> As others have mentioned, without any other constraints there is no
> optimization problem (assuming an ideal ADC).
>
> One possible constraint that could be imposed is a fixed total
> number of samples (the timing of which could be anywhere). Since the
> SNR decays as time increases (due to the negative exponential term -
> assuming k is positive!), it would seem the samples should be
> concentrated at small values of t. Whether they should be regularly or
> irregularly sampled, and what the sample spacing should be, makes an
> interesting optimization problem (for which I do not know the
> solution). However, it does have some similarities with the problem of
> optimizing pilot positions for channel estimation in OFDM.
>
> -T

This is the same as "sine-wave curve-fitting", with the addition of the "exp( -k.t )" term. You might search the literature on this and see if the known solutions can be modified to accommodate the exponential decay term.

Probably an FFT of a zero-padded input (to get increased frequency resolution) could be used as a starting point for the estimation; if you get close enough on the initial guess then the error surface may be locally free of false minima and therefore searchable by the normal gradient methods.

Since multiplication in the time domain is convolution in the frequency domain, the "exp( -k.t )" term will spread the spectral stick in the frequency domain, so you will want to zero-pad the input as much as possible so that you can pick the peak bin to get a good initial frequency estimate. Hopefully the noise(t) signal is not so large that it could create a larger peak than the signal itself!

Bob Adams
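The zero-padding recipe above can be sketched as follows (pure Python with a direct DFT so it stays self-contained; the parameter values are illustrative choices, and in practice an FFT would replace the direct sum). The peak bin of the padded spectrum gives the coarse frequency estimate used to start the gradient search:

```python
import cmath
import math

A, f0, k = 1.0, 5.0, 1.0          # "true" values (unknown in practice)
fs, N, M = 50.0, 64, 512          # sample rate, record length, padded length
x = [A * math.sin(2.0 * math.pi * f0 * n / fs) * math.exp(-k * n / fs)
     for n in range(N)]
x += [0.0] * (M - N)              # zero-pad: bin spacing shrinks to fs/M

def dft_mag(x, m):
    """|X[m]| by direct summation (an FFT does the same job faster)."""
    return abs(sum(x[n] * cmath.exp(-2j * math.pi * m * n / len(x))
                   for n in range(len(x))))

# Pick the peak over the positive-frequency bins
peak = max(range(1, M // 2), key=lambda m: dft_mag(x, m))
f_est = peak * fs / M             # initial estimate of w / (2*pi)
```

As the post notes, the damping smears the peak into a broad lobe, so the peak bin is only a starting point; the fine estimate comes from the subsequent curve fit.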
Reply by ●December 27, 2008
"spasmous" <spasmous@gmail.com> wrote in message news:87fefef8-4db8-4bad-9d5c-61a402e2aeb4@v5g2000prm.googlegroups.com...
> If one has a noisy signal
>
> s(t) = A . sin( w.t ) . exp( -k.t ) + noise(t)
>
> and wants to estimate the unknowns A, w and k, then is there any way
> to define the number N of samples and the times [t0 t1 ... tN] that
> minimize the variance of the estimates?

1. There should be one more variable - initial phase.

2. Obviously, the higher N is, the better the estimate.

3. If N is fixed, then the optimal sampling locations are going to be at or near the peaks and the zero crossings of the first few periods of the wave. The exact locations depend on the expected w, k and phase.

Vladimir Vassilevsky
DSP and Mixed Signal Consultant
www.abvolt.com
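Point 3 is easy to turn into candidate sample times once rough estimates of w and k are in hand. For the zero-phase model in the question, setting s'(t) = 0 gives tan(w t) = w/k, so the extrema and zero crossings fall on a regular grid (a sketch; the w and k values here are assumed initial estimates):

```python
import math

w, k = 2.0 * math.pi * 5.0, 3.0   # assumed initial estimates, zero phase

def peak_times(n):
    """First n extrema of sin(w*t)*exp(-k*t): w*t = atan(w/k) + i*pi."""
    return [(math.atan(w / k) + i * math.pi) / w for i in range(n)]

def zero_crossing_times(n):
    """First n zero crossings: w*t = i*pi."""
    return [i * math.pi / w for i in range(n)]

candidates = sorted(peak_times(4) + zero_crossing_times(4))
```

With a nonzero initial phase the same formulas apply with w*t shifted by the phase estimate.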
Reply by ●December 27, 2008
Tom wrote:
> ... Since the
> SNR decays as time increases (due to the negative exponential term -
> assuming k is positive!), it would seem the samples should be
> concentrated at small values of t. ...

Taking all the samples when t is small makes it hard to estimate the damping factor.

Jerry
--
Engineering is the art of making what you want from things you can get.
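This objection can be quantified: the model's sensitivity to k is ds/dk = -t * A * sin(w*t) * exp(-k*t), whose envelope t * exp(-k*t) is zero at t = 0 and peaks at t = 1/k. So the most informative samples for the damping factor sit around one decay constant out, not at the very start. A tiny check (the k value is assumed):

```python
import math

k = 3.0

def k_sensitivity_envelope(t):
    """Envelope of |ds/dk| for s = A*sin(w*t)*exp(-k*t), with A = 1."""
    return t * math.exp(-k * t)

# Setting the envelope's derivative to zero gives the maximum at t = 1/k
t_best = 1.0 / k
```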
Reply by ●December 27, 2008
On 27 Dec, 04:40, spasmous <spasm...@gmail.com> wrote:
> If one has a noisy signal
>
> s(t) = A . sin( w.t ) . exp( -k.t ) + noise(t)
>
> and wants to estimate the unknowns A, w and k, then is there any way
> to define the number N of samples and the times [t0 t1 ... tN] that
> minimize the variance of the estimates?

This is a parameter estimation problem where the properties of the estimates are governed by the Cramer-Rao Bound (CRB), if this exists. The CRB usually expresses the variance of the estimate in terms of SNR and the number of samples, N.

Since one never knows the parameters before doing the measurement (if one did, there would be no point in doing the measurements), one uses whatever prior intuitions, estimates or guesswork one has as guides, and compares the end results with the CRB. If one did a good job, the variances of the estimates are close to the CRB; if one missed, the variance is larger.

Rune
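For this model the CRB can be evaluated directly from the initial estimates: stack the analytic gradients of s(t) with respect to (A, w, k), form the Fisher information matrix J = (1/sigma^2) * sum of grad * grad^T, and invert it; the diagonal of J^-1 lower-bounds the estimator variances. A self-contained sketch (the parameter values, noise level, and sampling grid are assumed for illustration):

```python
import math

def grad_s(t, A, w, k):
    """Gradient of s = A*sin(w*t)*exp(-k*t) w.r.t. (A, w, k)."""
    e, s, c = math.exp(-k * t), math.sin(w * t), math.cos(w * t)
    return [s * e, A * t * c * e, -A * t * s * e]

def crb(times, A, w, k, sigma):
    """Diagonal of J^-1 for J = (1/sigma^2) * sum grad grad^T (3x3)."""
    J = [[0.0] * 3 for _ in range(3)]
    for t in times:
        g = grad_s(t, A, w, k)
        for i in range(3):
            for j in range(3):
                J[i][j] += g[i] * g[j] / sigma ** 2
    # Invert the symmetric 3x3 matrix via its (signed) cofactors
    C = [[J[(i + 1) % 3][(j + 1) % 3] * J[(i + 2) % 3][(j + 2) % 3]
        - J[(i + 1) % 3][(j + 2) % 3] * J[(i + 2) % 3][(j + 1) % 3]
          for j in range(3)] for i in range(3)]
    det = sum(J[0][j] * C[0][j] for j in range(3))
    return [C[i][i] / det for i in range(3)]  # bounds on var(A), var(w), var(k)

times = [i / 49.0 for i in range(50)]         # 50 samples over 1 second
bounds = crb(times, A=1.0, w=2.0 * math.pi * 5.0, k=3.0, sigma=0.05)
```

Minimizing these bounds over the choice of `times` (for fixed N) is exactly the optimization the OP asked how to pose.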
Reply by ●December 28, 2008
On Dec 27, 11:19 am, Rune Allnor <all...@tele.ntnu.no> wrote:
> This is a parameter estimation problem where the properties of
> the estimates are governed by the Cramer-Rao Bound (CRB), if this
> exists. The CRB usually expresses the variance of the estimate
> in terms of SNR and number of samples, N.
>
> Since one never knows the parameters before doing the measurement
> (if one did, there would be no point in doing the measurements), one
> uses whatever prior intuitions, estimates or guesswork as guides,
> and compares the end results with the CRB. If one did a good job,
> the variance of the estimates are close to the CRB; if one missed,
> the variance is larger.
>
> Rune

To add to Rune's excellent comments: also be careful with how "SNR" is defined - whether it refers to a per-sample noise variance or a bandwidth-normalized noise variance. You have the latter case, so increasing your sampling rate will increase your per-sample noise variance. This is not necessarily a bad thing, but you do have to be more careful with your analysis.

Julius
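The distinction can be illustrated in a couple of lines. With a flat (bandwidth-normalized) noise spectral density N0, an ideal sampler at rate fs sees a per-sample noise variance of N0 * fs / 2, so doubling fs doubles the per-sample variance - but it also doubles the number of samples collected per second, and the two effects cancel in aggregate (the N0 value and units here are assumed):

```python
N0 = 1e-3  # one-sided noise power spectral density (assumed: V^2 per Hz)

def per_sample_variance(fs):
    """Noise variance per sample for an ideal sampler of bandwidth fs/2."""
    return N0 * fs / 2.0

def samples_per_variance(fs, duration=1.0):
    """Samples collected per unit of per-sample noise variance."""
    return fs * duration / per_sample_variance(fs)

# Doubling fs doubles the per-sample variance but leaves this ratio fixed
```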






