
simultaneous frequency and phase estimation

Started by Michal Kvasnicka April 21, 2004
christensen@nospam.ieee.org (Mads G. Christensen) wrote in message news:<wkyisfsfm53.fsf@zil.kom.auc.dk>...
> Hi Rune.
>
> Rune> A more interesting observation is that papers that describe
> Rune> these types of methods applied to real-world data are simply
> Rune> missing from the journals. I have yet to see any article
> Rune> describing any frequency estimator being applied to real data in
> Rune> real environments in "production type" applications. Heck, I
> Rune> don't even know of any published real-world lab tests (as
> Rune> opposed to computer-generated synthetic data) with these types
> Rune> of techniques.
>
> Rune> I find this deafening silence with respect to practical
> Rune> applications and real-world results quite worrying.
>
> Well, ESPRIT has been used (and you could very well use MUSIC, I
> believe) for sinusoidal modeling and coding of speech and audio, and
> there are papers on this in IEEE Trans. on Speech and Audio
> Processing. Since different applications have different journals, no
> wonder that one does not encounter these in the more theoretical
> journals such as IEEE Trans. on Signal Processing. They don't belong
> there. From your post I take it that these methods are not applied in
> your field. That doesn't mean that there aren't applications, though.
"Have been used" is a vague term. I am interested in seeing these types of methods being used in a running environment where there are requirements to fail-safe processing and robust performance. I ran a search on www.ieee.org/ieeexplore, ('music algorithm' <or> 'esprit algorithm') <and> (application <or> applied) and found a total of 55 articles, of which prehaps five treated these methods applied to real-world problems or data. Assuming MUSIC has been around for 25 years and ESPRIT for 15, five articles describing applications is not a lot. That's one article every five years or so. I have used frequency estimators to solve practical problems, although I have not published any of my results in the journals. During my work with the frequency estimators, I found that there are several reasons why they would not work, there are several problems to get them to work, and one must be very, very careful in interpreting and using the results. The problem is that I see no indications of the authors of research articles and textbooks being aware of the same problems and subtleties that I learned of in the hard way. Rune
Hi Rune.

Rune> "Have been used" is a vague term. I am interested in seeing
Rune> these types of methods being used in a running environment where
Rune> there are requirements to fail-safe processing and robust
Rune> performance.

Sure, they have been used, and papers on these applications exist. You
were saying that you couldn't find any papers documenting
applications. By applications I thought you meant applied as in
used. I don't understand why you find this vague.

That the papers may not include the kind of evaluation you are looking
for is probably due to other reasons such as the complexity of the
experiments. At least that is the case in speech and audio
processing where the perceived quality (determined by listening tests)
is usually the measure of performance. 

Rune> I ran a search on www.ieee.org/ieeexplore,
Rune> ('music algorithm' <or> 'esprit algorithm') <and> (application
Rune> <or> applied)
Rune> and found a total of 55 articles, of which perhaps five treated
Rune> these methods applied to real-world problems or data. Assuming
Rune> MUSIC has been around for 25 years and ESPRIT for 15, five
Rune> articles describing applications is not a lot. That's one
Rune> article every five years or so.

For audio coding applications (my field), for example, the
computational complexity of these algorithms compared to the
alternatives may be prohibitive. The difficulties in incorporating
perception may be another reason for not using them (although it has
been done for ESPRIT). Anyways, my point is just that there may be
many reasons for the lack of papers.

I think that in many areas, the subspace methods are still considered
kind of exotic.

MUSIC also seems to be fairly popular in estimation problems in
communication. At least I know people who use it in such
applications. Again, I think that computational complexity is the
reason that there haven't been many papers in the past on applications
in that area. 

Rune> I have used frequency estimators to solve practical problems,
Rune> although I have not published any of my results in the
Rune> journals. During my work with the frequency estimators, I found
Rune> that there are several reasons why they would not work, there
Rune> are several problems in getting them to work, and one must be
Rune> very, very careful in interpreting and using the results. The
Rune> problem is that I see no indications that the authors of research
Rune> articles and textbooks are aware of the same problems and
Rune> subtleties that I learned the hard way.

Are you thinking of, for example, spurious estimates or clustered
estimates for transient signals? What algorithm doesn't have such
problems? That the fundamental assumptions behind some solution may
turn out to be invalid in some applications is not specific to these
algorithms. I agree that you have to be careful in interpreting the
results, but that goes for any frequency estimator I know of. Anyways,
if your point is simply that you have to know what you are doing and
that these algorithms can be cumbersome, then I agree.

My guess is that the reason that you don't see many papers concerning
the problems in applications is that tractable experiments showing
these kinds of problems are not very easily made and evaluated in a
formal way. Besides, who publishes negative results (although negative
results may prevent others from making the same mistakes)? :-)

-- 
/Mads (http://kom.aau.dk/~mgc)

Michal Kvasnicka wrote:
> Hi Paul,
>
> I am looking for adaptive high-resolution methods. FFT is standard but not
> always suitable for this type of problem.
>
> Michal (ERA)
> "Paul Howland" <howland@wanadoo.nl> wrote in message
Sounds like an Eastern European language. From dsp, radar, and matlab I infer that you are perhaps not dealing with audio technology, which is close to my field of knowledge, but possibly with spread spectrum experiments. In that case, temporal structure plays a role, too. So you might nevertheless be interested in my 'natural spectrogram: no arbitrary window, no trade-off', even if FCT does not provide phase at all.
Very nice...
Could you be more specific and less talkative? What exactly is "natural
spectrogram"?

Michal
"Eckard Blumschein" <blumschein@et.uni-magdeburg.de> p&#4294967295;se v diskusn&#4294967295;m
pr&#4294967295;spevku news:4087E522.8040106@et.uni-magdeburg.de...
christensen@nospam.ieee.org (Mads G. Christensen) wrote in message news:<wkyzn94dq5s.fsf@zil.kom.auc.dk>...
> Hi Rune.
> Sure, they have been used, and papers on these applications exist. You
> were saying that you couldn't find any papers documenting
> applications. By applications I thought you meant applied as in
> used. I don't understand why you find this vague.
There are several meanings here. "I have implemented MUSIC as a homework computer assignment in class" is one, "I have designed a special variation of MUSIC that takes advantage of some exotic condition hardly ever found in practice" is another, "I tested MUSIC with synthetic signals with different SNRs" is yet another. The use I don't see is as in "I used MUSIC in an unsupervised analysis system, applied to real-world measurements in production applications".
> That the papers may not include the kind of evaluation you are looking
> for is probably due to other reasons such as the complexity of the
> experiments.
Sure. That's why I haven't published anything. I find it not worth the while to squeeze all relevant background, examples, analysis etc. into 10 or 12 pages, which usually is the budget of an article.

> For audio coding applications (my field), for example, the
> computational complexity of these algorithms compared to the
> alternatives may be prohibitive. The difficulties in incorporating
> perception may be another reason for not using them (although it has
> been done for ESPRIT). Anyways, my point is just that there may be
> many reasons for the lack of papers.
My point exactly. I found several problematic aspects of these types of methods when I worked with them. The authors of journal articles appear not to be aware of the problems.
> I think that in many areas, the subspace methods are still considered
> kind of exotic.
That's because they are. Unfortunately, one has to be very familiar with both linear algebra and vector spaces to work with these types of methods. Otherwise, they fail. Big time. That happens even with people who do know linear algebra and vector spaces...
> MUSIC also seems to be fairly popular in estimation problems in
> communication. At least I know people who use it in such
> applications. Again, I think that computational complexity is the
> reason that there haven't been many papers in the past on applications
> in that area.
Believe me, numerical complexity is the least problem with MUSIC.
> Are you thinking of, for example, spurious estimates or clustered
> estimates for transient signals? What algorithm doesn't have such
> problems? That the fundamental assumptions behind some solution may
> turn out to be invalid in some applications is not specific to these
> algorithms.
Perhaps not specific, but such assumptions are crucial in getting half-relevant results. Try, for instance, to implement MUSIC with a signal covariance matrix of order 8. Then generate a synthetic signal, SNR of your choosing, that comprises 10 complex exponentials at arbitrary frequencies. How does MUSIC perform? What indication do you see that MUSIC has broken down? What will the cost of such a break-down be in a real-world application?
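A minimal numpy sketch of this experiment (the unit amplitudes, the roughly 10 dB SNR, the forward-only sample covariance, and the assumed signal order p = 6 are all illustrative choices left open above):

    import numpy as np

    # 10 complex exponentials, but a covariance matrix of only order M = 8.
    # Amplitudes, SNR and the assumed model order p are assumptions.
    rng = np.random.default_rng(0)
    N, M, K, p = 1024, 8, 10, 6        # samples, cov. order, true/assumed tones
    freqs = np.sort(rng.uniform(-0.5, 0.5, K))
    n = np.arange(N)
    x = sum(np.exp(2j*np.pi*(f*n + rng.uniform())) for f in freqs)
    x += np.sqrt(K/20) * (rng.standard_normal(N) + 1j*rng.standard_normal(N))

    # Order-M sample covariance from sliding snapshots.
    X = np.lib.stride_tricks.sliding_window_view(x, M)
    R = X.conj().T @ X / X.shape[0]

    # MUSIC pseudospectrum from the presumed noise subspace. With 10 tones
    # and only 8 dimensions there is no true noise subspace left, so any
    # peaks that appear are artifacts.
    w, V = np.linalg.eigh(R)           # eigenvalues in ascending order
    En = V[:, :M-p]
    grid = np.linspace(-0.5, 0.5, 4001)
    A = np.exp(2j*np.pi*np.outer(np.arange(M), grid))
    P = 1.0 / np.sum(np.abs(En.conj().T @ A)**2, axis=0)

    # Tell-tale of breakdown: no clear gap in the eigenvalue profile, and
    # the pseudospectrum peaks need not line up with the true frequencies.
    print("eigenvalues:", np.round(w[::-1], 1))
    print("true freqs :", np.round(freqs, 3))

Nothing in the output announces that the model order was wrong; the pseudospectrum still produces confident-looking peaks, which is presumably the cost Rune is alluding to.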
> I agree that you have to be careful in interpreting the
> results, but that goes for any frequency estimator I know of.
Of course. I mention MUSIC since that's the one everybody knows.
> Anyways, if your point is simply that you have to know what you are
> doing and that these algorithms can be cumbersome, then I agree.
I would go a bit further. These algorithms are so cumbersome that one should not expect them to work at all. Now, that applies to a certain extent to class-room (or lab) exercises with synthetic data, but the huge problems appear when you take the methods out of the computer lab and try to use them with real data.
> My guess is that the reason that you don't see many papers concerning
> the problems in applications is that tractable experiments showing
> these kinds of problems are not very easily made and evaluated in a
> formal way. Besides, who publishes negative results (although negative
> results may prevent others from making the same mistakes)? :-)
Exactly. So we agree one should expect some sort of bias in the literature. It appears we agree on most points.

Rune
Michal Kvasnicka wrote:
> Very nice...
> Could you be more specific and less talkative? What exactly is "natural
> spectrogram"?
Traditional spectrograms are based on the complex-valued Fourier transform, but they do not directly and completely represent phase information. They omit phase because the ears are believed to ignore phase anyway, which is not exactly true. The usual spectrograms exhibit further serious flaws and are not a proper basis for a realistic description of cochlear function.

There is only one natural spectrogram. It is based on the real-valued Fourier cosine transform, which does not provide magnitude vs. phase but amplitude as a function of frequency and time instead. So the information is conveyed without any loss, and in a physiologically realistic manner. Amplitudes alternate, so they may be rectified. The natural spectrogram shows largely the same output as does the basilar membrane.

While such physiological details are not of interest here, I would like to stress a fundamental peculiarity. Use of FCT implies the possibility to choose a sliding point of time reference. If one intends to avoid permanent relocation of the time window for real-time analysis, then there is no alternative but to use this admittedly uncommon notion of elapsed time instead of usual time. Actually, elapsed time is always zero at the border towards the future. The future does not matter in physics. You are certainly aware that it is, strictly speaking, physical nonsense to integrate over time from minus infinity to plus infinity. Causality limits relevant time to the past. So Heaviside's trick of adding mutually cancelling even and odd mirror pictures into the empty half plane is a bit self-deceptive. It necessarily causes Hermitian redundancy. Negative values of frequency or wave number are due to the neglect of half of the rotating phasors in the complex plane and do not correspond to any physical meaning.

Real-valued frequency analysis is furthermore natural in that it avoids a lot of arbitrariness. Already the choice of time reference t=0 at the natural border towards the past is the only available natural one, while the complex FT suffers from linear phase and is based on the arbitrary exclusion of either the clockwise or the anti-clockwise rotating phasors. One arbitrariness demands another. In the end, users of a spectrogram have to choose shape, width, repetition rate, or overlap, respectively, of the temporal window. FT is even to blame for the notorious dilemma between spectral and temporal resolution. Not so with FCT.

Do not worry about FCT mathematics. DCT works well in MP3. FCT may be combined with subsequent complex calculus in order to completely replace FT, if necessary. Of course, FCT is not a tool for deterministically predicting a tree of responses, but it is adequate to the logically inversely structured task of signal analysis. FCT mathematics offers nice simplifications. Not just that the number of so-called singularity functions is considerably reduced; more importantly, the remaining ones constitute a naturally ordered system, and integration constants are no longer necessary. No information is lost with inversion of integration.

Admittedly, I am facing trouble with those who have to accept some amendments. I do not preach against negative values of frequency or wave number. Constituting unavoidable redundancy, they are justified with FT. However, I would like to clearly point out that Hermitian symmetry is a must if the underlying quantity obeys unilaterality, as do elapsed time and radius. This limits the validity of the shift theorems. Quantum physicists meanwhile avoid discussing possible consequences.
Maybe they dislike the idea that elapsed time has a natural zero because they do not yet consider Hermitian PCT symmetries just due to complex analysis. Introducing the wave function immediately within the complex plane, they could not even tell me which of the two rotating phasors they neglect. If one tries to transfer the flow of time into the complex plane, one ends up at Feynman's silly suggestion that there is a forward and a backward time, simultaneously.

I disagree with experts of set theory on a flaw being at least as old as Buridan's famous donkey. In order to benefit from the absence of redundancy, FCT has to be defined not as a particular case of FT within IR but independently (and, as I would like to stress, equal in status) within *IR+. Mathematicians did not object when I said *IR+ contains IR. I feel in debt to Hermann Kremer, who pointed me to original work by Cantor and Hilbert. I realized that Cantor's diagonal method was only consistent with infinity up to an n-th 'Grad'. Unfortunately, I have to confess that not even hyperreal numbers already offer equality between open and closed intervals of the continuum. In other words, I suggest attributing, if necessary, two different values or signs to the same point. While it is a common but rather arbitrary habit to except the point t=0 from the sign function, I imagine past (-infty, 0) and future (0, infty) - not [0, infty) - to join each other without any neutral point 0 in between. Both past and future own zeros, each with the appropriate sign. The controversy might have serious consequences for mathematical tenets. Because my puzzle surprisingly comes out completely, I am convinced that I am correct. If not, it would not at all impair FCT and the natural spectrogram, because at the sliding origin t=0, theoretically infinitely many new values emerge each moment in permanent succession.

Using Matlab, I managed to demonstrate that the natural spectrogram actually reaches best similarity with the natural solution, and its resolution does not just outperform the uncertainty principle, as does audition too. Its resolution is in principle not subject to any limit. In all, you might be sure that the natural spectrogram is well founded.

Eckard Blumschein
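Eckard gives no formulas or code, so the following is only one possible reading of his description: at each analysis instant, transform the past samples over elapsed time (zero at "now", growing into the past) with a real cosine kernel, so that no window shape, width, or overlap has to be chosen. The function name and every parameter below are illustrative assumptions, not his method:

    import numpy as np

    # One possible reading of the "natural spectrogram" (an assumption,
    # not Eckard's code): a real-valued cosine transform of the PAST
    # samples, with the time reference sliding along with "now".
    def natural_spectrogram(x, freqs, fs, hop=128):
        rows = []
        for t in range(hop, len(x) + 1, hop):
            past = x[t-1::-1]              # signal as a function of elapsed time
            tau = np.arange(past.size) / fs
            rows.append(past @ np.cos(2*np.pi*np.outer(tau, freqs)))
        return np.array(rows)              # signed amplitudes, (time, freq)

    fs = 8000.0
    t = np.arange(int(fs)) / fs            # one second of test signal
    x = np.sin(2*np.pi*440*t) + 0.5*np.sin(2*np.pi*1000*t)
    S = natural_spectrogram(x, np.linspace(50.0, 2000.0, 100), fs)

The entries are signed amplitudes that alternate, as the post says, so they would have to be rectified for display; whether this matches what is meant by the natural spectrogram is a guess.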
Hi Rune.

Sorry about the late reply. I've been away since Thursday.


Rune> There are several meanings here. "I have implemented MUSIC as a
Rune> homework computer assignment in class" is one, "I have designed
Rune> a special variation of MUSIC that takes advantage of some exotic
Rune> condition hardly ever found in practice" is another, "I tested
Rune> MUSIC with synthetic signals with different SNRs" is yet
Rune> another. The use I don't see is as in "I used MUSIC in an
Rune> unsupervised analysis system, applied to real-world measurements
Rune> in production applications".

Okay, that's what I thought you meant. Audio and speech modeling and
coding are exactly such applications and I know that at least ESPRIT
works very well in such a setting. Maybe it is also important to point
out that in these applications, people are usually happy if some
modeling or coding error is minimized. Thus, the physical
interpretation of the parameters may not be all that important. I
guess that is a big difference from your applications and stuff like
DOA.  
 
Rune> Sure. That's why I haven't published anything. I find it not
Rune> worth the while to squeeze all relevant background, examples,
Rune> analysis etc. into 10 or 12 pages, which usually is the budget of
Rune> an article.

Complex experiments are also often very hard to describe in a nice,
concise way. I consider it a big problem in audio coding that the
experiments are typically so complex that there is no way you
can reproduce the results in a reasonable amount of time. Often people
leave out (or forget to mention) important details.

>> For audio coding applications (my field), for example, the
>> computational complexity of these algorithms compared to the
>> alternatives may be prohibitive. The difficulties in incorporating
>> perception may be another reason for not using them (although it
>> has been done for ESPRIT). Anyways, my point is just that there may
>> be many reasons for the lack of papers.
Rune> My point exactly. I found several problematic aspects of these
Rune> types of methods when I worked with them. The authors of journal
Rune> articles appear not to be aware of the problems.

I don't know if I really see that as a huge problem. If an estimator can be shown to have nice properties like low variance and low bias in a synthetic setting, then I would say that we have good reasons to put an effort into making it work in an application also. But that can, as you point out, be difficult and usually takes a lot of expertise within the area of the application. What can be frustrating, though, is when the people working on the purely theoretical part of it don't get the problems and just keep writing paper after paper after paper on the subject.

Rune> That's because they are. Unfortunately, one has to be very
Rune> familiar with both linear algebra and vector spaces to work with
Rune> these types of methods. Otherwise, they fail. Big time. That
Rune> happens even with people who do know linear algebra and vector
Rune> spaces...

Yeah, sometimes I also get the feeling that people are happy about these methods because they are cool from a mathematical point of view. This also goes for subspace-based speech enhancement (denoising or whatever you want to call it). The mathematics look way cooler than the results sound ;)
>> MUSIC also seems to be fairly popular in estimation problems in
>> communication. At least I know people who use it in such
>> applications. Again, I think that computational complexity is the
>> reason that there haven't been many papers in the past on
>> applications in that area.
Rune> Believe me, numerical complexity is the least problem with
Rune> MUSIC.

I do believe you. But complexity may be a very good reason that people in digital communication haven't really looked into it until recently.

Rune> Perhaps not specific, but such assumptions are crucial in getting
Rune> half-relevant results. Try, for instance, to implement MUSIC
Rune> with a signal covariance matrix of order 8. Then generate a
Rune> synthetic signal, SNR of your choosing, that comprises 10
Rune> complex exponentials at arbitrary frequencies. How does MUSIC
Rune> perform? What indication do you see that MUSIC has broken down?
Rune> What will the cost of such a break-down be in a real-world
Rune> application?

Yeah. That is a big problem with MUSIC. Like I also said in a discussion we had earlier on, the problem of choosing the right orders of the covariance matrix and the subspaces is not easy. At least in my applications, you can never really know in advance how many sinusoids there are going to be. Generally, the art of making the right assumptions is the art of engineering, at least in estimation. As always, it's a compromise.

Rune> I would go a bit further. These algorithms are so cumbersome
Rune> that one should not expect them to work at all. Now, that
Rune> applies to a certain extent to class-room (or lab) exercises
Rune> with synthetic data, but the huge problems appear when you
Rune> take the methods out of the computer lab and try to use
Rune> them with real data.

That's a good point. At my university (Aalborg University in Denmark), teaching is group- and project-based, which means that usually people do end up working on real-life signals, so the students do get practical experience working with signal processing algorithms, for example. The downside is that we don't have as many lectures as other places do.
>> My guess is that the reason that you don't see many papers
>> concerning the problems in applications is that tractable
>> experiments showing these kinds of problems are not very easily
>> made and evaluated in a formal way. Besides, who publishes negative
>> results (although negative results may prevent others from making
>> the same mistakes)? :-)
Rune> Exactly. So we agree one should expect some sort of bias in the
Rune> literature. It appears we agree on most points.

Indeed.

-- 
/Mads (http://kom.aau.dk/~mgc)
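The order-selection problem Mads raises above (how many sinusoids, hence how large a signal subspace) does have textbook answers that the thread never names; one standard rule is the MDL criterion of Wax and Kailath, applied to the covariance eigenvalues. A sketch, with no claim that it cures the failure mode Rune describes:

    import numpy as np

    # Wax & Kailath's MDL estimate of the number of sources from the
    # eigenvalues of an M x M sample covariance built from n snapshots.
    # Note it can only ever answer 0..M-1, so it cannot rescue the
    # order-8 experiment with 10 tones discussed earlier in the thread.
    def mdl_order(eigvals, n_snapshots):
        lam = np.maximum(np.sort(np.asarray(eigvals).real)[::-1], 1e-12)
        M = lam.size
        best_k, best_score = 0, np.inf
        for k in range(M):                 # candidate source counts
            tail = lam[k:]
            ratio = np.exp(np.mean(np.log(tail))) / np.mean(tail)  # geo/arith mean
            score = (-n_snapshots*(M-k)*np.log(ratio)
                     + 0.5*k*(2*M-k)*np.log(n_snapshots))
            if score < best_score:
                best_k, best_score = k, score
        return best_k

Fed the eigenvalues of an under-dimensioned covariance, it will dutifully return some k between 0 and M-1, illustrating rather than curing the breakdown.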
Randy Yates wrote:
> A single sinusoid, well even I can estimate that.
> But simultaneously estimating multiple sinusoids of unknown frequencies???
Yes, this is not hard if you know the # of frequencies in advance, and not *that* hard even if you don't but can bound it to a reasonably small number (say, dozens). It's just parametric time series modeling. There's a wide literature on it, but I'm most familiar with the Bayesian literature. Bretthorst shows how to do it from this point of view in his book, which is available online:

http://bayes.wustl.edu/

(See the G. Larry Bretthorst section near the bottom.) If you email him he will probably even have free software for you to do it. I have implemented it in Python (as part of a NASA project developing software for analysis of astrophysical time series and other data) and will be releasing that implementation this summer.

In a nutshell---if the frequencies are well-separated, the periodogram (the power spectrum, thought of as a continuous function of frequency) is a sufficient statistic for estimating the frequencies (they are just the frequencies of the peaks). But for close frequencies, you have to do something more sophisticated with the DFT than just look at its magnitude; the refined procedure uses phase information and can reliably measure frequencies, amplitudes, and phases for sinusoids much closer than the Fourier frequencies, if only the S/N is high enough.

The approach is not limited to sinusoids; in fact, Bretthorst's main practical application has been to analyzing NMR data (exponentially damped sinusoids). In that case a fast algorithm is still possible by apodizing the data before the FFT. I actually got into this for developing methods for detecting extrasolar planets via doppler shifts in spectral lines; in our case we use a basis that isn't sinusoids, but instead the functions that describe the line-of-sight velocity of a Keplerian orbit. The data are not evenly sampled; this combined with the Keplerian shape means we can't use FFTs, but the data sets are small and the approach works fine doing the sums the "slow" way.
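Tom's "nutshell" for well-separated frequencies is easy to demonstrate. A minimal sketch (the test signal, the 16x zero-padding, and the crude peak picking are illustrative choices, not Bretthorst's software):

    import numpy as np

    # Periodogram frequency estimation: zero-padding approximates the
    # periodogram as a continuous function of frequency, whose peaks sit
    # at the sinusoid frequencies when they are well separated.
    fs, N = 1000.0, 512
    t = np.arange(N) / fs
    x = 2.0*np.sin(2*np.pi*123.4*t) + 1.0*np.sin(2*np.pi*287.9*t)
    x += 0.5*np.random.default_rng(1).standard_normal(N)

    pad = 16 * N                           # dense frequency grid
    P = np.abs(np.fft.rfft(x, pad))**2 / N
    f = np.fft.rfftfreq(pad, 1/fs)

    # Crude peak picking: the two largest local maxima.
    peaks = np.flatnonzero((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:])) + 1
    best = peaks[np.argsort(P[peaks])[-2:]]
    print(np.sort(f[best]))                # close to 123.4 and 287.9 Hz

For the close-frequency case Tom describes, this magnitude-only recipe is exactly what stops working, and the refined procedure using phase information is needed.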
> > "Have been used" is a vague term. I am interested in seeing these types > of methods being used in a running environment where there are > requirements to fail-safe processing and robust performance.
If you buy an NMR machine from Varian corporation (the kind used for chemical analysis, not medical imaging), it runs Bretthorst's algorithm for you. They hired him to write the software. It works incredibly well in such a real-world environment, enough so that it was even covered as part of a news item in *Science* a few years ago:

Malakoff, D. 1999, Bayes offers a `New' Way to Make Sense of Numbers, Science, 286, 1460--1464
http://www.sciencemag.org/cgi/content/full/286/5444/1460
(I don't know if this URL has limited access...)

If, however, the frequencies or phases are changing in time in a complicated way, you have to do more complicated things. Actually, Larry has worked on this (for modeling astrophysical sources) and has some impressive algorithms, but they are a lot more complicated than the "Bretthorst algorithm" I described above, which uses nothing other than the DFT.

I have a paper describing the Bretthorst algorithm implemented for audio testing (with sums of sinusoids) if any readers would like to see some examples of the method in that context. Email me.

Cheers,
Tom Loredo

-- 
To respond by email, replace "somewhere" with "astro" in the return address.
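The apodization trick Tom mentions for exponentially damped sinusoids can also be sketched in a few lines (the decay constant, frequency, and noise level are assumptions; weighting by a decay matched to the model maximizes the peak's signal-to-noise ratio before the FFT):

    import numpy as np

    # Exponentially damped sinusoid, NMR-style. Apodizing the data with a
    # decay matched to the assumed model before the FFT is the matched-
    # filter move that keeps the fast FFT-based algorithm applicable.
    fs, N = 1000.0, 2048
    t = np.arange(N) / fs
    x = np.cos(2*np.pi*200.0*t) * np.exp(-t/0.05)      # T2 = 50 ms (assumed)
    x += 0.05*np.random.default_rng(2).standard_normal(N)

    win = np.exp(-t/0.05)                              # apodization window
    P = np.abs(np.fft.rfft(x*win, 8*N))**2
    f = np.fft.rfftfreq(8*N, 1/fs)
    print(f[np.argmax(P)])                             # about 200 Hz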
Tom Loredo <loredo@somewhere.cornell.edu> wrote in message news:<40900A32.8B6DB7F2@somewhere.cornell.edu>...
> Randy Yates wrote:
> > A single sinusoid, well even I can estimate that.
> > But simultaneously estimating multiple sinusoids of unknown frequencies???
>
> Yes, this is not hard if you know the # of frequencies in advance, and
> not *that* hard even if you don't but can bound it to a reasonably
> small number (say, dozens). It's just parametric time series modeling.
> There's a wide literature on it, but I'm most familiar with the Bayesian
> literature. Bretthorst shows how to do it from this point of view in
> his book, which is available online:
>
> http://bayes.wustl.edu/
Thanks for posting the link to the book. It looks interesting. Yesterday I gave a lecture in a course we give here on statistical signal processing.

My main objection against parametric frequency estimators (or any other parametric model, for that matter) is that some "clairvoyance" is required from the designer/user. As you say, knowing the number of frequencies in advance is of great help. How often does it happen, in real life, that you know that?

In "my" seismic applications I have all the data available when I start the analysis. Because of that, I can do certain preliminary tests based on standard DFTs. Add some intuition and previous experience, and I can give a rough opinion about what waves and phenomena will be found in the data sets from a given area. However, I have discussed other types of applications of frequency estimators where one does not have this prior knowledge of what goes on, and where one does not have the opportunity to inspect and assess the data before embarking on an analysis using frequency estimators.

My gambling records, minuscule as they may be, clearly prove that I am not a clairvoyant person. I cannot tell beforehand how many narrow-band frequency lines, if any, will be present in the signal. For the most part, I can't even tell what model is best before testing models with known reference data. And I certainly don't know how to link up one particular parametric model with any given observation, unless I do several tests, evaluations and interpretations. And even then I may be plain wrong.

To make a short story long, I'll describe a project I was involved in. Many years ago, an oil company did a marine seismic survey where shear waves were recorded. The shear waves that had bounced off the gas reservoir deep in the sea floor were clearly visible in the recorded data. The problem was, where did those shear waves come from? The seismic source went off in the water column and could only generate pressure waves (P waves). The shear waves (S waves) had to be generated by P waves being converted somewhere. There were two competing hypotheses:

1) The S waves were generated when the P waves from the source hit the sea floor, then the S waves traveled down to the gas reservoir, bounced off, and the reflected S waves were recorded. This was known as the "PSS hypothesis".

2) The P wave from the source penetrated the sea bottom, traveled as P waves to the reservoir, got converted to S waves at the target, and the S waves propagated back up to the sea floor to be recorded. This was the "PPS hypothesis".

When dealing with sedimentary rocks, the rule of thumb is that the S wave velocity (Vs) is half the P wave velocity (Vp). Vp can be measured by studying certain types of waves in the seismic data. The next rule of thumb is that Vs in non-consolidated sediments is lower than in consolidated rocks. Given that Vp was measured to 1800 m/s, we apply the first rule of thumb to find that Vs would be 900 m/s in the top sediments, if they were to be regarded as consolidated rocks. Which we know they are not. The second rule of thumb is rather vague. How much lower should we choose the Vs to be for the non-consolidated sediments? Should we choose Vs=850 m/s? Vs=600 m/s? Vs=50 m/s? During the initial analyses, no one knew. Some expert chose to use 600 m/s, based on these rules of thumb and some gut feeling.
When they put these numbers into the equations, they found that the physical conditions for generating S waves in the data supported hypothesis 1, that S waves were generated at the sea floor. But they couldn't quite get the numbers to add up for the rest of the analysis.

At that time, I was involved in a student project at the university, on estimating Vs just below the sea floor. I got to test "my" method with the seismic data, and came up with estimates for Vs in the range 100 m/s - 300 m/s. Way lower than the initial "guesstimates" of 600 m/s. All of a sudden, the PSS hypothesis had to be rejected, and one had to concentrate on the PPS hypothesis. With the new numbers, everything started coming together.

Note that the expert who provided the initial estimates did everything right: He used what measurements were available to him, and he used the rules of thumb that are generally acknowledged. He just missed on the scale. He guessed that Vs in sediments is half or two thirds of Vs in consolidated rocks. With 20/20 hindsight, he should have guessed one fifth of the Vs of consolidated rocks. Had he done that, however, he would probably have faced severe criticism. Guessing numbers an order of magnitude lower than the 900 m/s he started from could probably not be defended without other means of justification. Which were not available until my results came one year later.

After that initial test, I worked with the company for five years, developing and implementing high resolution analysis methods both for frequency estimation and parametric descriptions of the sea bed. I saw, and learned the hard way, that everything depends on getting those initial assumptions and models right. Achieving that requires either meticulous preparations or some sort of clairvoyance, sometimes even both. If you miss at this very first stage, you very quickly find yourself in very big trouble.

So, in a very brief summary, I find it very hard to see how to use frequency estimators in situations where I cannot inspect and test the data beforehand, and then decide whether it's worth the effort to apply a frequency estimator.

Rune
Hi Tom,

Your post and web pages are very useful. Thanks a lot. This is just the
kind of thing I am looking for.

Regards,
Michal Kvasnicka
"Tom Loredo" <loredo@somewhere.cornell.edu> p&#4294967295;se v diskusn&#4294967295;m pr&#4294967295;spevku
news:40900A32.8B6DB7F2@somewhere.cornell.edu...
> Randy Yates wrote: > > > > A single sinusoid, well even I can estimate that. > > But simultaneously estimating multiple sinusoids of unknown
frequencies???
> > Yes, this is not hard if you know the # of frequencies in advance, and > not *that* hard even if you don't but can bound it to a reasonably > small number (say, dozens). It's just parametric time series modeling. > There's a wide literature on it, but I'm most familiar with the Bayesian > literature. Bretthorst shows how to do it from this point of view in > his book, which is available online: > > http://bayes.wustl.edu/ > > (See the G. Larry Bretthorst section near the bottom.) If you email > him he will probably even have free software for you to do it. I have > implemented it in Python (as part of a NASA project developing software > for analysis of astrophysical time series and other data) and will be > releasing that implementation this summer. > > In a nutshell---if the frequencies are well-separated, the periodogram > (the power spectrum, thought of as a continuous function of frequency) > is a suffucient statistic for estimating the frequencies (they are just > the frequencies of the peaks). But for close frequencies, you have > to do something more sophisticated with the DFT than just look at its > magnitude; the refined procedure uses phase information and can reliably > measure frequencies, amplitudes, and phases for sinusoids much closer > than the Fourier frequencies, if only the S/N is high enough. > > The approach is not limited to sinusoids; in fact, Bretthorst's main > practical application has been to analyzing NMR data (exponentially > damped sinusoids). In that case a fast algorithm is still possible > by apodizing the data before the FFT. I actually got into this for > developing methods for detecting extrasolar planets via doppler shifts > in spectral lines; in our case we > use a basis that isn't sinusoids, but instead the functions that > describe the line-of-sight velocity of a Keplerian orbit. The data > are not evenly sampled; this combined with the Keplerian shape means > we can't use FFTs, but the data sets are small and the approach works > fine doing the sums the "slow" way. > > Rune Allnor wrote: > > > > "Have been used" is a vague term. I am interested in seeing these types > > of methods being used in a running environment where there are > > requirements to fail-safe processing and robust performance. > > If you buy an NMR machine from Varian corporation (the kind used for > chemical analysis, not medical imaging), it runs Bretthorst's algorithm > for you. They hired him to write the software. It works incredibly well > in such a real-world environment, enough so that it was even covered as > part of a news item in *Science* a few years ago: > > Malakoff, D. 1999, Bayes offers a `New' Way to Make Sense > of Numbers, Science, 286, 1460--1464 > http://www.sciencemag.org/cgi/content/full/286/5444/1460 > (I don't know if this URL has limited access...) > > If, however, the frequencies or phases are changing in time in a > complicated way, you have to do more complicated things. Actually, > Larry has worked on this (for modeling astrophysical sources) and has > some impressive algorithms, but they are a lot more complicated than the > "Bretthorst algorithm" I described above, which uses nothing other > than the DFT. > > I have a paper describing the Bretthorst algorithm implemented for > audio testing (with sums of sinusoids) if any readers would like to see > some examples of the method in that context. Email me. > > Cheers, > Tom Loredo > > -- > > To respond by email, replace "somewhere" with "astro" in the > return address.