DSPRelated.com
Forums

tracking sound source

Started by Sylvia July 15, 2007
On 16 Jul, 04:52, dbd <d...@ieee.org> wrote:
> On Jul 15, 6:05 pm, Rune Allnor <all...@tele.ntnu.no> wrote:
>> On 16 Jul, 02:28, dbd <d...@ieee.org> wrote:
>>> Look in Journal of the Acoustic Society of America (JASA) for
>>> examples of acoustically tracking aircraft.
>>
>> The key word here is "aircraft." The other scenario where passive
>> acoustic tracking is relevant is underwater acoustics. JASA has a
>> history of letting weird underwater stuff through, so be very careful.
>>
>> If you find an article which cites the paper
>>
>> Collins & Kuperman: "Focalization: Environmental focusing and source
>> localization", JASA, vol 90, p 1410-1422, 1991.
>>
>> be very, very careful about how you proceed. The authors of that
>> paper claim that they are able to find the range to a source in an
>> "unknown or partially unknown" environment.
>>
>> It's been some time since I last read that paper, but I never
>> understood how the techniques worked. The way I interpreted the
>> article, they had done a computer simulation to generate "measured"
>> data and then run a computer program again to find the input
>> parameters which replicated the "measured" data. There were, as far
>> as I could tell, no records of blind -- let alone double blind --
>> tests or verifications, so I had no choice but to assume that the
>> same person who generated the "measured" data also set up the
>> "inverse" experiment. If that's correct, the paper is worthless, as
>> the "analyst" knows what results the experiment ought to produce, and
>> so can stop the computations when the results are "good".
>> Nevertheless, five years ago it was cited some 80 times by other
>> articles.
>>
>> So just be careful. JASA might look impressive, but not everything in
>> there is quite up to expectations.
>>
>> Rune
>
> When reading any journal (or newsgroup) it is always necessary to
> carefully consider claims made and methods followed. I am sorry to
> hear that this has led you to disappointment with an article in JASA.
Not only one article. For some reason, that particular article has had a huge impact over the last decade and a half. Underwater acoustics has forfeited its status as an engineering discipline because of the way this article (and a few others like it) have been embraced in the community. I understand you are familiar with MFP; the last good MFP article -- as far as I am concerned -- is the Hampson and Heitmeyer article of November '89. That article showed what everybody who has actually been to sea at least suspected; namely that the dynamics of the ocean are so great that they mess up virtually all aspects of high-precision measurements.
> I have had that happen more than once, too. Since I have never felt
> adequately motivated to study all of the intricate details of the
> myriad matched field processing algorithms that have been proposed, I
> have not read and do not intend to read this article in detail. From
> the abstract, introduction and conclusions it appears that Collins and
> Kuperman have proposed yet another MFP algorithm, listed their
> suggested approaches to implementation and claimed that it is feasible
> to try the algorithm on real data. They have made no performance
> claims.
JASA consistently lets articles on MFP through where no robustness tests, no real-life data processing, and no verification tests are done. Granted, I discontinued my JASA subscription some 5 years ago, so things might have changed since then, but during the 15 years between 1987 and 2002 JASA published on average on the order of 30 to 50 articles on MFP per year. I don't think as many as 10% of those (3-5 per year) treated real-life data.

The one article that really pi**ed me off was the article Soares, Siderius and Jesus: "Source localization in a time-varying ocean waveguide", Journ. Ac. Soc. Am., V 112 no 5, p 1879, 2002, describing an experiment where the acoustic parameters of the water column were measured continuously (to the extent that's practically possible) during a propagation experiment. The analysis showed that the measured data could not be replicated by modeling unless the parameter measurements were updated on-line. More or less what I would expect; the sea is a very dynamic environment which changes very fast, and MFP is very sensitive to even the most minute changes. The authors then claimed -- and both JASA reviewers and editors let the statement through to the published article! -- that "this demonstrates the *robustness* of MFP."

There was a statement by the same Jesus in one of his articles around 1993 where he commented on how few "good" results were present even in the few articles that did treat real-life data. I used to hold the greatest respect for him for that comment.
> I am sorry this disappoints you.
I don't care what people can find funding to do to make a living, as long as they keep it to themselves. Once upon a time I used to work at a government agency. I quit when I was assigned the task of making MFP -- based on focalization -- work.
> If others have gone on to actually perform some of the implementations
> suggested it is only appropriate for them to cite the source.
Of course.
> Have you looked to see if any of the citations are from people who
> have actually performed the additional tests you feel are important?
The tests I "feel are important" amount to using the maths I -- and presumably others, too -- learned at age 12: if you have a mathematical expression with two unknowns, you need two equations to solve it. Collins and Kuperman's approach would fail a high-school maths graduation, since they have a system with three "meta variables" (source, environment, sensor system) and attempt to solve for two (the source and the environment). People who know about remote sensing systems (even those who make remote sensing systems which work) control the sensor setup + at least one more factor. ATC radars are based on knowledge of the environment (ground-based radars are directed towards free space, c = 3e8 m/s); in seismic surveys the surveyors control both the source and sensor geometries. This is basic stuff. A 12-year-old would spot the flaw, if it was presented in a context other than JASA.
> Did any of them even work? In any event, most on comp.dsp are
> protected from this particular article by its specialization and with
> your warning the OP is now safe.
>
> Thank you,
Rune
> On Jul 15, 5:50 am, "Sylvia" <sylvia.za...@gmail.com> wrote:
>> Does any one know good material on tracking single sound source using
>> only two microphones on a dummy head. I have seen kalman filter
>> tracking for constant velocity targets etc in case of radar
>> applications but i dont know how to use these in case of sound
>> sources. Thanks
>
> Sylvia
>
> Is 'dummy head' a necessary part of the problem statement? If your
> question is about modeling the human head then look for HRTF: head
> related transfer function.
>
> If your question is about tracking sound sources with two sensors,
> there are a number of possibilities.
>
> If your problem can be defined to allow you to determine bearing, such
> as operating in a half-plane containing the sensors and using time
> delay to determine bearing, there is a considerable literature on
> "bearings only trackers". Some of this literature was developed to
> support the sonobuoy community where collocated directional and omni
> sensors are used to resolve bearing ambiguity. Some of these
> algorithms require multiple sensor locations. Look in IEEE
> Transactions on AES.
>
> In the special case of a source at a constant frequency traveling in a
> straight line at a constant velocity, a single sensor allows
> calculation of velocity from the maximum doppler deviation at long
> range and range of CPA from the doppler slope at CPA. A second sensor
> allows bearing calculation. Look in Journal of the Acoustic Society of
> America (JASA) for examples of acoustically tracking aircraft.
>
> If you are interested in applications for tracking human voice then
> the simplifying assumption of a single constant frequency is unlikely
> to be satisfied. Indoor applications will also be complicated by
> reverberation.
>
> So what are you really looking for?
>
> As always at comp.dsp, a more detailed question increases the chance
> of a relevant response.
>
> Dale B. Dalrymple
> http://dbdimages.com

Thanx Dale for your response. Yes, HRTF is part of my problem. Without any noise or reverberation, I can determine the elevation and azimuth of the source by using HRTF-based binaural sound localization algorithms (using only two microphones). If I want to track a sound source, I will have only elevation and azimuth corresponding to each location in 3D. Which tracking model is used in this situation?
On Sun, 15 Jul 2007 14:52:38 -0700, Rune Allnor <allnor@tele.ntnu.no>
wrote:

> On 15 Jul, 22:28, "Philip Martel" <pomar...@comcast.net> wrote:
>> "Eric Jacobsen" <eric.jacob...@ieee.org> wrote in message
>> news:cfmk93t3lkb3de6tsjj24ots2usdnjnp9o@4ax.com...
>>> On Sun, 15 Jul 2007 11:19:31 -0400, "Philip Martel"
>>> <pomar...@comcast.net> wrote:
>>>> "Sylvia" <sylvia.za...@gmail.com> wrote in message
>>>> news:uLOdnfVcWq6GhQfbnZ2dnUVZ_u6rnZ2d@giganews.com...
>>>>> Does any one know good material on tracking single sound source
>>>>> using only two microphones on a dummy head. I have seen kalman
>>>>> filter tracking for constant velocity targets etc in case of radar
>>>>> applications but i dont know how to use these in case of sound
>>>>> sources. Thanks
>>>>
>>>> Unless you know the sound amplitude of the source, you won't be
>>>> able to get range information. With only two microphones, you can
>>>> use beamforming or interferometry techniques to determine the
>>>> position of the source as somewhere on a cone. In the 2 dimensional
>>>> case this reduces to 2 lines that cross the line between the
>>>> sensors at the same point. If, as is usually the case, the source
>>>> is far from the two microphones compared to the separation of the
>>>> microphones, you will have localized the source to two lines that
>>>> cross the line formed by the microphones at a known angle. Usually,
>>>> you assume that the source is on one side of the sensor.
>>>>
>>>> With these assumptions, you have a series of angles to the sensor.
>>>> Google "alpha beta tracker" or "alpha beta gamma tracker" for ways
>>>> of predicting the source's future position.
>>>>
>>>> Best wishes,
>>>> --Phil Martel
>>>
>>> Believe it or not, you can get range with a single microphone *with
>>> some qualifying assumptions*. Basically, if the target is travelling
>>> in a straight line the doppler characteristic can be used to
>>> determine range once the target approaches close to (but even a
>>> little before) the point where it is closest to the microphone.
>>>
>>> Eric Jacobsen
>>> Minister of Algorithms
>>> Abineau Communications
>>> http://www.ericjacobsen.org
>>
>> Well, given a fixed frequency sound source (helicopter for example) I
>> suppose you're right, though I'd have to think about it for a while
>> to convince myself that the shape of the doppler curve before CPA and
>> the bearing rate would be enough to determine a unique range
>
> You can't fix the range that way, only get a time for the CPA. You'll
> need at least two mics to get a bearing to CPA.
>
> In order to fix a range with only one mic, you will need *knowledge*
> of the type of helicopter. If you *know* the make and model of the
> helicopter, you also *know* certain key characteristics in the sound
> signature, and can use those to estimate the speed and range based on
> the Doppler characteristics. Provided, of course, that the pilot
> plays your game and flies at constant speed in a straight line.
>
> Once you no longer *know* the characteristics, but have to *estimate*
> them, with all the uncertainty that follows, all bets are off as far
> as ranges and speed are concerned -- again, with only one mic
> involved. If you have an array where you can track bearings, things
> become somewhat easier.
>
> Rune
Well, I demonstrated range detection using a single microphone for my thesis, and it required no previous characterization of the signal. It does require that the target is moving straight and level and isn't making rapid variations in its acoustic signature (slower variations are actually okay). It also requires that the acoustic signature has some discernible features that provide a reasonably well-behaved cross-correlation of the spectrum. It didn't work well for jet aircraft on afterburner, as their acoustic spectrum tends to look like noise. Propeller aircraft and helicopters are good, and I even got some good results with recorded tapes of Indy cars going down the straight at Indianapolis Motor Speedway.

Eric Jacobsen
Minister of Algorithms
Abineau Communications
http://www.ericjacobsen.org
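[Editor's note: the single-microphone CPA trick discussed above can be sketched numerically. Under the stated assumptions (straight and level track, constant speed, a steady tone), the Doppler asymptotes long before and after CPA give the speed and the rest frequency, and the Doppler slope at CPA then gives the range -- the same quantities Dale mentioned earlier in the thread. This sketch runs on synthetic, noise-free data; all constants and names are invented for illustration, and it is not a claim about the actual thesis method.]

```python
import math

C = 343.0  # speed of sound in air, m/s (assumed)

def observed_freq(t, f0, v, r_cpa, t_cpa=0.0):
    """Frequency heard from a source on a straight, constant-speed
    track: f0 * c / (c + radial_speed), CPA at t_cpa, range r_cpa."""
    x = v * (t - t_cpa)           # along-track offset from CPA
    d = math.hypot(r_cpa, x)      # slant range
    radial_speed = v * x / d      # d'(t); negative while approaching
    return f0 * C / (C + radial_speed)

# Synthetic "measurement": 440 Hz tone, 60 m/s, CPA range 200 m.
f0_true, v_true, r_true = 440.0, 60.0, 200.0
f_before = observed_freq(-60.0, f0_true, v_true, r_true)  # long before CPA
f_after  = observed_freq(+60.0, f0_true, v_true, r_true)  # long after CPA

# Speed and rest frequency from the Doppler asymptotes:
# f_before/f_after = (c + v)/(c - v), and f0 is their harmonic mean.
v_est  = C * (f_before - f_after) / (f_before + f_after)
f0_est = 2.0 * f_before * f_after / (f_before + f_after)

# Range from the Doppler slope at CPA: df/dt|_CPA = -f0 * v^2 / (c * r).
dt = 0.1
slope = (observed_freq(+dt, f0_true, v_true, r_true)
         - observed_freq(-dt, f0_true, v_true, r_true)) / (2.0 * dt)
r_est = f0_est * v_est**2 / (C * abs(slope))

print(f"v ~ {v_est:.1f} m/s, f0 ~ {f0_est:.1f} Hz, CPA range ~ {r_est:.0f} m")
```

In practice the asymptotes and the slope come from a noisy spectrogram track, so a least-squares fit of all four parameters (f0, v, r, t_cpa) to the whole frequency track is more robust than these two point estimates.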
On Jul 16, 2:52 am, "Sylvia" <sylvia.za...@gmail.com> wrote:

> Yes, HRTF is part of my problem. Without any noise or reverberation, I
> can determine the elevation and azimuth of the source by using
> HRTF-based binaural sound localization algorithms (using only two
> microphones). If I want to track a sound source, I will have only
> elevation and azimuth corresponding to each location in 3D. Which
> tracking model is used in this situation?
Your situation sounds like 'bearings-only' and 'target motion analysis'. If you have a single source and good signal strength, there are papers back to the '70s and '80s that discuss track formation. If you add multiple sources to track and lowered signal-to-noise conditions, you need more complicated algorithms such as the 'probabilistic data association filter' (PDAF).

I would suggest that you look at something like the IEEE Xplore site and search on these terms in Trans. on AES. Look at abstracts. If they are more complicated than you are interested in, track back through the referenced documents to find simpler cases; follow through the referencing documents to find more developed applications. I think you will find that Google will lead to the same path.

Good Luck!

Dale B. Dalrymple
http://dbdimages.com
Eric Jacobsen wrote:
> On Sun, 15 Jul 2007 14:52:38 -0700, Rune Allnor <allnor@tele.ntnu.no>
> wrote:
>
> [... earlier quoting snipped ...]
>
> Well, I demonstrated range detection using a single microphone for my
> thesis, and it required no previous characterization of the signal. It
> does require that the target is moving straight and level and isn't
> making rapid variations in its acoustic signature (slower variations
> are actually okay). It also requires that the acoustic signature has
> some discernible features that provide a reasonably well-behaved
> cross-correlation of the spectrum. It didn't work well for jet
> aircraft on afterburner, as their acoustic spectrum tends to look like
> noise. Propeller aircraft and helicopters are good, and I even got
> some good results with recorded tapes of Indy cars going down the
> straight at Indianapolis Motor Speedway.
Do you mean you followed the Doppler's progress through the tangential point, and worked back to the trajectory, assuming a straight-line path and a constant pitch from the source? That works.

Steve
"Sylvia" <sylvia.zakir@gmail.com> wrote in message 
news:2-ednV7h9OTb3QbbnZ2dnUVZ_tijnZ2d@giganews.com...
> [... quoting of Dale's reply snipped ...]
>
> Yes, HRTF is part of my problem. Without any noise or reverberation, I
> can determine the elevation and azimuth of the source by using
> HRTF-based binaural sound localization algorithms (using only two
> microphones). If I want to track a sound source, I will have only
> elevation and azimuth corresponding to each location in 3D. Which
> tracking model is used in this situation?
As I mentioned before, google "alpha beta tracker". Roughly:

    T->Bearing  = FixAngle( T->Bearing + Dt * T->DBearing ); /* predict     */
    BearErr     = DeltAngle( H->Bearing, T->Bearing );       /* innovation  */
    T->Bearing  = FixAngle( T->Bearing + A * BearErr );      /* correct     */
    T->DBearing += B * BearErr / Dt;                         /* rate update */

Where H->Bearing is the measured new bearing and T->Bearing is the filtered bearing. A (alpha) and B (beta) are fixed gains. FixAngle() wraps angles around 360 degrees and DeltAngle() calculates the difference between two angles and puts it in the range -180..180.

There are, of course, better estimators. For a steady velocity, the bearing to a target versus time has the shape of an arctangent curve. The alpha beta tracker is fairly simple to implement.
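[Editor's note: Phil's fragment translates almost directly into a runnable sketch. Everything below (class name, gain values, the synthetic scenario) is invented for illustration; A and B in his snippet correspond to alpha and beta here, and the prediction step uses the filtered rate.]

```python
def fix_angle(a):
    """Wrap an angle into [0, 360) degrees."""
    return a % 360.0

def delt_angle(a, b):
    """Signed difference a - b, wrapped into [-180, 180) degrees."""
    return (a - b + 180.0) % 360.0 - 180.0

class AlphaBetaBearingTracker:
    def __init__(self, bearing, alpha=0.5, beta=0.1):
        self.bearing = bearing        # filtered bearing, degrees
        self.dbearing = 0.0           # filtered bearing rate, degrees/s
        self.alpha, self.beta = alpha, beta

    def update(self, measured, dt):
        # Predict ahead using the filtered rate, then correct.
        self.bearing = fix_angle(self.bearing + dt * self.dbearing)
        err = delt_angle(measured, self.bearing)          # innovation
        self.bearing = fix_angle(self.bearing + self.alpha * err)
        self.dbearing += self.beta * err / dt
        return self.bearing

# Synthetic target crossing the 360/0 wraparound at a steady 2 deg/s.
tracker = AlphaBetaBearingTracker(bearing=350.0)
dt = 0.1
for k in range(1, 201):
    true_bearing = fix_angle(350.0 + 2.0 * k * dt)
    tracker.update(true_bearing, dt)

print(tracker.bearing, tracker.dbearing)  # settles near the true bearing and rate
```

With noise-free constant-rate measurements the filter tracks the ramp with no steady-state lag; the wraparound arithmetic in delt_angle() is what keeps the 360-to-0 crossing from producing a 360-degree "jump" in the innovation.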
On Tue, 17 Jul 2007 01:35:22 +0800, Steve Underwood <steveu@dis.org>
wrote:

> Eric Jacobsen wrote:
>
> [... earlier quoting snipped ...]
>
> Do you mean you followed the Doppler's progress through the tangential
> point, and worked back to the trajectory, assuming a straight-line
> path and a constant pitch from the source? That works.
>
> Steve
Yup, exactly. That does limit the applications, naturally, but it is possible to do.

Eric Jacobsen
Minister of Algorithms
Abineau Communications
http://www.ericjacobsen.org
In article <2-ednV7h9OTb3QbbnZ2dnUVZ_tijnZ2d@giganews.com>, "Sylvia" <sylvia.zakir@gmail.com> wrote:
> [... quoting of Dale's reply snipped ...]
>
> Yes, HRTF is part of my problem. Without any noise or reverberation, I
> can determine the elevation and azimuth of the source by using
> HRTF-based binaural sound localization algorithms (using only two
> microphones). If I want to track a sound source, I will have only
> elevation and azimuth corresponding to each location in 3D. Which
> tracking model is used in this situation?
You also have the model of the target track. The standard assumption is a straight-line course at a constant altitude and constant speed. Some models allow acceleration, but this is very, very complicated to implement.
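[Editor's note: that track model is easy to write down as a measurement model. For a listener at the origin, a straight, level, constant-speed track gives an azimuth that sweeps like the arctangent curve mentioned earlier in the thread and an elevation that peaks at CPA. The geometry and numbers below are invented for illustration.]

```python
import math

def az_el(x, y, z):
    """Azimuth (deg, measured from +x toward +y) and elevation (deg)
    of a point relative to a listener at the origin."""
    az = math.degrees(math.atan2(y, x)) % 360.0
    el = math.degrees(math.atan2(z, math.hypot(x, y)))
    return az, el

# Straight, level, constant-speed track: constant altitude => vz = 0.
x0, y0, alt = -500.0, 200.0, 100.0   # metres, relative to the head
vx = 50.0                            # m/s along the x axis; CPA at t = 10 s

times = range(0, 21, 5)
track = [az_el(x0 + vx * t, y0, alt) for t in times]
for t, (az, el) in zip(times, track):
    print(f"t={t:2d}s  az={az:6.1f} deg  el={el:5.1f} deg")
```

A bearings-only tracker fits the free parameters of this model (position, speed and heading, plus altitude) to the observed az/el sequence; the elevation peak and the fastest azimuth swing both mark the CPA, which is why the constant-course, constant-speed assumption buys so much.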
On 16 Jul, 19:18, Eric Jacobsen <eric.jacob...@ieee.org> wrote:
> On Sun, 15 Jul 2007 14:52:38 -0700, Rune Allnor <all...@tele.ntnu.no>
> wrote:
>
> [... earlier quoting snipped ...]
>
> Well, I demonstrated range detection using a single microphone for my
> thesis, and it required no previous characterization of the signal. It
> does require that the target is moving straight and level and isn't
> making rapid variations in its acoustic signature (slower variations
> are actually okay). It also requires that the acoustic signature has
> some discernible features that provide a reasonably well-behaved
> cross-correlation of the spectrum.
How did you do that? You need to observe the source while passing the CPA and estimate the acoustic signature with no Doppler? OK, I'll agree that would work, but it would hardly be robust.

As you may be aware, I have this very awkward preoccupation with applications and robustness; I can't see how your method would work if you do *not* observe the source at CPA and do *not* know the source characteristics.

Rune