Problem with MUSIC Algorithm

Started by Brenneman September 23, 2006
Hi,

I am curious if anyone knows a way around a problem I've encountered
with the MUSIC algorithm (Multiple Signal Classification method people
use to determine a signal's angle of arrival). I've just started
working with MUSIC, and since I know there are many different
variations on this method, I am thinking that perhaps someone has
addressed this problem.

Suppose you have a uniform linear array of antennas with some spacing
L0/2, where L0 is some effective wavelength for which this array is a
"resolving array" (as defined by Cantoni and Godara).

Now suppose you have two signals which, for the sake of specificity, have
wavelengths L1 = 0.9*L0 and L2 = 1.1*L0,
and AOAs alpha1 = pi/6 and alpha2 = pi/3.

You do an FFT and find that there are two freqs present in the signal,
f1 = c/L1 and f2 = c/L2.

If you run MUSIC by doing a simple search over the AOA values at freq
f1, you get two peaks: one at the true AOA (alpha1), but also a second,
false peak, since we can find some angle alpha' such that at f1 it
produces a phase identical to that of the second source, i.e.
f1*cos(alpha') = f2*cos(alpha2). Working it out, cos(alpha') =
(f2/f1)*cos(alpha2) = (0.9/1.1)*0.5, so alpha' is about 66 degrees. In
other words, when we run MUSIC at this frequency, the freq mismatch
between f1 and f2 results in a misestimate of the second AOA.

How does MUSIC handle this problem? Is there some variation of MUSIC
(ESPRIT, etc.) that deals with this type of case?
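A minimal numeric sketch of the effect described above (NumPy; the array size, source powers and noise floor are made-up assumptions): an exact two-source covariance is scanned with steering vectors built for wavelength L1 only, and the ghost peak lands where f1*cos(alpha') = f2*cos(alpha2), i.e. at acos((0.9/1.1)*cos(pi/3)) ≈ 65.9 degrees:

```python
import numpy as np

# Hypothetical setup: 10-element ULA at L0/2 spacing, lengths in units of L0
M, d = 10, 0.5
L1, L2 = 0.9, 1.1                  # source wavelengths, as in the post
a1, a2 = np.pi/6, np.pi/3          # true AoAs: 30 and 60 degrees

def steer(wl, alpha):
    """ULA steering vector at wavelength wl, AoA alpha measured from endfire."""
    return np.exp(2j*np.pi*np.arange(M)*d*np.cos(alpha)/wl)

# Exact covariance: two unit-power uncorrelated sources plus a small noise floor
v1, v2 = steer(L1, a1), steer(L2, a2)
R = np.outer(v1, v1.conj()) + np.outer(v2, v2.conj()) + 0.01*np.eye(M)

w, V = np.linalg.eigh(R)
En = V[:, :M-2]                    # noise subspace: the M-2 smallest eigenvectors

# MUSIC scan that (wrongly) assumes every arrival has wavelength L1
grid = np.deg2rad(np.linspace(0.5, 89.5, 2000))
P = np.array([1.0/np.linalg.norm(En.conj().T @ steer(L1, a))**2 for a in grid])

# pick the two largest local peaks of the pseudospectrum
pk = [i for i in range(1, len(P)-1) if P[i-1] < P[i] > P[i+1]]
pk = sorted(pk, key=lambda i: P[i], reverse=True)[:2]
angles = sorted(np.rad2deg(grid[i]) for i in pk)
print(angles)                      # -> roughly [30.0, 65.9]: true peak plus ghost
```

The ghost is not a bug in the scan; the steering vector at (L1, 65.9 degrees) really is identical to the second source's steering vector at (L2, 60 degrees).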

MTIA,

Matt B

Brenneman wrote:
> [original post quoted in full; snipped]
Welcome to the world of statistical signal processing.

MUSIC and related methods are usually labeled "frequency estimators", not "DoA/AoA estimators", and that is for a very specific reason.

If you look at the exponential term in the temporal spectrum of a signal, you find

S(w) = integral s(t) exp(jwt) dt

where w = 2*pi*f. So the exponential term contains *only* a term related to the temporal frequency of the signal. Estimate w and you are done. Depending on what you want to do, you may already have the answer you seek; maybe you have to scale the estimate by a factor 1/(2*pi), but that's a minor detail.

The key is to see that while estimating w, any variation in the w term can be attributed to the "frequency content" of the signal. No problem.

For the spatial spectrum, the basic expression is deceptively similar:

S(k) = integral s(x) exp(jkx) dx

where k is the wavenumber. The wavenumber k is defined as

k = 2*pi*f/c' = 2*pi*f*cos(phi)/c0

where c' is an "apparent velocity", c0 is the intrinsic medium velocity and phi is the angle of arrival.

Now, the key here is to realize that estimating a wavenumber k is only half the story. Unlike the w = 2*pi*f case, you here have to account for a number of other, less interesting parameters. You need to know the frequency you measure at, since the f term appears in the expression for the wavenumber. You also need to know the c0 term, since it appears there as well.

For EM signals, only the f term poses difficulties. The way I solved that was to implement the signal analysis as a 2D DFT, where the DFT for the x -> k transform was replaced by a MUSIC-like algorithm. So all of a sudden, the computational complexity of your system has increased a LOT.

For sonar applications, which is what I worked with, the c0 term poses additional problems, since it is hard to measure the speed of sound very precisely, and the properties of the water change rapidly.
My personal opinion is that using high-resolution methods in an environment that is known only to within a few percent is wasted effort.

To summarize: your MUSIC implementation behaves exactly as expected. Dealing with this behaviour can be done, but at a significant computational cost. There are other aspects of MUSIC and friends that are less than favourable for certain practical applications, so make sure you understand all aspects of how MUSIC works in your application before making any big decisions about how to proceed.

Rune
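To make concrete how the wavenumber ties the angle to f and c0, here is a small numeric sketch (made-up sonar-like values: f = 100 Hz, c0 = 1500 m/s, true AoA 60 degrees) of how a 2% error in the assumed sound speed shifts the AoA recovered from a perfectly estimated wavenumber k = 2*pi*f*cos(phi)/c0:

```python
import numpy as np

f, c0 = 100.0, 1500.0                   # Hz, m/s (made-up values)
phi = np.deg2rad(60.0)                  # true angle of arrival
k = 2*np.pi*f*np.cos(phi)/c0            # the wavenumber the array actually sees

# invert k -> phi, once with the true c0 and once with a 2% sound-speed error
phi_true = np.degrees(np.arccos(k*c0/(2*np.pi*f)))
phi_off  = np.degrees(np.arccos(k*1.02*c0/(2*np.pi*f)))
print(phi_true, phi_off)                # -> 60.0 and about 59.3 degrees
```

Even with k estimated exactly, the angle is only as good as the f and c0 you plug in, which is the point about the water properties above.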
Rune Allnor wrote:
> [previous posts quoted in full; snipped]
Hi Rune, excellent reply indeed; your answer clarified a lot of points for me as well. Keep it up.

Regards,
particlereddy
Hi Rune,

Thank you for your thoughts on my problem. I was wondering if you could
expand a bit more on something you said in your response. You said that
your solution to the problem of dealing with misestimated AOAs due to
multiple signals with frequency differences was:

> The way
> I solved that, was to implement the signal analysis as a
> 2D DFT, where the DFT for the x -> k transform was replaced
> by a MUSIC-like algorithm.
>
> So all of a sudden, the computational complexity of
> your system has increased a LOT.
which I interpret to mean that you are doing a 2D characterization of the total signal by plotting the DFT on one axis and an AOA search over the other. Am I understanding this approach correctly?

Thank you again,
Matt
Brenneman wrote:
> [previous post quoted in full; snipped]
The data I worked with were recorded on a uniform linear array (ULA). The data were broadband in nature, and I knew that a 2D DFT would provide the parameters I wanted. However, the spatial DFT required far longer arrays than practical if implemented naively, so my task was to see what could be gained from using something other than the DFT for the spatial analysis. The general approach was

%%%%%%%%%%%%
[N,M] = size(s);          % t in the column direction,
                          % x in the row direction
Ss = zeros(N,M);
for m = 1:M
    Ss(:,m) = fft(s(:,m));      % stage 1: temporal DFT, sensor by sensor
end
for n = 1:floor(N/2)
    SS(n,:) = MUSIC(Ss(n,:));   % stage 2: spatial analysis, bin by bin
end
%%%%%%%%%%%%

The MUSIC stage here works on complex-valued data, since the array Ss has already been FFT'd in the N direction.

Rune
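The two-stage loop above can be sketched end-to-end in Python/NumPy (all parameters are made up, and a bare-bones single-snapshot, one-source-per-bin scan stands in for a real MUSIC implementation): FFT along t first, then scan each occupied bin with steering vectors built at that bin's *own* frequency, which removes the frequency-mismatch ghost from the single-frequency search:

```python
import numpy as np

c, fs = 1.0, 16.0                     # propagation speed, sample rate (made up)
N, M, d = 256, 8, 0.25                # time samples, sensors, element spacing
freqs  = [2.0, 3.0]                   # two on-bin source frequencies
alphas = [np.pi/6, np.pi/3]           # true AoAs: 30 and 60 degrees

# synthesize s[n, m]: one complex plane wave per source, delayed across the ULA
n, m = np.arange(N)[:, None], np.arange(M)[None, :]
s = sum(np.exp(2j*np.pi*fi*(n/fs - m*d*np.cos(ai)/c))
        for fi, ai in zip(freqs, alphas))

Ss = np.fft.fft(s, axis=0)            # stage 1: t -> f, column by column

def music_bin(snap, freq, grid):
    """Stage 2: MUSIC-like scan of one frequency bin at that bin's frequency.
    Single snapshot and one source per bin, so the noise subspace is just the
    complement of the dominant eigenvector (a deliberate simplification)."""
    Ms = len(snap)
    R = np.outer(snap, snap.conj()) + 1e-9*np.eye(Ms)
    _, V = np.linalg.eigh(R)
    En = V[:, :-1]                    # all but the largest eigenvector
    a = np.exp(-2j*np.pi*freq*np.arange(Ms)*d*np.cos(grid[:, None])/c)
    return 1.0/np.sum(np.abs(a @ En.conj())**2, axis=1)

grid = np.deg2rad(np.linspace(1.0, 89.0, 1761))
est = []
for fi in freqs:
    b = int(round(fi*N/fs))           # FFT bin holding this frequency
    P = music_bin(Ss[b, :], fi, grid)
    est.append(round(np.degrees(grid[np.argmax(P)]), 1))
print(est)                            # -> [30.0, 60.0]: no mismatch ghost
```

Because each bin is scanned with steering vectors at its own frequency, both AoAs come out correct; the price is one eigendecomposition and angle scan per occupied bin, which is the computational-cost point made above.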