DSPRelated.com
Forums

MUSIC Algorithm (Source Localization)

Started by rudykeram October 31, 2013
Hi,
I am trying to understand the MUSIC (Multiple Signal Classification)
algorithm. I am new to this topic, so I am sorry if my questions seem
elementary.
First of all, I would like to make sure that source localization is
different from beamforming. Even though source localization also deals
with phased arrays, the goal is not to form (amplify) a beam in one
direction and suppress it in the other directions; instead, we are
trying to find the DoA of the sound sources, correct?

After reading some articles, I tried to look at MATLAB's pmusic function:
http://www.mathworks.com/help/signal/ref/pmusic.html

I looked at Example 1 in the link above, and that is where I am getting
confused. My understanding is that with MUSIC we can do source
localization by finding the DoA. But all the examples I have found on the
web, including the one listed above, treat the input signal as a
sinusoidal waveform containing two different frequencies with a small
separation. Are they all assuming that each sound source produces only
one tone (a single-frequency signal)?

My understanding of Example 1 is that one source produces a waveform at a
certain frequency, and the second source produces another waveform at a
different frequency. Is that the case the MUSIC algorithm handles (each
source running at a single frequency)?

What about a real system? In a real system, one speaker generates a
waveform containing several different frequencies, which is the general
and realistic case. To simplify my question, let's say that sound source
"A" generates a wave with two different frequencies, and sound source "B"
generates another wave with two different frequencies.
In this case, the input signal will be composed of four different
frequencies when it arrives at the system.

Can we still apply the MUSIC algorithm to determine the DoA of the two
sources? I would appreciate some help clearing up my confusion.

Thanks, 
--Rudy
 
 	 

MUSIC, ESPRIT, and all related methods basically "sample" a signal spatially, and use an appropriate Fourier transform to map direction of arrival onto a frequency term.

The actual prototype problem they solve is the classical Prony problem. 
y_n = \sum_k a_k exp(b_k n), n = 0, ..., N-1. 

In MUSIC/ESPRIT papers written in the 90s, the derivation goes as follows: a narrowband signal arrives at a uniform array, such that when a snapshot of the received signal is taken across the array, one gets:
y_n = \sum_k a_k exp(j \theta_k n), n = 0, ..., N-1. 

In this case the \theta_k are the directions of arrival, up to a scaling factor, and n indexes the antenna elements.

So to recap, you need two things: (1) the signal is narrowband, and (2) the array is uniformly spaced.
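
To make that setup concrete, here is a minimal NumPy sketch of spatial MUSIC on a simulated uniform linear array. Everything in it (the music_spectrum helper, the half-wavelength spacing, the two test angles) is illustrative, not taken from any particular paper or library:

import numpy as np
from scipy.signal import find_peaks

def music_spectrum(R, n_sources, scan_deg, n_elements, d_over_lambda=0.5):
    # Eigendecompose the spatial covariance; with eigenvalues in ascending
    # order, the first (n_elements - n_sources) eigenvectors span the
    # noise subspace.
    _, eigvecs = np.linalg.eigh(R)
    En = eigvecs[:, :n_elements - n_sources]
    m = np.arange(n_elements)
    P = np.empty(len(scan_deg))
    for i, th in enumerate(np.deg2rad(scan_deg)):
        a = np.exp(2j * np.pi * d_over_lambda * m * np.sin(th))  # ULA steering vector
        P[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)    # MUSIC pseudospectrum
    return P

# Toy scene: two uncorrelated narrowband sources at -20 and +30 degrees,
# 8 elements at half-wavelength spacing, 200 snapshots, light noise.
rng = np.random.default_rng(0)
M, T = 8, 200
angles = np.deg2rad([-20.0, 30.0])
A = np.exp(2j * np.pi * 0.5 * np.outer(np.arange(M), np.sin(angles)))
S = (rng.standard_normal((2, T)) + 1j * rng.standard_normal((2, T))) / np.sqrt(2)
X = A @ S + 0.1 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
R = X @ X.conj().T / T                      # sample spatial covariance

scan = np.linspace(-90, 90, 721)
P = music_spectrum(R, n_sources=2, scan_deg=scan, n_elements=M)
pk, _ = find_peaks(P)
print("estimated DoAs (deg):", np.sort(scan[pk[np.argsort(P[pk])[-2:]]]))

The "frequency" MUSIC estimates here is the spatial frequency (d/lambda) sin(theta), which is exactly why the narrowband (single-wavelength) assumption matters.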

What you are trying to solve is actually much easier. Since you say your two signals are at different frequencies, you can simply separate them first, and then for each of the two frequencies apply MUSIC/ESPRIT or what have you. 

The amplification part that papers of that era are fixated on can in fact help you. Both the MUSIC and ESPRIT families of algorithms first solve for the b_k, and then do a linear fit to get the a_k. The a_k values can be used to estimate signal strengths, or for optimizing how best to combine the incident signals to get the best SNR.
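
As a hedged illustration of that second stage: once the angles (the b_k) are in hand, the a_k drop out of an ordinary linear least-squares fit against the steering matrix. The function name and calling convention below are mine, not from any standard package:

import numpy as np

def fit_amplitudes(snapshot, est_angles_deg, d_over_lambda=0.5):
    # Build the steering matrix from the estimated DoAs, then solve the
    # least-squares problem snapshot ~= A_hat @ a for the amplitudes a_k.
    m = np.arange(len(snapshot))
    A_hat = np.exp(2j * np.pi * d_over_lambda *
                   np.outer(m, np.sin(np.deg2rad(est_angles_deg))))
    a_k, *_ = np.linalg.lstsq(A_hat, snapshot, rcond=None)
    return a_k   # |a_k|^2 gives a per-source power estimate for this snapshot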

If your source signal is not narrowband in the first place, then the typical approach is to break it down via filterbank or FFT bank, then for each bin one can apply MUSIC/ESPRIT. Then one typically applies a higher-level algorithm to pick and choose and combine the outputs of the different bins. 
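
Here is a rough sketch of that FFT-bank recipe, reusing the music_spectrum helper from the sketch above. The plain summation across bins is my simplification; real systems weight or select bins much more carefully:

import numpy as np

def wideband_music(x, fs, d, c, n_sources, scan_deg, nfft=256):
    # x: (elements, samples); d = element spacing, c = propagation speed.
    # FFT each element's record in frames, treat each bin's frame values as
    # narrowband snapshots, run MUSIC per bin, and sum the pseudospectra.
    # Note d/lambda depends on the bin frequency, which is exactly where
    # the narrowband assumption enters.
    M = x.shape[0]
    frames = x[:, :x.shape[1] // nfft * nfft].reshape(M, -1, nfft)
    Xf = np.fft.rfft(frames, axis=2)            # (M, n_frames, nfft//2 + 1)
    P = np.zeros(len(scan_deg))
    for b in range(1, Xf.shape[2]):             # skip DC
        snap = Xf[:, :, b]                      # (M, n_frames) bin snapshots
        R = snap @ snap.conj().T / snap.shape[1]
        f_bin = b * fs / nfft
        P += music_spectrum(R, n_sources, scan_deg, M,
                            d_over_lambda=d * f_bin / c)
    return P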

However, the usual caution applies: in the general non-narrowband case, there is no one method that can be applied universally and trivially at the same time. You have to look at the specific case, and make sure the whole system works for whatever your goal is. 

Anyway, I hope the above helps. Your questions aren't elementary. 

Julius 



Julius, 
Thank you very much for taking the time to explain it. Your answer makes
things clearer, but I am still a little unsure about why and how the
algorithm actually works.

For example, let's say that I have two different sources, and the first
source generates the following waveform:
S1 = sin(pi/4*n) + cos(pi/3*n) + sin(pi/5*n)
So my first source is a waveform composed of three different frequencies.


And let's say my second source runs at completely different frequencies:
S2 = sin(pi/6*n) + cos(pi/2*n) + sin(pi/7*n)

At the input of the ADC of each of my antennas, I am going to see the
combination of these signals (S1+S2).

So each ADC will see a phase-shifted version of (S1+S2), but in fact it
will see a waveform comprising six different frequency terms. How would
the MUSIC algorithm even be able to distinguish between the two sources?

We have a mixture of six different frequencies. How does the algorithm
work out which three frequencies belong to S1 and which three belong to
S2?


Could you please explain in simple terms why the algorithm works the way
it does? What is it about decomposing the covariance matrix that
determines which directions these six signals are coming from?

I would really appreciate some explanation.

Thanks, 
--Rudy 


I think you are mixing up time-domain and spatial-domain sampling.

For what you describe, you have to do it in two steps: 

1. For each antenna's waveform, isolate each frequency tone, regardless of which source it comes from. This works because each tone (whatever its source) is narrowband, so you can apply MUSIC/ESPRIT in the *time* domain to each antenna's waveform.

2. For each isolated tone, take snapshots in the *spatial* domain, across your (uniform) antenna array. Then apply MUSIC/ESPRIT in the *spatial* domain to get the arrival angles.

If you then want to identify which sources sent which tones, you need a third step where you associate tones and DoAs (see the sketch below).
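
Here is a hedged sketch of that recipe applied to your S1+S2 example: isolate each of the six tones at every element (step 1 — done here by simply picking the nearest FFT bin, a stand-in for a proper filterbank or a time-domain MUSIC pass), then run spatial MUSIC on that tone's bin snapshots (step 2). It reuses the music_spectrum helper from my earlier sketch; all other names are illustrative.

import numpy as np

def doa_of_tone(x, fs, tone_hz, d, c, scan_deg, nfft=1024):
    # x: (elements, samples). Step 1: frame and FFT each element's record,
    # keep the bin nearest the tone. Step 2: spatial MUSIC on those bin
    # snapshots with n_sources=1, since one tone arrives from one direction.
    scan_deg = np.asarray(scan_deg)
    M = x.shape[0]
    frames = x[:, :x.shape[1] // nfft * nfft].reshape(M, -1, nfft)
    Xf = np.fft.rfft(frames, axis=2)
    b = int(round(tone_hz * nfft / fs))
    snap = Xf[:, :, b]
    R = snap @ snap.conj().T / snap.shape[1]
    P = music_spectrum(R, n_sources=1, scan_deg=scan_deg,
                       n_elements=M, d_over_lambda=d * tone_hz / c)
    return scan_deg[np.argmax(P)]

# Step 3 (association): tones whose estimated angles cluster together get
# attributed to the same source, e.g. three tones near one angle -> S1.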

It may be possible to do both steps at once, but I am not aware of any references. 

In summary, in what you are describing you have two domains: temporal and spatial. MUSIC and ESPRIT work in one dimension at a time. For them to work in either dimension, the other dimension has to be "narrowband". Thus, you have to pick one domain in which to isolate components, such that when you switch to the other dimension, the narrowband assumption applies.

Recall the original Prony model and write down your problem simultaneously in the time and spatial domains, keeping track of your indices.

Hope this explains it better. If you need a more elementary explanation, feel free to let me know, but then you have to set aside either the spatial domain or the time domain. Given your way of writing S1 and S2, I think we can stick to the time domain first and ignore the spatial part.

At any rate, any signal component cos(2\pi f_0 n) can be written as (1/2)[\exp(j 2\pi f_0 n) + \exp(-j 2\pi f_0 n)], which fits into the original Prony problem.
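
Spelled out, the mapping into the Prony model is (each real tone contributes a conjugate pair of modes, which is why K real sinusoids count as 2K Prony exponentials):

\cos(2\pi f_0 n) = \tfrac{1}{2}\, e^{j 2\pi f_0 n} + \tfrac{1}{2}\, e^{-j 2\pi f_0 n}
\quad\Longrightarrow\quad a_{1,2} = \tfrac{1}{2}, \qquad b_{1,2} = \pm j 2\pi f_0 .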

Julius


Thanks a lot for bearing with me. It is starting to make more sense.

I guess I am still confused about *time*- vs. *frequency*-domain MUSIC.
In the literature I have read about MUSIC, I don't see a clear
distinction being made between the two.

For example, a typical paper on MUSIC is the following:
http://www.ll.mit.edu/asap/asap_03/6-page_Papers/wangJ-asap03.pdf

I believe that equation (1) is looking at the output "y" in a spatial
sense, correct? This is because it is looking at the different sources at
time "t" (e.g. s1(t), s2(t), s3(t), ...), which means that we are
sampling them in space rather than time, correct?

Also, equation (6) is the MUSIC pseudospectrum, correct?
So this paper treats MUSIC in spatial terms.


I also understand your point about separating each tone at each antenna,
regardless of the source or DoA.

But how would you apply MUSIC in the time domain to separate the tones?

Are there any papers or documentation that discuss this method?

By the way, what do you mean when you say that each tone is narrowband?
I noticed that most papers mention this assumption as well.

Thanks so much for helping me to understand this.
--Rudy
	 

The best reference text is Stoica/Moses' book. Here's a set of lecture notes from one of the authors.
http://www2.ece.ohio-state.edu/~randy/SAtext/sm-slides-1ed.pdf

The question you are asking about multiple sinusoids is answered under "line spectrum estimation", a parametric method.

Narrowband means there is no modulation or jitter around the sinusoids. 
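
Since you asked how MUSIC separates tones in the time domain, here is a hedged sketch of that line-spectrum version: one sensor's time lags play the role that array elements play in spatial MUSIC. The window length m and the frequency grid are arbitrary choices of mine; note that six real tones count as twelve complex exponentials.

import numpy as np

def line_spectrum_music(y, n_modes, m=32, n_grid=4096):
    # Stack length-m sliding windows of y as temporal "snapshots", form the
    # sample covariance, and scan a temporal steering vector over the
    # normalized frequencies [0, 0.5).
    N = len(y)
    Y = np.stack([y[i:i + m] for i in range(N - m + 1)], axis=1)
    R = Y @ Y.conj().T / Y.shape[1]
    _, eigvecs = np.linalg.eigh(R)
    En = eigvecs[:, :m - n_modes]                 # noise subspace
    freqs = np.linspace(0.0, 0.5, n_grid, endpoint=False)
    k = np.arange(m)
    P = np.empty(n_grid)
    for i, f in enumerate(freqs):
        a = np.exp(2j * np.pi * f * k)            # temporal "steering" vector
        P[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return freqs, P

# Your S1 + S2 example: six real tones => n_modes = 12. The pseudospectrum
# should peak near 1/14, 1/12, 1/10, 1/8, 1/6, and 1/4 (cycles/sample).
n = np.arange(512)
y = (np.sin(np.pi/4*n) + np.cos(np.pi/3*n) + np.sin(np.pi/5*n)
     + np.sin(np.pi/6*n) + np.cos(np.pi/2*n) + np.sin(np.pi/7*n)
     + 0.05 * np.random.default_rng(1).standard_normal(512))
freqs, P = line_spectrum_music(y, n_modes=12)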

Have fun!
Julius
