
Understanding MUSIC algorithm and applying it in 2 cases

Started by chiraganand · 4 years ago · 15 replies · latest reply 4 years ago · 2555 views

Could someone please provide an intuitive explanation of MUSIC and how to apply it in 2 cases?

I have two cases:

  1. A signal obtained from optical time-domain reflectometry (OTDR): the attenuation along the length of an optical fibre. I receive a signal (Nx1) containing the attenuation along the fibre, and I want to apply MUSIC to it to increase the spatial resolution.
  2. An ultrasonic signal received on an array of transmitters and receivers. Each transmitter transmits the ultrasound, and the reflected wave is picked up by each receiver. I want to use this data to create an image of the tested structure, using MUSIC for DOA.

Hence my question is about the implementation of the MUSIC algorithm for the above two cases. How do I use it for super resolution? How do I reconstruct the original signal from the covariance matrix? How do I get the super-resolution signal from the noise or signal subspace?

Thanks and Regards

Reply by NirRegev, September 23, 2020
How do I use it for super resolution?

-- Since the signal contains N spectral harmonics, you can generate the auto-correlation matrix from the signal. The signal subspace will be of rank N, and the noise subspace should contain "zero" signal energy. That's exactly what you look for in the MUSIC algorithm: the frequencies that minimize this energy (or, equivalently, maximize the pseudo-spectrum).

-- MUSIC is a spectral estimation algorithm. It is indifferent to whether the signal represents a temporal process or a spatial process, so the implementation is the same.


-- You should know the number of harmonics (or targets, in array processing) in advance.


-- You can reconstruct a de-noised signal using the estimated frequencies. Note that MUSIC only estimates the frequencies; amplitudes and phases have to be fitted separately.
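
A minimal MATLAB sketch of that idea (the signal, frequencies and orders below are made up purely for illustration; if you have the Signal Processing Toolbox, pmusic/rootmusic do most of this for you):

% Two complex sinusoids in noise; the noise subspace of the autocorrelation
% matrix is (nearly) orthogonal to their steering vectors.
N = 200;                                  % number of samples
n = (0:N-1).';
x = exp(1j*2*pi*0.12*n) + 0.7*exp(1j*2*pi*0.31*n) ...
    + 0.1*(randn(N,1) + 1j*randn(N,1))/sqrt(2);

m = 20;                                   % autocorrelation matrix order (m > 2)
X = zeros(N-m+1, m);
for k = 1:m
    X(:,k) = x(k:k+N-m);                  % overlapping length-m data windows
end
R = (X'*X)/(N-m+1);                       % m x m autocorrelation estimate

[E,D]   = eig(R);
[~,idx] = sort(real(diag(D)),'descend');  % 2 large eigenvalues -> rank-2 signal subspace
En      = E(:,idx(3:end));                % remaining m-2 eigenvectors -> noise subspace

a = @(f0) exp(1j*2*pi*f0*(0:m-1).');      % steering vector at frequency f0
norm(En'*a(0.12))^2                       % ~0: a true signal frequency
norm(En'*a(0.20))^2                       % much larger: not a signal frequency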


Reply by chiraganand, September 23, 2020

Thank you for your reply. So when I carry out an SVD of the covariance matrix in MATLAB, I get [U,S,V], where U and V are NxN matrices and S is a diagonal matrix. Assuming U contains the signal subspace, how do I get an Nx1 signal from that?

Reply by NirRegev, September 23, 2020

Once you have the estimated frequencies, the 'noiseless' (phase-shifted) version of the corrupted signal will be a sum of complex sinusoids at those frequencies.
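
As a rough MATLAB sketch (assuming fhat holds your estimated frequencies in cycles/sample and x is the original Nx1 record; those names are just mine for illustration):

N    = length(x);
n    = (0:N-1).';
V    = exp(1j*2*pi*n*fhat(:).');   % N x p matrix of complex sinusoids at the estimates
amp  = V \ x;                      % least-squares fit of complex amplitudes/phases
xhat = V*amp;                      % de-noised reconstruction on the original sample grid

% Because the model is parametric, you can also evaluate it on a denser grid,
% which is one way people obtain a "super-resolved" version of the signal:
nFine = (0:0.1:N-1).';             % 10x denser grid (illustrative)
xFine = exp(1j*2*pi*nFine*fhat(:).')*amp;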

Reply by chiraganand, September 23, 2020

Thank you for your reply. Hmm, I am a bit of a noob at this. So how do I get these estimated frequencies from the matrices of the SVD decomposition? Sorry for the novice question!

Reply by NirRegev, September 23, 2020

Using the noise subspace matrix, you calculate/plot the MUSIC pseudo-spectrum and find the peak(s). These are the dominant frequencies.
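
In terms of the [U,S,V] you mentioned (a short sketch, with Rxx your covariance matrix and M the number of sinusoids you assume are present):

[U,S,V] = svd(Rxx);     % for a covariance matrix, U is the eigenvector matrix and
                        % svd() sorts the singular values in descending order
En = U(:, M+1:end);     % so the last N-M columns of U span the noise subspace

Then sweep the pseudo-spectrum built from En over a fine frequency grid and take the M largest peaks as the frequency estimates.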

Reply by jjamison, September 23, 2020

Maybe this will clear some things up for you. There are two common applications for MUSIC (there are probably more, but these are the two I know of): frequency estimation and DOA estimation. As the other user noted, DOA estimation is really spectral estimation in spatial frequency, which is why the algorithm applies there as well. A cool thing about MUSIC is that you can estimate parameters of multiple signals in the same frequency band, but you need to know how many signals there are for MUSIC to work properly (let's call this number of signals `k`). One "gotcha" is that k needs to be estimated accurately for this to work: for example, if there were actually 2 signals in your band and you set k=1, you would not get an estimate of "one of the signals" but would actually get an incorrect estimate.

As far as how to actually do the calculations, I would recommend reading up on it a bit more. If you can find Schmidt's dissertation, "A Signal Subspace Approach to Multiple Emitter Location and Spectral Estimation", it does a great job of laying out the theory for how and why MUSIC works. Also, section 8.6 of the textbook "Statistical Digital Signal Processing" by Hayes provides a relatively straightforward introduction to MUSIC for frequency estimation, and I believe a PDF of the text is available. The Wikipedia article can also be helpful (https://en.wikipedia.org/wiki/MUSIC_(algorithm)) but is not very thorough.

At a very high level, MUSIC separates the noise subspace from the signal subspace and looks for nulls in the noise subspace (because the signal components should have essentially zero projection onto the noise subspace). In slightly more detail ...

Frequency Estimation:

- Compute the NxN autocorrelation matrix (R) from your 1xN samples

- Perform eigendecomposition on R to get your N eigenvalues and N Nx1 eigenvectors.

- Compute your spectral estimate P(w) from the N-k eigenvectors which span your noise subspace (the Hayes text has the equation for this).

- Peaks of P(w) correspond to estimated frequencies
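
As a rough MATLAB sketch of those steps (assuming you already have the autocorrelation estimate R, of size m x m, and the number of complex exponentials k; a real sinusoid counts as two complex exponentials):

[E,D]   = eig(R);                         % eigendecomposition of the autocorrelation matrix
[~,idx] = sort(real(diag(D)),'descend');  % largest eigenvalues first
En      = E(:,idx(k+1:end));              % the m-k noise eigenvectors

m     = size(R,1);
fgrid = (0:0.0005:0.5).';                 % candidate frequencies (cycles/sample); use -0.5..0.5 for complex data
A     = exp(1j*2*pi*(0:m-1).'*fgrid.');   % steering vector for each grid frequency
P     = 1 ./ sum(abs(A'*En).^2, 2);       % MUSIC pseudo-spectrum P(w)

plot(fgrid, 10*log10(P)); grid on;
xlabel('Frequency (cycles/sample)'); ylabel('P(\omega) (dB)');
% the k tallest peaks of P are the frequency estimates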


DOA Estimation:

- Compute the NxN sample correlation matrix (S) from M snapshots received from an N element antenna array.  (e.g. S = 1/M * (X.conj().T @ X) in Python, with X an M-by-N matrix of snapshots)

- Eigendecomposition of S to get N eigenvalues and N Nx1 eigenvectors

- Compute spatial spectrum estimate P(w) from the N-k eigenvectors which span your noise subspace and the array manifold (either Schmidt's dissertation or one of his papers on this will have the exact equation)

- Peaks in P(w) correspond to the estimated angles of arrival
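
For reference, the quantity you scan has the same form in both cases. In MATLAB-style notation, with a the steering (array manifold) vector at the candidate frequency or angle and En the matrix of noise eigenvectors:

P = 1 / ( a' * (En*En') * a );   % MUSIC pseudo-spectrum at one grid point

It becomes large wherever a is (nearly) orthogonal to the noise subspace, which is what produces the sharp peaks.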


Hopefully that helps some.

Reply by chiraganand, September 23, 2020

Thank you so much for your detailed response. It's quite helpful. So once I get the peaks of the estimated frequencies, will my reconstructed signal be longer than the original signal? Will the number of samples be more than what I started with? I have read a paper where MUSIC is applied to a signal and then they show a reconstructed signal with higher resolution.

Reply by jjamison, September 23, 2020

I have only used MUSIC for parameter estimation, and haven't heard of using MUSIC to reconstruct a signal.  Could you link to the paper you're referring to?

Reply by chiraganand, September 23, 2020

It's this paper, and I am trying to replicate the results:

https://www.google.com/url?sa=t&source=web&rct=j&u...

Reply by chiraganand, September 23, 2020

Hi, did you have time to go through the paper? Any thoughts?

Reply by weetabixharry, September 23, 2020

If a simple Matlab implementation would be instructive, then take a look at this forum post I wrote some years ago.

Later in the same thread, I expanded it to 2D (azimuth, elevation) DOA estimation. However, the 1D case is probably a more instructive starting point:

close all; clear all; clc;
% ======= (1) TRANSMITTED SIGNALS ======= %

% Signal source directions
az = [35;39;127]; % Azimuths
el = zeros(size(az)); % Simple example: assume elevations zero
M = length(az); % Number of sources

% Transmitted signals
L = 200; % Number of data snapshots recorded by receiver
m = randn(M,L); % Example: normally distributed random signals

% ========= (2) RECEIVED SIGNAL ========= %

% Wavenumber vectors (in units of wavelength/2)
k = pi*[cosd(az).*cosd(el), sind(az).*cosd(el), sind(el)].';

% Array geometry [rx,ry,rz]
N = 10; % Number of antennas
r = [(-(N-1)/2:(N-1)/2).',zeros(N,2)]; % Assume uniform linear array

% Matrix of array response vectors
A = exp(-1j*r*k);

% Additive noise
sigma2 = 0.01; % Noise variance
n = sqrt(sigma2)*(randn(N,L) + 1j*randn(N,L))/sqrt(2);

% Received signal
x = A*m + n;

% ========= (3) MUSIC ALGORITHM ========= %

% Sample covariance matrix
Rxx = x*x'/L;

% Eigendecompose
[E,D] = eig(Rxx);
[lambda,idx] = sort(diag(D)); % Vector of sorted eigenvalues
E = E(:,idx); % Sort eigenvectors accordingly
En = E(:,1:end-M); % Noise eigenvectors (ASSUMPTION: M IS KNOWN)

% MUSIC search directions
AzSearch = (0:1:180).'; % Azimuth values to search
ElSearch = zeros(size(AzSearch)); % Simple 1D example

% Corresponding points on array manifold to search
kSearch = pi*[cosd(AzSearch).*cosd(ElSearch), ...
          sind(AzSearch).*cosd(ElSearch), sind(ElSearch)].';
ASearch = exp(-1j*r*kSearch);

% MUSIC "null spectrum": energy of each search direction in the noise subspace
Z = sum(abs(ASearch'*En).^2,2); % minima of Z occur at the source azimuths

% Plot
figure();
plot(AzSearch,10*log10(Z));
title('Simple 1D MUSIC Example');
xlabel('Azimuth (degrees)');
ylabel('MUSIC spectrum (dB)');
grid on; axis tight;

Reply by chiraganand, September 23, 2020

Thank you for your reply! So the problem I have is the number of sources (M). What if I don't know the number of sources? Is it the source which sends out the signal? In my case the source and receiver are the same. Then, when we come to the MUSIC algo and En = E(:,1:end-M); what would happen here if M=1? I am not sure if I am getting the terminology right, so I am sorry for the naivety. What is meant by azimuth?

Reply by weetabixharry, September 23, 2020

If there is only one signal arriving from one direction, then M=1.

If there is one signal source, but arriving from multiple directions (due to reflections/echoes), then MUSIC won't work without modification (unless each reflection arrives with a different delay, and the signal has favorable correlation properties). The required "modification" is usually a technique called spatial smoothing (of which 2 variants exist, each requiring different constraints on the receive array geometry). I also covered that later in the same thread I mentioned above.
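
For reference, forward spatial smoothing of the N x N sample covariance Rxx of a uniform linear array looks roughly like this (a sketch; Nsub is a design choice and must be larger than the number of coherent signals):

Nsub = 6;                            % subarray length (design choice)
K    = N - Nsub + 1;                 % number of overlapping forward subarrays
Rss  = zeros(Nsub);
for k = 1:K
    Rss = Rss + Rxx(k:k+Nsub-1, k:k+Nsub-1);
end
Rss = Rss/K;                         % forward-smoothed covariance

% Forward-backward variant: also average with the exchanged, conjugated matrix
J   = fliplr(eye(Nsub));             % exchange matrix
Rfb = (Rss + J*conj(Rss)*J)/2;
% MUSIC is then run on Rss (or Rfb) using Nsub-element steering vectors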

In your case number 2, it sounds like your transmitter is an array (as well as your receiver). This makes things more complicated because you ideally need a MIMO radar signal model (or similar). Depending on the correlation properties of the transmitted signal, it may be possible to apply MUSIC to a "virtual" array constructed from the spatial convolution of the transmit and receive arrays (i.e. under certain conditions, you can re-pose the MIMO model as a SIMO model). You may also need a spherical propagation model if the usual plane-wave model isn't sufficiently accurate in your application.

If you have an unknown number of sources, then it can be estimated. Popular methods for doing this are the MDL and AIC algorithms.

These are rather advanced concepts. I think you will need to make huge simplifications if you are going to make any progress at all.

To understand MUSIC intuitively, I would start by understanding what the array manifold is. MUSIC measures the distance between the estimated noise subspace and the array manifold. You can find a relatively intuitive explanation from slide 77 in these lecture notes. You might want to read the whole course for some background.

Reply by chiraganand, September 23, 2020

Thank you once again for your reply. Yes, the 2nd question does sound like MIMO radar. I am not from the radar field, so I had no idea. I found this paper online; maybe it makes my question clearer.

https://www.researchgate.net/publication/253680812...

So it is similar to MIMO radar, and in the paper they have processed the signals to get a final image using a different imaging algorithm. I want to get there but use MUSIC as the imaging algorithm, if that makes any sense.

http://users.cecs.anu.edu.au/~parastoo/papers/2015...

Thank you again for all your help. I will go through the lecture slides too!

Reply by chiraganand, September 23, 2020

I am actually trying to replicate these results. Do you have any insights?

https://www.google.com/url?sa=t&source=web&rct=j&u...