2D-MUSIC algorithm for range-azimuth mapping in MATLAB

```
f1 = 17000; f2 = 19000;
doa1 = [40;10]; doa2 = [-60;20];
thetas = 90*pi/180;
p = -pi/2:pi/180:pi/2;
doa = [doa1 doa2];
fc = 18000;
c = physconst('LightSpeed');
lam = c/fc;
fs = 48000;
array = phased.URA('Size',[15 12],'ElementSpacing',[lam/2 lam/2]);
array.Element.FrequencyRange = [17e3 23e3]; % was [18e5 23.0e5], which excludes fc = 18 kHz
% load() the .mat file; fopen() returns a file ID, not the signal data
data = load('D:\signal123.mat');
signal1 = data.signal; % assuming the file contains a variable named "signal"
%signal_reshape = reshape(signal,[],2)
x = collectPlaneWave(array,[signal1,signal1],doa,fc);
%noise = 0.1*(randn(size(x))+1i*randn(size(x)));
estimator = phased.MUSICEstimator2D('SensorArray',array,...
    'OperatingFrequency',fc,...
    'NumSignalsSource','Property',...
    'DOAOutputPort',true,'NumSignals',2,...
    'AzimuthScanAngles',-90:.5:90,...
    'ElevationScanAngles',-90:.5:90);
[~,doas] = estimator(x);
plotSpectrum(estimator);
```

You haven't said what the problem is. Since you are just using MATLAB's pre-packaged functions, I suggest you read the MATLAB documentation (which is typically quite good).
You seem to be getting confused between azimuth, elevation and range. In its basic form, the two dimensions in "2D" MUSIC are azimuth and elevation. (Or it can be re-parameterized into a different coordinate system, such as cone angles).
However, range is significantly different. Range is not a direction. You can still use the MUSIC algorithm to estimate delay (and therefore range), but the formulation is completely different.
One way of estimating both direction and range is using the concept of the "spatio-temporal array manifold" (google can give you plenty of information about that). However, this is very sub-optimal and better results are achieved (with much less computational effort) by simply splitting the problem into two 1D searches. First estimate range (using any method you like - you can even use MUSIC to search over a "temporal" manifold). Then you can isolate the signal subspace associated with each delay (range) using geometric projections and, for each discrete range, perform a 1D azimuth search over the spatial manifold to obtain a direction estimate.
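The first of those two 1D searches (range/delay via MUSIC over a "temporal" manifold) can be sketched in Python/NumPy. This is a minimal toy model, not code from this thread: it assumes delays act as circular shifts of a known spreading sequence within one code period, and all names and parameters (code length, delays, noise level) are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy parameters (all hypothetical): length-32 code, two targets at delays 3 and 9
n_c = 32                                   # spreading-code length
true_delays = [3, 9]
n_snap = 200                               # number of code periods (snapshots)

code = rng.choice([-1.0, 1.0], size=n_c)   # random binary spreading sequence

# Received data: each snapshot is a sum of shifted codes with random complex
# amplitudes plus a little noise (delays modelled as circular shifts here)
X = np.zeros((n_c, n_snap), dtype=complex)
for d in true_delays:
    amps = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
    X += np.outer(np.roll(code, d), amps)
X += 0.01 * (rng.standard_normal(X.shape) + 1j * rng.standard_normal(X.shape))

# MUSIC over the "temporal" manifold: the steering vector for delay d is the
# d-shifted code, and we scan over d instead of over an angle
R = X @ X.conj().T / n_snap                # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(R)       # eigenvalues in ascending order
E_n = eigvecs[:, : n_c - len(true_delays)] # noise subspace
spectrum = np.array([1.0 / np.linalg.norm(E_n.conj().T @ np.roll(code, d)) ** 2
                     for d in range(n_c)])

est = np.sort(np.argsort(spectrum)[-2:])   # the two largest peaks
print(est)                                 # -> [3 9]
```

The same scan structure works for the azimuth step: only the manifold changes (spatial steering vectors instead of shifted codes).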

Hi dear weetabixharry!
If I want to estimate the range only, using the MUSIC algorithm, how would I estimate it in MATLAB? Can you write the code here, please? Further, if I want to estimate the amplitudes of the incoming signals, how would I do that?
Regards,

hello @
For estimating range and azimuth, do you know of any advantage of using the spatio-temporal array manifold over splitting the problem into two 1D searches? I have a hard time finding a comparison between these two techniques online. For instance, (Manokhin, 2015) and (Belfiori, 2012) don't mention this two-1D-searches technique, I believe.

@corvoattano I don't know of any advantage of using the spatiotemporal array manifold (and I cannot imagine one). In all the cases I ever analyzed, splitting into two 1D searches yielded better performance with less computation.
The key to understanding this is that the number of spatiotemporal snapshots is smaller than the number of ordinary (spatial) snapshots by a factor of Nc (where Nc is the code length of the spreading sequence).
So, you can't just think that the spatiotemporal manifold increases the length of the manifold vector for free. It costs a proportional drop in the number of snapshots.
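To make that trade-off concrete, here is a tiny numeric sketch (the sensor count, code length, and sample count are made-up values, not from the thread):

```python
# Toy illustration of the snapshot trade-off (all numbers hypothetical):
# N sensors, spreading-code length Nc, and L received sample vectors in total.
N, Nc, L = 8, 31, 3100

# Ordinary (spatial) processing: every sample vector is one snapshot
spatial_manifold_len = N       # length of the (spatial) manifold vector
spatial_snapshots = L          # number of snapshots

# Spatiotemporal processing: one snapshot per code period
st_manifold_len = N * Nc       # manifold vector is Nc times longer...
st_snapshots = L // Nc         # ...but there are Nc times fewer snapshots

print(spatial_manifold_len, spatial_snapshots)  # -> 8 3100
print(st_manifold_len, st_snapshots)            # -> 248 100
```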

Hello @
I am trying to implement range-angle estimation for FMCW MIMO radar data (a fast-time x channels matrix) using your proposed technique (range search, then azimuth search with MUSIC). I know how to estimate the target range bins (range search). However, I am not sure how to isolate the signal subspace associated with each detected target range bin: I don't know how to use the target range bin information to estimate the specific azimuth MUSIC spectrum that corresponds to that range bin.
An idea (not working): to estimate a MUSIC spectrum corresponding to a target range bin, I thought about selecting all subspace vectors (obtained from an eigendecomposition of the sample covariance matrix) except one from the signal subspace (so the other signal-subspace vectors would be treated as part of the noise subspace). Doing this for each of the K signal-subspace vectors (corresponding to the K largest eigenvalues), I could obtain K different azimuth MUSIC spectra. However, with this technique I would not know which MUSIC spectrum corresponds to which target range bin, so linking target range bins to target azimuth bins would not be possible.
Should the sample covariance matrix be calculated differently for each target range bin? I do not know what operation should be applied to the original signal to obtain such a matrix.
Could you give me an indication on how to implement the signal subspace isolation depending on target range?
Thanks in advance for any input on this.

I'm not sure if I fully understand. However, regarding this part:
I am not sure how to isolate the signal subspace associated with each found target range bin
My post above (July 12 2022) said this:
First estimate range (using any method you like - you can even use MUSIC to search over a "temporal" manifold). Then you can isolate the signal subspace associated with each delay (range) using geometric projections and, for each discrete range, perform a 1D azimuth search over the spatial manifold to obtain a direction estimate.
So, I'll try to break that down a bit. Let's assume we've performed some kind of delay estimation, which has detected targets in three range bins: [3, 5, 6]. That means our next step will be to extract those three subspaces and then perform ordinary azimuth estimation in each case.
How do we extract each of those subspaces? There are many ways, but I will try to describe a "superresolution beamformer" type method.
We start by constructing a matrix C whose columns are all the possible time-shifts of the zero-padded spreading sequence, c:
C = [c, shift(c, 1), shift(c, 2), shift(c, 3), ... etc]
To isolate range bin 3, we start by removing the shift(c, 3) column from C. Let's call the resulting matrix C3.
To form the superresolution beamformer, we then need to project shift(c, 3) onto the complement subspace of C3. We do that by generating the complement projection matrix (of C3) and right-multiplying it with shift(c, 3). The resulting vector is the "superresolution beamformer" that we need to apply to our received data to extract delay 3.
Note: Your received data needs to be rearranged carefully into an appropriate matrix to apply the "superresolution beamformer" to. The exact rearrangement depends on the TX signal design. After applying the beamformer, ordinary azimuth estimation can be done.
Then we need to repeat the above steps for delay 5 and delay 6.
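The steps above can be sketched in Python/NumPy. This is a toy illustration, not the author's code: the spreading sequence, its length, and the padding are invented, and `shift` implements the zero-padded (non-circular) delay described in the post:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical length-16 spreading sequence, zero-padded to 24 samples so that
# delayed copies (up to 8 samples of delay) fit inside one observation window
n_c, n_pad = 16, 24
c = np.zeros(n_pad)
c[:n_c] = rng.choice([-1.0, 1.0], size=n_c)

def shift(v, k):
    """Delay v by k samples (zero-padded, non-circular shift)."""
    out = np.zeros_like(v)
    out[k:] = v[: len(v) - k]
    return out

# C: all possible time-shifts of the zero-padded code, one per column
max_delay = n_pad - n_c
C = np.column_stack([shift(c, k) for k in range(max_delay + 1)])

# Isolate range bin 3: remove the shift(c, 3) column from C ...
target_bin = 3
C3 = np.delete(C, target_bin, axis=1)

# ... and project shift(c, 3) onto the orthogonal complement of span(C3)
P_perp = np.eye(n_pad) - C3 @ np.linalg.pinv(C3)  # complement projection matrix
w = P_perp @ C[:, target_bin]                     # the "superresolution beamformer"

# w rejects every other delay, but still responds to delay 3
print(np.max(np.abs(w @ C3)))     # numerically ~0
print(w @ C[:, target_bin] > 0)   # -> True
```

Using `np.linalg.pinv` keeps the projector well defined even if the shift matrix is poorly conditioned. Applying `w` to suitably rearranged received data then leaves only the contribution from range bin 3, ready for an ordinary 1D azimuth search.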

Note that instead of calculating that complicated superresolution beamformer, you could just use shift(c, 3) directly. Assuming the spreading sequence has excellent autocorrelation properties, the result should not be much worse.
Then the only tricky part is to carefully understand your signal model so you can construct the received signal matrix appropriately.

Thank you for your prompt response and your detailed explanation; I understand your method better now. I also found your article "Spatiotemporal Arrayed MIMO Radar: Joint Doppler, Delay and DOA Estimation" on an open repository, which seems to deal with exactly this issue. So I will try to adapt what you proposed to my FM-CW radar data and will post an update here when I have made more progress.
Thanks again.

Yes, that old document includes exactly what I described above. It uses a relatively complicated signal model which includes Doppler. If you don't care about Doppler, then you could remove that from the signal model and skip that manifold extension.
I recommend ignoring any sections related to performance bounds or virtual array modelling. They really don't belong in this paper, but they were included for reasons beyond my control.
I would send you the MATLAB code, but I accidentally deleted my PhD thesis and all source code some years ago. If you get stuck, I could probably code up a simplified example (even if I'm a bit rusty after not touching this topic for 11+ years).

I finally took some time to try and implement this technique.
I adapted the signal model of your paper for FM-CW signals. I simulated FM-CW data with \( N_c=64 \) samples along the fast-time axis (equiv. to the "spreading sequence" axis in your case) and \( N=16 \) channels without noise. First I estimated the target range bins using a modified MUSIC algorithm (my implementation of this seems to work ok). To estimate the azimuth profile for a given range bin l, I considered \( N_{cr}=512 \) range samples to evaluate the possible range response vectors (equiv. to the "temporal response vectors") to build the \( C_l \) matrix:
$$ C_l[p][q] = \frac{1}{\sqrt{N_c}}\exp\!\big(j\,2\pi\,(f_c + \alpha F[p])\,\tau[q]\big) $$
where \( \tau[q] = 2R[q]/c \) is the delay contribution due to range, \( R[q] \) is the range vector (of size \( N_{cr}=512 \)), and \( F[p] \) is the fast-time vector (of size \( N_c=64 \)), so the \( C_l \) matrix has size 64 x 512 (the other parameters are specific to FM-CW).
Then I applied your proposed steps on the \( C_l \) matrix to evaluate the beam vector for range bin l. Here are the Python lines of my script that perform this (full Python script attached):
```
# ... target range bins were estimated
# estimate azimuth angles for each target range bin
for r_val, mag_val, r_idx in zip(r_vect_sel, mag_r_sel, r_idx_max):
    fast_time_vect = np.arange(0, t_rep, t_rep / n_s)
    fast_time_grid, range_grid = np.meshgrid(fast_time_vect, r_vect_r, indexing="ij")  # size: 64 x 512
    tau = 2 * range_grid / c
    steering_vect = np.exp(2 * np.pi * 1j * (f_c * tau + alpha * fast_time_grid * tau)) / np.sqrt(n_s)  # size: 64 x 512
    # build the matrix containing all possible temporal response vectors
    c_vect = np.delete(steering_vect, r_idx, axis=1)  # size: 64 x 511
    # compute the complement projection matrix of the preceding matrix
    c_proj_mat = np.eye(c_vect.shape[0]) - c_vect @ np.linalg.inv(c_vect.conj().T @ c_vect) @ c_vect.conj().T  # size: 64 x 64
    # compute the super-resolution beamformer for the selected target range bin
    c_vect_pinv = c_proj_mat @ steering_vect[:, r_idx]  # size: 64
    # evaluate the beam vector for the selected range bin
    beam_vect_az = beam_matrix.T @ c_vect_pinv  # size: 16
    # then we may estimate the azimuth profile using a DOA technique ...
```
To test the technique I simulated two targets with different ranges and azimuth angles (with the same RCS), without noise. I tested several DOA methods to estimate the azimuth angles from the estimated beam vector. However, I did not obtain correct results: e.g. with IAA (whose implementation I validated beforehand), I obtained two peaks (at the correct angles, though) in both azimuth profiles, where I expected one peak per azimuth profile, so it seems that the target signals were not correctly separated.
I also tried calculating the beam vector directly from the range response vector at the selected range bin (i.e. without using the beamformer) and it yielded similar results.
I am thinking maybe the signal model isn't correct? Do you see where else I went wrong?
See Python script attached and figures of range and azimuth profiles.
Thank you in advance for any help and let me know if files were not attached correctly.
Attachments: test_2d_ra_est_hc_share.py, r_r-MUSIC.png, az_1.20m_IAA.png, az_0.70m_IAA.png