I am working on beamforming of a high-frequency (>400 kHz) linear array for imaging sonar. I have gathered sensor data from an 80-channel linear array submerged halfway in an 8 m deep water tank, with a transmitting probe 6 m away at the same depth. I recorded multiple pulses and performed offline analysis on them. The problem I am facing is very high sidelobe levels in the beam pattern developed from the recorded data. Can anyone kindly guide me on the possible causes of these high sidelobe levels?
Thanks & Regards
By far the easiest way to reduce the sidelobes of a delay-and-sum beamformer for a linear array is to implement array shading, essentially weighting each sensor by a different amount. Shading has all the usual properties of a time-domain window: more sidelobe suppression at the cost of a wider beam width.
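To illustrate the trade-off, here is a small sketch comparing uniform weighting against Chebyshev shading for a broadside beam pattern. Only the 80 channels come from the thread; the half-wavelength spacing, the -40 dB design level, and the use of NumPy/SciPy are my own assumptions.

```python
import numpy as np
from scipy.signal import windows

N = 80                        # elements (from the post)
d_over_lam = 0.5              # assumed half-wavelength spacing
u = np.linspace(-1, 1, 4001)  # u = sin(theta), the visible region

def pattern_db(w):
    """Broadside beam pattern |sum_n w_n exp(j 2 pi n (d/lambda) u)|, normalised to 0 dB."""
    n = np.arange(N)
    B = np.abs(np.exp(1j * 2 * np.pi * d_over_lam * np.outer(u, n)) @ w)
    return 20 * np.log10(B / B.max())

p_uniform = pattern_db(np.ones(N))
p_cheb    = pattern_db(windows.chebwin(N, at=40))  # -40 dB Chebyshev shading

# compare peak sidelobe levels well outside both main lobes
side = np.abs(u) > 0.05
```

The Chebyshev-shaded pattern holds its sidelobes at the design level, at the price of a main lobe somewhat wider than the uniform one.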
For a linear array the generalised sidelobe canceller can be very effective, especially if a real-time implementation is required. The generalised sidelobe canceller is essentially an unconstrained implementation of MVDR.
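For reference, the MVDR weights themselves take only a few lines. This is a generic narrowband toy example, not anything from the post: a 16-element half-wavelength array, simulated interference-plus-noise snapshots, a sample covariance with diagonal loading, and the standard distortionless normalisation.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16        # elements (toy value, not from the post)
d = 0.5       # spacing in wavelengths (assumed)

def steer(u):
    """Steering vector of a uniform linear array toward direction u = sin(theta)."""
    return np.exp(1j * 2 * np.pi * d * np.arange(N) * u)

a_look = steer(0.0)   # look direction (broadside)
a_int  = steer(0.3)   # interferer direction

# simulate snapshots containing only interference + noise
K = 2000
s_int = np.sqrt(10) * (rng.standard_normal(K) + 1j * rng.standard_normal(K))
noise = rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))
x = a_int[:, None] * s_int[None, :] + noise

# sample covariance with diagonal loading for numerical robustness
R = x @ x.conj().T / K + 1e-3 * np.eye(N)

# MVDR: w = R^-1 a / (a^H R^-1 a), i.e. unit gain toward the look direction
w = np.linalg.solve(R, a_look)
w = w / (a_look.conj() @ w)
```

The resulting weights keep unit gain at broadside while placing a deep null on the interferer, which is exactly the adaptive behaviour the data-independent delay-and-sum beamformer lacks.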
As to why you are seeing high sidelobe levels, my guess would be that you are using a data-independent beamformer. Delay-and-sum or filter-and-sum beamformers that are data independent often pass a lot of energy through the sidelobes, especially at higher frequencies (where the wavelength is much smaller than the array length).
Thanks for the reply. I am using Chebyshev array shading with a delay-and-sum beamformer in the frequency domain.
To be honest I am not an expert in the field, but I'll do my best.
Based on the (somewhat limited) info you provide, I will assume that your array is a uniform linear array. Secondly, I am not sure exactly how you perform your beamforming (is it delay-and-sum, MVDR, LCMV, or some other formulation?), so I am not sure how much help I can provide.
Nevertheless, there is always a possibility of high-level sidelobes depending on the inter-element distance of your sensors and the recorded frequency. For more information on that you can consult various books in the literature*. One good example can be seen in the following image (taken from Optimum Array Processing, Part IV of Detection, Estimation, and Modulation Theory by Harry L. Van Trees), where you can see the beam pattern in the "visible region" of the ω-k (omega-k) space.
As you can see, increasing the frequency makes more lobes show up: once you go above d = λ (where d is the inter-element distance and λ the wavelength), the "new" sidelobes enter the visible region and will be present in your beam pattern.
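This is easy to verify numerically. The sketch below (my own toy parameters: a 20-element uniform array with uniform weights) computes the broadside pattern over the visible region for d = λ/2 and for d = 1.5λ; in the second case full-height extra lobes appear away from broadside.

```python
import numpy as np

N = 20                         # elements (toy value)
u = np.linspace(-1, 1, 2001)   # visible region: u = sin(theta)

def pattern_db(d_over_lam):
    """Uniformly weighted broadside pattern in dB for a given spacing d/lambda."""
    n = np.arange(N)
    B = np.abs(np.exp(1j * 2 * np.pi * d_over_lam * np.outer(u, n)) @ np.ones(N)) / N
    return 20 * np.log10(np.maximum(B, 1e-12))

p_half = pattern_db(0.5)   # d = lambda/2: no grating lobes in the visible region
p_wide = pattern_db(1.5)   # d = 1.5 lambda: grating lobes appear at u = +/- 1/1.5
```

With d = 1.5λ, the pattern reaches 0 dB again near u = ±0.67, i.e. a grating lobe as strong as the main lobe, while the d = λ/2 pattern stays at normal sidelobe levels everywhere off broadside.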
There are quite a few methods you can use to try to avoid this phenomenon (such as LCMV, and possibly others I am not aware of), and (in my opinion) the simplest is to filter the recorded signal in order to avoid such "aliasing" (here I use the term somewhat loosely).
I hope that was of some help.
All the best,
* Literature:
- One of my favorites, though it is very general on array processing and does not cover only beamforming: Optimum Array Processing (Part IV of Detection, Estimation, and Modulation Theory), by Harry L. Van Trees.
- Others are "Beamforming Techniques for Spatial Filtering" by Barry Van Veen et al., "Sensor Array Signal Processing" by Prabhakar S. Naidu, and possibly more I am not aware of.
Thanks very much for the detailed reply. I am not facing a grating-lobe issue: my array's coverage area is very narrow (<15 deg), and theoretically no grating lobes can occur in this sector given my operating frequency and the inter-element spacing of the array.
test setup sketch.docx — a rough sketch of the test setup is attached. I performed baseband conversion for each channel, applied focusing coefficients corresponding to the 6 m distance to the complex output of each channel (to compensate for the near field), and then performed delay-and-sum beamforming in the frequency domain on the data chunk containing the pulse. Then I performed an IFFT on the beamformed output and plotted the beam levels (on a log scale) at the time instants of the received pulse.
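For comparison, here is how I would sketch that pipeline: geometric focusing delays for an on-axis 6 m focal point, delays applied as a linear phase across frequency, coherent sum, then an inverse FFT. Everything except the 80 channels and the 6 m range is my own assumption (sound speed, sampling rate, carrier, element spacing), and this version operates on real passband data rather than basebanded data.

```python
import numpy as np

c  = 1500.0     # sound speed in water, m/s (assumed)
fs = 2.0e6      # sampling rate, Hz (assumed)
f0 = 450.0e3    # carrier, Hz (the post only says > 400 kHz)
N  = 80         # channels (from the post)
d  = c / f0 / 2 # assumed half-wavelength element spacing
R  = 6.0        # focal range, m (from the post)

x_n = (np.arange(N) - (N - 1) / 2) * d   # element positions along the array
r_n = np.sqrt(R**2 + x_n**2)             # exact range from each element to the focal point
tau = (r_n - R) / c                      # extra travel time per element (near-field curvature)

def focus_and_sum(data, weights):
    """data: (N, samples) real passband channel data; returns the beamformed trace.
    The focusing delay is applied directly as the linear phase exp(+j 2 pi f tau_n),
    advancing each channel by tau_n, so no wrapped phase ever has to be interpreted."""
    nfft = data.shape[1]
    X = np.fft.rfft(data, axis=1)
    f = np.fft.rfftfreq(nfft, 1.0 / fs)
    X *= np.exp(1j * 2.0 * np.pi * f[None, :] * tau[:, None])
    y = (weights[:, None] * X).sum(axis=0)
    return np.fft.irfft(y, n=nfft)
```

On a simulated pulse launched from the focal point, this sums all channels coherently, so the beamformed peak grows by roughly the channel count relative to a single channel.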
Hmmm, this seems "normal" to me. One subtle point I would like to make (an issue I ran into myself when working with beamformers) is to make sure you implement the delay in the frequency domain correctly: the phase should be unwrapped when calculating delays, especially for high frequencies and long arrays.
From the fact that you seem to be knowledgeable on the matter, I assume you have taken care of that, but I just wanted to mention it (I am one of those people who can make such mistakes even after quite a few years working on an issue :D).
Well, phase unwrapping is not beamforming-specific. It is just a way to interpret phase.
For example, if you have two sensors that are, let's say, 3.5λ apart and you measure the wrapped phase difference between them, it will be 180 degrees at this specific frequency. But the actual/unwrapped phase difference is 1260 degrees (3.5 × 360), which, translated to time, gives a different delay than 180 degrees does at the same frequency. So the delay introduced in order to achieve the "desired" result will be wrong.
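The arithmetic in that example can be checked in a few lines of plain NumPy (the 450 kHz used to convert phase to delay is just an example frequency I picked, in line with the post's >400 kHz operating band):

```python
import numpy as np

d_over_lam = 3.5
true_deg = 360.0 * d_over_lam   # actual phase difference: 1260 degrees
# what a single phase measurement gives: the value wrapped into (-180, 180]
wrapped_deg = np.angle(np.exp(1j * np.deg2rad(true_deg)), deg=True)

# translated to delay at some frequency f, the two disagree by 3 full periods
f = 450.0e3                              # example frequency (assumed)
delay_true    = true_deg / 360.0 / f     # 3.5 periods
delay_wrapped = wrapped_deg / 360.0 / f  # 0.5 periods (up to sign)
```

The wrapped measurement comes out as ±180 degrees, so a delay computed naively from it is off by three full carrier periods, which is exactly the unwrapping error described above.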
Thanks for the clarification. I think the phase-unwrapping issue will affect a time-domain implementation of the beamformer, where time delays are calculated from phase coefficients (kindly correct me if I am wrong).
One more observation I want to share: during individual channel analysis, the phases of the array sensors w.r.t. the reference sensor change drastically from ping to ping (for the same setup). For example, if the phase of channel 10 is 80 degrees w.r.t. reference channel 50 for one ping, on the next ping it changes dramatically to, say, 300 degrees, and on the ping after that it changes again to some other seemingly random value.
Can you kindly shed some light on this issue?
Well, as for the first part (the frequency-domain implementation), I'll be honest and say that I am not familiar with the method, so I cannot add any insight there.
Regarding the phase phenomenon... absolutely no idea, but I'll look into it when I find some time (I will also consult some colleagues about it) and let you know if I find anything.
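In case it helps the investigation: one sanity check on those ping-to-ping phase jumps is to estimate each channel's phase relative to the reference from the cross-spectrum at the carrier bin, which averages over the whole pulse instead of relying on a single sample. This is only a sketch; the function name and all parameters in the test are my own assumptions.

```python
import numpy as np

def relative_phase_deg(ping, ref_ch, fs, f0):
    """Phase of every channel relative to ref_ch at carrier f0, in degrees.
    ping: (channels, samples) real passband data windowed around one pulse."""
    X = np.fft.rfft(ping, axis=1)
    k = int(round(f0 * ping.shape[1] / fs))   # FFT bin nearest the carrier
    cross = X[:, k] * np.conj(X[ref_ch, k])   # cross-spectrum vs the reference channel
    return np.rad2deg(np.angle(cross))
```

Running this on several consecutive pings, with the pulse window placed consistently each time, would at least show whether the relative phases are genuinely unstable or whether the instability comes from how each ping is being windowed and read out.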