On Friday, August 30, 2013 1:30:47 PM UTC-4, RIH7777 wrote:

> Making progress: I've worked out examples where if d > lambda/2, and I've steered the beamformer to theta = PI/2, then I can find another angle theta where the beamformer gives max output. Everything now seems almost completely consistent w/ the Nyquist requirement. The only exception I see: if d = lambda/2 and the beamformer is steered to theta = PI/2, then theta = -PI/2 gives max output. Is this to be expected for this simple beamformer? (I don't see this ambiguity if d = ((lambda/2) - epsilon), for very small epsilon). Thanks again

It's fairly easy to see where the lambda/2 requirement comes from. Just thinking in one dimension, with sensors spaced lambda/2 apart, a waveform of wavelength lambda will be sampled at exactly two places per cycle. Just as in time domain processing, if you sample precisely twice per cycle, and if you're sampling the Fs/2 (ie: highest) frequency, you will get some ambiguity as to magnitude and phase. Diagrams help to see this.
Check your math if you're getting results that don't seem right. Sometimes it's due to using the wrong function (eg: atan vs. atan2, etc.), or just some round-off error creeping into the calculations.
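To make the quoted d = lambda/2 endfire case concrete, here is a small Python sketch (my own; the `response` function is just the standard two-element delay-sum magnitude response, and the d/lambda values are illustrative, not from the thread):

```python
import numpy as np

# Magnitude response of a two-element delay-sum beamformer steered to theta0:
#   R(theta) = |1 + exp(j*2*pi*(d/lambda)*(sin(theta) - sin(theta0)))|
# The function name and the d/lambda values are illustrative choices.
def response(theta, theta0, d_over_lambda):
    phase = 2 * np.pi * d_over_lambda * (np.sin(theta) - np.sin(theta0))
    return np.abs(1 + np.exp(1j * phase))

theta0 = np.pi / 2                            # steered to endfire (+90 degrees)

# d = lambda/2: theta = -pi/2 also reaches the maximum of 2, because the
# differential phase there is exactly -2*pi.
print(response(-np.pi / 2, theta0, 0.5))      # ~2.0: ambiguous

# d slightly below lambda/2: the -pi/2 response falls off the maximum.
print(response(-np.pi / 2, theta0, 0.45))     # ~1.90
```

So the +/-90 degree ambiguity at exactly d = lambda/2 is real, and vanishes for any spacing strictly below it.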

Reply by RIH7777●August 30, 2013

Making progress: I've worked out examples where if d > lambda/2, and I've steered the beamformer to theta = PI/2, then I can find another angle theta where the beamformer gives max output. Everything now seems almost completely consistent w/ the Nyquist requirement.
The only exception I see: if d = lambda/2 and the beamformer is steered to theta = PI/2, then theta = -PI/2 gives max output. Is this to be expected for this simple beamformer? (I don't see this ambiguity if d = ((lambda/2) - epsilon), for very small epsilon).
Thanks again

Reply by kevin●August 30, 2013

On Thursday, August 29, 2013 10:31:30 PM UTC-4, RIH7777 wrote:

> Thx much for responding, and this makes sense. But I'm still not seeing one thing: for this simple beamformer, it seems we only get problems if d >= 2*lambda. In this case I can see that multiple source angles give identical (+/- 2*PI radians) differential phase shifts. However, I don't see a problem if e.g. d = lambda. (The Nyquist criterion being d <= lambda/2). I understand in the general case that spatial sampling works just like time domain sampling and the Nyquist criterion must be met for accurate reconstruction. But I'm not seeing a problem for this beamformer unless d >= 2*lambda. Am I missing something? Or is the Nyquist criterion more relaxed for this simple beamformer? Is there a simple example where I can see the same differential phase shift for 2 source angles with d < 2*lambda?

Consider two microphones situated on an x-axis. If the microphone spacing is one lambda, then that's one wavelength. So if a wave arrives from the left side, the outputs of the two microphones will be identical (just think of a sine wave that's measured one wavelength apart). Similarly, if the sinusoid marches in from the right, you'll also get identical waveforms out of each microphone. So you have ambiguity if the wave comes from the extreme left, or the extreme right.
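That one-wavelength case is easy to check numerically. In this sketch (the sample rate, tone frequency and duration are arbitrary values of mine, not Kevin's), the inter-mic delay for a wave from either +90 or -90 degrees is exactly one period, so the two channels coincide either way:

```python
import numpy as np

# Two mics one wavelength apart on the x-axis. The inter-mic delay for a wave
# arriving from +/-90 degrees is +/- one full period, so the second channel
# looks the same from either direction. fs, f and duration are assumed values.
fs, f, c = 48000.0, 1000.0, 343.0    # sample rate, tone frequency, speed of sound
lam = c / f                          # wavelength
d = lam                              # mic spacing = one wavelength
t = np.arange(0, 0.01, 1 / fs)

def mic2(theta):
    # mic2 output relative to mic1 for a plane wave arriving from angle theta
    tau = d * np.sin(theta) / c
    return np.sin(2 * np.pi * f * (t - tau))

from_right = mic2(+np.pi / 2)        # tau = +1/f (one full period)
from_left = mic2(-np.pi / 2)         # tau = -1/f
print(np.allclose(from_right, from_left))   # True: left and right are indistinguishable
```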
Also keep in mind that in the time domain, your ability to measure time delays (and thus, spatial angles) is limited by your sampling rate (or, more accurately, your sample rate and SNR).
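As a rough illustration of that sample-rate limit (all signal parameters below are made up), a basic cross-correlation delay estimate only resolves delays to whole samples:

```python
import numpy as np

# Cross-correlation delay estimate between two sampled channels. The estimate
# is quantized to whole samples, so time-delay (and hence angle) resolution is
# limited by the sample rate. All signal parameters here are assumed values.
fs, f = 8000.0, 500.0
t = np.arange(0, 0.05, 1 / fs)
true_tau = 3.4 / fs                          # true delay: 3.4 samples, not an integer
x = np.sin(2 * np.pi * f * t)
y = np.sin(2 * np.pi * f * (t - true_tau))

xcorr = np.correlate(y, x, mode='full')
lag = int(np.argmax(xcorr)) - (len(x) - 1)   # estimated delay in whole samples
print(lag)                                   # 3: the 0.4-sample remainder is lost
```

Interpolating around the correlation peak (or improving SNR and averaging) is how practical systems get below one-sample resolution.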
Things can get a little confusing because you're working in both the spatial domain and the time domain (and sometimes in the frequency domain).
A few minutes of searching 'delay-sum' and 'graphic descriptions' yielded these sites:
http://www.labbookpages.co.uk/audio/beamforming/delaySum.html
http://cnx.org/content/m12570/latest/
http://cnx.org/content/m12564/latest/
Perhaps someone can suggest others.
Kevin McGee

Reply by RIH7777●August 29, 2013

Thx much for responding, and this makes sense. But I'm still not seeing one thing: for this simple beamformer, it seems we only get problems if d >= 2*lambda. In this case I can see that multiple source angles give identical (+/- 2*PI radians) differential phase shifts. However, I don't see a problem if e.g. d = lambda. (The Nyquist criterion being d <= lambda/2).
I understand in the general case that spatial sampling works just like time domain sampling and the Nyquist criterion must be met for accurate reconstruction. But I'm not seeing a problem for this beamformer unless d >= 2*lambda.
Am I missing something? Or is the Nyquist criterion more relaxed for this simple beamformer? Is there a simple example where I can see the same differential phase shift for 2 source angles with d < 2*lambda?

Reply by ●August 27, 2013

On Monday, August 26, 2013 3:26:38 PM UTC-4, RIH7777 wrote:

> I'm trying to understand how spatial aliasing will give problems with a simple delay/sum beamformer.
>
> I understand the principle of aliasing/Nyquist rate.
>
> Let's take the simplest case: a linear microphone array with 2 mics located d meters apart along the x axis. The sound source is far field and monochromatic with wavelength lambda. Theta is the angle between the mic array and the sound source. (Theta = 0 ---> direction of wave propagation is in the -y direction. Theta = 90 ---> wave propagates in the -x direction). 0 < theta < 90.
>
> Mic1 is located at x = 0. Mic2 is located at x = d. Mic2's output is delayed by tau = d*sin(theta)/c, (where c is the speed of sound), and then added to Mic1's output. The final output is M(t). If there is no spatial aliasing then M(t) will be maximum for a sound source at direction theta, and smaller for a sound source at all other directions.
>
> To avoid spatial aliasing, d <= (lambda/2).
>
> My understanding is:
>
> Assumption 1.) If d > (lambda/2), then the beamformer will give max output for source angles other than theta.
>
> Question 1.) Is Assumption 1 correct?
>
> (I cannot get this result when I play with the equations involved.)
>
> Question 2.) If the assumption is correct, can someone give me an example of this happening? If the assumption is wrong, please explain how spatial aliasing will cause the beamformer to 'malfunction'.
>
> Thanks a lot

Your assumption is correct.
As you vary the source angle, the path length to each microphone changes. The difference in path lengths will go from 0 when the source is directly on-axis to a positive or negative max value when the source moves to the far right or far left. If the source wavelength is small relative to the mic spacing, there will be multiple source angles where the path-length difference results in a differential phase shift that is a multiple of 2*PI radians, resulting in the maximum possible summed output. The use of a delay behind one microphone serves to rotate the entire pattern so that the main lobe is pointing where you want.
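Bob's multiple-maxima point can be sketched numerically. In this example of mine (the spacing d = 1.5*lambda and the 20 degree steering angle are made-up values, not from the thread), the two-element response peaks wherever (d/lambda)*(sin(theta) - sin(theta0)) is an integer, and for d > lambda/2 more than one angle satisfies that:

```python
import numpy as np

# Two-element delay-sum response
#   R(theta) = |1 + exp(j*2*pi*(d/lam)*(sin(theta) - sin(theta0)))|
# is maximal wherever (d/lam)*(sin(theta) - sin(theta0)) is an integer m;
# the m != 0 solutions are the aliases (grating lobes).
d_over_lam = 1.5
theta0 = np.deg2rad(20.0)

m = np.arange(-3, 4)
s = np.sin(theta0) + m / d_over_lam       # candidate sin(theta) values
lobes = np.arcsin(s[np.abs(s) <= 1])      # keep only physically realizable angles
print(np.round(np.rad2deg(lobes), 1))     # about -82.4, -18.9 and 20.0 degrees

# Numerical check: the response really hits the maximum of 2 at every lobe.
R = np.abs(1 + np.exp(2j * np.pi * d_over_lam * (np.sin(lobes) - np.sin(theta0))))
print(np.allclose(R, 2.0))                # True
```

For d <= lambda/2 only the m = 0 solution survives over the visible region (apart from the exact endfire case discussed earlier in the thread).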
Note that the main lobe will be in the same direction for all source frequencies, whereas the locations of the "aliases" will vary with source frequency, so in the case of a broadband source you will not have a flat response once you move away from the main lobe.
Bob

Reply by RIH7777●August 26, 2013

I'm trying to understand how spatial aliasing will give problems with a simple delay/sum beamformer.
I understand the principle of aliasing/Nyquist rate.
Let's take the simplest case: a linear microphone array with 2 mics located d meters apart along the x axis. The sound source is far field and monochromatic with wavelength lambda. Theta is the angle between the mic array and the sound source. (Theta = 0 ---> direction of wave propagation is in the -y direction. Theta = 90 ---> wave propagates in the -x direction). 0 < theta < 90.
Mic1 is located at x = 0. Mic2 is located at x = d. Mic2's output is delayed by tau = d*sin(theta)/c, (where c is the speed of sound), and then added to Mic1's output. The final output is M(t). If there is no spatial aliasing then M(t) will be maximum for a sound source at direction theta, and smaller for a sound source at all other directions.
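(A minimal numerical sketch of this setup, for anyone who wants to experiment; the speed of sound and source frequency below are assumed values, and the peak amplitude of M(t) follows from summing two equal sinusoids with relative delay tau_source - tau_steer:)

```python
import numpy as np

# Mic1 at x = 0, mic2 at x = d; mic2 is delayed by tau = d*sin(theta)/c and
# summed with mic1. c and f are assumed example values.
c, f = 343.0, 1000.0             # speed of sound (m/s), source frequency (Hz)
lam = c / f
d = lam / 2                      # spacing at the Nyquist limit

def peak_output(theta_src, theta_steer):
    tau_src = d * np.sin(theta_src) / c        # actual inter-mic delay
    tau_st = d * np.sin(theta_steer) / c       # delay applied behind mic2
    # peak of M(t) = sin(2*pi*f*t) + sin(2*pi*f*(t - (tau_src - tau_st)))
    return 2 * np.abs(np.cos(np.pi * f * (tau_src - tau_st)))

print(peak_output(np.deg2rad(30), np.deg2rad(30)))   # 2.0: steered at the source
print(peak_output(np.deg2rad(60), np.deg2rad(30)))   # ~1.68: off the main lobe
```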
To avoid spatial aliasing d <= (lambda/2).
My understanding is:
Assumption 1.) If d > (lambda/2), then the beamformer will give max output for source angles other than theta.
Question 1.) Is Assumption 1 correct?
(I cannot get this result when I play with the equations involved.)
Question 2.) If the assumption is correct, can someone give me an example of this happening? If the assumption is wrong, please explain how spatial aliasing will cause the beamformer to 'malfunction'.
Thanks a lot