DSPRelated.com
Forums

to calculate time delay between two signals

Started by padma.kancharla October 26, 2011
On Oct 28, 11:24 am, Vladimir Vassilevsky <nos...@nowhere.com> wrote:
> robert bristow-johnson wrote:
> > On Oct 27, 12:15 am, "steveu" <steveu@n_o_s_p_a_m.coppice.org> wrote:
> > > Isn't that only going to work well for clean signals from the source?
> > > The OP said these are signals from two mics, so they are going to have
> > > a lot of reverb mixed in, and the reverb will be quite different at
> > > each mic. As Vlad said, the cross correlation might look near to random.
> >
> > so you'll get noisy peaks in the auto-correlation (and there will also
> > be other sources from other angles that will have other path-length
> > differences - so then when you see a peak, that will be a legitimate
> > candidate for the time delay, but of a different source). but
> > (alternatively to what Vlad is saying) i would expect that, for a
> > source that is significantly louder than the competing sources
>
> The reality is that if you hear the same audio source from two different
> positions, those are going to be two very different signals

do they lose their property of correlation?

> and their correlation function is a mess. That happens due to multipath
> and reverberations,

yeah, that will result in correlations at different lags. at *longer*
lags.

> as well as because the same audio source behaves quite differently at
> different view angles. It is a naive idea to expect any accurate result
> from trivial AMDF or correlation approach.

i don't see how AMDF will work at all. AMDF is more related to
*auto*-correlation (operating on a single input signal), not the
cross-correlation between two inputs.

> DOA is no simple problem

i guess not. i understand the multipath problem, i was just assuming
that the direct path (from the single signal source to the two
microphones) is shorter and louder than the reflected paths.

r b-j
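[For readers following along: the basic cross-correlation delay estimate being discussed is easy to sketch in a few lines of numpy. The sample count, delay, gain, and noise level below are made-up illustration values, and white noise stands in for the common source.]

```python
import numpy as np

rng = np.random.default_rng(0)

true_delay = 37            # delay in samples (made up for illustration)

# White noise stands in for the common source signal
src = rng.standard_normal(4096)

# Mic 1 sees the source directly; mic 2 sees a delayed, attenuated copy
# plus some independent noise
x1 = src
x2 = 0.6 * np.roll(src, true_delay) + 0.05 * rng.standard_normal(4096)

# Cross-correlate over all lags; the lag of the biggest peak is the
# delay estimate
xc = np.correlate(x2, x1, mode="full")
lags = np.arange(-len(x1) + 1, len(x1))
est_delay = lags[np.argmax(xc)]

print(est_delay)   # recovers 37 in this clean, single-path case
```

This is the "clean" case the thread is arguing about: one direct path and modest noise. The multipath/reverberation objections above are exactly about what happens when those assumptions fail.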
On Oct 27, 8:51 pm, robert bristow-johnson <r...@audioimagination.com> wrote:
...
> > > > If you're going to use correlation, you need the envelope (or
> > > > something similar).
>
> that i don't completely understand.
>
> > > no it won't! Cross-correlation between two speech signals gives you
> > > the delay. This is well documented in the literature.
>
> i think the hardy soul is right. (but the two speech signals really
> oughta be the same speech signal with different delays and some kinda
> noise or error. they must have some common source or they won't
> correlate at any lag.)
When I was working for ISL we developed and delivered an intercept sonar
for the Swedish submarine fleet that used time delay estimation between
3 sensors to determine DOA of incoming active sonar pulses. For this
application, simple cross correlation of CW pulses gave multiple peaks.
For short pulses, selecting the highest peak as the time delay was
unreliable at low SNR. Using the envelope of the positive frequency
component of the short pulse results in a single broader peak that can
be smoothed before estimating the position of the peak.

For long pulses it was necessary to use only the front edge of the
envelope of the pulse for cross correlation, partly because the pulses
were too long for practical FFT sizes and partly because the trailing
edges were obscured by multipath and reverberation effects.

So Hardy is right that cross correlation gives you the delay (lots of
them, from many effects) and Maurice is right that further processing
is required to produce useful results.

Dale B. Dalrymple
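[A rough sketch of the envelope trick described here, with made-up pulse parameters; the carrier frequency, window, and polarity flip below are illustrative assumptions, not the actual sonar numbers. Raw cross-correlation of a tone burst keys on the carrier, so a polarity flip in one channel pulls its biggest peak half a carrier cycle off the true delay; correlating the envelopes of the positive-frequency components gives a single broad hump at the right place.]

```python
import numpy as np

fs = 48000                  # sample rate in Hz (assumed)
f0 = 6000                   # carrier: period is exactly 8 samples here
true_delay = 53

# A windowed tone burst standing in for a short CW pulse
m = np.arange(512)
pulse = np.hanning(512) * np.cos(2 * np.pi * f0 * m / fs)

x1 = np.zeros(4096)
x2 = np.zeros(4096)
x1[1000:1512] = pulse
# Second channel: delayed, attenuated, polarity-flipped copy
x2[1000 + true_delay:1512 + true_delay] = -0.7 * pulse

def envelope(x):
    # Envelope of the positive-frequency component: zero the negative
    # frequencies of the FFT, then take the magnitude of the result
    N = len(x)
    H = np.zeros(N)
    H[0] = H[N // 2] = 1
    H[1:N // 2] = 2
    return np.abs(np.fft.ifft(np.fft.fft(x) * H))

lags = np.arange(-len(x1) + 1, len(x1))

# Raw cross-correlation: the flip puts the biggest peak half a carrier
# cycle (4 samples) away from the true delay
raw_est = lags[np.argmax(np.correlate(x2, x1, mode="full"))]

# Envelope cross-correlation: one broad hump centered at the true delay
env_est = lags[np.argmax(np.correlate(envelope(x2), envelope(x1), mode="full"))]

print(raw_est, env_est)
```

The broad envelope hump is what can then be smoothed before estimating the peak position, as described above.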
On Oct 27, 10:51 pm, robert bristow-johnson <r...@audioimagination.com> wrote:
> On Oct 27, 4:36 pm, maury <maury...@core.com> wrote:
> > Of course. I'm still have the AMDF in mind. It's a lot cheaper,
> > computationally, but you must filter the signal first.
>
> i still don't get what you're saying, maury.
>
> but AM[D]F does not work so well getting the relative delay of two
> correlated signals. even if the [D]ifference was that of the two
> signals (not of the same signal at different delays), the AMDF ain't
> gonna minimize too well if the amplitudes (or attached gains) of the
> one signal and its delayed version are significantly different. but,
> for cross-correlation, different gains don't change anything except
> for the scaling of the whole thang. the relative peaks stay at the
> same relative values and at exactly the same lags.
>
> r b-j
Robert, I think you got it. The AMDF can be used as a cheap correlator.
BUT, for delay estimate of speech signals, you need the envelope to keep
it from keying in on the pitch. Even if the signals are of different
levels, you will still get the "null" (though not zero) using the
envelope.

Maurice
On Oct 28, 10:24 am, Vladimir Vassilevsky <nos...@nowhere.com> wrote:
> robert bristow-johnson wrote:
> > On Oct 27, 12:15 am, "steveu" <steveu@n_o_s_p_a_m.coppice.org> wrote:
Vlad, originally, we weren't talking about DOA, nor reverberation, nor multipath. The original question was how to get time delay between two signals.
On Oct 28, 10:43 am, robert bristow-johnson <r...@audioimagination.com> wrote:
> i don't see how AMDF will work at all. AMDF is more related to
> *auto*-correlation (operating on a single input signal), not the
> cross-correlation between two inputs.
Robert, it does work for estimating delay between a speech signal and its delayed form, not two independent signals. The original post implied this situation.
On Oct 28, 1:06 pm, dbd <d...@ieee.org> wrote:
> So Hardy is right that cross correlation gives you the delay (lots of
> them, from many effects) and Maurice is right that further processing
> is required to produce useful results.
well, i didn't see what maury was saying about further processing.

what i *didn't* understand from maury was this thing about the AMDF. i
see (and wrote) how AMDF can be related to *auto*-correlation (AMDF
should minimize at lags where auto-correlation maximizes). but
auto-correlation is a single input signal operation (it's the
cross-correlation between a signal and itself). cross-correlation is a
two input operation and i do not see how AMDF could be adapted to
perform a similar operation unless somehow the two signals were of the
same energy (but i don't expect that to be the case for time delay
difference, the delayed signal is likely to be lower in energy).

but i fully understand and agree with the need for post-processing of
the output of AMDF, ASDF, auto-correlation, cross-correlation, the
output of an FFT, whatever. it's one thing to get this intermediate
data, it's another thing to tease outa this intermediate data a salient
parameter of interest. usually that requires some sorta "expert
systems" programming, not the sorta DSProgramming we normally do.

r b-j
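[A small numerical illustration of the AMDF/auto-correlation relationship mentioned here, on a synthetic "voiced" signal with an assumed 100-sample period: the AMDF dips at the lag where the auto-correlation peaks.]

```python
import numpy as np

rng = np.random.default_rng(0)
n = np.arange(2048)

# A crudely periodic "voiced" signal: exactly periodic with a
# 100-sample period, plus a little noise
x = np.sin(2 * np.pi * n / 100) + 0.3 * np.sin(4 * np.pi * n / 100)
x += 0.01 * rng.standard_normal(n.size)

frame = x[:1024]
lags = np.arange(20, 150)   # skip tiny lags, where anything matches itself

# AMDF: mean absolute difference between the frame and its shifted self
amdf = np.array([np.mean(np.abs(frame - x[k:k + 1024])) for k in lags])

# Auto-correlation over the same lags
acf = np.array([np.mean(frame * x[k:k + 1024]) for k in lags])

# The AMDF minimum and the auto-correlation maximum both land on the
# 100-sample period
print(lags[np.argmin(amdf)], lags[np.argmax(acf)])
```

Both operate on a single input signal, which is exactly the point being made: this says nothing yet about comparing two different inputs with different gains.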

robert bristow-johnson wrote:

> On Oct 28, 11:24 am, Vladimir Vassilevsky <nos...@nowhere.com> wrote:
>
>> The reality is that if you hear the same audio source from two different
>> positions, those are going to be two very different signals
>
> do they lose their property of correlation?
The envelope of the correlation function would be a hump several tens of
milliseconds wide. The underlying fine structure inside the hump is
pretty much random. The naive method won't make for any accurate
estimate.
>> and their correlation function is a mess. That happens due to multipath
>> and reverberations,
>
> yeah, that will result in correlations at different lags. at *longer*
> lags.
Different results for different signals as well.
>> as well as because the same audio source behaves quite differently at
>> different view angles. It is a naive idea to expect any accurate result
>> from trivial AMDF or correlation approach.
>
> i don't see how AMDF will work at all.
For AMDF, use 1-bit quantization (used to be common with radars), or use some sort of AGC to normalize amplitudes, or find a gain which makes for the best AMDF match.
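[A sketch of the 1-bit variant on synthetic signals; the delay, gain, and frame sizes below are arbitrary illustration values. After hard-limiting, the amplitude mismatch between the two channels drops out entirely and the cross-AMDF null falls at the true delay.]

```python
import numpy as np

rng = np.random.default_rng(0)
true_delay = 25

src = rng.standard_normal(4096)
x1 = src
x2 = 0.2 * np.roll(src, true_delay)   # delayed copy, roughly 14 dB down

# 1-bit quantization (as in old radar correlators): keep only the sign,
# so the gain difference between the channels disappears
s1 = np.sign(x1)
s2 = np.sign(x2)

max_lag = 100
frame = s2[max_lag:max_lag + 2048]

# Cross-AMDF on the hard-limited signals: |s1 - s2| is 0 where the sign
# patterns agree and 2 where they disagree
amdf = np.array([np.mean(np.abs(frame - s1[max_lag - k:max_lag - k + 2048]))
                 for k in range(max_lag)])

print(np.argmin(amdf))   # the AMDF null lands at the true delay
```

An AGC or an explicit gain search would achieve the same normalization while keeping more of the amplitude information.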
> AMDF is more related to *auto*-correlation (operating on a single input
> signal), not the cross-correlation between two inputs.
Why not.
>> DOA is no simple problem
>
> i guess not. i understand the multipath problem, i was just assuming
> that the direct path (from the single signal source to the two
> microphones) is shorter and louder than the reflected paths.
Heh. Been there. I got burned by that once. Won't fall into that mistake
again.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
On Oct 28, 1:16 pm, maury <maury...@core.com> wrote:
> On Oct 28, 10:43 am, robert bristow-johnson wrote:
> > i don't see how AMDF will work at all. AMDF is more related to
> > *auto*-correlation (operating on a single input signal), not the
> > cross-correlation between two inputs.
>
> Robert, it does work for estimating delay between a speech signal and
> its delayed form, not two independent signals.
so what if the delayed signal has a significant amount of attenuation
besides the delay? how well does AMDF work for that?

r b-j
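[A quick synthetic check of this question, with white noise standing in for the signal and an arbitrary 0.3 gain: for clean signals the cross-AMDF minimum still lands at the right lag, but the null bottoms out around 0.7 of the off-lag level instead of near zero, so noise can easily wash it out; the cross-correlation peak, by contrast, is only rescaled by the gain.]

```python
import numpy as np

rng = np.random.default_rng(0)
true_delay = 40

src = rng.standard_normal(4096)
x1 = src
x2 = 0.3 * np.roll(src, true_delay)   # delayed and attenuated copy

max_lag = 100
frame = x2[max_lag:max_lag + 2048]
shifts = [x1[max_lag - k:max_lag - k + 2048] for k in range(max_lag)]

# Cross-AMDF: at the matching lag the residual is |0.3*s - s| = 0.7*|s|,
# so the null is shallow rather than near zero
amdf = np.array([np.mean(np.abs(frame - s)) for s in shifts])

# Cross-correlation: the gain only scales the whole function, so the
# peak location is unchanged
xcorr = np.array([np.mean(frame * s) for s in shifts])

# Null depth: ratio of the AMDF at the true lag to its typical level
null_depth = amdf[true_delay] / np.median(amdf)
print(np.argmin(amdf), np.argmax(xcorr), round(null_depth, 2))
```

The shallow null (about 0.67 of the off-lag level here) is why the 1-bit or AGC normalization suggested elsewhere in the thread matters for AMDF but not for cross-correlation.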
Vladimir Vassilevsky <nospam@nowhere.com> wrote:

> robert bristow-johnson wrote:
(snip)
>> so you'll get noisy peaks in the auto-correlation (and there will also
>> be other sources from other angles that will have other path-length
>> differences - so then when you see a peak, that will be a legitimate
>> candidate for the time delay, but of a different source). but
>> (alternatively to what Vlad is saying) i would expect that, for a
>> source that is significantly louder than the competing sources
> The reality is that if you hear the same audio source from two different
> positions, those are going to be two very different signals and their
> correlation function is a mess. That happens due to multipath and
> reverberations, as well as because the same audio source behaves quite
> differently at different view angles. It is a naive idea to expect any
> accurate result from trivial AMDF or correlation approach. DOA is no
> simple problem; tons of books are written about it. IIRC doctor Rune was
> specializing at that; perhaps he could clarify.
Is cross-correlation of the absolute value or square better? It seems
that might reduce some phase effects that could confuse the correlation.

Otherwise, you hope that the amplitude of multipath and reverberation is
smaller than the main signal. I have been in places where sound
reflecting off a building was the most direct source from an audio
standpoint. (More direct than the diffraction around other objects.)

-- glen