
to calculate time delay between two signals

Started by padma.kancharla October 26, 2011
robert bristow-johnson <rbj@audioimagination.com> wrote:

(snip)
> i guess not.  i understand the multipath problem, i was just assuming
> that the direct path (from the single signal source to the two
> microphones) is shorter and louder than the reflected paths.
I was just remembering how we learn in 3rd grade science class (or maybe 4th grade) that light goes in straight lines and sound doesn't. They can't teach diffraction at that point, though.

You can easily be in a place where the direct path involves lots of diffraction and absorption, while another path reflects off a large object, such as a building. That could be true for audio and, for people in big cities, for radio signals.

-- glen
(snip, someone wrote)
>> and their correlation function is a mess.  That happens due
>> to multipath and reverberations,
Reminds me of our library book sale, which is in a really large building (previously owned by the Navy, now by the city). They have a PA system and sometimes make announcements, talking really slowly. There are a few speakers around, but also large brick walls which make good reflections, and you really can't hear well at all.

I was wondering at the last sale: if you have the appropriate DSP equipment and some number of amplifiers and speakers, what would be the best way to make a system that is most audible in the room? I was wondering about enough speakers and delay modules to try to cancel out some of the reflections. It might be that sound-absorbing pads are cheaper, but maybe not.

-- glen
On Oct 28, 1:17 pm, robert bristow-johnson <r...@audioimagination.com>
wrote:
> On Oct 28, 1:16 pm, maury <maury...@core.com> wrote:
>
> > On Oct 28, 10:43 am, robert bristow-johnson
>
> > > i don't see how AMDF will work at all.  AMDF is more related to *auto*-
> > > correlation (operating on a single input signal), not the cross-
> > > correlation between two inputs.
>
> > Robert, it does work for estimating delay between a speech signal and
> > its delayed form, not two independent signals.
>
> so what if the delayed signal has a significant amount of attenuation
> besides the delay?  how well does AMDF work for that?
>
> r b-j
It worked fine. You still get a minimum at the point of (envelope) correlation. The AMDF isn't usually used this way, but when you're trying to "cram" a bunch of stuff in a 44-pin quad ASIC, the reduced complexity of the AMDF was needed.
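For anyone following along, here is a minimal sketch of the AMDF trick being described: slide one capture past the other, average the absolute differences at each candidate lag, and take the lag where that average bottoms out. The signal, the 37-sample delay, and the 0.6 attenuation below are made-up illustration values (with a noise-like signal standing in for speech), not anything from the ASIC design mentioned above.

import numpy as np

def amdf_delay(x, y, max_lag):
    """Estimate how far y lags x by finding the minimum of the average
    magnitude difference function (AMDF) over candidate lags."""
    n = len(x)
    amdf = np.empty(max_lag + 1)
    for lag in range(max_lag + 1):
        overlap = n - lag
        # Average |x[k] - y[k + lag]| over the overlapping samples.
        amdf[lag] = np.mean(np.abs(x[:overlap] - y[lag:lag + overlap]))
    return int(np.argmin(amdf)), amdf

# Toy test: a noise-like signal, delayed 37 samples and attenuated to 0.6,
# with a little independent noise added on top.
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
d = 37
y = np.zeros_like(x)
y[d:] = 0.6 * x[:-d]
y += 0.05 * rng.standard_normal(x.size)

est, _ = amdf_delay(x, y, max_lag=100)
print("estimated delay:", est, "samples")   # expect 37 despite the attenuation

Even with the attenuation, the AMDF still shows a clear minimum at the true lag; it just doesn't dip all the way to zero there.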
On Oct 28, 10:52 am, robert bristow-johnson
<r...@audioimagination.com> wrote:
> On Oct 28, 1:06 pm, dbd <d...@ieee.org> wrote:
> > So Hardy is right that cross correlation gives you the delay (lots of
> > them, from many effects) and Maurice is right that further processing
> > is required to produce useful results.
>
> well, i didn't see what maury was saying about further processing.
I took "further processing" than cross correlation to be additional processing not necessarily just later processing. Use of the start of the envelope is one I have used. It requires that you have some method of identifying signal present as different from background level so that you can select the "front end" of the envelope to avoid multipath and reverberation. Towards this end, I've used time domain processing to identify wideband signals in parallel with frequency domain processing to identify narrowband signals. There can be a lot of "further processing" so the system is far more than a simple cross correlator. Even the conversion of time delay to direction of arrival can involve further processing to avoid needing to measure propagation velocity directly. Dale B. Dalrymple
On Fri, 28 Oct 2011 18:34:49 +0000 (UTC), glen herrmannsfeldt
<gah@ugcs.caltech.edu> wrote:

>robert bristow-johnson <rbj@audioimagination.com> wrote:
>
>(snip)
>> i guess not. i understand the multipath problem, i was just assuming
>> that the direct path (from the single signal source to the two
>> microphones) is shorter and louder than the reflected paths.
>
>I was just remembering, how we learn in 3rd grade science class
>(or maybe 4th grade) that light goes in straight lines, and
>sound doesn't. They can't teach diffraction at that point, though.
>
>You can easily be in a place where the direct path involves lots
>of diffraction and absorption, where another path reflects off
>a large object, such as a building. That could be true for audio
>and, for people in big cities, for radio signals.
>
>-- glen
In EM propagation we call that the "urban canyon" environment. It's one of the most aggressive multipath cases, especially when you're mobile.

Eric Jacobsen
Anchor Hill Communications
www.anchorhill.com