DSPRelated.com
Forums

Problems after freezing FIR coefficients of Acoustic Echo Canceller

Started by johan kleuskens April 11, 2005
Hi Jerry,

Thank you for your input.

You're correct about the difference in sample frequency of the microphone and 
speaker when you look at it in an analog way. But if you put the played 
and the recorded wave files together in one stereo file and compare them on 
a sample-by-sample basis, you see a slight drift of the phase difference 
between the two when the playback and recording sample frequencies are 
different. This occurs, for example, when you play a file on soundcard A and 
record it on soundcard B. If you play and record on the same soundcard, the 
phase difference is constant.
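
For what it's worth, that drift check is easy to script. Here is a minimal 
Matlab sketch (not code from this thread; the file name is hypothetical, 
with channel 1 holding the played 1000 Hz tone and channel 2 the recording):

% Illustrative phase-drift check between the played and recorded channels
% of a stereo file containing a 1000 Hz tone. File name is an assumption.
[x, fs] = audioread('loopback_stereo.wav');
f0  = 1000;                               % tone frequency in Hz
n   = (0:size(x,1)-1).';
ref = exp(-2j*pi*f0*n/fs);                % complex reference at f0
d   = x .* [ref, ref];                    % demodulate both channels
blk = round(fs/10);                       % 100 ms analysis blocks
nb  = floor(size(x,1)/blk);
ph  = zeros(nb,2);
for k = 1:nb
    idx = (k-1)*blk+1 : k*blk;
    ph(k,:) = angle(sum(d(idx,:), 1));    % per-block phase of each channel
end
drift = unwrap(ph(:,2)) - unwrap(ph(:,1));
plot((0:nb-1)*blk/fs, drift)
xlabel('time (s)'), ylabel('phase difference (rad)')
% A flat curve means the play and record clocks are locked;
% a steady slope means the two sample rates differ slightly.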

With kind regards,

Johan Kleuskens

"Jerry Avins" <jya@ieee.org> wrote in message 
news:GdadnfDmEY0J2cbfRVn-vw@rcn.net...
> johan kleuskens wrote:
>> Hi Steve,
>>
>> The room is a closed office with no curtains or moving things in it,
>> and "as soon as" means immediately. For a long time we thought it was
>> caused by a difference in sample frequency of the speaker and the
>> microphone part of the PC soundcard we use. However, we did a test and
>> concluded that microphone and speaker sample frequency are synchronous.
>> We tested as follows: we played a 1000 Hz soundfile through our
>> speakers, and recorded this at the same time via our microphone. The
>> phase of the sine on the recorded file was compared with the sine on
>> the 1000 Hz speaker file. The phase should be constant when speaker and
>> microphone are synchronous and non-constant if speaker and microphone
>> are not synchronous. The phase difference was constant, and therefore
>> the speaker and microphone are synchronous.
>
> What the microphone picks up will bear a constant phase relationship to
> what the speaker puts out even if the sample rates are very different,
> so long as the Nyquist criterion is met for both devices. I hope sample
> rate isn't the identity you thought to prove.
>
>> Your idea of sample jitter is interesting. I will give that a thought,
>> but I have no idea how to solve this problem if jitter is the cause of
>> all this. The recording device is an ordinary soundcard, and it is not
>> possible to adjust jitter behaviour on such a device.
>
> It seems unlikely to me that sample jitter could be the cause of
> progressive deterioration.
>
> Jerry
> --
> Engineering is the art of making what you want from things you can get.
I work indirectly in this area, so my understanding is far from
complete, but I'm wondering whether what you're seeing is in fact 'normal'.

Sure, adaptation should be frozen during double talk; that's to prevent
the EC from converging on the wrong signal. The idea is to stop
adaptation in the double-talk case not because it makes things better,
but because it keeps things from getting worse.
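
As an aside, the classic rule for making that freeze decision is the
Geigel detector: declare double talk whenever the microphone level
exceeds a fraction of the recent far-end peak. A rough Matlab sketch,
with hypothetical file names and typical assumed parameter values:

% Illustrative Geigel double-talk detector. File names are hypothetical
% and the threshold/hangover values are common assumptions.
mic = audioread('mic.wav');          % near-end (microphone) signal
spk = audioread('speaker.wav');      % far-end (speaker) signal
L      = 1024;                       % far-end comparison window, in samples
thresh = 0.5;                        % about -6 dB, the usual Geigel threshold
hang   = 240;                        % hold the freeze briefly after double talk
dt_cnt = 0;
N      = min(length(mic), length(spk));
dtalk  = false(N,1);
for n = L:N
    if abs(mic(n)) > thresh * max(abs(spk(n-L+1:n)))
        dt_cnt = hang;               % near-end speech likely present
    end
    dtalk(n) = dt_cnt > 0;           % adaptation would be frozen where true
    dt_cnt   = max(dt_cnt - 1, 0);
end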

From what I understand, that's not what you have (based on your 'no near-end
speech' comment), so you shouldn't be freezing the taps.

When you do your test in the real room, you assume that the echo paths will
remain constant throughout the test. That would be the only way the EC could
maintain its performance after you freeze the taps. I doubt that this is the
case. Go back and check the weights on the tests that you did: are they
identical for every run? (One way to script that check is sketched below.)
Make another test: if you think your echo path(s) are constant, then your EC
should adapt to the exact same thing every time, no matter where in the test
signal you start....

Again, I don't do this stuff for a living, I just know people who do, so
take my ramblings for what they are worth.... ;)
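
One way to script that weight check is to save the converged taps from each
run and compare them directly. A hedged sketch (file names are hypothetical;
each .mat file is assumed to hold a tap vector 'w'):

% Illustrative comparison of converged tap vectors saved from two runs.
r1 = load('run1_taps.mat');  w1 = r1.w(:);
r2 = load('run2_taps.mat');  w2 = r2.w(:);
mismatch_dB = 20*log10(norm(w1 - w2) / norm(w1));
fprintf('tap mismatch: %.1f dB\n', mismatch_dB);
% Around -40 dB or lower: both runs converged to essentially the same path.
% Near 0 dB: the echo path (or the converged solution) changed between runs.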
Your best bet for a great Acoustic Echo Canceler is probably at:
http://www.Compandent.com/products_echokiller.htm
http://www.Compandent.com/EchoKillerFactSheet.pdf


johan kleuskens wrote:
> Hi,
>
> We are currently working on an acoustic echo canceller based on the
> well-known NLMS principle. This echo canceller works fine as long as we
> feed the echo canceller with an echo signal that is generated by an audio
> processing program. When using this ideal echo signal, freezing the FIR
> coefficients works like it should: the echo is still cancelled because the
> FIR taps contain a representation of the impulse response of the (virtual)
> room.
>
> Things are different in real life: when working with a real room, the echo
> is cancelled as long as the taps are not frozen. Echo attenuation (ERLE?)
> is as much as 40 dB. However, as soon as the taps are frozen, the echo
> attenuation is reduced to 10-15 dB even with no near-end speech!
>
> This raises some questions:
>
> - Maybe our code is wrong. We've tested the algorithm in C++ and Matlab,
> and both behave the same. Below the Matlab code is included as a
> reference, so if anyone sees a bug, please let me know. In this piece of
> code you can see that we stop adapting the weights when halfway through
> the microphone and speaker file.
>
> - If this bad behaviour is due to the non-linear impulse response of the
> room, and therefore is inherent to AEC, why is everybody talking about
> freezing the taps when double talk is active?
>
> With kind regards,
>
> Johan Kleuskens
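
The Matlab listing mentioned in the post does not appear above. As a rough
stand-in for the setup it describes, a minimal NLMS loop with a halfway
freeze might look like this (an illustrative sketch, not the poster's code;
file names and parameter values are assumptions):

% Illustrative NLMS echo canceller with a halfway freeze, in the spirit
% of the test described above. Not the poster's code.
spk = audioread('speaker.wav');     % far-end signal sent to the speaker
mic = audioread('mic.wav');         % microphone signal containing the echo
L     = 1024;                       % number of FIR taps
mu    = 0.5;                        % NLMS step size, 0 < mu < 2
delta = 1e-6;                       % regularization against division by zero
w     = zeros(L,1);                 % adaptive FIR coefficients
xbuf  = zeros(L,1);                 % delay line of the speaker signal
N     = min(length(spk), length(mic));
e     = zeros(N,1);                 % residual (echo-cancelled) output
freeze_at = floor(N/2);             % stop adapting halfway through the files
for n = 1:N
    xbuf = [spk(n); xbuf(1:end-1)]; % shift the new speaker sample in
    e(n) = mic(n) - w.'*xbuf;       % subtract the current echo estimate
    if n < freeze_at                % adapt only before the freeze point
        w = w + (mu/(xbuf.'*xbuf + delta)) * e(n) * xbuf;
    end
end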