Adaptive Filter Convergence

Started by RIH7777 September 9, 2013
Hello,
Some basic questions about the behavior of an FIR LMS adaptive filter.  In Matlab I'm using the supplied adaptfilt.lms function.

let:
mu = adaptation constant = 0.08 for all experiments
x(n) = filter input
d(n) = desired filter output
y(n) = filter output
e(n) = error signal = d(n) - y(n)
h(n) = impulse response of system to be modeled
afilt(n) = value of filter taps at end of data run
n1(n) = white noise, (randn(88200,1))
far(n) = 4 s of my speech recorded at 22050 Hz (88200 data points)
FP = filter performance metric = 10*log10(var(d(n))/var(e(n)));
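
(For concreteness, the whole harness fits in a few lines; this is a sketch, assuming the adaptfilt.lms (length, step) syntax from the DSP System Toolbox, with the final taps read from the Coefficients property:)

    % Sketch of the test harness for Experiment 1a
    mu = 0.08;
    x  = randn(88200,1);          % n1(n); substitute far(n) for the speech case
    h  = [1 -0.5 zeros(1,8)];     % system to be modeled
    d  = filter(h, 1, x);         % d(n) = h(n) convolved with x(n)
    ha = adaptfilt.lms(10, mu);   % 10-tap LMS adaptive filter
    [y, e] = filter(ha, x, d);    % adapt: identify h(n) from x and d
    FP = 10*log10(var(d)/var(e))  % performance metric, dB
    afilt = ha.Coefficients;      % final tap values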

Experiment 1:
h(n) = [1 -0.5 0 0 0 0 0 0 0 0]
d(n) = h(n) convolved with x(n)
a.) x(n) = n1(n)
Results: FP = 39.05 dB; afilt(n) = h(n) almost exactly; e(n) = 0 after ~20 ms
b.) x(n) = far(n)
Results: FP = 5.27 dB
afilt(n) = 0.1772 0.1306 0.0984 0.0582 0.0275 0.0060 -0.0160 -0.0216 -0.0349 -0.0344; e(n) looks much like d(n)

Q1.) Why does the filter work so well for white noise and so poorly for speech?

Experiment 2:
h(n) = 1 for n = 1 to 10, h(n) = 0 for n = 11 to 25
d(n) = h(n) convolved with x(n)
a.) x(n) = far(n)
Results: FP = 11.94 dB; e(n) starts out looking like d(n) but converges to zero after ~4 s. See below for afilt(n) values.
b.) x(n) = n1(n)
Results: filter becomes unstable, FP = -infinity, filter taps become huge.

Q2.) Why does the filter perform 'reasonably' well for speech and blow up for white noise?

Q3.) I understand that avoiding divergence with adaptive filters is something of an art.  What are some rules of thumb/guidelines for getting good convergence and avoiding divergence?

Thx much

Exp 2a
afilt(n) = [ 0.9687  1.0270  1.0643  1.0728  1.0602
             1.0168  0.9471  0.8484  0.7221  0.5815
             0.4271  0.2877  0.1638  0.0674 -0.0014
            -0.0484 -0.0713 -0.0832 -0.0780 -0.0642
            -0.0395 -0.0070  0.0264  0.0617  0.0910 ]



    
On 9/9/2013 4:26 PM, RIH7777 wrote:
> mu = adaptation constant = .08 for all experiments
For starters, set it to the 1e-2 ... 1e-3 range.

Vladimir Vassilevsky
DSP and Mixed Signal Designs
www.abvolt.com
Unless you use the power-normalized LMS update equation, the optimal mu scales with the power of your input. Your white noise comes straight from randn(), so it has a variance of 1. Chances are your voice recording has a much lower average variance, which is why the white noise did not converge and the voice input did.  If you scale down the variance of the white noise it should work again. That has the same effect as reducing mu, as Vlad suggested.

Most commercial implementations use the power-normalized update, so you don't have to worry about going unstable if you have a bad day and shout into the microphone :)
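
(A quick back-of-the-envelope check; a sketch using the common rule of thumb that plain LMS needs mu below roughly 2/(L*Px), where L is the tap count and Px the input power:)

    % Rough LMS stability bound: mu < ~2/(L*Px)
    Px = var(randn(88200,1));   % ~1 for randn white noise
    mu_max_exp1 = 2/(10*Px)     % ~0.2:  mu = 0.08 has headroom (Experiment 1)
    mu_max_exp2 = 2/(25*Px)     % ~0.08: mu = 0.08 sits right at the edge (Experiment 2)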

Bob
On Tuesday, September 10, 2013 9:26:13 AM UTC+12, RIH7777 wrote:
> Q2.) Why does filter perform 'reasonably' well for speech and blow up for white noise?
For non-stationary inputs like speech you really need normalised LMS, i.e. continuous monitoring of the input variance and corresponding adjustment of the step size.
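
(To make that concrete, here is a minimal from-scratch sketch of the normalised update. It is plain LMS except that the step is divided by the instantaneous input power, with a small constant guarding against division by zero; x and d are as defined in the original post, and mu = 0.5 is an arbitrary choice inside the usual 0 < mu < 2 range:)

    L  = 25; mu = 0.5; ep = 1e-6;
    w  = zeros(L,1);                        % adaptive taps
    xb = zeros(L,1);                        % input delay line
    for n = 1:length(x)
        xb = [x(n); xb(1:end-1)];           % shift the new sample in
        y  = w.'*xb;                        % filter output
        e  = d(n) - y;                      % error signal
        w  = w + (mu/(ep + xb.'*xb))*e*xb;  % power-normalised LMS update
    end
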
Hi,

Using normalized LMS (adaptfilt.nlms) gives much better results for experiment 2.  With x(n) = far(n) I get results comparable to those above using LMS.  With x(n) = n1(n) the filter stays stable and performs beautifully: e(n) goes to zero quickly.

BUT for experiment 1, NLMS doesn't help much, nor does decreasing mu for LMS.  E.g. with mu_NLMS = 1-2 and x(n) = far(n), e(n) is only a slightly attenuated version of x(n) and FP = ~3.76 dB.  Using LMS with mu = .005 we see FP = ~.98 dB.

These are just learning experiments for me.  For experiment 1, I picked h(n) = [1 -.5 0 0 0 0 0 0 0 0] arbitrarily as a simple system to model.  The results highlight my lack of understanding of the convergence process.  Is there some reason why this impulse response, combined with speech, would be so tough for an adaptive filter to track?

Also, I assume 'power normalized update equation' is the same as 'normalized LMS'.  Is this correct?

Thanks again, all this feedback is very helpful. 
On Wednesday, September 11, 2013 7:29:42 AM UTC+12, RIH7777 wrote:
> Is there some reason why this impulse response combined with speech would be so tough for an adaptive filter to track?
Look, these gradient descent algorithms can only be compared like for like. A white noise input is the ideal case, and you can wind mu up pretty high and get great convergence. A non-stationary input is a different matter: you must either set mu for the worst possible case or use normalised LMS.
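
(One way to quantify "a white noise input is the ideal case": LMS convergence speed is governed by the eigenvalue spread of the input autocorrelation matrix. For white noise the spread is about 1, so all filter modes adapt at the same rate; speech is heavily coloured and its spread can be in the hundreds, leaving some modes to converge very slowly. A sketch, assuming far is the speech vector from the original post and the Signal Processing Toolbox xcorr:)

    % Eigenvalue spread of the 10x10 input autocorrelation matrix
    r  = xcorr(randn(88200,1), 9, 'biased');
    Rn = toeplitz(r(10:19));                    % lags 0..9, white noise
    r  = xcorr(far, 9, 'biased');
    Rs = toeplitz(r(10:19));                    % lags 0..9, speech
    spread_noise  = max(eig(Rn))/min(eig(Rn))   % close to 1
    spread_speech = max(eig(Rs))/min(eig(Rs))   % typically much larger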