Reply by robert bristow-johnson August 3, 2011
On 8/3/11 2:53 PM, glen herrmannsfeldt wrote:
...
> In math terms, the equations have an exponentially growing solution,
if the coefficient update is *leaky* (and assuming leaking "out", not a negative leak), i would be curious how that could happen.
> which isn't the one you are expecting, but the equations don't know
> that.  The first one I remember came from a recursive expansion
> for Clebsch-Gordon coefficients, and even though the book said it
> was unstable that way, I tried it anyway.  It works fine for a while,
> and then, what seems very sudden, they blow up.  (They are supposed
> to be less than one.)
this guy started two threads.  i was hoping he might toss up the
equations.  lessee...

FIR:
            K-1
    y[n] =  SUM{ h[k] * x[n-k] }
            k=0

LMS:
    e[n] = y[n] - d[n]

d[n] = "desired signal".  ill-named, maybe i would call it the "target
signal" and is given.  it's supposed to be some filtered version of
x[n].  so John, what goes in between x[n] and d[n]?

the gradient of ( e[n] )^2 w.r.t. h[k] is:

    (d/dh[k])( e[n] )^2  =  2*e[n] * x[n-k]

non-normalized LMS nudges the coefficients down the gradient a little
each sample period:

    h[k]  <--  h[k] - mu*( (d/dh[k])( e[n] )^2 )
            =  h[k] - 2*mu * e[n]*x[n-k]          at time n.

but the speed of nudging of these coefficients would be proportional to
(besides the parameter mu) the magnitude (in some sense) of x[n]^2
(x[n-k] and e[n], with magnitude also proportional to x[n]).  so we
have to compute some gain factor that is something proportional to

    g[n] = (2*mu) / mean{ x[n]^2 }

and there are a few different ways to figure that out.  so normalized
LMS is

    h[k]  <--  h[k] - g[n] * e[n]*x[n-k]          at time n.

now, what precisely is leaking?  is it the coefficient update or is it
the calculation of the normalization gain g[n] (but we didn't nail down
what g[n] is yet)?

    h[k]  <--  p*h[k] - g[n] * e[n]*x[n-k]        at time n.

where 0 < 1-p << 1, and p is a first-order pole value.  is that what
you're doing, John?  or is the leaking going on in g[n]?

--

r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."
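The update rules above can be put together in a short sketch. This is a hypothetical illustration, not anyone's actual DSP code: it picks one of the "few different ways" to estimate the normalization, namely `g[n] = 2*mu / (x·x + eps)` over the current tap-delay vector, and applies the leaky coefficient update `h[k] <-- p*h[k] - g[n]*e[n]*x[n-k]`.

```python
import random

def leaky_nlms(x, d, K, mu=0.4, p=0.9999, eps=1e-8):
    """Leaky normalized LMS sketch: adapt K FIR taps so that
    y[n] = SUM h[k]*x[n-k] tracks d[n]."""
    h = [0.0] * K
    errs = []
    for n in range(len(x)):
        # tap-delay vector x[n], x[n-1], ..., x[n-K+1] (zeros before t=0)
        xk = [x[n - k] if n - k >= 0 else 0.0 for k in range(K)]
        y = sum(h[k] * xk[k] for k in range(K))        # y[n]
        e = y - d[n]                                   # e[n] = y[n] - d[n]
        g = 2.0 * mu / (sum(v * v for v in xk) + eps)  # normalization gain
        h = [p * h[k] - g * e * xk[k] for k in range(K)]  # leaky update
        errs.append(e)
    return h, errs

# identify a known 4-tap FIR: d[n] is x[n] filtered by h_true
random.seed(0)
h_true = [0.5, -0.3, 0.2, 0.1]
x = [random.uniform(-1.0, 1.0) for _ in range(3000)]
d = [sum(h_true[k] * (x[n - k] if n - k >= 0 else 0.0) for k in range(4))
     for n in range(len(x))]
h, errs = leaky_nlms(x, d, K=4)
```

With white input and a leak this mild (1-p = 1e-4), the taps settle very close to h_true; the leak only introduces a tiny bias toward zero.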
Reply by glen herrmannsfeldt August 3, 2011
JohnPower <asdfghjkl.mh@googlemail.com> wrote:

> For my master's thesis I'm currently trying to implement an active
> noise cancellation (ANC) system on a ADSP-21369 EZ-KIT Lite. I've
> chosen the Normalized Leaky LMS algorithm to update (451) coefficients
> of a FIR filter.
(snip)
> My system works quite well for a couple of seconds (sometimes seconds,
> sometimes minutes...) but then the coefficients seem to shift out of
> the FIRs scope and the cancellation decreases heavily. Unfortunately
> I'm unable to have a live view on how the coefficients evolve, but
> I've taken some snapshots by stopping the programm at various times:
A guess, without understanding most of the details of the system, is
that the output eventually affects the input in a way that is not
being compensated for.  Your coefficients are based on the case
without cancelation, but after some time of running the cancelation,
the correction isn't quite right.

In math terms, the equations have an exponentially growing solution,
which isn't the one you are expecting, but the equations don't know
that.  The first one I remember came from a recursive expansion for
Clebsch-Gordon coefficients, and even though the book said it was
unstable that way, I tried it anyway.  It works fine for a while, and
then, what seems very sudden, they blow up.  (They are supposed to be
less than one.)

-- glen
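The "works fine for a while, then suddenly blows up" behavior can be reproduced with a toy recurrence (an illustration of the same mechanism, not the Clebsch-Gordon recursion itself). The recurrence y[n+1] = (10/3)*y[n] - y[n-1] is satisfied by both 3^n and 3^-n; if you start on the decaying solution, rounding error seeds a tiny component of the growing one, which stays invisible for many iterations and then abruptly dominates:

```python
# intend to compute y[n] = 3**-n via the forward recurrence
# y[n+1] = (10/3)*y[n] - y[n-1], whose other solution is 3**n
ys = [1.0, 1.0 / 3.0]          # exact starting values would give 3**-n
for n in range(1, 60):
    ys.append((10.0 / 3.0) * ys[n] - ys[n - 1])

# early terms track 3**-n to near machine precision, but the roundoff
# in representing 1/3 seeds the 3**n mode, which grows by a factor of
# three per step and eventually swamps the intended (tiny) answer
```

The computed values are accurate for the first few dozen steps, even though the true answer should shrink toward zero forever; by n near 60 the sequence is enormous instead.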
Reply by JohnPower August 3, 2011
Hi guys,

For my master's thesis I'm currently trying to implement an active
noise cancellation (ANC) system on an ADSP-21369 EZ-KIT Lite. I've
chosen the Normalized Leaky LMS algorithm to update the (451)
coefficients of a FIR filter. In contrast to conventional ANC systems,
the noise I'm trying to suppress is highly repetitive and I have a
trigger signal which fires on each repetition. So what my program does
is wait for the first trigger, record one period of the noise, and
then use this template as a feed-forward source for the NLLMS
algorithm.
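The trigger-synchronized capture described above might look roughly like this (a hypothetical sketch; the function names and the boolean trigger representation are mine, not Markus's code):

```python
def capture_template(samples, triggers):
    """Wait for the first trigger, then record one noise period
    (everything up to the next trigger) as the reference template."""
    fire = [i for i, t in enumerate(triggers) if t]
    if len(fire) < 2:
        raise ValueError("need at least two trigger events")
    start, end = fire[0], fire[1]
    return samples[start:end]

def feed_forward(template, n):
    """Replay the captured period cyclically as the reference x[n],
    since the noise is assumed highly repetitive."""
    return template[n % len(template)]

# e.g. triggers firing at indices 2 and 7 capture samples[2:7]
samples = list(range(10))
triggers = [i in (2, 7) for i in range(10)]
template = capture_template(samples, triggers)
```

In a real implementation the replay would also need to stay phase-locked to later trigger events, since any drift between the template and the actual noise period would show up as adaptation error.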

My system works quite well for a couple of seconds (sometimes seconds,
sometimes minutes...) but then the coefficients seem to shift out of
the FIR's scope and the cancellation degrades heavily. Unfortunately
I'm unable to get a live view of how the coefficients evolve, but I've
taken some snapshots by stopping the program at various times:

this is taken a few seconds after the beginning when the cancellation
was still good: http://www.abload.de/img/coefficients_startv7w9.png

after a couple of minutes, the coefficients seem to drift to the left:
http://www.abload.de/img/coefficients_still_goog7iz.png

cancellation finally decreases and coefficients look like this:
http://www.abload.de/img/coefficients_gone_wildw7sr.png

Maybe I should also add that I measure the impulse response of my
system beforehand via an MLS sequence and apply it to the template
before feeding it to the adaptation algorithm. I've also delayed my
template so as to take delays coming from the ADC/DAC etc. into
account.
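Filtering the reference through the measured secondary-path impulse response before adaptation is essentially the filtered-x preprocessing step. A minimal sketch of that idea, where `s_hat` (the MLS-measured impulse response) and the sample delay are placeholder values, not the poster's actual measurements:

```python
def filter_and_delay(template, s_hat, delay):
    """Convolve the captured template with the secondary-path estimate
    s_hat, then prepend `delay` zero samples for ADC/DAC latency."""
    K = len(s_hat)
    filtered = [sum(s_hat[k] * (template[n - k] if n - k >= 0 else 0.0)
                    for k in range(K))
                for n in range(len(template))]
    return [0.0] * delay + filtered

# tiny example: unit-impulse template through a 2-tap path, 2-sample delay
out = filter_and_delay([1.0, 0.0, 0.0], [1.0, 0.5], 2)
```

If the secondary-path model or the delay is even slightly wrong, the filtered-x gradient estimate is biased, which is one classic way coefficients in such a system can slowly walk off and then fail.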

I've already dug through quite a few papers and books, but I didn't
find an answer to my problem, except that the leakage factor prevents
the coefficients from growing too much... but this is imho not the
case here, and I've implemented leakage precisely to counter such
issues.

Thanks for your help!
Markus