# why are LMS coefficients wandering away?

Started by ●August 3, 2011

Hi guys,

For my master's thesis I'm currently trying to implement an active noise cancellation (ANC) system on an ADSP-21369 EZ-KIT Lite. I've chosen the Normalized Leaky LMS algorithm to update the (451) coefficients of an FIR filter. In contrast to conventional ANC systems, the noise I'm trying to suppress is highly repetitive, and I have a trigger signal which fires on each repetition. So what my program does is wait for the first trigger, record one period of the noise, and then use this template as a feed-forward source for the NLLMS algorithm.

My system works quite well for a while (sometimes seconds, sometimes minutes...), but then the coefficients seem to shift out of the FIR's scope and the cancellation degrades heavily. Unfortunately I'm unable to watch live how the coefficients evolve, but I've taken some snapshots by stopping the program at various times:

This was taken a few seconds after the beginning, when the cancellation was still good: http://www.abload.de/img/coefficients_startv7w9.png

After a couple of minutes, the coefficients seem to drift to the left: http://www.abload.de/img/coefficients_still_goog7iz.png

Cancellation finally decreases and the coefficients look like this: http://www.abload.de/img/coefficients_gone_wildw7sr.png

Maybe I should also add that I measure the impulse response of my system beforehand via an MLS sequence and apply it to the template before feeding it to the adaptation algorithm. I've also delayed my template to account for delays coming from the ADC/DAC etc.

I've already dug through quite a few papers and books, but I didn't find an answer to my problem, except that the leakage factor prevents the coefficients from growing too much... but that is, imho, not the case here, and I've implemented leakage to counter such issues anyway.

Thanks for your help!
Markus

Reply by ●August 3, 2011

On 8/3/11 8:04 AM, JohnPower wrote:

> For my master's thesis I'm currently trying to implement an active
> noise cancellation (ANC) system on a ADSP-21369 EZ-KIT Lite. I've
> chosen the Normalized Leaky LMS algorithm to update (451) coefficients
> of a FIR filter.

John, would you mind plopping down the relevant equations for the NLLMS?

i know how NLMS works and i have an idea how to make the coefficient updating to be leaky, but i dunno how to get a handle on your problem without seeing the equations.

--
r b-j  rbj@audioimagination.com

"Imagination is more important than knowledge."

Reply by ●August 3, 2011

On Wed, 03 Aug 2011 05:04:32 -0700, JohnPower wrote:

> [...] My system works quite well for a couple of seconds (sometimes
> seconds, sometimes minutes...) but then the coefficients seem to shift
> out of the FIRs scope and the cancellation decreases heavily. [...]
> I've already dug through quite a few papers and books but I didn't find
> an answer to my problem, except that the leakage factor prevents the
> coefficients from growing too much... but this is imho not the case here
> and I've also implemented leakage to conquer such issues.

Is your noise microphone in the same space in which the cancellation is happening? Could you either be affecting the trigger signal with your cancellation, or could you have an unpredicted delay in your cancellation-to-microphone path that is causing you to register cancellation as a delayed signal?

At times like these, when you're standing nose to tree wondering at the holes in the bark, it's good to set all your assumptions aside and re-analyze everything. Sometimes you find that your tree is next to a firing range, and all becomes clear...

--
www.wescottdesign.com

Reply by ●August 10, 2011

On 3 Aug., 15:36, robert bristow-johnson <r...@audioimagination.com> wrote:

> John, would you mind plopping down the relevant equations for the NLLMS?
> i know how NLMS works and i have an idea how to make the coefficient
> updating to be leaky, but i dunno how to get a handle on your problem
> without seeing the equations.

this being my first thread in a usenet group, I've somehow expected to get a notification on new messages... well, sorry for the delay (and also the double post...).

my equation for the NLLMS is (vectors uppercase, scalars lowercase):

W[n+1] = W[n] * l + a*e[n]*X[n] / (s + power(X[n]))

where
W - coefficient vector
X - vector which contains the last N samples from my template, which was treated with the impulse response P I've measured via MLS
l - leakage, 0 < l < 1, in my case it is 0.999
a - factor to influence speed of convergence, I get rather good results with setting this to -0.017
e - current error value from error microphone
power(X[n]) - the power of X[n], estimated by doing a scalar multiplication of X[n] with itself; this is the "normalize" part
s - small value to prevent coefficients getting too big when power(X[n]) is small

after calculating W[n+1], I've got an FIR filter doing

         K-1
y[n]  =  SUM{ W[k] * Xo[n-k] }
         k=0

where
Xo - template which did not go through the impulse response P

Maybe this overview helps to straighten out misunderstandings resulting from my lacking skills of the English language: http://www.abload.de/img/signal_flowgus5.png

this was taken, and slightly modified, from "Active control of the volume acquisition noise in functional magnetic resonance imaging: Method and psychoacoustical evaluation" by John Chambers (unfortunately, one has to pay for the paper, and I don't think the publishers would be happy with me sharing copies here. The follow-up paper by the same author can be obtained here: http://www.coe.pku.edu.cn/tpic/2010913102933476.pdf )

Tim: the noise microphone is in fact in the same space in which the cancellation is happening (as can be seen in the picture a few lines above). But also, for testing purposes, I've tested my algorithm with only electrical signals, using an op-amp configured as an adder to superimpose my noise signal with the one intended to cancel out the former. This was also the case when I took the above coefficient snapshots.

Even if there were unexpected delays due to, say, the DAC or ADC, isn't the LMS supposed to follow such changes in the system? And if not, how could I take care of it?

Having said this, I could also think of the following kind of "unpredicted delay" in my system: my sampling frequency for the audio signal is 48 kHz, so clearly I have a limited time resolution. Now let the period length of the noise be some fractional number of samples... which is quite probable, I'd suppose. So, say my triggers are 40000.5 samples apart, resulting in a template of 40001 samples. This leaves half a sample too much on every noise iteration. Could this bring my algorithm to shift the coefficients by half a sample every time, until they are completely out of shape?

I hope I was able to give a better insight into my system and will try to monitor this thread better ;)

Thanks,
JohnPower
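In C, the NLLMS update rule quoted above might look like the following (a minimal sketch with an illustrative filter length of 8 rather than 451; the function name and constants are mine, not from the actual thesis code, and with the poster's a = -0.017 the minus sign lives in the gain):

```c
#include <assert.h>
#include <math.h>
#include <stddef.h>

#define N 8  /* filter length; the real system uses 451 taps */

/* One NLLMS step:  W <- l*W + a*e*X / (s + X.X)
   W - coefficients, X - filtered-reference vector, e - error sample,
   l - leakage (0 < l < 1), a - adaptation gain, s - regularizer. */
static void nllms_update(double W[N], const double X[N],
                         double e, double l, double a, double s)
{
    double power = 0.0;
    for (size_t k = 0; k < N; k++)   /* power(X) = dot(X, X) */
        power += X[k] * X[k];

    double g = a * e / (s + power);  /* normalized update factor */
    for (size_t k = 0; k < N; k++)
        W[k] = l * W[k] + g * X[k];
}
```

Note that with X = 0 the update reduces to W <- l*W, so leakage by itself only decays the coefficients toward zero; it cannot produce the sideways shift visible in the snapshots.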

Reply by ●August 10, 2011

On 8/10/11 1:56 PM, JohnPower wrote:

> my equation for the NLLMS is (vectors uppercase, scalars lowercase):
>
> W[n+1] = W[n] * l + a*e[n]*X[n] / (s + power(X[n]))

i guessed pretty good:

     g[n]  =  (2*mu) / mean{ x[n]^2 }

     h[k]  <--  p*h[k]  -  g[n] * e[n]*x[n-k]      at time n.

different symbols and i missed the "s" term.

> where
> W - coefficient vector
> X - vector which contains last N samples from my template which was
> treated with the impulse response P I've measured via MLS

since you're not using the shorthand i was using, then let's explicitly expand the notation so we know exactly what is being done. using C-like notation for 2-dim arrays:

coef vector: W[n] = { w[0][n], w[1][n], ... w[k][n], ... w[K-1][n] }

FIR input vector: X[n] = { x[n], x[n-1], ... x[n-k], ... x[n-K+1] }

we need to do this, because i see an ambiguity in your equation below. can you confirm this?

> l - leakage 0 < l < 1, in my case it is 0.999

which is like a first-order filter pole. just like my "p".

> a - factor to influence speed of convergence, I get rather good
> results with setting this to -0.017

this, or something proportional to it, is called the "adaptation gain".

> e - current error value from error microphone

so is the differencing done in the air? this does not make sense for an LMS. e[n] is, as far as i can tell, *computed* from the output of the FIR (what is defined by W[n] and what i thought is y[n]) and from subtracting the "desired" signal (what i would think comes from this microphone). you need to clear this up.

> power(X[n]) - the power of X[n] estimated by doing a scalar
> multiplication of X[n] with itself,

i don't get this. "scalar multiplication" of a vector with a vector? do you mean dot product?:

                    K-1
    power(X[n])  =  SUM{ x[n-k]^2 }
                    k=0

this is what i called "mean{ x[n]^2 }".

> this is the "normalize" part

dividing by that is the "normalize" part, i think you mean.

> s - small value to prevent coefficients getting too big when
> power(X[n]) is small

well, we don't ever want to divide by 0, but when the power is small, e[n] and X[n] should also be small. so when the power is small, the adaptation gets slower (which may be just fine, what you want). let's assume that power(X[n]) is large enough that s is negligible, okay?

how are you computing the division? explicitly? sometimes, for NLMS, when division is expensive, it can be computed (tracked) with a multiply, compare, and negative feedback.

> after calculating W[n+1], I've got an FIR Filter doing
>
>          K-1
> y[n]  =  SUM{ W[k] * Xo[n-k] }
>          k=0
>
> where
> Xo - template which did not go through the impulse response P

right here, i see a problem regarding the difference between the vector X[n] and, apparently, the scalar values Xo[n-k]. is the vector X[n] made up of the samples Xo[n-k]? if yes, your notation is bad (the elements of X[n] should be small-case, just like y[n] is). if no, this NLMS algorithm is messed up.

is P the impulse response of the room or whatever other path (comparable to W[n])?

so far, we do not have a clear definition of e[n]. how is e[n] computed?

> Maybe this overview helps to straighten out misunderstandings
> resulting from my lacking skills of the English language:
> http://www.abload.de/img/signal_flowgus5.png

there are serious problems with that schematic. are you trying to cancel the sound out of the loudspeaker? what is the "Recorded Audio"? how is it related to what's in the "Memory Bank"? why is this signal not derived from a common source?

> this was taken, and slightly modified, from "Active control of the
> volume acquisition noise in functional magnetic resonance imaging:
> Method and psychoacoustical evaluation" by John Chambers [...]

so far, i dunno if i need it.

> Even if there were unexpected delays due to, say the DAC or ADC, isn't
> the LMS supposed to follow such changes in the system? And if not,
> how could I take care of it?
>
> Having said this, I could also think of the following kind of
> "unpredicted delay" in my system: [...] So, my triggers are 40000.5
> samples apart, resulting in a template of 40001 samples. This results
> in half a sample too much on every noise-iteration. Could this bring
> my algorithm to shift the coefficients by half a sample every time
> until they are completely out of shape?

oh, this is complicated. i have to read this over and over to try to figure out why you're doing what you're doing.

the LMS alg should adapt itself in such a way to take care of fractional-sample delays.

> I hope I was able to give a better insight into my system and will try
> to monitor this thread better ;)

don't sweat it. we're sorta amused that you got to senior or grad school level EE without having discovered and used USENET before. USENET is very old, older than the web. originally newsgroup postings were passed around between universities and such by means other than the internet.

--
r b-j  rbj@audioimagination.com

"Imagination is more important than knowledge."
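For what it's worth, one way to read the "multiply, compare, negative feedback" remark above is a division-free reciprocal tracker: multiply the current estimate by the power, compare the product with 1, and feed the difference back. This is only my guess at the intended scheme, not an expansion confirmed by the thread:

```c
#include <assert.h>
#include <math.h>

/* Track r ~= 1/p without a divide: (1 - p*r) is the "compare",
   and feeding it back scaled by mu*r closes the negative-feedback
   loop.  With mu = 1 this is one Newton-Raphson step for 1/p. */
static double track_reciprocal(double r, double p, double mu)
{
    return r + mu * r * (1.0 - p * r);
}
```

With mu = 1 it converges quadratically for 0 < r < 2/p; a small mu instead lets r slowly track a time-varying power estimate, which would be the NLMS use case.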

Reply by ●August 11, 2011

On Aug 11, 3:16 am, robert bristow-johnson <r...@audioimagination.com> wrote:

> since you're not using the shorthand i was using, then let's explicitly
> expand the notation so we know exactly what is being done. [...]
> we need to do this, because i see an ambiguity in your equation below.
> can you confirm this?

my FIR input vector Xo[n] has the same dimensions as the coefficient vector W[n]:

Xo[n] = { x[0], x[1], ... , x[N] }
W[n] = { w[0], w[1], ... , w[N] }

where N is the filter order. When writing W[n+1] I mean the coefficient vector which is used to compute the next output. I've used the nomenclature from Diniz' book on 'Adaptive Filtering'.

> so is the differencing done in the air? this does not make sense for an
> LMS. [...] you need to clear this up.

yes, it's done in the air. The microphone serves two purposes:
1) record the 'desired' signal
2) measure the error between output signal and current noise

It's something like a hybrid between a feedforward and a feedback system; I guess this is what makes it so hard to understand - and explain :-/

> i don't get this. "scalar multiplication" of a vector with a vector?
> do you mean dot product?

sorry, my bad. Yes, I mean dot product (in German, "Skalarprodukt" is the common term, which I mixed up with scalar multiplication...)

> dividing by that is the "normalize" part, i think you mean.

yes

> how are you computing the division? explicitly? sometimes, for NLMS,
> when division is expensive, it can be computed (tracked) with a
> multiply, compare, and negative feedback.

my actual C code looks like this:

    tempUpdateFactor = a * e / templPower;

As one can see, I've left out the s, as it doesn't seem to be the problem here... So yes, I do explicit division. I suppose the "multiply, compare, negative feedback" you've suggested would make the algorithm run faster? I haven't looked into this exact line for optimization yet, because the vector calculations are much more time-expensive.

> right here, i see a problem regarding the difference between the vector
> X[n] and, apparently, the scalar values Xo[n-k]. is the vector X[n]
> made up of the samples Xo[n-k]? if yes, your notation is bad (the
> elements of X[n] should be small-case, just like y[n] is). if no, this
> NLMS algorithm is messed up.

I don't get it, if it was messed up this bad, it wouldn't work at all, would it? But my darned system works excellently for some time and then fades out gracefully...

corrected notation:

          N-1
y[n]  =  SUM{ w[k] * xo[n-k] }
          k=0

I actually do this by using a library function to compute the dot product of the two vectors W[n] and Xo[n]. Xo and X are two different vectors.

I've uploaded a new signal graph. It was revised to be coherent with the symbols in my equations: http://www.abload.de/img/signal_flow_coherentdusl.png

> is P the impulse response of the room or whatever other path (comparable
> to W[n])?

Yes, P is the impulse response of the room. To be more specific, it is the plant model of the system DAC->amplifier->transducer->microphone->amplifier->ADC

> there are serious problems with that schematic. are you trying to
> cancel the sound out of the loudspeaker? what is the "Recorded Audio"?
> how is it related to what's in the "Memory Bank"? why is this signal
> not derived from a common source?

Yes, I try to cancel the sound out of the loudspeaker. The term "Recorded Audio" refers to the fact that, for testing, the noise and the triggers were recorded with a PC which now plays them back through the soundcard and serves as the noise source. The "memory bank" is not related to this. The memory bank is used to store the first period of noise (recorded using the microphone), which is then used as a "feed forward" source for the LMS algorithm.

So this is what my PC puts out: http://www.abload.de/img/trigger_and_noisevk6x.png

> don't sweat it. we're sorta amused that you got to senior or grad
> school level EE without having discovered and used USENET before. [...]

Haha, glad to be of enjoyment :-) Not being an EE student but a medical engineering one, maybe I just didn't dare post on the usenet before :-D
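The fractional-period worry raised in this exchange is easy to put numbers on: if the true period is 40000.5 samples but the stored template is 40001 samples long, playback gains half a sample of misalignment on every repetition. A back-of-the-envelope check (illustrative arithmetic only, no claim about the real hardware):

```c
#include <assert.h>
#include <math.h>

/* Accumulated misalignment, in samples, between a template of
   integer length template_len and a true period of true_period
   samples, after n_reps repetitions. */
static double drift_after(double true_period, int template_len, int n_reps)
{
    return n_reps * ((double)template_len - true_period);
}
```

At 0.5 samples of drift per period, a 451-tap filter would be walked out of its own span after roughly 900 periods; at about 0.83 s per period (40000.5 samples at 48 kHz) that is on the order of ten minutes. If this mechanism is real, it would be consistent with both the "works for minutes, then dies" symptom and the slow leftward drift in the coefficient snapshots.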

Reply by ●August 12, 2011

On Aug 11, 7:35�am, JohnPower <asdfghjkl...@googlemail.com> wrote:> On Aug 11, 3:16 am, robert bristow-johnson <r...@audioimagination.com> > wrote: > > > > > > > On 8/10/11 1:56 PM, JohnPower wrote: > > > > On 3 Aug., 15:36, robert bristow-johnson<r...@audioimagination.com> > > > wrote: > > >> On 8/3/11 8:04 AM, JohnPower wrote: > > > >>> For my master's thesis I'm currently trying to implement an active > > >>> noise cancellation (ANC) system on a ADSP-21369 EZ-KIT Lite. I've > > >>> chosen the Normalized Leaky LMS algorithm to update (451) coefficients > > >>> of a FIR filter. > > > >> John, would you mind plopping down the relevant equations for the NLLMS? > > > >> i know how NLMS works and i have an idea how to make the coefficient > > >> updating to be leaky, but i dunno how to get a handle on your problem > > >> without seeing the equations. > > > >> -- > > > >> r b-j � � � � � � � � �r...@audioimagination.com > > > >> "Imagination is more important than knowledge." > > > > this being my first thread in a usenet group, I've somehow expected to > > > get a notification on new messages... well, sorry for the delay (and > > > also the double post...). > > > > my equation for the NLLMS is (vectors uppercase, scalars lowercase): > > > > W[n+1] = W[n] * l + a*e[n]*X[n] / (s + power(X[n]) > > > i guessed pretty good: > > > � � �g[n] �= �(2*mu) / mean{ x[n]^2 } > > > � � �h[k] �<-- �p*h[k] �- �g[n] * e[n]*x[n-k] � � �at time n. > > > different symbols and i missed the "s" term. > > > > where > > > W - coefficient vector > > > X �- vector which contains last N samples from my template which was > > > treated with the impulse response P I've measured via MLS > > > since you're not using the shorthand i was using, then let's explicitly > > expand the notation so we know exactly what is being done. �using C-like > > notation for 2-dim arrays: > > > coef vector: W[n] = { w[0][n], w[1][n], ... w[k][n], ... w[K-1][n] } > > > FIR input vector: �X[n] = { x[n], x[n-1], ... 
x[n-k], ... x[n-K+1] } > > > we need to do this, because i see an ambiguity in your equation below. > > can you confirm this? > > my FIR input vector Xo[n] has the same dimensions as the coefficients > vector W[n]: > > Xo[n] = { x[0], x[1], ... , x[N] } > W[n] = { w[0], w[1], ... , w[N] } > > where N is the filter order. When writing W[n+1] I mean the > coefficient vector which is used > to compute the next output. I've used the nomenclature from Diniz' > book on 'Adaptive Filtering' > > > > > > > > l - leakage 0< �l< �1, in my case it is 0.999 > > > which is like a first-order filter pole. �just like my "p". > > > > a �- factor to influence speed of convergence, I get rather good > > > results with setting this to -0.017 > > > this, or something proportional to it, is called the "adaptation gain". > > > > e - current error value from error microphone > > > so is the differencing done in the air? �this does not make sense for an > > LMS. �e[n] is, as far as i can tell, *computed* from the output of the > > FIR (what is defined by W[n] and what i thought is y[n]) and from > > subtracting the "desired" signal (what i would think comes from this > > microphone). �you need to clear this up. > > yes, it's done in the air. > The microphone serves two purposes: > 1) record the 'desired' signal > 2) measure the error betwen output signal and current noise > > It's something like a hybrid between a feedforward and feedback > system, I guess this is > what makes it so hard to understand - and explain :-/ > > > > > > power(X[n]) - the power of X[n] estimated by doing a scalar > > > multiplication of X[n] with itself, > > > i don't get this. �"scalar multiplication" of a vector with a vector? > > do you mean dot product?: > > sorry, my bad. Yes I mean dot product (in German, Skalarprodukt is the > common term > which I mixed up with scalar multiplication...) 
> > > > > � � � � � � � � � � N-1 > > � � power(X[n]) �= �SUM{ x[n-k]^2 } > > � � � � � � � � � � k=0 > > > this is what i called "mean{ x[n]^2 }". > > > �> this is the "normalize" part > > > dividing by that is the "normalize" part, i think you mean. > > yes > > > > > > s - small value to prevent coefficients getting too big when > > > power(X[n]) is small > > > well, we don't ever want to divide by 0 but when the power is small, > > e[n] and X[n] should also be small. �so when the power is small, the > > adaptation gets slower (which may be just fine, what you want). �let's > > assume that power(X[n]) is large enough that s is negligible, okay? > > > how are you computing the division? �explicitly? �sometimes, for NLMS, > > when division is expensive. �it can be computed (tracked) with a > > multiply, compare, and negative feedback. > > my actual C code looks like this: > > tempUpdateFactor= � � � a * e / templPower; > > As one can see, I've left out the s as it doesn't seem to be the > problem here... > So yes, I do explicit division. I suppose the "mulitply, compare, > negative feedback" > you've suggested would be to have the algorithm run faster? I haven't > looked into > this exact line for optimization yet because the vector calculations > are much > more time expensive. > > > > > > > > > > after calculating W[n+1], I've got an FIR Filter doing > > > > � � � � �K-1 > > > y[n] �= �SUM{ W[k] * Xo[n-k] } > > > � � � � �k=0 > > > > where > > > Xo - template which did not go through the impulse response P > > > right here, i see a problem regarding the difference between the vector > > X[n] and, apparently, the scalar values Xo[n-k]. �is the vector X[n] > > made up of the samples Xo[n-k]? �if yes, your notation is bad (the > > elements of X[n] should be small-case, just like y[n] is). �if no, this > > NLMS algorithm is messed up. > > I don't get it, if it was messed up this bad, it wouldn't work at all, > would it? 
> But my darned system works excellent for some time and then fades out > gracefully... > > corrected notation: > > � � � � � N-1 > �y[n] �= �SUM{ w[k] * xo[N-k] } > � � � � � k=0 > > I actually do this by using a library function to > compute the dot product of the two vectors W[n] an Xo[n]. > > Xo and X are two different vectors. I've uploaded a new signal graph. > It was > revised to be coherent with the symbols in my equations:http://www.abload.de/img/signal_flow_coherentdusl.png > > > > > is P the impulse response of the room or whatever other path (comparable > > to W[n])? > > Yes, P is the impulse response of the room. > To be more specific, it is the plant model of the system > DAC->amplifier->transducer->microphone->amplifier->ADC > > > > > so far, we do not have a clear definition of e[n]. �how is e[n] computed? > > > > Maybe this overview �helps to straighten out misunderstandings > > > resulting from my lacking skills of the English language: > > >http://www.abload.de/img/signal_flowgus5.png > > > there are serious problems with that schematic. �are you trying to > > cancel the sound out of the loudspeaker? �what is the "Recorded Audio"? > > � how is it related to what's in the "Memory Bank"? �why is this signal > > not derived from a common source? > > Yes, I try to cancel the sound out of the loudspeaker. The term > "Recorded audio" > refers to the fact that, for testing, the noise and the triggers were > recorded with > a PC which now plays it back through the soundcard and serves as noise > source. > The "memory bank" is not related to this. The memory bank is used to > store the > first period of noise (recorded using the microphone), which is then > used > as a "feed forward" source for the LMS algorithm. 
> > So this is what my PC puts out: http://www.abload.de/img/trigger_and_noisevk6x.png > > > > > > > > > > this was taken, and slightly modified, from "Active control of the > > > volume acquisition noise in functional magnetic resonance imaging: > > > Method and psychoacoustical evaluation" by John Chambers > > > (unfortunately, one has to pay for the paper and I don't think the > > > publishers would be happy with me sharing copies here. The follow-up > > > paper by the same author can be obtained here: > > > http://www.coe.pku.edu.cn/tpic/2010913102933476.pdf) > > > so far, i dunno if i need it. > > > > Tim: the noise microphone is in fact in the same space in which the > > > cancellation is happening (as can be seen in the picture a few lines > > > above). But also, for testing purposes, I've tested my algorithm with > > > only electrical signals using an Op-Amp configured as an adder to > > > superimpose my noise signal with the one intended to cancel out the > > > former. This was also the case when I took the above coefficient > > > snapshots. > > > Even if there were unexpected delays due to, say, the DAC or ADC, isn't > > > the LMS supposed to follow such changes in the system? And if not, > > > how could I take care of it? > > > > Having said this, I could also think of the following kind of > > > "unpredicted delay" in my system: > > > My sampling frequency for the audio signal is 48kHz, so clearly I have > > > a limited time-resolution. Now let the periodic length of the noise be > > > some fractional number of samples... which is quite probable I'd > > > suppose. So, my triggers are 40000.5 samples apart, resulting in a > > > template of 40001 samples. This results in half a sample too much on > > > every noise-iteration. Could this bring my algorithm to shift the > > > coefficients by half a sample every time until they are completely out > > > of shape? > > > oh, this is complicated. 
i have to read this over and over to try to > > figure out why you're doing what you're doing. > > > the LMS alg should adapt itself in such a way to take care of > > fractional-sample delays. > > > > I hope I was able to give a better insight into my system and will try > > > to monitor this thread better ;) > > > don't sweat it. we're sorta amused that you got to senior or grad > > school level EE without having discovered and used USENET before. > > USENET is very old. older than the web. originally newsgroup postings > > were passed around between universities and such by means other than the > > internet. > > Haha, glad to be of enjoyment :-) Not being an EE student, but a > medical > engineering one, maybe I just didn't dare posting on the usenet > before :-D > O.K., here is what may be happening. I took a quick look at the cited paper. Read the paper again. At the output of the adaptive filter you have an amplifier. You also state that P(z) is the room impulse response. I think you have made a mistake here. In the paper, M(z) is the impulse response of the room, the speaker, etc. There is an H(z) where you have the amplifier at the output of the adaptive filter. H(z) represents the impulse response between the adaptive filter output to the speaker to the microphone (page 286 - *H(z) the transfer function between the control loudspeaker and the error microphone*). Then, P(z) is the same as H(z), not the room impulse response. Take another look at the paper. Maurice Givens

Reply by ●August 13, 2011

On 8/12/11 11:33 AM, maury wrote:> On Aug 11, 7:35 am, JohnPower<asdfghjkl...@googlemail.com> wrote: >> On Aug 11, 3:16 am, robert bristow-johnson<r...@audioimagination.com> >> wrote: >> >> >> >> >> >>> On 8/10/11 1:56 PM, JohnPower wrote: >> >>>> On 3 Aug., 15:36, robert bristow-johnson<r...@audioimagination.com> >>>> wrote: >>>>> On 8/3/11 8:04 AM, JohnPower wrote: >> >>>>>> For my master's thesis I'm currently trying to implement an active >>>>>> noise cancellation (ANC) system on a ADSP-21369 EZ-KIT Lite. I've >>>>>> chosen the Normalized Leaky LMS algorithm to update (451) coefficients >>>>>> of a FIR filter. >> >>>>> John, would you mind plopping down the relevant equations for the NLLMS? >> >>>>> i know how NLMS works and i have an idea how to make the coefficient >>>>> updating to be leaky, but i dunno how to get a handle on your problem >>>>> without seeing the equations. >> >>>>> -- >> >>>>> r b-j r...@audioimagination.com >> >>>>> "Imagination is more important than knowledge." >> >>>> this being my first thread in a usenet group, I've somehow expected to >>>> get a notification on new messages... well, sorry for the delay (and >>>> also the double post...). >> >>>> my equation for the NLLMS is (vectors uppercase, scalars lowercase): >> >>>> W[n+1] = W[n] * l + a*e[n]*X[n] / (s + power(X[n]) >> >>> i guessed pretty good: >> >>> g[n] = (2*mu) / mean{ x[n]^2 } >> >>> h[k]<-- p*h[k] - g[n] * e[n]*x[n-k] at time n. >> >>> different symbols and i missed the "s" term. >> >>>> where >>>> W - coefficient vector >>>> X - vector which contains last N samples from my template which was >>>> treated with the impulse response P I've measured via MLS >> >>> since you're not using the shorthand i was using, then let's explicitly >>> expand the notation so we know exactly what is being done. using C-like >>> notation for 2-dim arrays: >> >>> coef vector: W[n] = { w[0][n], w[1][n], ... w[k][n], ... w[K-1][n] } >> >>> FIR input vector: X[n] = { x[n], x[n-1], ... 
x[n-k], ... x[n-K+1] } >> >>> we need to do this, because i see an ambiguity in your equation below. >>> can you confirm this? >> >> my FIR input vector Xo[n] has the same dimensions as the coefficients >> vector W[n]: >> >> Xo[n] = { x[0], x[1], ... , x[N] }this can't be right. i think you mean { x[n], x[n-1], ... x[n-N+1] }. and why isn't this vector named "X"? if this is not X, then what is in the X vector?>> W[n] = { w[0], w[1], ... , w[N] }this should have two indices on each vector element. W[n] = { w[0][n], w[1][n], ... , w[N-1][n] }>> >> where N is the filter order. When writing W[n+1] I mean the >> coefficient vector which is used >> to compute the next output. I've used the nomenclature from Diniz' >> book on 'Adaptive Filtering' >> >> >> >> >> >>>> l - leakage 0< l< 1, in my case it is 0.999 >> >>> which is like a first-order filter pole. just like my "p". >> >>>> a - factor to influence speed of convergence, I get rather good >>>> results with setting this to -0.017 >> >>> this, or something proportional to it, is called the "adaptation gain". >> >>>> e - current error value from error microphone >> >>> so is the differencing done in the air? this does not make sense for an >>> LMS. e[n] is, as far as i can tell, *computed* from the output of the >>> FIR (what is defined by W[n] and what i thought is y[n]) and from >>> subtracting the "desired" signal (what i would think comes from this >>> microphone). you need to clear this up. >> >> yes, it's done in the air. >> The microphone serves two purposes: >> 1) record the 'desired' signal >> 2) measure the error betwen output signal and current noise >> >> It's something like a hybrid between a feedforward and feedback >> system, I guess this is >> what makes it so hard to understand - and explain :-/ >> >> >> >>>> power(X[n]) - the power of X[n] estimated by doing a scalar >>>> multiplication of X[n] with itself, >> >>> i don't get this. "scalar multiplication" of a vector with a vector? 
>>> do you mean dot product?: >> >> sorry, my bad. Yes I mean dot product (in German, Skalarprodukt is the >> common term >> which I mixed up with scalar multiplication...) >> >> >> >>> N-1 >>> power(X[n]) = SUM{ x[n-k]^2 } >>> k=0 >> >>> this is what i called "mean{ x[n]^2 }". >> >>> > this is the "normalize" part >> >>> dividing by that is the "normalize" part, i think you mean. >> >> yes >> >> >> >>>> s - small value to prevent coefficients getting too big when >>>> power(X[n]) is small >> >>> well, we don't ever want to divide by 0 but when the power is small, >>> e[n] and X[n] should also be small. so when the power is small, the >>> adaptation gets slower (which may be just fine, what you want). let's >>> assume that power(X[n]) is large enough that s is negligible, okay? >> >>> how are you computing the division? explicitly? sometimes, for NLMS, >>> when division is expensive. it can be computed (tracked) with a >>> multiply, compare, and negative feedback. >> >> my actual C code looks like this: >> >> tempUpdateFactor= a * e / templPower; >> >> As one can see, I've left out the s as it doesn't seem to be the >> problem here... >> So yes, I do explicit division. I suppose the "mulitply, compare, >> negative feedback" >> you've suggested would be to have the algorithm run faster? I haven't >> looked into >> this exact line for optimization yet because the vector calculations >> are much >> more time expensive. >> >> >> >> >> >> >> >>>> after calculating W[n+1], I've got an FIR Filter doing >> >>>> K-1 >>>> y[n] = SUM{ W[k] * Xo[n-k] } >>>> k=0 >>here it's K-1>>>> where >>>> Xo - template which did not go through the impulse response P >> >>> right here, i see a problem regarding the difference between the vector >>> X[n] and, apparently, the scalar values Xo[n-k]. is the vector X[n] >>> made up of the samples Xo[n-k]? if yes, your notation is bad (the >>> elements of X[n] should be small-case, just like y[n] is). if no, this >>> NLMS algorithm is messed up. 
>> >> I don't get it, if it was messed up this bad, it wouldn't work at all, >> would it? >> But my darned system works excellent for some time and then fades out >> gracefully... >> >> corrected notation: >> >> N-1 >> y[n] = SUM{ w[k] * xo[N-k] } >> k=0 >>first it was K, then it's N, now it N-1. need to get the symbols down, John. i don't think you have it clear about Xo and X>> I actually do this by using a library function to >> compute the dot product of the two vectors W[n] an Xo[n]. >> >> Xo and X are two different vectors. I've uploaded a new signal graph. >> It was >> revised to be coherent with the symbols in my equations:http://www.abload.de/img/signal_flow_coherentdusl.png >> >> >> >>> is P the impulse response of the room or whatever other path (comparable >>> to W[n])? >> >> Yes, P is the impulse response of the room. >> To be more specific, it is the plant model of the system >> DAC->amplifier->transducer->microphone->amplifier->ADCso it's a system identification thing (which adaptive filters are supposed to do).> > O.K., here is what may be happening. I took a quick look at the cited > paper. Read the paper again. At the output of the adaptive filter you > have an amplifier. You also state that P(z) is the room impulse > response. I think you have made a mistake here. In the paper M(z) is > the impulse response of the room, the speaker, etc. There is a H(z) > where you have the amplifier at the output of the adaptive filter. > H(z) represents the impulse response between the adaptive filter > output to the speaker to the microphone (page 286 - *H(z) the transfer > function between the control loudspeaker and the error microphone*). > Then, P(z) is the same as H(z), not the room impulse response. Take > another look at the paper. >and i'm not convinced that the schematic diagram you refer us to is right (that is, if you're trying to do noise cancellation). i don't get any of this "trigger" stuff. doesn't belong there. 
i believe what you're trying to accomplish is pretty close to the standard LMS or NLMS (or LNLMS), but instead of trying to match the "plant", which is the path from the ambient noise microphone to the earpiece, and computing the "error" or difference signal e[n] in the DSP, you're computing the e[n] in the air. in both cases, the adaptive filter is trying to minimize e[n]. you can start with a clean LMS filter (or NLMS or LNLMS) and change the topology a little (it really isn't changing it, just drawing the border around e[n] differently). i'm gonna force changing the notation to something more conventional in the lit. the regular LMS problem is this:

                     .-----------.  y[n]
               .---->|  h[k][n]  |------------.
               |     '-----------'            |
  x[n] --->---|                              (+)----> e[n] = y[n] - d[n]
               |     .--------.    d[n]       |
               '---->|  p[k]  |------->[-1]---'
                     '--------'

vector H[n] = { h[0][n], h[1][n], ... h[K-1][n] } is the FIR
vector P = { p[0], p[1], ... p[K-1] } is the plant.

the whole idea of LMS adaptive filtering is to get your FIR impulse response (vector H[n]) to match the plant (vector P) and when the match is good, e[n] should start to get small in magnitude. sometimes "d[n]" is called the "desired" signal and you're trying to get the FIR output, y[n], to match the plant output d[n]. the error signal e[n] feeds back somehow to change the h[k] so that h[k] matches p[k] by some measure. your FIR computes y[n] as

            K-1
  y[n]  =   SUM{ h[k][n] * x[n-k] }
            k=0

and your "plant" hypothetically creates d[n] as

            K-1
  d[n]  =   SUM{ p[k] * x[n-k] }
            k=0

but no one is doing that second summation. the physics of the plant is doing it. now your LNLMS (leaking normalized least mean square) filter is updating the FIR coefficients

  h[k][n+1] = h[k][n]*(1-leak) - mu*e[n]*x[n-k]/mean{ x[n]^2 }

where mu is the adaptation gain (small and positive) and 0 < leak << 1

                         K-1
  mean{ x[n]^2 } = 1/K * SUM{ (x[n-k])^2 }
                         k=0

the 1/K doesn't have to be computed if it gets incorporated into the mu coefficient. 
and this is a simple moving summation: after squaring, add one term in and subtract the term that is falling off the edge. there are other ways (like a first-order IIR) to calculate mean{x[n]^2}. you can add a small positive "s" term to this before dividing, if you want. you can show (using partial derivatives) that this term with the mu in it "nudges" the coefficients in such a way as to drive down the opposite direction of the gradient of e[n]^2 as a function of h[k][n]. conventionally, you have a microphone for d[n] and you do the subtraction of d[n] out of y[n] to get e[n]. now, John, first you need to get this mechanism understood. once it's understood, we can change it a little. first, since the subtraction will be done in the air, we recognize that it's really an addition that is physically done and fiddle with the signs a little:

                     .-----------.  y[n]
               .---->|  h[k][n]  |------------.
               |     '-----------'            |
  x[n] --->---|                              (+)----> e[n] = y[n] + d[n]
               |     .--------.    d[n]       |
               '---->|  p[k]  |------->-------'
                     '--------'

now both y[n] and e[n] are the negatives of what they used to be. just a sign change, but we want e[n]^2 to be small just the same. the result is that h[k][n] should converge to the *negative* of p[k]. the FIR is the negative of the plant, so that when they add, it gets closer to zero. you do the same song-and-dance to get the gradient of e[n]^2 as a function of all of the h[k] and it comes out the same:

  (d/dh[k])( e[n] )^2  =  2*e[n] * x[n-k]

so, to march down the gradient (with little steps), it's the same:

  h[k][n+1] = h[k][n]*(1-leak) - mu*e[n]*x[n-k]/mean{ x[n]^2 }

and there is nothing wrong with drawing the boundary differently, let the addition be done in the air, and assume the microphone (at the earpiece) is measuring e[n] instead of d[n]. makes no difference. i'm wondering if you get the polarity on the mic wrong, what that will do is drive the h[k] coefficients into the opposite direction. 
but, independent of the mic polarity, the h[k] impulse response should come out to be the negative of the hypothetical p[k]. i wonder if that's a problem. Tim or Eric or someone, can you comment? can the wrong polarity in the e[n] mic make this diverge instead of converge? it seems to me that it should still be adjusting h[k] to drive down e[n]^2, whether e[n] became negated or not, but i might be wrong about that. if so, that might answer the OP's problem. hmmm, as i ponder this, i think that the mic polarity, relative to the headphone speaker, must be consistent or the sign in front of mu has to change in the filter adaptation equation. -- r b-j rbj@audioimagination.com "Imagination is more important than knowledge."

Reply by ●August 17, 2011

Problem solved. The last few days I've been busy figuring out what the fractional delay introduced on each new period was causing. Well, it just keeps adding up, so that my recorded reference gets ahead of the actual noise. For some time the LMS is able to adapt to this by changing the FIR's delay. I've solved this by resetting the position of my reference on each trigger. So now the LMS only has to cope with a subsample delay and is able to compensate for the noise pretty well. Long story short: when you've got a template of your noise, be sure to keep it in sync with the actual noise! Thanks for the critical remarks on my algorithm, which brought me to carefully analyse it over and over again and finally find my error :-)

Reply by ●August 17, 2011

On 8/17/11 9:18 AM, JohnPower wrote:> Problem solved. >...> Long story short: When you've got a template of your noise, be sure to > keep it in sync with the actual noise! >why is there even such a thing as "a template of your noise"? why are you not using the very same noise signal (the "actual noise", i guess) for whatever purpose you have your template for? it's pretty hard to get the actual noise to be outa sync with itself. why is your LMS system topology anything different than what the normal topology is (whether the microphone is getting d[n] or e[n] is of no consequence)? -- r b-j rbj@audioimagination.com "Imagination is more important than knowledge."