Reply by robert bristow-johnson  August 19, 2011
On 8/18/11 4:27 AM, JohnPower wrote:
> On 17 Aug., 23:37, robert bristow-johnson <r...@audioimagination.com> wrote:
>> because y[n] is not correlated to anything in v[n], the LMS will not and
>> cannot do anything to match to it.  the y[n] can cancel d[n] as the LMS
>> hunts and converges to match h[k] to p[k], but v[n] will remain.
>> ...
>> but how can that template match the real noise in the real case?  is the
>> real noise that periodic and predictable?
>
> As I've mentioned earlier the real noise is in fact periodic and
> predictable.
then, instead of a template, the x[n] going in should be the live noise,
and the LMS will adapt (perhaps with a period delay) not to the current
period of the noise but to a previous period, depending on those delays.
otherwise i just don't see how you keep the two synced.  the recorded
version you're playing back (and having the LMS adapt to) is like a
delayed, but live, version.  at least the delayed version will not play
out of sync.

--
r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."
Reply by JohnPower  August 18, 2011
On 17 Aug., 23:37, robert bristow-johnson <r...@audioimagination.com>
wrote:
> On 8/17/11 3:38 PM, JohnPower wrote:
> > ...
> > The problem with the normal topology is the delay introduced by the
> > signal processing chain.
>
> i dunno what delay (in the DSP chain) you mean...
mainly the conversion time of the ADC and DAC (approx. 1.4 ms in the case
of the eval board I'm using) -> the feedforward microphone would have to
be placed half a meter away from the patient...
> ... but i *do* know that you cannot have any ferro-magnetic material
> inside or attached to the patient/subject inside an MRI.
>
> so do you have plastic tubes delivering sound to the patient's ear
> (sorta like the airlines had before the 1990s)?  do you have another
> tube placed there as a pickup (that eventually goes to a microphone)?
> those tubes can certainly introduce delay (as well as other filtering).
The microphone we use is an optical one connected with optic fibre. The headphone is special MRI equipment with electrostatic transducers. So there are no tubes causing any delay. As you've mentioned, there might be some additional delay from filtering in the third-party communication equipment.
> but how can that template match the real noise in the real case?  is the
> real noise that periodic and predictable?  is the operation one where
> the patient has to hear the noise while you record it and adapt to it
> and then, a few seconds later, they can get relief?
As I've mentioned earlier, the real noise is in fact periodic and
predictable. One MRI scan consists of the acquisition of several slices or
volumes of the brain (of course there are more sophisticated techniques).
Each slice or volume produces very similar noise (see
http://h5.abload.de/img/trigger_and_noisevk6x.png). The duration of the
noise recording depends on how long it takes to acquire one slice or
volume, and is in fact in the range of a few seconds.
Reply by robert bristow-johnson  August 17, 2011
On 8/17/11 3:38 PM, JohnPower wrote:
> On 17 Aug., 19:34, robert bristow-johnson <r...@audioimagination.com> wrote:
>> why is your LMS system topology anything different than what the normal
>> topology is (whether the microphone is getting d[n] or e[n] is of no
>> consequence)?
>
> The problem with the normal topology is the delay introduced by the
> signal processing chain.
i dunno what delay (in the DSP chain) you mean...
> My ANC system is meant to operate inside an MRI scanner so there's not
> enough space to place the microphone far away from the patient's ear to
> have enough time due to the sound's propagation in the air as it would
> be needed for a feedforward system.
... but i *do* know that you cannot have any ferro-magnetic material
inside or attached to the patient/subject inside an MRI.  i understand
that you cannot have a conventional headphone nor microphone placed by
the patient's head in an MRI.  (there was a story on NPR where some
researcher is studying the "creative brain" and they had to design a
music keyboard that was MRI safe.)

so do you have plastic tubes delivering sound to the patient's ear
(sorta like the airlines had before the 1990s)?  do you have another
tube placed there as a pickup (that eventually goes to a microphone)?
those tubes can certainly introduce delay (as well as other filtering).
> Also the MRI scanner cannot be seen as a point source because it
> surrounds the patient's head.
so you would need "ambient" noise pickups placed at several places around the patient's head to get noise signals to cancel.
> Whereas a feedback system would also attempt to cancel out acoustic
> stimuli or communication between health personnel and the patient.
so *those* signals are not part of x[n] (what goes into both the plant
*and* the LMS filter).  then the LMS treats those as "noise" and cannot
focus a filter on them to cancel them.  like this:

               .-----------.  y[n]
         .---->|  h[k][n]  |------>-----.
         |     '-----------'            |
  x[n] --->---|                        (+)----> e[n] = y[n] + d[n]
         |     .--------.  d[n]         |               + v[n]
         '---->|  p[k]  |--->--(+)--->--'
               '--------'       ^
                                |
                    v[n] ---->--'

because y[n] is not correlated to anything in v[n], the LMS will not and
cannot do anything to match to it.  the y[n] can cancel d[n] as the LMS
hunts and converges to match h[k] to p[k], but v[n] will remain.  this is
actually the normal model for applications like speaker-phone feedback
cancellation.
> So, this is why I'm recording a template and use this template as a
> feedforward source...
but how can that template match the real noise in the real case?  is the
real noise that periodic and predictable?  is the operation one where
the patient has to hear the noise while you record it and adapt to it
and then, a few seconds later, they can get relief?

--
r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."
Reply by JohnPower  August 17, 2011
On 17 Aug., 19:34, robert bristow-johnson <r...@audioimagination.com>
wrote:
> On 8/17/11 9:18 AM, JohnPower wrote:
> > Long story short: When you've got a template of your noise, be sure to
> > keep it in sync with the actual noise!
>
> why is there even such a thing as "a template of your noise"?  why are
> you not using the very same noise signal (the "actual noise", i guess)
> for whatever purpose you have your template for?  it's pretty hard to
> get the actual noise to be outa sync with itself.
>
> why is your LMS system topology anything different than what the normal
> topology is (whether the microphone is getting d[n] or e[n] is of no
> consequence)?
The problem with the normal topology is the delay introduced by the
signal processing chain. My ANC system is meant to operate inside an
MRI scanner, so there's not enough space to place the microphone far
enough away from the patient's ear to gain the lead time from the
sound's propagation in the air that a feedforward system would need.
Also, the MRI scanner cannot be seen as a point source because it
surrounds the patient's head. A feedback system, on the other hand,
would also attempt to cancel out acoustic stimuli or communication
between health personnel and the patient. So, this is why I'm recording
a template and using this template as a feedforward source...
Reply by robert bristow-johnson  August 17, 2011
On 8/17/11 9:18 AM, JohnPower wrote:
> Problem solved.
...
> Long story short: When you've got a template of your noise, be sure to
> keep it in sync with the actual noise!
why is there even such a thing as "a template of your noise"?  why are
you not using the very same noise signal (the "actual noise", i guess)
for whatever purpose you have your template for?  it's pretty hard to
get the actual noise to be outa sync with itself.

why is your LMS system topology anything different than what the normal
topology is (whether the microphone is getting d[n] or e[n] is of no
consequence)?

--
r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."
Reply by JohnPower  August 17, 2011
Problem solved.

The last few days I've been busy figuring out what the fractional
delay introduced on each new period was causing. It just keeps
adding up, so that my recorded reference drifts ahead of the actual
noise. For some time the LMS is able to adapt to this by shifting the
FIR's delay.
I've solved this by resetting the position of my reference on each
trigger. Now the LMS only has to cope with a subsample delay and is
able to compensate the noise pretty well.

Long story short: When you've got a template of your noise, be sure to
keep it in sync with the actual noise!

Thanks for the critical remarks on my algorithm, which made me
analyse it carefully over and over again and finally find my
error :-)
Reply by robert bristow-johnson  August 13, 2011
On 8/12/11 11:33 AM, maury wrote:
> On Aug 11, 7:35 am, JohnPower <asdfghjkl...@googlemail.com> wrote:
>> On Aug 11, 3:16 am, robert bristow-johnson <r...@audioimagination.com>
>> wrote:
>> ...
>>>>>> For my master's thesis I'm currently trying to implement an active
>>>>>> noise cancellation (ANC) system on a ADSP-21369 EZ-KIT Lite. I've
>>>>>> chosen the Normalized Leaky LMS algorithm to update (451)
>>>>>> coefficients of a FIR filter.
>> ...
>>>> my equation for the NLLMS is (vectors uppercase, scalars lowercase):
>>>>
>>>> W[n+1] = W[n] * l + a*e[n]*X[n] / (s + power(X[n]))
>> ...
>> my FIR input vector Xo[n] has the same dimensions as the coefficients
>> vector W[n]:
>>
>> Xo[n] = { x[0], x[1], ... , x[N] }
this can't be right. i think you mean { x[n], x[n-1], ... x[n-N+1] }. and why isn't this vector named "X"? if this is not X, then what is in the X vector?
>> W[n] = { w[0], w[1], ... , w[N] }
this should have two indices on each vector element:

   W[n] = { w[0][n], w[1][n], ... , w[N-1][n] }
>> yes, it's done in the air.
>> The microphone serves two purposes:
>> 1) record the 'desired' signal
>> 2) measure the error between output signal and current noise
>> ...
>> my actual C code looks like this:
>>
>> tempUpdateFactor = a * e / templPower;
>>
>> So yes, I do explicit division.
>> ...
>>>> after calculating W[n+1], I've got an FIR Filter doing
>>>>
>>>>          K-1
>>>> y[n]  =  SUM{ W[k] * Xo[n-k] }
>>>>          k=0
here it's K-1
>>>> where
>>>> Xo - template which did not go through the impulse response P
>> ...
>> corrected notation:
>>
>>          N-1
>> y[n]  =  SUM{ w[k] * xo[N-k] }
>>          k=0
first it was K, then it's N, now it's N-1.  need to get the symbols down,
John.  i don't think you have it clear about Xo and X.
>> I actually do this by using a library function to compute the dot
>> product of the two vectors W[n] and Xo[n].
>>
>> Xo and X are two different vectors.  I've uploaded a new signal graph,
>> revised to be coherent with the symbols in my equations:
>> http://www.abload.de/img/signal_flow_coherentdusl.png
>>
>>> is P the impulse response of the room or whatever other path
>>> (comparable to W[n])?
>>
>> Yes, P is the impulse response of the room.
>> To be more specific, it is the plant model of the system
>> DAC->amplifier->transducer->microphone->amplifier->ADC
so it's a system identification thing (which adaptive filters are supposed to do).
> O.K., here is what may be happening.  I took a quick look at the cited
> paper.  Read the paper again.  At the output of the adaptive filter you
> have an amplifier.  You also state that P(z) is the room impulse
> response.  I think you have made a mistake here.  In the paper M(z) is
> the impulse response of the room, the speaker, etc.  There is a H(z)
> where you have the amplifier at the output of the adaptive filter.
> H(z) represents the impulse response between the adaptive filter
> output to the speaker to the microphone (page 286 - *H(z) the transfer
> function between the control loudspeaker and the error microphone*).
> Then, P(z) is the same as H(z), not the room impulse response.  Take
> another look at the paper.
and i'm not convinced that the schematic diagram you refer us to is right
(that is, if you're trying to do noise cancellation).  i don't get any of
this "trigger" stuff.  doesn't belong there.

i believe what you're trying to accomplish is pretty close to the standard
LMS or NLMS (or LNLMS), but instead of trying to match the "plant", which
is the path from the ambient noise microphone to the earpiece, and
computing the "error" or difference signal e[n] in the DSP, you're
computing the e[n] in the air.  in both cases, the adaptive filter is
trying to minimize e[n].  you can start with a clean LMS filter (or NLMS
or LNLMS) and change the topology a little (it really isn't changing it,
just drawing the border around e[n] differently).

i'm gonna force changing the notation to something more conventional in
the lit.  the regular LMS problem is this:

               .-----------.  y[n]
         .---->|  h[k][n]  |------------.
         |     '-----------'            |
  x[n] --->---|                        (+)----> e[n] = y[n] - d[n]
         |     .--------.  d[n]         |
         '---->|  p[k]  |-------->[-1]--'
               '--------'

vector H[n] = { h[0][n], h[1][n], ... h[K-1][n] } is the FIR and
vector P = { p[0], p[1], ... p[K-1] } is the plant.

the whole idea of LMS adaptive filtering is to get your FIR impulse
response (vector H[n]) to match the plant (vector P), and when the match
is good, e[n] should start to get small in magnitude.  sometimes "d[n]" is
called the "desired" signal and you're trying to get the FIR output, y[n],
to match the plant output d[n].  the error signal e[n] feeds back somehow
to change the h[k] so that h[k] matches p[k] by some measure.

your FIR computes y[n] as

            K-1
   y[n]  =  SUM{ h[k][n] * x[n-k] }
            k=0

and your "plant" hypothetically creates d[n] as

            K-1
   d[n]  =  SUM{ p[k] * x[n-k] }
            k=0

but no one is doing that second summation.  the physics of the plant is
doing it.
now your LNLMS (leaky normalized least mean square) filter is updating the
FIR coefficients

   h[k][n+1]  =  h[k][n]*(1-leak)  -  mu*e[n]*x[n-k]/mean{ x[n]^2 }

where mu is the adaptation gain (small and positive), 0 < leak << 1, and

                            K-1
   mean{ x[n]^2 }  =  1/K * SUM{ (x[n-k])^2 }
                            k=0

the 1/K doesn't have to be computed if it gets incorporated into the mu
coefficient.  and this is a simple moving summation, after squaring: add
one term in and subtract the term that is falling off the edge.  there are
other ways (like a first-order IIR) to calculate mean{ x[n]^2 }.  you can
add a small positive "s" term to this before dividing, if you want.

you can show (using partial derivatives) that this term with the mu in it
"nudges" the coefficients in such a way as to drive them in the direction
opposite the gradient of e[n]^2 as a function of h[k][n].  conventionally,
you have a microphone for d[n] and you do the subtraction of d[n] out of
y[n] to get e[n].

now, John, first you need to get this mechanism understood.  once it's
understood, we can change it a little.  first, since the subtraction will
be done in the air, we recognize that it's really an addition that is
physically done and fiddle with the signs a little:

               .-----------.  y[n]
         .---->|  h[k][n]  |------------.
         |     '-----------'            |
  x[n] --->---|                        (+)----> e[n] = y[n] + d[n]
         |     .--------.  d[n]         |
         '---->|  p[k]  |-------->------'
               '--------'

now both y[n] and e[n] are the negatives of what they used to be.  just a
sign change, but we want e[n]^2 to be small just the same.  the result is
that h[k][n] should converge to the *negative* of p[k].  the FIR is the
negative of the plant, so that when they add, it gets closer to zero.
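The "moving summation" of squares mentioned above (add the newest squared term, subtract the one falling off the edge of the window) can be kept as running state. A sketch with made-up names; the window length K here is purely illustrative:

```c
/* Running power estimate over a length-K window: O(1) per sample instead
 * of re-summing K squares.  The 1/K of the mean is folded into mu. */
#define K 4   /* window/filter length (illustrative) */

typedef struct {
    double buf[K];   /* last K input samples */
    int    idx;      /* circular write index */
    double sumsq;    /* running sum of x^2 over the window */
} power_tracker;

double power_step(power_tracker *pt, double x)
{
    double old = pt->buf[pt->idx];
    pt->sumsq += x * x - old * old;   /* add new term, drop the oldest */
    pt->buf[pt->idx] = x;
    pt->idx = (pt->idx + 1) % K;
    /* note: with floating point this running sum can drift from round-off;
       it is worth refreshing it with a full re-summation occasionally */
    return pt->sumsq;
}
```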
you do the same song-and-dance to get the gradient of e[n]^2 as a function
of all of the h[k] and it comes out the same:

   (d/dh[k])( e[n] )^2  =  2*e[n] * x[n-k]

so, to march down the gradient (with little steps), it's the same:

   h[k][n+1]  =  h[k][n]*(1-leak)  -  mu*e[n]*x[n-k]/mean{ x[n]^2 }

and there is nothing wrong with drawing the boundary differently, letting
the addition be done in the air, and assuming the microphone (at the
earpiece) is measuring e[n] instead of d[n].  makes no difference.

i'm wondering if you get the polarity on the mic wrong, what that will do
is drive the h[k] coefficients into the opposite direction.  but,
independent of the mic polarity, the h[k] impulse response should come out
to be the negative of the hypothetical p[k].  i wonder if that's a problem.

Tim or Eric or someone, can you comment?  can the wrong polarity in the
e[n] mic make this diverge instead of converge?  it seems to me that it
should still be adjusting h[k] to drive down e[n]^2, whether e[n] became
negated or not, but i might be wrong about that.  if so, that might answer
the OP's problem.  hmmm, as i ponder this, i think that the mic polarity,
relative to the headphone speaker, must be consistent or the sign in front
of mu has to change in the filter adaptation equation.

--
r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."
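The coefficient update written out above translates almost line for line into code. A sketch with my own variable names (note that John's `a` is the negative of robert's positive `mu`, which is why his working value is -0.017):

```c
/* One LNLMS coefficient update, following the equation above:
 *   h[k] <- h[k]*(1-leak) - mu*e*x[n-k] / (s + sum of x^2)
 * x[k] holds x[n-k]; the 1/K of the mean is folded into mu. */
void lnlms_update(double h[], const double x[], int K,
                  double e, double mu, double leak, double s)
{
    double power = 0.0;
    for (int k = 0; k < K; k++)
        power += x[k] * x[k];          /* power(X[n]), without the 1/K */

    double g = mu * e / (s + power);   /* normalized step size */
    for (int k = 0; k < K; k++)
        h[k] = h[k] * (1.0 - leak) - g * x[k];
}
```

In a real loop `power` would be tracked incrementally rather than re-summed every sample, and `s > 0` guards the division when the input goes quiet.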
Reply by maury  August 12, 2011
On Aug 11, 7:35 am, JohnPower <asdfghjkl...@googlemail.com> wrote:
> On Aug 11, 3:16 am, robert bristow-johnson <r...@audioimagination.com>
> wrote:
> ...
> > there are serious problems with that schematic.  are you trying to
> > cancel the sound out of the loudspeaker?  what is the "Recorded Audio"?
> > how is it related to what's in the "Memory Bank"?  why is this signal
> > not derived from a common source?
>
> Yes, I try to cancel the sound out of the loudspeaker.  The term
> "Recorded audio" refers to the fact that, for testing, the noise and the
> triggers were recorded with a PC which now plays them back through the
> soundcard and serves as noise source.  The "memory bank" is not related
> to this.  The memory bank is used to store the first period of noise
> (recorded using the microphone), which is then used as a "feed forward"
> source for the LMS algorithm.
>
> So this is what my PC puts out:
> http://www.abload.de/img/trigger_and_noisevk6x.png
> ...
> > > Having said this, I could also think of the following kind of
> > > "unpredicted delay" in my system:
> > > My sampling frequency for the audio signal is 48 kHz, so clearly I
> > > have a limited time-resolution.  Now let the periodic length of the
> > > noise be some fractional number of samples... which is quite probable
> > > I'd suppose.  So, my triggers are 40000.5 samples apart, resulting in
> > > a template of 40001 samples.  This results in half a sample too much
> > > on every noise-iteration.  Could this bring my algorithm to shift the
> > > coefficients by half a sample every time until they are completely
> > > out of shape?
> >
> > oh, this is complicated.  i have to read this over and over to try to
> > figure out why you're doing what you're doing.
> >
> > the LMS alg should adapt itself in such a way to take care of
> > fractional-sample delays.
&#4294967295;older than the web. &#4294967295;originally newsgroup postings > > were passed around between universities and such by means other than the > > internet. > > Haha, glad to be of enjoyment :-) Not being a EE student, but a > medical > engineering one, maybe I just didn't dare posting on the usenet > before :-D >
O.K., here is what may be happening. I took a quick look at the cited
paper. Read the paper again. At the output of the adaptive filter you
have an amplifier. You also state that P(z) is the room impulse
response. I think you have made a mistake here. In the paper, M(z) is
the impulse response of the room, the speaker, etc. There is an H(z)
where you have the amplifier at the output of the adaptive filter.
H(z) represents the impulse response from the adaptive filter output,
through the speaker, to the microphone (page 286 - *H(z), the transfer
function between the control loudspeaker and the error microphone*).
So P(z) is the same as H(z), not the room impulse response. Take
another look at the paper.

Maurice Givens
Reply by JohnPower August 11, 2011
On Aug 11, 3:16 am, robert bristow-johnson <r...@audioimagination.com>
wrote:
> On 8/10/11 1:56 PM, JohnPower wrote: > > > > > > > > > > > On 3 Aug., 15:36, robert bristow-johnson<r...@audioimagination.com> > > wrote: > >> On 8/3/11 8:04 AM, JohnPower wrote: > > >>> For my master's thesis I'm currently trying to implement an active > >>> noise cancellation (ANC) system on a ADSP-21369 EZ-KIT Lite. I've > >>> chosen the Normalized Leaky LMS algorithm to update (451) coefficients > >>> of a FIR filter. > > >> John, would you mind plopping down the relevant equations for the NLLMS? > > >> i know how NLMS works and i have an idea how to make the coefficient > >> updating to be leaky, but i dunno how to get a handle on your problem > >> without seeing the equations. > > >> -- > > >> r b-j r...@audioimagination.com > > >> "Imagination is more important than knowledge." > > > this being my first thread in a usenet group, I've somehow expected to > > get a notification on new messages... well, sorry for the delay (and > > also the double post...). > > > my equation for the NLLMS is (vectors uppercase, scalars lowercase): > > > W[n+1] = W[n] * l + a*e[n]*X[n] / (s + power(X[n]) > > i guessed pretty good: > > g[n] = (2*mu) / mean{ x[n]^2 } > > h[k] <-- p*h[k] - g[n] * e[n]*x[n-k] at time n. > > different symbols and i missed the "s" term. > > > where > > W - coefficient vector > > X - vector which contains last N samples from my template which was > > treated with the impulse response P I've measured via MLS > > since you're not using the shorthand i was using, then let's explicitly > expand the notation so we know exactly what is being done. using C-like > notation for 2-dim arrays: > > coef vector: W[n] = { w[0][n], w[1][n], ... w[k][n], ... w[K-1][n] } > > FIR input vector: X[n] = { x[n], x[n-1], ... x[n-k], ... x[n-K+1] } > > we need to do this, because i see an ambiguity in your equation below. > can you confirm this? >
my FIR input vector Xo[n] has the same dimensions as the coefficient
vector W[n]:

   Xo[n] = { x[0], x[1], ... , x[N] }
   W[n]  = { w[0], w[1], ... , w[N] }

where N is the filter order. When writing W[n+1] I mean the coefficient
vector which is used to compute the next output. I've used the
nomenclature from Diniz' book on 'Adaptive Filtering'.
> > l - leakage 0< l< 1, in my case it is 0.999 > > which is like a first-order filter pole. just like my "p". > > > a - factor to influence speed of convergence, I get rather good > > results with setting this to -0.017 > > this, or something proportional to it, is called the "adaptation gain". > > > e - current error value from error microphone > > so is the differencing done in the air? this does not make sense for an > LMS. e[n] is, as far as i can tell, *computed* from the output of the > FIR (what is defined by W[n] and what i thought is y[n]) and from > subtracting the "desired" signal (what i would think comes from this > microphone). you need to clear this up.
yes, it's done in the air. The microphone serves two purposes:
1) record the 'desired' signal
2) measure the error between the output signal and the current noise

It's something like a hybrid between a feedforward and a feedback
system. I guess this is what makes it so hard to understand - and
explain :-/
> > > power(X[n]) - the power of X[n] estimated by doing a scalar > > multiplication of X[n] with itself, > > i don't get this. "scalar multiplication" of a vector with a vector? > do you mean dot product?:
sorry, my bad. Yes I mean dot product (in German, Skalarprodukt is the common term which I mixed up with scalar multiplication...)
> > N-1 > power(X[n]) = SUM{ x[n-k]^2 } > k=0 > > this is what i called "mean{ x[n]^2 }". > > > this is the "normalize" part > > dividing by that is the "normalize" part, i think you mean.
yes
> > > s - small value to prevent coefficients getting too big when > > power(X[n]) is small > > well, we don't ever want to divide by 0 but when the power is small, > e[n] and X[n] should also be small. so when the power is small, the > adaptation gets slower (which may be just fine, what you want). let's > assume that power(X[n]) is large enough that s is negligible, okay? > > how are you computing the division? explicitly? sometimes, for NLMS, > when division is expensive. it can be computed (tracked) with a > multiply, compare, and negative feedback.
my actual C code looks like this:

   tempUpdateFactor = a * e / templPower;

As one can see, I've left out the s as it doesn't seem to be the
problem here... So yes, I do explicit division. I suppose the
"multiply, compare, negative feedback" you've suggested would be to
make the algorithm run faster? I haven't looked into this exact line
for optimization yet, because the vector calculations take much more
time.
> > > > > after calculating W[n+1], I've got an FIR Filter doing > > > K-1 > > y[n] = SUM{ W[k] * Xo[n-k] } > > k=0 > > > where > > Xo - template which did not go through the impulse response P > > right here, i see a problem regarding the difference between the vector > X[n] and, apparently, the scalar values Xo[n-k]. is the vector X[n] > made up of the samples Xo[n-k]? if yes, your notation is bad (the > elements of X[n] should be small-case, just like y[n] is). if no, this > NLMS algorithm is messed up.
I don't get it - if it was messed up this badly, it wouldn't work at
all, would it? But my darned system works excellently for some time
and then fades out gracefully...

corrected notation:

          N-1
   y[n] = SUM{ w[k] * xo[n-k] }
          k=0

I actually do this by using a library function to compute the dot
product of the two vectors W[n] and Xo[n].

Xo and X are two different vectors. I've uploaded a new signal graph.
It was revised to be coherent with the symbols in my equations:
http://www.abload.de/img/signal_flow_coherentdusl.png
> > is P the impulse response of the room or whatever other path (comparable > to W[n])?
Yes, P is the impulse response of the room. To be more specific, it is
the plant model of the system
DAC -> amplifier -> transducer -> microphone -> amplifier -> ADC.
> > so far, we do not have a clear definition of e[n]. how is e[n] computed? > > > Maybe this overview helps to straighten out misunderstandings > > resulting from my lacking skills of the English language: > >http://www.abload.de/img/signal_flowgus5.png > > there are serious problems with that schematic. are you trying to > cancel the sound out of the loudspeaker? what is the "Recorded Audio"? > how is it related to what's in the "Memory Bank"? why is this signal > not derived from a common source?
Yes, I try to cancel the sound out of the loudspeaker. The term
"Recorded audio" refers to the fact that, for testing, the noise and
the triggers were recorded with a PC, which now plays them back through
the soundcard and serves as the noise source.

The "memory bank" is not related to this. The memory bank is used to
store the first period of noise (recorded using the microphone), which
is then used as a "feed forward" source for the LMS algorithm.

So this is what my PC puts out:
http://www.abload.de/img/trigger_and_noisevk6x.png
> > > this was taken, and slightly modified, from "Active control of the > > volume acquisition noise in functional magnetic resonance imaging: > > Method and psychoacoustical evaluation " by John Chambers > > (unfortunately, one has to pay for the paper and I don't think the > > publishers would be happy with me sharing copies here. The follow up > > paper by the same author can be obtained here: > >http://www.coe.pku.edu.cn/tpic/2010913102933476.pdf) > > so far, i dunno if i need it. > > > > > > > > > > > Tim: the noise microphone is in fact in the same space in which the > > cancellation is happening (as can be seen in the picture a few lines > > above). But also, for testing purposes, I've tested my algorith with > > only electrical signals using an Op-Amp configured as adder to > > superimpose my noise signal with the one intended to cancel out the > > former.This was also the case when I took the above coefficient > > snapshots. > > Even if there were unexpected delays due to, say the DAC or ADC, isn't > > the LMS supposed to follow such changes in the system? And if not, > > how could I take care of it? > > > Having said this, I could also think of the following kind of > > "unpredicted delay" in my system: > > My sampling frequency for the audio signal is 48kHz, so clearly I have > > a limited time-resolution. Now let the periodic length of the noise be > > some fractional number of samples... which is quite probable I'd > > suppose. So, my triggers are 40000.5 samples apart, resulting in a > > template of 40001 samples. This results in half a sample too much on > > every noise-iteration. Could this bring my algorithm to shift the > > coefficients by half a sample every time until they are completely out > > of shape? > > oh, this is complicated. i have to read this over and over to try to > figure out why you're doing what you're doing. > > the LMS alg should adapt itself in such a way to take care of > fractional-sample delays. 
> > > I hope I was able to give a better insight into my system and will try > > to monitor this thread better ;) > > don't sweat it. we're sorta amuzed that you got to senior or grad > school level EE without having discovered and used USENET before. > USENET is very old. older than the web. originally newsgroup postings > were passed around between universities and such by means other than the > internet.
Haha, glad to be of enjoyment :-) Not being an EE student but a medical
engineering one, maybe I just didn't dare to post on the usenet
before :-D
> > -- > > r b-j r...@audioimagination.com > > "Imagination is more important than knowledge."
Reply by robert bristow-johnson August 10, 2011
On 8/10/11 1:56 PM, JohnPower wrote:
> On 3 Aug., 15:36, robert bristow-johnson<r...@audioimagination.com> > wrote: >> On 8/3/11 8:04 AM, JohnPower wrote: >> >> >> >>> For my master's thesis I'm currently trying to implement an active >>> noise cancellation (ANC) system on a ADSP-21369 EZ-KIT Lite. I've >>> chosen the Normalized Leaky LMS algorithm to update (451) coefficients >>> of a FIR filter. >> >> John, would you mind plopping down the relevant equations for the NLLMS? >> >> i know how NLMS works and i have an idea how to make the coefficient >> updating to be leaky, but i dunno how to get a handle on your problem >> without seeing the equations. >> >> -- >> >> r b-j r...@audioimagination.com >> >> "Imagination is more important than knowledge." > > this being my first thread in a usenet group, I've somehow expected to > get a notification on new messages... well, sorry for the delay (and > also the double post...). > > my equation for the NLLMS is (vectors uppercase, scalars lowercase): > > W[n+1] = W[n] * l + a*e[n]*X[n] / (s + power(X[n])) >
i guessed pretty good:

   g[n] = (2*mu) / mean{ x[n]^2 }

   h[k] <-- p*h[k] - g[n] * e[n]*x[n-k]      at time n.

different symbols and i missed the "s" term.
> where > W - coefficient vector > X - vector which contains last N samples from my template which was > treated with the impulse response P I've measured via MLS
since you're not using the shorthand i was using, then let's explicitly
expand the notation so we know exactly what is being done. using C-like
notation for 2-dim arrays:

   coef vector:      W[n] = { w[0][n], w[1][n], ... w[k][n], ... w[K-1][n] }

   FIR input vector: X[n] = { x[n], x[n-1], ... x[n-k], ... x[n-K+1] }

we need to do this, because i see an ambiguity in your equation below.
can you confirm this?
> l - leakage 0< l< 1, in my case it is 0.999
which is like a first-order filter pole. just like my "p".
> a - factor to influence speed of convergence, I get rather good > results with setting this to -0.017
this, or something proportional to it, is called the "adaptation gain".
> e - current error value from error microphone
so is the differencing done in the air? this does not make sense for an LMS. e[n] is, as far as i can tell, *computed* from the output of the FIR (what is defined by W[n] and what i thought is y[n]) and from subtracting the "desired" signal (what i would think comes from this microphone). you need to clear this up.
> power(X[n]) - the power of X[n] estimated by doing a scalar > multiplication of X[n] with itself,
i don't get this.  "scalar multiplication" of a vector with a vector?
do you mean dot product?:

                  K-1
   power(X[n]) =  SUM{ x[n-k]^2 }
                  k=0

this is what i called "mean{ x[n]^2 }".

> this is the "normalize" part

dividing by that is the "normalize" part, i think you mean.
> s - small value to prevent coefficients getting too big when > power(X[n]) is small
well, we don't ever want to divide by 0, but when the power is small,
e[n] and X[n] should also be small.  so when the power is small, the
adaptation gets slower (which may be just fine, what you want).  let's
assume that power(X[n]) is large enough that s is negligible, okay?

how are you computing the division?  explicitly?  sometimes, for NLMS,
when division is expensive, it can be computed (tracked) with a
multiply, compare, and negative feedback.
> after calculating W[n+1], I've got an FIR Filter doing
>
>          K-1
>   y[n] = SUM{ W[k] * Xo[n-k] }
>          k=0
>
> where
> Xo - template which did not go through the impulse response P
right here, i see a problem regarding the difference between the vector X[n] and, apparently, the scalar values Xo[n-k]. is the vector X[n] made up of the samples Xo[n-k]? if yes, your notation is bad (the elements of X[n] should be small-case, just like y[n] is). if no, this NLMS algorithm is messed up. is P the impulse response of the room or whatever other path (comparable to W[n])? so far, we do not have a clear definition of e[n]. how is e[n] computed?
> Maybe this overview helps to straighten out misunderstandings > resulting from my lacking skills of the English language: > http://www.abload.de/img/signal_flowgus5.png
there are serious problems with that schematic. are you trying to cancel the sound out of the loudspeaker? what is the "Recorded Audio"? how is it related to what's in the "Memory Bank"? why is this signal not derived from a common source?
> this was taken, and slightly modified, from "Active control of the > volume acquisition noise in functional magnetic resonance imaging: > Method and psychoacoustical evaluation " by John Chambers > (unfortunately, one has to pay for the paper and I don't think the > publishers would be happy with me sharing copies here. The follow up > paper by the same author can be obtained here: > http://www.coe.pku.edu.cn/tpic/2010913102933476.pdf )
so far, i dunno if i need it.
> Tim: the noise microphone is in fact in the same space in which the > cancellation is happening (as can be seen in the picture a few lines > above). But also, for testing purposes, I've tested my algorith with > only electrical signals using an Op-Amp configured as adder to > superimpose my noise signal with the one intended to cancel out the > former.This was also the case when I took the above coefficient > snapshots. > Even if there were unexpected delays due to, say the DAC or ADC, isn't > the LMS supposed to follow such changes in the system? And if not, > how could I take care of it? > > Having said this, I could also think of the following kind of > "unpredicted delay" in my system: > My sampling frequency for the audio signal is 48kHz, so clearly I have > a limited time-resolution. Now let the periodic length of the noise be > some fractional number of samples... which is quite probable I'd > suppose. So, my triggers are 40000.5 samples apart, resulting in a > template of 40001 samples. This results in half a sample too much on > every noise-iteration. Could this bring my algorithm to shift the > coefficients by half a sample every time until they are completely out > of shape?
oh, this is complicated. i have to read this over and over to try to figure out why you're doing what you're doing. the LMS alg should adapt itself in such a way to take care of fractional-sample delays.
> I hope I was able to give a better insight into my system and will try > to monitor this thread better ;)
don't sweat it.  we're sorta amused that you got to senior or grad
school level EE without having discovered and used USENET before.
USENET is very old.  older than the web.  originally newsgroup postings
were passed around between universities and such by means other than
the internet.

--

r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."