DSPRelated.com
Forums

why are LMS coefficients wandering away?

Started by JohnPower August 3, 2011
On 17 Aug., 19:34, robert bristow-johnson <r...@audioimagination.com>
wrote:
> On 8/17/11 9:18 AM, JohnPower wrote:
>
> > Problem solved.
>
> ...
>
> > Long story short: When you've got a template of your noise, be sure to
> > keep it in sync with the actual noise!
>
> why is there even such a thing as "a template of your noise"?  why are
> you not using the very same noise signal (the "actual noise", i guess)
> for whatever purpose you have your template for?  it's pretty hard to
> get the actual noise to be outa sync with itself.
>
> why is your LMS system topology anything different than what the normal
> topology is (whether the microphone is getting d[n] or e[n] is of no
> consequence)?
>
> --
>
> r b-j          r...@audioimagination.com
>
> "Imagination is more important than knowledge."
The problem with the normal topology is the delay introduced by the signal processing chain. My ANC system is meant to operate inside an MRI scanner, so there's not enough space to place the microphone far enough away from the patient's ear to gain the lead time from the sound's propagation through the air that a feedforward system would need. Also, the MRI scanner cannot be treated as a point source because it surrounds the patient's head. A feedback system, on the other hand, would also attempt to cancel out acoustic stimuli or communication between the health personnel and the patient. So this is why I'm recording a template and using that template as a feedforward source...
On 8/17/11 3:38 PM, JohnPower wrote:
> On 17 Aug., 19:34, robert bristow-johnson<r...@audioimagination.com> wrote:
> [...]
>> why is your LMS system topology anything different than what the normal
>> topology is (whether the microphone is getting d[n] or e[n] is of no
>> consequence)?
>
> The problem with the normal topology is the delay introduced by the
> signal processing chain.
i dunno what delay (in the DSP chain) you mean...
> My ANC system is meant to operate inside an
> MRI scanner so there's not enough space to place the microphone far
> away from the patient's ear to have enough time due to the sound's
> propagation in the air as it would be needed for a feedforward system.
... but i *do* know that you cannot have any ferro-magnetic material inside or attached to the patient/subject inside an MRI.  i understand that you cannot have a conventional headphone nor microphone placed by the patient's head in an MRI.  (there was a story on NPR where some researcher is studying the "creative brain" and they had to design a music keyboard that was MRI safe.)

so do you have plastic tubes delivering sound to the patient's ear (sorta like the airlines had before the 1990s)?  do you have another tube placed there as a pickup (that eventually goes to a microphone)?  those tubes can certainly introduce delay (as well as other filtering).
> Also the MRI scanner cannot be seen as a point source because it
> surrounds the patient's head.
so you would need "ambient" noise pickups placed at several places around the patient's head to get noise signals to cancel.
> Whereas a feedback system would also attempt to cancel out acoustic
> stimuli or communication between health personnel and the patient.
so *those* signals are not part of x[n] (what goes into both the plant *and* the LMS filter).  then the LMS treats those as "noise" and cannot focus a filter on them to cancel them.  like this:

              .-----------.  y[n]
        .---->|  h[k][n]  |------>-----.
        |     '-----------'            |
 x[n] --->---|                        (+)----> e[n] = y[n] + d[n]
        |     .--------.  d[n]         |              + v[n]
        '---->|  p[k]  |--->--(+)--->--'
              '--------'       ^
                               |
                  v[n] ---->---'

because y[n] is not correlated to anything in v[n], the LMS will not and cannot do anything to match to it.  the y[n] can cancel d[n] as the LMS hunts and converges to match h[k] to p[k], but v[n] will remain.  this is actually the normal model for applications like speaker-phone feedback cancellation.
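(the point about uncorrelated v[n] surviving can be checked with a short simulation of the diagram's topology.  a minimal numpy sketch; the plant taps, filter length, and step size below are invented for illustration, they are not from the thread:)

```python
import numpy as np

rng = np.random.default_rng(0)
N, taps, mu = 20000, 8, 0.01

x = rng.standard_normal(N)        # reference noise x[n]
p = np.array([0.5, -0.3, 0.2])    # unknown plant p[k] (made up)
d = np.convolve(x, p)[:N]         # noise arriving at the error mic
v = 0.1 * rng.standard_normal(N)  # speech etc., uncorrelated with x[n]

h = np.zeros(taps)                # adaptive filter h[k][n]
e = np.zeros(N)
for n in range(taps, N):
    xv = x[n - taps + 1:n + 1][::-1]  # x[n], x[n-1], ..., x[n-taps+1]
    e[n] = d[n] + v[n] + h @ xv       # e[n] = y[n] + d[n] + v[n]
    h -= mu * e[n] * xv               # LMS step: descend the e^2 gradient

# h converges to -p (so y[n] cancels d[n]), but v[n] survives:
# the residual power settles near var(v) = 0.01, not near zero.
print(h[:3], np.mean(e[-1000:] ** 2))
```

(the adaptation removes exactly the part of the mic signal that is correlated with x[n], which is why communication with the patient would pass through this topology essentially untouched.)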
> So, this is why I'm recording a template and use this template as a
> feedforward source...
but how can that template match the real noise in the real case?  is the real noise that periodic and predictable?  is the operation one where the patient has to hear the noise while you record it and adapt to it and then, a few seconds later, they can get relief?

--

r b-j          rbj@audioimagination.com

"Imagination is more important than knowledge."
On 17 Aug., 23:37, robert bristow-johnson <r...@audioimagination.com>
wrote:
> On 8/17/11 3:38 PM, JohnPower wrote:
> [...]
> > The problem with the normal topology is the delay introduced by the
> > signal processing chain.
>
> i dunno what delay (in the DSP chain) you mean...
mainly the conversion time of the ADC and DAC (approx. 1.4 ms in the case of the eval board I'm using). -> the feedforward microphone would have to be placed half a meter away from the patient's ear...
> > My ANC system is meant to operate inside an
> > MRI scanner so there's not enough space to place the microphone far
> > away from the patient's ear [...]
>
> so do you have plastic tubes delivering sound to the patient's ear
> (sorta like the airlines had before the 1990s)?  do you have another
> tube placed there as a pickup (that eventually goes to a microphone)?
> those tubes can certainly introduce delay (as well as other filtering).
The microphone we use is an optical one, connected via optical fibre. The headphones are special MRI equipment with electrostatic transducers. So there are no tubes causing any delay. As you've mentioned, there might be some additional delay from filtering in the third-party communication equipment.
> [...]
> > So, this is why I'm recording a template and use this template as a
> > feedforward source...
>
> but how can that template match the real noise in the real case?  is the
> real noise that periodic and predictable?  is the operation one where
> the patient has to hear the noise while you record it and adapt to it
> and then, a few seconds later, they can get relief?
As I've mentioned earlier, the real noise is in fact periodic and predictable. One MRI scan consists of the acquisition of several slices or volumes of the brain (of course there are more sophisticated techniques). Each slice or volume produces very similar noise (see http://h5.abload.de/img/trigger_and_noisevk6x.png). The duration of the noise recording depends on how long it takes to acquire one slice or volume and is in fact in the range of a few seconds.
On 8/18/11 4:27 AM, JohnPower wrote:
> On 17 Aug., 23:37, robert bristow-johnson<r...@audioimagination.com> wrote:
> [...]
>> but how can that template match the real noise in the real case?  is the
>> real noise that periodic and predictable?  is the operation one where
>> the patient has to hear the noise while you record it and adapt to it
>> and then, a few seconds later, they can get relief?
>
> As I've mentioned earlier the real noise is in fact periodic and
> predictable.
then, instead of a template, the x[n] going in should be the live noise, and the LMS will adapt (perhaps with a period delay), not to the current period of the noise, but to a previous period, depending on those delays.  but otherwise i just don't see how you keep the two synced.  the recorded version you're playing back (and having the LMS adapt to) is like a delayed, but live, version.  at least the delayed version will not play out of sync.

--

r b-j          rbj@audioimagination.com

"Imagination is more important than knowledge."
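(the delayed-reference idea above can be sketched numerically: for noise that really is periodic, feeding the LMS the live signal delayed by exactly one period keeps the reference phase-locked by construction, with no template to drift.  a hedged numpy sketch; the period length, tap count, and step size are invented for illustration and are not the thread's actual values:)

```python
import numpy as np

rng = np.random.default_rng(1)
P = 500                           # samples per noise period (assumed)
one_period = rng.standard_normal(P)
noise = np.tile(one_period, 40)   # periodic MRI-like gradient noise
N = noise.size

# reference = the live noise delayed by one period; x[n] = noise[n - P].
# since the noise repeats with period P, x[n] equals noise[n] once n >= P.
x = np.concatenate([np.zeros(P), noise[:-P]])

mu, taps = 0.005, 16
h = np.zeros(taps)
e = np.zeros(N)
for n in range(taps, N):
    xv = x[n - taps + 1:n + 1][::-1]
    e[n] = noise[n] + h @ xv      # residual at the ear after anti-noise
    h -= mu * e[n] * xv           # LMS update

# residual power in the last two periods vs. the raw noise power:
# the delayed-by-one-period reference cancels almost everything.
residual_ratio = np.mean(e[-2 * P:] ** 2) / np.mean(noise[-2 * P:] ** 2)
print(residual_ratio)
```

(the first period passes through uncancelled, which matches the intuition above: the filter adapts to a previous period and cancels the current one, and the reference can never fall out of sync the way a free-running template can.)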