DSPRelated.com
Forums

LMS with quantized data?

Started by gct July 16, 2007
OK so I wrote an LMS algorithm using floating point numbers.  But I
switched the signals I'm operating on to ones that are complex-integer
(from a 12-bit ADC).  My problem is that the algorithm refuses to converge
now.  It acts like it's going to, but then it just hits a steady state and
oscillates.  Anyone know a good way to fix this?


On Jul 16, 8:15 pm, "gct" <smcal...@gmail.com> wrote:
> OK so I wrote an LMS algorithm using floating point numbers. But I
> switched the signals I'm operating on to ones that are complex-integer
> (from a 12-bit ADC). My problem is that the algorithm refuses to converge
> now. It acts like it's going to, but then it just hits a steady state and
> oscillates. Anyone know a good way to fix this?

You can try decreasing your step size. Your quantized system might be
unstable with your current step size, whereas the floating-point version
was stable.

Jason
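
For reference, a minimal complex-LMS update looks something like the sketch
below (Python/NumPy, purely an illustration: the variable names are assumed
and this is not gct's actual code). The step size mu is the first thing to
shrink if the quantized system is going unstable.

import numpy as np

def lms_step(w, x, d, mu):
    """One complex LMS update.

    w  : current weight vector (complex)
    x  : current tap/input vector (complex)
    d  : desired (reference) sample
    mu : step size; this is the knob to reduce if the loop diverges
    """
    y = np.vdot(w, x)              # filter output, w^H x
    e = d - y                      # error
    w = w + mu * x * np.conj(e)    # steepest-descent step on E|e|^2
    return w, e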
>On Jul 16, 8:15 pm, "gct" <smcal...@gmail.com> wrote:
>> OK so I wrote an LMS algorithm using floating point numbers. But I
>> switched the signals I'm operating on to ones that are complex-integer
>> (from a 12-bit ADC). My problem is that the algorithm refuses to converge
>> now. It acts like it's going to, but then it just hits a steady state and
>> oscillates. Anyone know a good way to fix this?
>
>You can try decreasing your step size. Your quantized system might be
>unstable with your current step size, whereas the floating-point version
>was stable.
>
>Jason
Yeah I gave that a shot. No dice. I'm actually using an e-LMS with power
normalization. So I've tried manipulating the step size, leakage factor,
and epsilon, but every time it hits a steady state and stops converging =(
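
To pin down terms, one common form of a leaky, power-normalized
(epsilon-NLMS) update with those three knobs is sketched below in
Python/NumPy. This is only a guess at the variant in use, not gct's actual
algorithm.

import numpy as np

def leaky_nlms_step(w, x, d, mu=0.1, leak=1e-4, eps=1e-6):
    """One leaky, power-normalized LMS update (a common epsilon-NLMS form).

    mu   : normalized step size (roughly 0 < mu < 2 for nominal stability)
    leak : leakage factor, pulls the weights gently toward zero
    eps  : regularizer that keeps the normalization finite for quiet inputs
    """
    y = np.vdot(w, x)                     # w^H x
    e = d - y
    power = eps + np.real(np.vdot(x, x))  # eps + ||x||^2
    w = (1.0 - mu * leak) * w + (mu / power) * x * np.conj(e)
    return w, e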
"gct" <smcallis@gmail.com> wrote in message
news:ivqdnbY2IPHrpgHbnZ2dnUVZ_gWdnZ2d@giganews.com...

> >> OK so I wrote an LMS algorithm using floating point numbers. But I
> >> switched the signals I'm operating on to ones that are complex-integer
> >> (from a 12-bit ADC). My problem is that the algorithm refuses to
> >> converge now.
>
> Yeah I gave that a shot. No dice. I'm actually using an e-LMS with power
> normalization. So I've tried manipulating the step size, leakage factor,
> and epsilon, but every time it hits a steady state and stops converging =(
LMS is known to be sensitive to quantization. If the step is small, it
won't converge because the gradient is buried under the quantization and
the other noise. If the step is big, it won't converge because of the
instability and the excessive gradient noise. Before going into the
implementation, you need to do a careful analysis. Perhaps you may need a
smaller step and better than single-precision float accuracy. Unless there
is a trivial bug somewhere.

Vladimir Vassilevsky
www.abvolt.com
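
To see the effect described above, here is a small self-contained
experiment (an illustration only, not the poster's setup): identify a short
FIR system with plain real LMS, once with float-valued input and once with
the same input quantized to 12 bits, and compare the residual error floors.

import numpy as np

rng = np.random.default_rng(0)

def run_lms(x, d, n_taps=8, mu=0.01):
    """Plain real LMS system identification; returns the squared-error trace."""
    w = np.zeros(n_taps)
    sq_err = []
    for k in range(n_taps - 1, len(x)):
        xk = x[k - n_taps + 1:k + 1][::-1]   # newest sample first
        e = d[k] - w @ xk
        w += mu * e * xk
        sq_err.append(e * e)
    return np.array(sq_err)

# Unknown system and a float-valued training signal
h = rng.standard_normal(8)
x_float = rng.standard_normal(20000)
d = np.convolve(x_float, h, mode="full")[:len(x_float)]

# The same signal quantized to 12 bits, as if it came from the ADC
full_scale = 4.0
q = 2 * full_scale / 2**12
x_quant = np.clip(np.round(x_float / q) * q, -full_scale, full_scale)

e_float = run_lms(x_float, d)
e_quant = run_lms(x_quant, d)
print("steady-state MSE, float input :", e_float[-2000:].mean())
print("steady-state MSE, 12-bit input:", e_quant[-2000:].mean())

The quantized run settles at a nonzero error floor set by the quantization
noise; below that floor the true gradient is indeed buried, and no choice
of step size will push the error much further down.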
>LMS is known to be sensitive to quantization. If the step is small, it
>won't converge because the gradient is buried under the quantization and
>the other noise. If the step is big, it won't converge because of the
>instability and the excessive gradient noise. Before going into the
>implementation, you need to do a careful analysis. Perhaps you may need a
>smaller step and better than single-precision float accuracy. Unless there
>is a trivial bug somewhere.
>
>Vladimir Vassilevsky
>www.abvolt.com
I hadn't thought about the gradient getting buried in the quantization
like that. I'm actually taking the quantized input and casting it back to
float (it's a lot easier to do that in our framework) to run the
algorithm; would that make a difference?
>I hadn't thought about the gradient getting buried in the quantization
>like that. I'm actually taking the quantized input and casting it back to
>float (it's a lot easier to do that in our framework) to run the
>algorithm; would that make a difference?
Also, can you recommend other algorithms that would work better with
quantized data?
On Jul 17, 1:07 am, "gct" <smcal...@gmail.com> wrote:
> >LMS is known to be sensitive to quantization. If the step is small, it
> >won't converge because the gradient is buried under the quantization and
> >the other noise. If the step is big, it won't converge because of the
> >instability and the excessive gradient noise. Before going into the
> >implementation, you need to do a careful analysis. Perhaps you may need a
> >smaller step and better than single-precision float accuracy. Unless there
> >is a trivial bug somewhere.
> >
> >Vladimir Vassilevsky
> >www.abvolt.com
>
> I hadn't thought about the gradient getting buried in the quantization
> like that. I'm actually taking the quantized input and casting it back to
> float (it's a lot easier to do that in our framework) to run the
> algorithm; would that make a difference?
What are you using the LMS filter for? Maybe there's an alternative
implementation that doesn't require the adaptive algorithm.

Jason
On Jul 16, 8:15 pm, "gct" <smcal...@gmail.com> wrote:
> OK so I wrote an LMS algorithm using floating point numbers. But I
> switched the signals I'm operating on to ones that are complex-integer
> (from a 12-bit ADC). My problem is that the algorithm refuses to converge
> now. It acts like it's going to, but then it just hits a steady state and
> oscillates. Anyone know a good way to fix this?
Are you saying that you switched from real to complex inputs?

If so, are you using a complex-LMS filter?

Dirk
>On Jul 16, 8:15 pm, "gct" <smcal...@gmail.com> wrote:
>> OK so I wrote an LMS algorithm using floating point numbers. But I
>> switched the signals I'm operating on to ones that are complex-integer
>> (from a 12-bit ADC). My problem is that the algorithm refuses to converge
>> now. It acts like it's going to, but then it just hits a steady state and
>> oscillates. Anyone know a good way to fix this?
>
>Are you saying that you switched from real to complex inputs?
>
>If so, are you using a complex-LMS filter?
>
>Dirk
I'm using the LMS as an adaptive beamformer, and the data is complex
throughout. I've since accepted that I'm going to have excess MSE, so I
raised my threshold for terminating the adaptation from 0.0001 to 0.01,
and it converges well enough now to be useful.
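
For what it's worth, the kind of stopping rule that threshold change
suggests is sketched below; the thread doesn't say which metric is being
thresholded, so an exponentially smoothed |e|^2 is just one plausible
choice.

def adapt_until_converged(update, w0, samples, tol=1e-2, alpha=0.99):
    """Run an adaptive update until a smoothed error metric drops below tol.

    update(w, x, d) must return (new_w, e). The stopping metric is an
    exponentially smoothed |e|^2, and tol plays the role of the threshold
    that was raised from 0.0001 to 0.01 above.
    """
    w, smoothed = w0, None
    for x, d in samples:
        w, e = update(w, x, d)
        p = abs(e) ** 2
        smoothed = p if smoothed is None else alpha * smoothed + (1 - alpha) * p
        if smoothed < tol:
            break
    return w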
You can get "under" the quantization and avoid the stalling with random 
rounding of the scaled error.
It acts like dithering.
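
A rough sketch of that idea for an integer (fixed-point) LMS update is
below; the word lengths and scaling are assumptions, and it is written
real-valued for brevity (a complex version would round the real and
imaginary parts separately).

import numpy as np

rng = np.random.default_rng(1)

def stochastic_round(v):
    """Round each value down or up at random, with the probability of
    rounding up equal to its fractional part; the result is unbiased, so
    sub-LSB updates still accumulate on average."""
    f = np.floor(v)
    return f + (rng.random(np.shape(v)) < (v - f))

def fixed_point_lms_step(w_q, x_q, d_q, mu_shift=8):
    """One integer LMS update with randomly rounded scaled error.

    w_q, x_q, d_q : integer-valued weights, taps and desired sample
    mu_shift      : step size expressed as a right shift, mu = 2**-mu_shift
    """
    e = d_q - int(w_q @ x_q)                 # integer error
    update = (e * x_q) / 2.0 ** mu_shift     # scaled error term, fractional
    w_q = w_q + stochastic_round(update)     # random rounding, not truncation
    return w_q.astype(np.int64), e

Compared with truncation or round-to-nearest, the random rounding injects a
small zero-mean dither, which is what keeps the adaptation moving once the
true update shrinks below one LSB.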

David Shaw
"gct" <smcallis@gmail.com> wrote in message 
news:ifCdnRM7N-AX0wHbnZ2dnUVZ_vGinZ2d@giganews.com...
> >LMS is known to be sensitive to quantization. If the step is small, it
> >won't converge because the gradient is buried under the quantization and
> >the other noise. If the step is big, it won't converge because of the
> >instability and the excessive gradient noise. Before going into the
> >implementation, you need to do a careful analysis. Perhaps you may need a
> >smaller step and better than single-precision float accuracy. Unless there
> >is a trivial bug somewhere.
> >
> >Vladimir Vassilevsky
> >www.abvolt.com
>
> I hadn't thought about the gradient getting buried in the quantization
> like that. I'm actually taking the quantized input and casting it back to
> float (it's a lot easier to do that in our framework) to run the
> algorithm; would that make a difference?