Reply by Jerry Avins July 15, 2010
On 7/15/2010 8:23 AM, maury wrote:
> On Jul 14, 7:55 pm, Manny <mlou...@hotmail.com> wrote:
>> On Jul 15, 1:23 am, Vladimir Vassilevsky <nos...@nowhere.com> wrote:
>>
>>> Manny wrote:
>>>> What if you discovered by sheer luck that doing something wanky like
>>>> skipping a beat works wonders to your signal? Sure this is uncool,
>>>> inelegant, and can't be analyzed, only characterized. Do you scrap all
>>>> this and wake up next day pretending that nothing has happened?
>>
>>> It depends. I am cautious about those empirical things. They tend to
>>> work great in some cases and fail disgracefully in the other cases.
>>
>> Noted! Although for completeness---as I now realize I sounded too
>> dramatic above---these ad hoc, empirical techniques are not as blind
>> as they might come across really. I.e. if you know some properties
>> about your expected signal, could rigour be substituted for intuition
>> every blue moon? I don't know the answer for that, and I'm certainly
>> not in a position to advocate for anything here.
>>
>> -Momo
>
> Manny,
> It's certainly not unusual to discover something, and then spend years
> trying to show why it works or prove why it works.
Maury, I'll push that along even further. A lot of junk science comes from
people who make up an explanation for something they observe but don't
understand.* Sometimes that's even rational. There are people who won't
believe /what/ unless there's an accompanying /how/.

Jerry

______________________________________
* If you screw out a light bulb and put your finger in the hole, it hurts
a lot. A spirit lives there and bites you if you intrude.

--
Engineering is the art of making what you want from things you can get.
Reply by maury July 15, 2010
On Jul 14, 7:55 pm, Manny <mlou...@hotmail.com> wrote:
> On Jul 15, 1:23 am, Vladimir Vassilevsky <nos...@nowhere.com> wrote:
>
> > Manny wrote:
> > > What if you discovered by sheer luck that doing something wanky like
> > > skipping a beat works wonders to your signal? Sure this is uncool,
> > > inelegant, and can't be analyzed, only characterized. Do you scrap all
> > > this and wake up next day pretending that nothing has happened?
> >
> > It depends. I am cautious about those empirical things. They tend to
> > work great in some cases and fail disgracefully in the other cases.
>
> Noted! Although for completeness---as I now realize I sounded too
> dramatic above---these ad hoc, empirical techniques are not as blind
> as they might come across really. I.e. if you know some properties
> about your expected signal, could rigour be substituted for intuition
> every blue moon? I don't know the answer for that, and I'm certainly
> not in a position to advocate for anything here.
>
> -Momo
Manny,
It's certainly not unusual to discover something, and then spend years
trying to show why it works or prove why it works.

Maurice
Reply by Manny July 14, 2010
On Jul 15, 1:23 am, Vladimir Vassilevsky <nos...@nowhere.com> wrote:
> Manny wrote:
> > What if you discovered by sheer luck that doing something wanky like
> > skipping a beat works wonders to your signal? Sure this is uncool,
> > inelegant, and can't be analyzed, only characterized. Do you scrap all
> > this and wake up next day pretending that nothing has happened?
>
> It depends. I am cautious about those empirical things. They tend to
> work great in some cases and fail disgracefully in the other cases.
Noted! Although for completeness---as I now realize I sounded too dramatic
above---these ad hoc, empirical techniques are not as blind as they might
come across really. I.e. if you know some properties about your expected
signal, could rigour be substituted for intuition every blue moon? I don't
know the answer for that, and I'm certainly not in a position to advocate
for anything here.

-Momo
Reply by Vladimir Vassilevsky July 14, 2010

Manny wrote:

> What if you discovered by sheer luck that doing something wanky like
> skipping a beat works wonders to your signal? Sure this is uncool,
> inelegant, and can't be analyzed, only characterized. Do you scrap all
> this and wake up next day pretending that nothing has happened?
It depends. I am cautious about those empirical things. They tend to
work great in some cases and fail disgracefully in the other cases.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
Reply by Manny July 14, 2010
On Jul 14, 8:07 pm, Vladimir Vassilevsky <nos...@nowhere.com> wrote:
> maury wrote:
>
> > On Jul 14, 9:35 am, Vladimir Vassilevsky <nos...@nowhere.com> wrote:
> >
> >> maury wrote:
> >>
> >>> If you look at the normalized LMS (NLMS), then the maximum mu is
> >>> limited to the sum of the number of filter coefficients.
> >>
> >> IMO what does matter is optimum rather than maximum. You need to
> >> minimize the total error of the process. This error is because of
> >> imperfect adaptation, noisy gradients, nonlinearity, ambient noise and
> >> numeric artifacts. It also depends on the statistics and power of the
> >> reference signal. So the algorithm must adapt Mu in pseudo Kalman way.
> >>
> >> Vladimir Vassilevsky
> >> DSP and Mixed Signal Design Consultant
> >> http://www.abvolt.com
> >
> > Hi Vlad,
> > With respect to the NLMS, the effect of the power of the reference
> > depends on the implementation. For example, in a system identification
> > implementation, the power of the reference is not important since the
> > normalization of the LMS is performed over the signal power. The
> > algorithm is self-adjusting for input power.
>
> Yes, but the input is catching the ambient noise and double talk besides
> the reflected signal. Hence if the input power is low, the algorithm
> should not crank Mu all the way up to infinity, as simple NLMS would do,
> but do exactly the opposite.
>
> > However!!! The performance is very dependent on the model assumed. By
> > nonlinearity, I take you to mean that a linear model will have errors
> > if nonlinearities are present in the reference signal (the reference
> > being some nonlinear inner-product of the signal and system vector).
> > This just means the wrong model was chosen. If your model matches the
> > system, then the squared error will not be great.
>
> System is given to you. Your algorithm has to work with whatever you got.
What if you discovered by sheer luck that doing something wanky like
skipping a beat works wonders to your signal? Sure this is uncool,
inelegant, and can't be analyzed, only characterized. Do you scrap all
this and wake up next day pretending that nothing has happened?

-Momo
Reply by Jerry Avins July 14, 2010
On 7/14/2010 4:38 PM, maury wrote:
> On Jul 14, 2:07 pm, Vladimir Vassilevsky <nos...@nowhere.com> wrote:
>
>> System is given to you. Your algorithm has to work with whatever you got.
>
> Kobayashi Maru solution!
Hah! Another Trekkie!

--
Engineering is the art of making what you want from things you can get.
Reply by maury July 14, 2010
On Jul 14, 2:07 pm, Vladimir Vassilevsky <nos...@nowhere.com> wrote:

> System is given to you. Your algorithm has to work with whatever you got.
Kobayashi Maru solution!
Reply by Vladimir Vassilevsky July 14, 2010

maury wrote:

> On Jul 14, 9:35 am, Vladimir Vassilevsky <nos...@nowhere.com> wrote:
>
>> maury wrote:
>>
>>> If you look at the normalized LMS (NLMS), then the maximum mu is
>>> limited to the sum of the number of filter coefficients.
>>
>> IMO what does matter is optimum rather than maximum. You need to
>> minimize the total error of the process. This error is because of
>> imperfect adaptation, noisy gradients, nonlinearity, ambient noise and
>> numeric artifacts. It also depends on the statistics and power of the
>> reference signal. So the algorithm must adapt Mu in pseudo Kalman way.
>>
>> Vladimir Vassilevsky
>> DSP and Mixed Signal Design Consultant
>> http://www.abvolt.com
>
> Hi Vlad,
> With respect to the NLMS, the effect of the power of the reference
> depends on the implementation. For example, in a system identification
> implementation, the power of the reference is not important since the
> normalization of the LMS is performed over the signal power. The
> algorithm is self-adjusting for input power.
Yes, but the input is catching the ambient noise and double talk besides
the reflected signal. Hence if the input power is low, the algorithm
should not crank Mu all the way up to infinity, as simple NLMS would do,
but do exactly the opposite.
> However!!! The performance is very dependent on the model assumed. By
> nonlinearity, I take you to mean that a linear model will have errors
> if nonlinearities are present in the reference signal (the reference
> being some nonlinear inner-product of the signal and system vector).
> This just means the wrong model was chosen. If your model matches the
> system, then the squared error will not be great.
System is given to you. Your algorithm has to work with whatever you got.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
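A minimal sketch of the kind of regularized NLMS update described above, in
which a small epsilon keeps the effective step bounded when the reference
power is low; the epsilon value, the nominal mu, and the function name are
illustrative assumptions, not something specified in the thread:

    # Regularized NLMS: the eps term keeps mu / (eps + ||x||^2) from
    # blowing up when the reference vector carries little or no energy.
    import numpy as np

    def nlms_step(w, x, d, mu=0.5, eps=1e-3):
        """One regularized NLMS update.
        w   : current filter coefficients (length N)
        x   : most recent N reference samples, newest first
        d   : desired (microphone) sample
        mu  : nominal step size, 0 < mu < 2 for mean-square stability
        eps : regularization; limits the step when ||x||^2 is small
        """
        e = d - np.dot(w, x)                          # a priori error
        w = w + (mu / (eps + np.dot(x, x))) * e * x   # normalized update
        return w, e

With eps = 0 this reduces to plain NLMS, whose correction scales as
1/||x||^2 and therefore grows without bound as the reference power drops,
which is exactly the behaviour being warned against here.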
Reply by maury July 14, 2010
On Jul 14, 9:35 am, Vladimir Vassilevsky <nos...@nowhere.com> wrote:
> maury wrote:
> > If you look at the normalized LMS (NLMS), then the maximum mu is
> > limited to the sum of the number of filter coefficients.
>
> IMO what does matter is optimum rather than maximum. You need to
> minimize the total error of the process. This error is because of
> imperfect adaptation, noisy gradients, nonlinearity, ambient noise and
> numeric artifacts. It also depends on the statistics and power of the
> reference signal. So the algorithm must adapt Mu in pseudo Kalman way.
>
> Vladimir Vassilevsky
> DSP and Mixed Signal Design Consultant
> http://www.abvolt.com
Hi Vlad,
With respect to the NLMS, the effect of the power of the reference depends
on the implementation. For example, in a system identification
implementation, the power of the reference is not important since the
normalization of the LMS is performed over the signal power. The algorithm
is self-adjusting for input power.

However!!! The performance is very dependent on the model assumed. By
nonlinearity, I take you to mean that a linear model will have errors if
nonlinearities are present in the reference signal (the reference being
some nonlinear inner-product of the signal and system vector). This just
means the wrong model was chosen. If your model matches the system, then
the squared error will not be great. No matter what the model, the
algorithm will give the minimum squared error for the model. This doesn't
mean that the coefficients you end up with are close to the desired
coefficients.

The other important parameter is the condition of the input
autocorrelation matrix. An ill-conditioned matrix will result in not being
able to determine an optimum estimate of the system impulse response. This
is the problem with the example used by the OP. An input of 50 Hz (a
single signal) will result in an ill-conditioned input autocorrelation
matrix. There is NO optimum solution in this case. In fact, there are an
infinite number of solutions to this problem.

But your point is well taken that the maximum mu used will be based on the
optimum performance required from the problem. That performance may be
minimum misadjustment noise, or maximum speed of convergence. Each will
dictate a different "maximum" mu. What I think the OP was looking for is
how to determine the maximum mu for convergence (often equated with
stability). This value is different for LMS versus NLMS.

Maurice
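To make the ill-conditioning point concrete, here is a small sketch under
assumed, illustrative conditions (an 8 kHz sample rate, a 4-tap "unknown"
system, no measurement noise): with a white-noise reference the NLMS
recovers the true coefficients, while with a single 50 Hz tone the error
still converges but the coefficients settle on just one of the infinitely
many solutions.

    # NLMS system identification: white-noise reference vs. a single 50 Hz tone.
    import numpy as np

    def identify(x, h_true, mu=0.5, eps=1e-6):
        N = len(h_true)
        w = np.zeros(N)
        d = np.convolve(x, h_true)[:len(x)]       # "unknown" system output
        for n in range(N, len(x)):
            xv = x[n:n - N:-1]                    # newest-first input vector
            e = d[n] - np.dot(w, xv)
            w += (mu / (eps + np.dot(xv, xv))) * e * xv
        return w

    fs = 8000.0
    t = np.arange(40000) / fs
    h_true = np.array([0.5, -0.3, 0.2, 0.1])
    rng = np.random.default_rng(0)

    print(np.round(identify(rng.standard_normal(t.size), h_true), 3))   # close to h_true
    print(np.round(identify(np.sin(2 * np.pi * 50.0 * t), h_true), 3))  # a different solution

A single tone only excites two degrees of freedom of the input
autocorrelation matrix, so any coefficient vector that matches the system
response at 50 Hz drives the error to zero; which one the filter lands on
depends on the starting point, not on the "true" impulse response.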
Reply by Vladimir Vassilevsky July 14, 2010

maury wrote:


> If you look at the normalized LMS (NLMS), then the maximum mu is
> limited to the sum of the number of filter coefficients.
IMO what does matter is optimum rather than maximum. You need to minimize
the total error of the process. This error is because of imperfect
adaptation, noisy gradients, nonlinearity, ambient noise and numeric
artifacts. It also depends on the statistics and power of the reference
signal. So the algorithm must adapt Mu in pseudo Kalman way.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
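One simple variable-step-size heuristic in this spirit, sketched below with
assumed smoothing constants and an assumed ambient-noise power (it is not
necessarily what is meant by "pseudo Kalman" above): shrink mu as the
smoothed error power approaches the noise floor, so the filter adapts hard
while misadjustment dominates the error and backs off once the residual is
mostly noise.

    # Variable step-size NLMS: mu is scaled down as the smoothed error
    # power approaches an assumed ambient-noise floor.
    import numpy as np

    def vss_nlms_step(w, x, d, err_pow, mu_max=1.0, noise_pow=1e-3,
                      alpha=0.99, eps=1e-6):
        """One update; err_pow carries the smoothed |e|^2 between calls."""
        e = d - np.dot(w, x)
        err_pow = alpha * err_pow + (1.0 - alpha) * e * e   # smoothed error power
        mu = mu_max * err_pow / (err_pow + noise_pow)       # shrinks near the noise floor
        w = w + (mu / (eps + np.dot(x, x))) * e * x
        return w, e, err_pow

The noise_pow term plays the role of the measurement-noise variance in a
Kalman-style trade-off: when the error power is well above it, mu is close
to mu_max; when the error is down at the noise floor, further adaptation
would mostly track noise, so mu is driven toward zero.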