
Affine Projection Algorithm

Started by HardySpicer March 8, 2008
What is the advantage if any of the APA over LMS or NLMS?


Hardy
> What is the advantage if any of the APA over LMS or NLMS?
>
> Hardy
**********************************************
Hello Hardy,

These adaptive algorithms, namely APA, LMS and NLMS, are all stochastic-gradient approximations to the steepest-descent method, which tries to minimize the mean square error (MSE) between the desired signal d and the output of the adaptive filter uw, namely E{|d - uw|^2}.

APA is a more precise approximation than LMS and NLMS. The consequence is that the resulting minimum mean-square error of APA is smaller than that of NLMS or LMS. APA also has faster tracking capabilities than LMS and NLMS. Generally, APA has better performance (steady-state MSE / transient response) than LMS and NLMS. However, this better performance comes at the expense of higher computational complexity.

Manolis
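[Editorial note: in rough Python/NumPy terms, one LMS step and one NLMS step look something like the sketch below. The function names, the step sizes mu and the regularizer eps are illustrative assumptions, not from any particular library.]

import numpy as np

def lms_update(w, u, d, mu=0.01):
    # LMS: step along the instantaneous gradient estimate u*e
    e = d - np.dot(u, w)              # a-priori error e(i) = d(i) - u[i]*w[i-1]
    return w + mu * u * e

def nlms_update(w, u, d, mu=0.5, eps=1e-8):
    # NLMS: the same step, normalized by the input energy ||u||^2
    e = d - np.dot(u, w)
    return w + (mu / (eps + np.dot(u, u))) * u * e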
On Mar 9, 11:04 am, "Manolis C. Tsakiris" <el01...@mail.ntua.gr>
wrote:
> > What is the advantage if any of the APA over LMS or NLMS?
...
> Now, APA is a more precise approximation than LMS and NLMS. The
> consequence is that the resulting minimum mean-square error of APA is
> smaller than that of NLMS or LMS. APA also has faster tracking
> capabilities than LMS and NLMS. Generally, APA has better performance
> (steady-state MSE / transient response) than LMS and NLMS. However, this
> better performance comes at the expense of higher computational
> complexity.
...so APA exploits the information contributed by each new sample better, but requires more computations between samples?

Rune
> ...so APA exploits the information contributed by each new sample
> better, but requires more computations between samples?
>
> Rune
********************************************
Hi Rune,

Exactly. This can be understood by examining the rule that leads from the adaptive filter weights at time instant (i-1), namely w[i-1], to the updated weights at time (i), namely w[i]. For both LMS and NLMS the rule is that the following inequality must hold:

|d(i) - u[i]*w[i-1]| > |d(i) - u[i]*w[i]|

where u[i] and w[i] are vectors and d(i) is a scalar. That is, LMS and NLMS choose w[i] so as to reduce (in an a-posteriori sense) the single instantaneous estimation error e(i) of the adaptive filter. A K-th order APA adaptive filter, on the other hand, chooses w[i] so as to reduce (in an a-posteriori sense) the previous K instantaneous errors at once!

Manolis
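[Editorial note: to make the K-error picture concrete, here is a minimal Python/NumPy sketch of one K-th order APA step; mu, eps and the function name are assumptions for illustration. With K = 1 it collapses to NLMS.]

import numpy as np

def apa_update(w, U, d, mu=1.0, eps=1e-6):
    # U: (K, M) matrix whose rows are the K most recent regressors
    # d: (K,)   vector of the K corresponding desired samples
    e = d - U @ w                                           # the K a-priori errors
    # regularized K x K solve; drives all K a-posteriori errors toward zero
    g = np.linalg.solve(U @ U.T + eps * np.eye(len(d)), e)
    return w + mu * U.T @ g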
On Mar 10, 12:00 am, "Manolis C. Tsakiris" <el01...@mail.ntua.gr>
wrote:
> > ...so APA exploits the information contributed by each new sample
> > better, but requires more computations between samples?
> >
> > Rune
>
> ********************************************
> Hi Rune,
>
> Exactly. This can be understood by examining the rule that leads from the
> adaptive filter weights at time instant (i-1), namely w[i-1], to the
> updated weights at time (i), namely w[i]. For both LMS and NLMS the rule
> is that the following inequality must hold:
>
> |d(i) - u[i]*w[i-1]| > |d(i) - u[i]*w[i]|
>
> where u[i] and w[i] are vectors and d(i) is a scalar. That is, LMS and
> NLMS choose w[i] so as to reduce (in an a-posteriori sense) the single
> instantaneous estimation error e(i) of the adaptive filter. A K-th order
> APA adaptive filter, on the other hand, chooses w[i] so as to reduce (in
> an a-posteriori sense) the previous K instantaneous errors at once!
>
> Manolis
Thanks for that. Is there an equivalent Wiener filter for the APA method, or is it just that the APA method gives a more accurate Wiener solution in the stationary-noise case, i.e., they both minimize MSE?

Hardy
> Thanks for that. Is there an equivalent Wiener filter for the APA
> method, or is it just that the APA method gives a more accurate Wiener
> solution in the stationary-noise case, i.e., they both minimize MSE?
>
> Hardy
****************************************************
The second, i.e., they both minimize the MSE. The Wiener filter for estimating d from u is

w = inverse(Ru) * Rdu

where Ru is the autocorrelation matrix of the input u and Rdu is the cross-correlation vector between d and u. LMS, NLMS and APA are all approximations to the Wiener filter. Just as you said, APA is a better approximation than LMS or NLMS.

Manolis
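[Editorial note: as a sanity check, w = inverse(Ru) * Rdu can be computed directly from sample estimates. A small NumPy sketch; the system w_true and the noise level are invented for the demo.]

import numpy as np

rng = np.random.default_rng(0)
M = 4
w_true = np.array([0.5, -0.3, 0.2, 0.1])    # unknown system, assumed for the demo
u = rng.standard_normal((10000, M))         # regressor snapshots
d = u @ w_true + 0.01 * rng.standard_normal(10000)

Ru  = u.T @ u / len(u)                      # sample estimate of the autocorrelation matrix
Rdu = u.T @ d / len(u)                      # sample estimate of the cross-correlation vector
w_wiener = np.linalg.solve(Ru, Rdu)         # w = inverse(Ru) * Rdu
print(w_wiener)                             # close to w_true for long enough data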
Rune Allnor wrote:
> On Mar 9, 11:04 am, "Manolis C. Tsakiris" <el01...@mail.ntua.gr>
> wrote:
>>> What is the advantage if any of the APA over LMS or NLMS?
> ...
>> Now, APA is a more precise approximation than LMS and NLMS. The
>> consequence is that the resulting minimum mean-square error of APA is
>> smaller than that of NLMS or LMS. APA also has faster tracking
>> capabilities than LMS and NLMS. Generally, APA has better performance
>> (steady-state MSE / transient response) than LMS and NLMS. However, this
>> better performance comes at the expense of higher computational
>> complexity.
>
> ...so APA exploits the information contributed by each new sample
> better, but requires more computations between samples?
The answers from APA bounce around less due to noise. In a noisy channel, NLMS requires the feedback gain to be pretty low to ensure stable homing to the minimum, and it may require a still lower gain near the minimum to keep from bouncing around too much. At least for stationary noise, APA or FAP (fast affine projection) rides through the noise better, so the feedback can be more aggressive.

This is a much-beloved quality in papers on echo cancellation, where the initial adaptation time is generally portrayed as make or break for the design. In practice, other factors tend to be more important for a robust canceler, but the use of FAP is effective for getting papers published.

An even better way to get them published is to use quantum computing. Most techniques for homing in on the best solution will only home to a local minimum. The quantum computing approach (e.g. the kind of thing D-Wave are doing) has the advantage that quantum tunneling might break through the local wall in the cost function and lead you to the global minimum. Of course, their solution has the disadvantage of needing a cryogenic plant. :-)

Steve
On Mar 9, 7:06 pm, Steve Underwood <ste...@dis.org> wrote:
> Rune Allnor wrote:
> > On Mar 9, 11:04 am, "Manolis C. Tsakiris" <el01...@mail.ntua.gr>
> > wrote:
> >>> What is the advantage if any of the APA over LMS or NLMS?
> > ...
> >> Now, APA is a more precise approximation than LMS and NLMS. The
> >> consequence is that the resulting minimum mean-square error of APA is
> >> smaller than that of NLMS or LMS. ...
> >
> > ...so APA exploits the information contributed by each new sample
> > better, but requires more computations between samples?
>
> The answers from APA bounce around less due to noise. In a noisy channel,
> NLMS requires the feedback gain to be pretty low to ensure stable homing
> to the minimum, and it may require a still lower gain near the minimum to
> keep from bouncing around too much. At least for stationary noise, APA or
> FAP rides through the noise better, so the feedback can be more
> aggressive.
>
> This is a much-beloved quality in papers on echo cancellation, where the
> initial adaptation time is generally portrayed as make or break for the
> design. In practice, other factors tend to be more important for a robust
> canceler, but the use of FAP is effective for getting papers published.
>
> An even better way to get them published is to use quantum computing.
> Most techniques for homing in on the best solution will only home to a
> local minimum. The quantum computing approach (e.g. the kind of thing
> D-Wave are doing) has the advantage that quantum tunneling might break
> through the local wall in the cost function and lead you to the global
> minimum. Of course, their solution has the disadvantage of needing a
> cryogenic plant. :-)
looks like i need to read about Affine Projection Algorithm (since i hadn't heard about it before and hadn't known of a clean alternative to LMS and NLMS). where can i read about it, with a decent amount of technical content so i can see how it is similar and different from LMS? i s'pose in some IEEE. i want something that doesn't cost me money. r b-j
> looks like i need to read about Affine Projection Algorithm (since i
> hadn't heard about it before and hadn't known of a clean alternative
> to LMS and NLMS). where can i read about it, with a decent amount of
> technical content so i can see how it is similar and different from
> LMS? i s'pose in some IEEE. i want something that doesn't cost me
> money.
>
> r b-j
****************************************
For an in-depth analysis I strongly suggest "Fundamentals of Adaptive Filtering" by Ali Sayed, chapters 4 and 5. I can hardly imagine a better text on adaptive filtering. However, the book is quite expensive and its mathematics is very sophisticated, so you will need some time to study from it.

On the other hand, a possibly better, hands-on solution for your purposes is to set up a simulation scenario, such as channel estimation or channel equalization, run both LMS and APA, and compare their behaviour; a sketch of such a comparison follows. I think that is the best way to understand their similarities and differences qualitatively.

Manolis
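[Editorial note: along those lines, here is a self-contained channel-estimation sketch comparing NLMS with a K-th order APA. The filter length, step sizes, noise level and order K are all assumptions chosen for the demo, not recommendations.]

import numpy as np

rng = np.random.default_rng(1)
M, N, K = 8, 5000, 4                       # filter length, samples, APA order
w_true = rng.standard_normal(M)            # hypothetical unknown channel
x = np.concatenate((np.zeros(M - 1), rng.standard_normal(N)))
noise = 0.01 * rng.standard_normal(N)

w_nlms = np.zeros(M)
w_apa  = np.zeros(M)
U  = np.zeros((K, M))                      # sliding block of the K latest regressors
dK = np.zeros(K)

for n in range(N):
    u = x[n:n + M][::-1]                   # current regressor, newest sample first
    d = np.dot(u, w_true) + noise[n]

    # NLMS step
    e = d - np.dot(u, w_nlms)
    w_nlms += 0.5 * u * e / (1e-8 + np.dot(u, u))

    # APA step of order K
    U  = np.vstack((u, U[:-1]))            # push the newest regressor on top
    dK = np.concatenate(([d], dK[:-1]))
    eK = dK - U @ w_apa
    g = np.linalg.solve(U @ U.T + 1e-6 * np.eye(K), eK)
    w_apa += 0.5 * U.T @ g

print("NLMS weight error:", np.linalg.norm(w_true - w_nlms))
print("APA  weight error:", np.linalg.norm(w_true - w_apa))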
>> Thanks for that. Is there an equivalent Wiener filter for the APA
>> method, or is it just that the APA method gives a more accurate Wiener
>> solution in the stationary-noise case, i.e., they both minimize MSE?
>>
>> Hardy
>> ****************************************************
>
> The second, i.e., they both minimize the MSE. The Wiener filter for
> estimating d from u is
>
> w = inverse(Ru) * Rdu
>
> LMS, NLMS and APA are all approximations to the Wiener filter. Just as
> you said, APA is a better approximation than LMS or NLMS.
>
> Manolis
So does APA typically converge faster? Or is the MSE typically smaller with APA (as you say, a better approximation)?