Reply by Stan Pawlukiewicz July 10, 2003
Jeff wrote:
> Hi,
> I searched on Amazon and found the following book. Is this the one you mean?
>
> Optimum Array Processing (Detection, Estimation, and Modulation Theory, Part
> IV) -- Harry L. Van Trees; Hardcover
>
> Thanks
yup
Reply by Jeff White July 10, 2003
"Jeff" <dsfdsaf@hotmail.com> wrote in message news:<A03Pa.14998$Tx.697786@news20.bellglobal.com>...
> Hi,
> I searched on Amazon and found the following book. Is this the one you mean?
>
> Optimum Array Processing (Detection, Estimation, and Modulation Theory, Part
> IV) -- Harry L. Van Trees; Hardcover
>
> Thanks
Take a look at the author's site, which has Matlab examples, etc.: http://ite.gmu.edu/DetectionandEstimationTheory/OAP/index.htm
Reply by Jeff July 9, 2003
Hi,
I searched on Amazon and found the following book. Is this the one you mean?


Optimum Array Processing (Detection, Estimation, and Modulation Theory, Part
IV) -- Harry L. Van Trees; Hardcover


Thanks



Reply by Peter Kootsookos July 8, 2003
"Stan Pawlukiewicz" <stanp@nospam_mitre.org> wrote

> I highly recommend H. L. Van Trees's new book on Array Signal Processing.
High praise! I've been meaning to order it for a while, so I'll add it to my list.

Ciao,

Peter K.

--
Peter J. Kootsookos
"Na, na na na na na na, na na na na"
- 'Hey Jude', Lennon/McCartney
Reply by Stan Pawlukiewicz July 8, 2003
Jeff wrote:
> Hi,
> Due to the unsatisfactory convergence of LMS when the covariance matrix is
> ill-conditioned, least-squares-based algorithms are more favorable in adaptive
> antennas. There is not much experimental research on adaptive antennas, I
> think because of the high cost. Even so, there is some research using the SMI,
> RLS, and QR-RLS algorithms. As we know, the computational complexity of SMI is
> O(M^3), where M is the number of antenna elements, while that of RLS is
> O(M^2). Both methods are suitable for implementation on programmable DSP
> chips. In the last six years I have seen at least two published articles using
> the SMI algorithm rather than RLS. SMI is an algorithm that processes data in
> batch (block) mode, whereas RLS recursively updates the parameter estimate
> without computing the inverse of the covariance matrix. RLS is more
> computationally efficient, more numerically robust, and faster to converge
> than SMI.
>
> My question is: why do some researchers use SMI rather than RLS? Can somebody
> tell me?
>
> Any comments are appreciated.
If your data gives you an ill-conditioned LMS, I would be surprised if you could get SMI to work. You might list some of the references you mention. If someone got SMI to work, I would bet beer that they were either working in a subspace and/or diagonally loading. Some people prefer block averaging to exponential averaging. The term SMI tends to mean slightly different things to different people. Would you call Dominant Mode Rejection (DMR) an SMI technique?

I highly recommend H. L. Van Trees's new book on Array Signal Processing.
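To illustrate the diagonal loading mentioned above, here is a minimal sketch of a diagonally loaded SMI (MVDR-style) beamformer in Python/NumPy. It is not from the thread; the array size, snapshot count, loading level, and steering vector are assumed purely for illustration.

import numpy as np

def smi_weights(snapshots, steering, load_fraction=0.1):
    """Diagonally loaded SMI beamformer weights (MVDR form).

    snapshots     : M x N complex array (M sensors, N snapshots)
    steering      : length-M steering vector for the look direction
    load_fraction : loading level as a fraction of the average
                    per-sensor power (assumed heuristic, not from the thread)
    """
    M, N = snapshots.shape
    R = snapshots @ snapshots.conj().T / N          # sample covariance matrix
    loading = load_fraction * np.real(np.trace(R)) / M
    R_loaded = R + loading * np.eye(M)              # diagonal loading stabilizes the inverse
    Rinv_v = np.linalg.solve(R_loaded, steering)    # avoid forming R^{-1} explicitly
    return Rinv_v / (steering.conj() @ Rinv_v)      # w = R^{-1} v / (v^H R^{-1} v)

# Example: 8-element array, 32 noise-only snapshots, unit-norm steering vector
rng = np.random.default_rng(0)
M, N = 8, 32
X = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
v = np.ones(M) / np.sqrt(M)
w = smi_weights(X, v)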
Reply by santosh nath July 8, 2003
"Jeff" <dsfdsaf@hotmail.com> wrote in message news:<l55Oa.3861$ru2.159972@news20.bellglobal.com>...
> Hi,
> Due to the unsatisfactory convergence of LMS when the covariance matrix is
> ill-conditioned, least-squares-based algorithms are more favorable in adaptive
> antennas. There is not much experimental research on adaptive antennas, I
> think because of the high cost. Even so, there is some research using the SMI,
> RLS, and QR-RLS algorithms. As we know, the computational complexity of SMI is
> O(M^3), where M is the number of antenna elements, while that of RLS is
> O(M^2). Both methods are suitable for implementation on programmable DSP
> chips. In the last six years I have seen at least two published articles using
> the SMI algorithm rather than RLS. SMI is an algorithm that processes data in
> batch (block) mode, whereas RLS recursively updates the parameter estimate
> without computing the inverse of the covariance matrix. RLS is more
> computationally efficient, more numerically robust, and faster to converge
> than SMI.
>
> My question is: why do some researchers use SMI rather than RLS? Can somebody
> tell me?
>
> Any comments are appreciated.
Hi,

This is a bit of a gray area. I am not sure whether anybody has claimed a practical implementation of sample matrix inversion in adaptive antennas. As you mentioned, the practical difficulties with SMI are:

1. High computational complexity.
   Solution: a high-speed dedicated ASIC, i.e., only a hardware implementation is feasible at the moment.
2. Numerical instability.
   Solution: higher-precision arithmetic, i.e., wider floating point (not very attractive, though).
3. Large matrix inversion.
   Solution: avoid direct inversion; instead use a Cholesky factorization and solve indirectly (see the sketch below).

Even with all of the above, it is not very attractive. The only reason I can see for choosing SMI over RLS is convergence. I guess at low SNR RLS is not attractive compared to SMI (please check), i.e., LMS and RLS have the same order of convergence rate at low SNR.

Santosh.
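As an illustration of point 3 above (replacing the direct inverse in SMI with a Cholesky factorization and two triangular solves), here is a minimal NumPy sketch. The covariance matrix and steering vector are made up for the example, and a production implementation would use a dedicated triangular solver (e.g. scipy.linalg.solve_triangular) instead of the generic np.linalg.solve calls.

import numpy as np

def smi_weights_cholesky(R, steering):
    """SMI/MVDR weights via Cholesky factorization, without forming R^{-1}.

    R must be Hermitian positive definite (e.g. a diagonally loaded
    sample covariance matrix).
    """
    L = np.linalg.cholesky(R)                 # R = L L^H
    y = np.linalg.solve(L, steering)          # forward substitution:  L y = v
    Rinv_v = np.linalg.solve(L.conj().T, y)   # back substitution:     L^H x = y
    return Rinv_v / (steering.conj() @ Rinv_v)

# Example: 8-element array with a diagonally loaded sample covariance (illustrative values)
rng = np.random.default_rng(1)
M, N = 8, 64
X = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
R = X @ X.conj().T / N + 0.01 * np.eye(M)
v = np.ones(M) / np.sqrt(M)
w = smi_weights_cholesky(R, v)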
Reply by Jeff July 6, 2003
Hi,
Due to the unsatisfactory convergence of LMS when the covariance matrix is
ill-conditioned, least-squares-based algorithms are more favorable in adaptive
antennas. There is not much experimental research on adaptive antennas, I think
because of the high cost. Even so, there is some research using the SMI, RLS,
and QR-RLS algorithms. As we know, the computational complexity of SMI is
O(M^3), where M is the number of antenna elements, while that of RLS is O(M^2).
Both methods are suitable for implementation on programmable DSP chips. In the
last six years I have seen at least two published articles using the SMI
algorithm rather than RLS. SMI is an algorithm that processes data in batch
(block) mode, whereas RLS recursively updates the parameter estimate without
computing the inverse of the covariance matrix. RLS is more computationally
efficient, more numerically robust, and faster to converge than SMI.

My question is: why do some researchers use SMI rather than RLS? Can somebody
tell me?


Any comments are appreciated.
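For reference, here is a minimal sketch of the exponentially weighted RLS recursion described above: O(M^2) work per snapshot, with the inverse correlation matrix propagated by the matrix inversion lemma instead of being recomputed. The forgetting factor, initialization, and reference signal are assumed for illustration and are not taken from the thread.

import numpy as np

def rls_update(w, P, x, d, lam=0.99):
    """One exponentially weighted RLS update.

    w   : current weight vector (length M)
    P   : current inverse-correlation estimate (M x M)
    x   : new array snapshot (length M)
    d   : reference (desired) sample for this snapshot
    lam : forgetting factor, 0 < lam <= 1 (assumed value)
    """
    Px = P @ x
    k = Px / (lam + x.conj() @ Px)             # gain vector
    e = d - w.conj() @ x                       # a priori error
    w = w + k * np.conj(e)                     # weight update
    P = (P - np.outer(k, x.conj() @ P)) / lam  # inverse-correlation update, O(M^2)
    return w, P, e

# Example: 8-element array, 200 snapshots, placeholder reference signal
rng = np.random.default_rng(2)
M = 8
w = np.zeros(M, dtype=complex)
P = np.eye(M) / 1e-2                           # P(0) = delta^{-1} I with delta = 0.01
for n in range(200):
    x = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
    d = x[0]                                   # placeholder; a real system uses a pilot/training signal
    w, P, e = rls_update(w, P, x, d)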