Hi Miguel:

Thanks for the information! The linear equations Ra = -r have a special property: the vector r on the right-hand side is itself built from the entries of R. In the more general case, where the right-hand side is some other vector, say c, the system can be solved recursively by introducing a second recursion alongside the predictor recursion to solve Rb = c. The result is a Generalized Levinson-Durbin algorithm. For certain classes of linear predictors (in the warped domain), we need to solve these general equations!

Best Regards,

~Arijit

-----Original Message-----
From: miguel on behalf of Miguel Arjona Ramez
Sent: Fri 4/23/2004 3:45 PM
To: #ARIJIT BISWAS#
Cc:
Subject: Re: [speechcoding] Levinson v/s Levinson-Durbin

Dear Arijit,

Quoting from Makhoul's famous 1975 tutorial: for the autocorrelation set of normal equations Ra = -r, the vector r is unconstrained in Levinson's 1947 work, while it is made up of autocorrelation coefficients in Durbin's 1960 paper. I have not gone into these originals to check it, nor have I read anyone else on this point. By the way, even Markel and Gray's LP Bible names the method just after Levinson, while Rabiner and Schafer in their 1978 best seller name it just after Durbin. Anyway, most people, like you, call it Levinson-Durbin or the other way around.

I would appreciate knowing about the Generalized Levinson-Durbin algorithm you mentioned, at least in its application to speech processing, if at all.

Best regards,

Miguel

#ARIJIT BISWAS# wrote:
>
> Hi:
>
> I have a doubt:
>
> I know the Levinson-Durbin Algorithm (including the Generalized Levinson-Durbin Algorithm). Can anyone tell me how it differs from the previous version, i.e. the Levinson algorithm?
>
> Since I don't want to read the originals, I will be glad if anyone can give me the basic difference.
>
> Best Regards,
>
> ~Arijit
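[Editor's note: the thread contains no code. The following is a minimal sketch of the two recursions under discussion, assuming a symmetric Toeplitz matrix R built from autocorrelations r[0..n]; the function names `levinson_durbin` and `levinson_general` are hypothetical, not from the thread. The first function solves the special system Ra = -r (Durbin's case, where the RHS is made of autocorrelations); the second adds the extra recursion Arijit describes to solve Rb = c for an arbitrary RHS c, reusing the reversed predictor at each order.]

```python
import numpy as np

def levinson_durbin(r, order):
    """Durbin's recursion for the autocorrelation normal equations Ra = -r.

    r : autocorrelation sequence r[0..order] (r[0] is the zero-lag term).
    Returns (a, k, err): monic predictor a (a[0] = 1), reflection
    coefficients k, and the final prediction error err.
    """
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    k = np.zeros(order)
    for i in range(1, order + 1):
        # Residual correlation at order i, then the reflection coefficient
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        ki = -acc / err
        k[i - 1] = ki
        # Order update: a_new[j] = a[j] + ki * a[i - j], j = 1..i
        a[1:i + 1] = a[1:i + 1] + ki * a[i - 1::-1]
        err *= (1.0 - ki * ki)
    return a, k, err

def levinson_general(r, c):
    """Generalized Levinson recursion: solve Tb = c, where T is the
    symmetric Toeplitz matrix with first column r, for an arbitrary c.

    Runs the predictor recursion above and, at each order, extends the
    partial solution b using the reversed predictor (which solves the
    system with RHS [0, ..., 0, err]).
    """
    n = len(c)
    a = np.zeros(n)
    a[0] = 1.0
    err = r[0]
    b = np.zeros(n)
    b[0] = c[0] / r[0]
    for i in range(1, n):
        # Classic Levinson-Durbin order update for the predictor
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        ki = -acc / err
        a[1:i + 1] = a[1:i + 1] + ki * a[i - 1::-1]
        err *= (1.0 - ki * ki)
        # Second recursion: fix the deficit in row i using the reversed a
        d = c[i] - np.dot(r[i:0:-1], b[:i])
        b[:i + 1] = b[:i + 1] + (d / err) * a[i::-1]
    return b
```

For example, with r = [1, 0.5, 0.25] (an AR(1)-like autocorrelation), `levinson_durbin` returns the predictor [1, -0.5, 0], and `levinson_general` reproduces `np.linalg.solve` on the corresponding 3x3 Toeplitz system in O(n^2) rather than O(n^3) operations.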