```OK, I'm still being a bit thick!

What would anyone recommend as a simple test signal that I could use to
put through the L-D block in simulink and what should I expect as the
output?

If I have only one signal, can I put that through the L-D?  I know that
the L-D solves a set of equations/polynomials.  So in simulink,
if I only select output A, I can have a scalar as an input, i.e. the
first value of my signal.  Or I can buffer the signal and input, say,
the first 5 samples and get out 5 values.  I think these are then the
'filter' coefficients.  But if I select output K, I need to have a
vector going in, so I really don't know what K is all about?!

I've run a test using a ramp input.  If I input one value at a time,
then A always outputs 1.  I was thinking that A should be the next
value in the input signal and should also be a ramp, but if A is the
filter coefficients then 1 convolved with the input signal... hmm,
just gives the input signal value?  Is that what is supposed to happen,
not the next value in the input signal?  Maybe it works better if the
input (and thus the output) is a vector (a window of samples)?  However, the
prediction error follows the input exactly... say the input is 3, then
the error is 3 on the same iteration.

The signal(s) I really want to put through this are blood velocity
flows..... obtained from a couple of arteries behind the eye.

Thank you for all your help.

```
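A sketch of what a Levinson-Durbin block computes on a simple test signal may help here (plain Python as an illustration; the function names are my own, not the Simulink block's). A good test signal is an AR(1) process x[n] = 0.5*x[n-1] + w[n], because then the A output has a known right answer, roughly [1, -0.5]:

```python
# What to expect from a Levinson-Durbin block on a simple test signal.
# The block (conceptually) estimates the autocorrelation of the buffered
# input window and solves the Yule-Walker equations.  For an AR(1) signal
# x[n] = 0.5*x[n-1] + w[n], the A output should be close to [1, -0.5].
import random

def autocorr(x, maxlag):
    """Biased autocorrelation estimate r[0..maxlag]."""
    n = len(x)
    return [sum(x[i] * x[i + k] for i in range(n - k)) / n
            for k in range(maxlag + 1)]

def levinson_durbin(r, order):
    """Levinson-Durbin recursion: returns (A, K, prediction-error power)."""
    a = [1.0]            # polynomial A(z), a[0] = 1
    e = r[0]             # prediction-error power
    ks = []              # reflection coefficients
    for m in range(1, order + 1):
        acc = sum(a[i] * r[m - i] for i in range(m))
        k = -acc / e
        ks.append(k)
        a = [1.0] + [a[i] + k * a[m - i] for i in range(1, m)] + [k]
        e *= 1.0 - k * k
    return a, ks, e

random.seed(0)
x, prev = [], 0.0
for _ in range(5000):                 # AR(1) test signal
    prev = 0.5 * prev + random.gauss(0.0, 1.0)
    x.append(prev)

A, K, e = levinson_durbin(autocorr(x, 2), 2)
print(A)   # close to [1, -0.5, 0]: the predictor has found the model
```

This also explains the ramp observation in the message above: with a one-sample buffer there is only r(0) to work with, so A is just the scalar 1, and the zeroth-order prediction error is the input itself, which is exactly "the error is 3 when the input is 3".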
```
Steve Underwood wrote:

> Why only semi adaptive? You present data to the Levinson-Durbin
> algorithm (or the Schur one for that matter). It recursively tunes a
> set of filter parameters to match the data as best it can. To me that
> sounds as adaptive as something like LMS.

It always finds the solution in a number of steps that can
be calculated from the size of the problem.

I don't really agree that a Toeplitz solver (which is the
kernel of Levinson-Durbin) tunes a set of parameters.  It
iteratively builds a solution from successive partial
solutions in a fixed number of iterations.

Bob
--

"Things should be described as simply as possible, but no
simpler."

A. Einstein
```
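Bob's point about a fixed number of iterations can be made concrete with a small sketch (my own Python, not from the thread): the Levinson recursion is order-recursive, so the loop runs exactly `order` times no matter what the data look like, with no data-dependent convergence test anywhere.

```python
# The Levinson recursion does a fixed amount of work, determined only by
# the problem size: each pass extends the order-(m-1) partial solution to
# order m.  Counting the passes shows there is no data-dependent
# convergence loop, in contrast to adaptive schemes like LMS.
def levinson(r, order):
    a, e, passes = [1.0], r[0], 0
    for m in range(1, order + 1):
        passes += 1
        k = -sum(a[i] * r[m - i] for i in range(m)) / e
        a = [1.0] + [a[i] + k * a[m - i] for i in range(1, m)] + [k]
        e *= 1.0 - k * k
    return a, e, passes

# Two very different autocorrelation sequences, same fixed work:
_, _, p1 = levinson([1.0, 0.5, 0.25, 0.125], 3)
_, _, p2 = levinson([2.0, -1.0, 0.3, 0.1], 3)
print(p1, p2)   # both are exactly 3
```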
```
Rune Allnor wrote:
> Bob Cain wrote:

>>I sure don't think of Levinson-Durbin as adaptive.  It takes
>>a pair of time domain sequences and finds a linear transform
>>(of a specified length) that takes one to the other with LMS
>>error but it is a deterministic calculation without the kind
>>of iterative improvement step that characterizes adaptive
>>systems.
>
>
> Heh, you really live up to your signature by phrasing in so few
> words what I tried (and failed) to put down in at least four
> times as many.

Thanks.  I'm used to hearing the exact opposite, but mainly
from people who don't generally understand "necessary and
sufficient."  :-)

Bob
--

"Things should be described as simply as possible, but no
simpler."

A. Einstein
```
```Vicki wrote:
> Thanks for the replies.
>
> So all I need to do then is to put my signal through the
> Levinson-Durbin block in matlab and look at the outputs from that,
> yes?

I'm not sure it will be that easy with matlab. Pre-whitening the
signal would involve finding an inverse filter and applying that to
your data. It's not difficult, but you need to know what to do.
If you typed wrong and meant simulink instead of matlab, it might
be that easy, yes. I don't know simulink well enough to comment
on that.

> Next question is then:  Does using the L-D algorithm pre-whiten my
> signal?  (eventually, want to prewhiten then do some blind convolution
> on it).

The error signal that comes out of your simulink block is the
pre-whitened signal.

> I'm not really sure why I'm wanting to use predictors or the L-D
> algorithm, but someone has suggested I do it, and since I have no
> better ideas and need to move forward I want to try this and see
> whether it gives me anything useful!

What's the application? There is something called "predictive
deconvolution" that is used in the seismics industry. While it is
widely used, I don't know much about it except that it is based on
the Levinson recursion and the theory behind AR models. The
techniques were developed in the 1970s and implemented in
processing packages in the 1980s, but no theory appears to be
available in the literature. There was a book by Treitel and
Robinson issued in 1980, but I have looked for a copy for ten
years without finding one. These days every processing package
contains a Predictive Decon operator. Since everybody knows what
it does and how to use it, no one seems to care how it works.

So again, depending on the application, you might very well be on
a useful track.

Rune

```
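Rune's point that the prediction error coming out of the block *is* the pre-whitened signal can be sketched as follows (my own Python illustration; for brevity it uses the order-1 case, where the Levinson-Durbin solution reduces to a1 = -r(1)/r(0)): fit the predictor, run the signal through the error filter A(z), and the strong lag-1 correlation in the input is gone from the output.

```python
# Pre-whitening = taking the prediction error.  Order-1 case for brevity,
# where the Levinson-Durbin solution is simply a1 = -r[1]/r[0].
import random

def lag_corr(x, k):
    """Normalized autocorrelation of x at lag k."""
    n = len(x)
    r0 = sum(v * v for v in x) / n
    rk = sum(x[i] * x[i + k] for i in range(n - k)) / n
    return rk / r0

random.seed(1)
x, prev = [], 0.0
for _ in range(5000):                       # correlated input: AR(1)
    prev = 0.5 * prev + random.gauss(0.0, 1.0)
    x.append(prev)

a1 = -lag_corr(x, 1)                        # order-1 Levinson solution
e = [x[i] + a1 * x[i - 1] for i in range(1, len(x))]   # error filter A(z)

print(lag_corr(x, 1))   # about 0.5 -- the input is correlated
print(lag_corr(e, 1))   # about 0.0 -- the error signal is whitened
```

Applying A(z) as an inverse (whitening) filter like this is exactly the "find an inverse filter and apply it to your data" step Rune describes above.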
```Vicki wrote:
> Hi All,
> I'm just confusing myself with predictors and the Levinson-Durbin
> algorithm.  I've got Haykin's book.  So... near the start of the book
> there are diagrams of the different adaptive filter models.  I am
> familiar with the identification model and LMS-type algorithms.  But I'm
> trying to understand the predictor and the Levinson-Durbin algorithm.  Do
> these two go together?!
>
> In the predictor model there is the error signal (d(n)-y(n)) which is
> then fed back into the filter.  If I use the L-D algorithm, is this
> error fed back into the algorithm?  I don't see an error term in the
> algorithm.....
>
> I'm using matlab, well simulink really.  There is a Levinson-Durbin
> block which has one input, and the outputs can be the solutions to the
> L-D, and/or reflection coefficients (?) and/or also the output prediction
> error.  I think that if I put my input signal through the block, it is
> the first output that I want to look at??  Not sure what the others
> are!
>
> Any help on this would be very much appreciated.  I'm feeling quite
> stupid about all this right now:-(
>
> Many thanks
>

The nicest derivation of Levinson-Durbin, IMHO, is in Golub and Van
Loan's "Matrix Computations".
```
```Thanks for the replies.

So all I need to do then is to put my signal through the
Levinson-Durbin block in matlab and look at the outputs from that, yes?

Next question is then:  Does using the L-D algorithm pre-whiten my
signal?  (eventually, want to prewhiten then do some blind convolution
on it).

I'm not really sure why I'm wanting to use predictors or the L-D
algorithm, but someone has suggested I do it, and since I have no better
ideas and need to move forward I want to try this and see whether it
gives me anything useful!

```
```Bob Cain wrote:
> Vicki wrote:
> > Hi All,
> > I'm just confusing myself with predictors and the Levinson-Durbin
> > algorithm.  I've got Haykin's book.  So... near the start of the book
> > there are diagrams of the different adaptive filter models.  I am
> > familiar with the identification model and LMS-type algorithms.  But I'm
> > trying to understand the predictor and the Levinson-Durbin algorithm.  Do
> > these two go together?!
>
> I sure don't think of Levinson-Durbin as adaptive.  It takes
> a pair of time domain sequences and finds a linear transform
> (of a specified length) that takes one to the other with LMS
> error but it is a deterministic calculation without the kind
> of iterative improvement step that characterizes adaptive
> systems.

Heh, you really live up to your signature by phrasing in so few
words what I tried (and failed) to put down in at least four
times as many.

> "Things should be described as simply as possible, but no
> simpler."

Rune

```
```Steve Underwood wrote:
> Rune Allnor wrote:
>
> >Vicki wrote:
> >
> >>Hi All,
> >>I'm just confusing myself with predictors and the Levinson-Durbin
> >>algorithm.  I've got Haykin's book.  So... near the start of the book
> >>there are diagrams of the different adaptive filter models.  I am
> >>familiar with the identification model and LMS-type algorithms.  But I'm
> >>trying to understand the predictor and the Levinson-Durbin algorithm.  Do
> >>these two go together?!
> >
> >The LD is "semi-adaptive" in that it uses the data to find
> >a set of filter coefficients, but when these coefficients have
> >been found, they are never updated. The data used to determine
> >the coefficients are "typical" for the data the filter will
> >operate on.
>
> Why only semi adaptive? You present data to the Levinson-Durbin
> algorithm (or the Schur one for that matter). It recursively tunes a
> set of filter parameters to match the data as best it can. To me that
> sounds as adaptive as something like LMS.

Well, I have never dealt with "fully adaptive" stuff, i.e. systems
that adjust to the data "on the fly". Unlike LMS (to the diminishing
degree I know LMS), the Levinson algorithm solves for the filter
coefficients once and for all, once the training data are available.
The Levinson recursion requires all measured data to be available
(implicitly, as estimates of the autocorrelation sequence).
As far as I know, it's not possible for the Levinson recursion
to take advantage of more available data without starting over
at the initialization stage.

As far as I know, the LMS algorithm starts out somewhere and
"tunes in" on the processed data, and improves as more data become
available.

Perhaps the difference is insignificant, but I interpret the two
approaches differently: the Levinson recursion uses a separate
training set, while the LMS tunes in to the data to be processed
as they become available.

Rune

```
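The block-versus-sample distinction Rune draws can be sketched like this (my own Python illustration): an LMS one-tap predictor starts from zero and tunes its weight toward the AR(1) model coefficient as each new sample arrives, whereas the Levinson recursion would need the whole autocorrelation estimate up front.

```python
# The LMS side of Rune's contrast: a one-tap adaptive predictor that
# "tunes in" sample by sample.  On AR(1) data x[n] = 0.5*x[n-1] + w[n],
# the weight w drifts from 0 toward the model value 0.5 as data arrive;
# there is no up-front autocorrelation estimate as in Levinson.
import random

random.seed(2)
x, prev = [], 0.0
for _ in range(6000):
    prev = 0.5 * prev + random.gauss(0.0, 1.0)
    x.append(prev)

mu, w = 0.02, 0.0                 # step size and initial weight
history = []
for n in range(1, len(x)):
    err = x[n] - w * x[n - 1]     # prediction error for this sample
    w += mu * err * x[n - 1]      # LMS update: adapt on the fly
    history.append(w)

w_avg = sum(history[-2000:]) / 2000
print(w_avg)                      # close to 0.5
```

Note the difference in what "error" means in the two schemes: here the per-sample error drives the weight update; in the Levinson recursion the (mean-square) error only appears inside the derivation of the fixed set of equations.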
```Rune Allnor wrote:

>Vicki wrote:
>
>>Hi All,
>>I'm just confusing myself with predictors and the Levinson-Durbin
>>algorithm.  I've got Haykin's book.  So... near the start of the book
>>there are diagrams of the different adaptive filter models.  I am
>>familiar with the identification model and LMS-type algorithms.  But I'm
>>trying to understand the predictor and the Levinson-Durbin algorithm.  Do
>>these two go together?!
>
>The LD is "semi-adaptive" in that it uses the data to find
>a set of filter coefficients, but when these coefficients have
>been found, they are never updated. The data used to determine
>the coefficients are "typical" for the data the filter will
>operate on.
>
Why only semi adaptive? You present data to the Levinson-Durbin
algorithm (or the Schur one for that matter). It recursively tunes a
set of filter parameters to match the data as best it can. To me that
sounds as adaptive as something like LMS.

Regards,
Steve
```
```Vicki wrote:
> Hi All,
> I'm just confusing myself with predictors and the Levinson-Durbin
> algorithm.  I've got Haykin's book.  So... near the start of the book
> there are diagrams of the different adaptive filter models.  I am
> familiar with the identification model and LMS-type algorithms.  But I'm
> trying to understand the predictor and the Levinson-Durbin algorithm.  Do
> these two go together?!

The LD is "semi-adaptive" in that it uses the data to find
a set of filter coefficients, but when these coefficients have
been found, they are never updated. The data used to determine
the coefficients are "typical" for the data the filter will
operate on.

> In the predictor model there is the error signal (d(n)-y(n)) which is
> then fed back into the filter.  If I use the L-D algorithm, is this
> error fed back into the algorithm?  I don't see an error term in the
> algorithm.....

Well, it's there during the design stages. The equations used
to compute the coefficients were derived in a way that minimizes
the error term. The reflection coefficient you mention below says
something about how much the error term decreases from one filter
order to the next.

> I'm using matlab, well simulink really.  There is a Levinson-Durbin
> block which has one input, and the outputs can be the solutions to the
> L-D, and/or reflection coefficients (?) and/or also the output prediction
> error.  I think that if I put my input signal through the block, it is
> the first output that I want to look at??  Not sure what the others
> are!

The filter coefficients and reflection coefficients are two
different but equivalent ways of representing the filter.
The prediction error is very useful in signal compression
applications.

Rune

```
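Rune's remark that the filter coefficients (A) and reflection coefficients (K) are two equivalent representations can be sketched like this (my own Python illustration): the step-down recursion converts A to K, the step-up recursion converts back, and the round trip recovers the original filter. The factor (1 - k_m^2), by which the prediction-error power shrinks at each order, is exactly the order-to-order error decrease Rune mentions.

```python
# A (filter coefficients) and K (reflection coefficients) are two
# equivalent descriptions of the same predictor: step-down maps A -> K,
# step-up maps K -> A, and the round trip recovers the filter.  For a
# stable model every |k_m| < 1, and the prediction-error power drops by
# the factor (1 - k_m**2) going from order m-1 to order m.

def step_down(a):
    """Filter coefficients [1, a1, ..., ap] -> reflection coeffs [k1..kp]."""
    ks = []
    a = list(a)
    for m in range(len(a) - 1, 0, -1):
        k = a[m]
        ks.append(k)
        a = [1.0] + [(a[i] - k * a[m - i]) / (1.0 - k * k)
                     for i in range(1, m)]
    return ks[::-1]

def step_up(ks):
    """Reflection coeffs [k1..kp] -> filter coefficients [1, a1, ..., ap]."""
    a = [1.0]
    for m, k in enumerate(ks, start=1):
        a = [1.0] + [a[i] + k * a[m - i] for i in range(1, m)] + [k]
    return a

A = [1.0, -0.9, 0.2]          # an example order-2 predictor
K = step_down(A)
print(K)                      # approximately [-0.75, 0.2]
print(step_up(K))             # approximately [1.0, -0.9, 0.2]
```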