DSPRelated.com
Forums

FIR filter with a flat transfer function

Started by ecco December 20, 2006
Hi, Andor

Thanks again for your time.  I am trying to build a linear predictor
for the purposes of lossless data compression, i.e., I will store e[n]
(the error vector) rather than x[n] (the raw signal).  In reading
through the literature it seems that when linear prediction is
implemented for this purpose, the optimization objective is to flatten
the power spectrum of the error vector.  I have found that while, yes,
the linear predictors that I've tried have indeed flattened (or
'whitened') the power spectrum of the error vectors, the Shannon
entropy of e[n] is not notably less than the Shannon entropy of
(d/dt)x[n].
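The comparison described above can be sketched numerically. The following Python/NumPy snippet is only an illustration, not ecco's actual data or method: the test signal, the predictor order, and the plain least-squares fit are all assumptions. It compares the empirical Shannon entropy of a first-difference residual with that of an order-3 linear-prediction residual, both rounded to integers so the scheme stays lossless:

```python
import numpy as np

def shannon_entropy(v):
    """Empirical Shannon entropy (bits/sample) of an integer-valued sequence."""
    _, counts = np.unique(v, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
# Hypothetical stand-in for a biological waveform: a smooth oscillation plus
# noise, quantized to integers as a lossless coder would see it.
t = np.arange(4096)
x = np.round(100 * np.sin(2 * np.pi * t / 200) + rng.normal(0, 2, t.size)).astype(int)

diff_resid = np.diff(x)  # residual of the [1 -1] prediction error filter

# Order-3 linear predictor fitted by least squares (a simple stand-in for
# Burg or Yule-Walker estimation).
p = 3
A = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
a, *_ = np.linalg.lstsq(A, x[p:], rcond=None)
lp_resid = np.round(x[p:] - A @ a).astype(int)  # rounded to stay lossless

print(shannon_entropy(diff_resid), shannon_entropy(lp_resid))
```

Whether the higher-order predictor actually wins on entropy depends on the signal, which is exactly the effect being reported here: a whiter residual spectrum does not automatically mean a lower empirical entropy.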

If you scroll up and look at one of my previous emails, you'll note
that I say that the schemes I've tried don't "perform" as well, where I
define performance with respect to entropy.

Any thoughts?  Does this still seem funny to you?

ec


Andor wrote:
> ecco wrote:
> > Thanks, Andor
> >
> > I'll give Burg a shot. How about RLS (recursive least squares)? My
> > waveform is of biological origin.
>
> I can't help but think that you made an error somewhere in your Matlab
> analysis ... any predictor of order >= 3 drastically reduces the
> prediction error compared to your [0 1] predictor on almost any type of
> signal.
>
> Note that for computing predictor filters, you never actually have the
> 0 coefficient tap included. I.e. your prediction filter is actually [1]
> and not [0 1]. See the help files for the AR modeling techniques in
> Matlab.
>
> Regards,
> Andor
Carlos Moreno wrote:
> Jerry Avins wrote:
...
> I hope this clears up the misunderstanding!
Yes. I thank you (and Fred M.). I came here to learn, and I'm learning.

Jerry
--
Engineering is the art of making what you want from things you can get.

ecco wrote:
> Hi, Andor
>
> Thanks again for your time. I am trying to build a linear predictor
> for the purposes of lossless data compression, i.e., I will store e[n]
> (the error vector) rather than x[n] (the raw signal).
> ...
> Any thoughts? Does this still seem funny to you?
It could be that your ideal prediction error filter is indeed [1 -1], i.e. the first-order difference. This happens if your signal is a random walk with zero-mean, uniformly distributed and independent increments. You seem to know more about your signal than you are telling us; for example, where did your estimates for the order of the linear predictor come from?

Regards,
Andor

BTW: Notice how I make a point of specifying whether I'm talking about the *prediction filter* or the *prediction error filter*. You have caused some hubbub by not differentiating between the two.
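Andor's random-walk point is easy to verify numerically: for a random walk with zero-mean, independent increments, the best first-order linear predictor has coefficient close to 1, so the prediction filter is [1] and the prediction error filter is [1, -1]. A minimal sketch (Python/NumPy; the increment distribution and sample count are arbitrary choices, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(2)
# Random walk: cumulative sum of zero-mean, independent, uniform increments.
x = np.cumsum(rng.uniform(-1, 1, 100_000))

# Best first-order linear predictor x_hat[n] = a * x[n-1], fitted by least squares.
a = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])
print(a)  # close to 1: prediction filter [1], prediction error filter [1, -1]
```

The residual x[n] - a*x[n-1] is then (up to the tiny deviation of a from 1) just the increment sequence itself, which is already white, so higher-order predictors cannot improve on the first difference for this kind of signal.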
Andor wrote:

   ...

> Why would that depend on the sample rate? The ability to linearly
> predict the next sample given the last p samples in a row depends
> only on how much the samples are linearly correlated, and not at all
> on the sample rate.
I wasn't thinking of correcting a guess and recording the result. I was thinking of predicting the next measurement.

My background is in controlling things that move or change temperature. Sometimes, when samples are infrequent, it can be helpful to guess what is to come and act on that before it arrives. Think of it as a sort of feed-forward scheme. I was in a different play pen.

Jerry

ecco wrote:
> ...
> I have found that while, yes, the linear predictors that I've tried
> have indeed flattened (or 'whitened') the power spectrum of the error
> vectors, the Shannon entropy of e[n] is not notably less than the
> Shannon entropy of (d/dt)x[n].
By the way: the predictor filter must *not* have a flat magnitude response (as you suggested); rather, it must produce an error sequence with a flat magnitude spectrum! Another source of confusion in this thread (mainly for me, it seems) ...
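This distinction can be illustrated numerically: filtering a coloured signal with its prediction error filter whitens the *output*, even though the filter's own magnitude response is anything but flat. A sketch (Python/NumPy; the AR(2) test signal and its coefficients are invented for illustration, and the true coefficients are used directly rather than estimated):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical AR(2) signal with a strongly coloured spectrum:
#   x[n] = 1.6 x[n-1] - 0.8 x[n-2] + e[n],  e white Gaussian.
x = np.zeros(8192)
e = rng.normal(size=x.size)
for n in range(2, x.size):
    x[n] = 1.6 * x[n - 1] - 0.8 * x[n - 2] + e[n]

def spectral_flatness(v):
    """Geometric mean over arithmetic mean of the periodogram (near 1 = white)."""
    P = np.abs(np.fft.rfft(v))[1:-1] ** 2
    return np.exp(np.mean(np.log(P))) / np.mean(P)

# Prediction error filter [1, -1.6, 0.8] applied to x recovers (approximately) e.
err = x[2:] - 1.6 * x[1:-1] + 0.8 * x[:-2]

print(spectral_flatness(x), spectral_flatness(err))  # err is much flatter than x
```

The error sequence's flatness is far higher than the coloured input's, while the error filter itself has a pronounced notch/peak structure in its magnitude response.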

Jerry Avins wrote:
> Andor wrote: ...
>
> > Why would that depend on the sample rate? The ability to linearly
> > predict the next sample given the last p samples in a row depends
> > only on how much the samples are linearly correlated, and not at all
> > on the sample rate.
>
> I wasn't thinking of correcting a guess and recording the result. I was
> thinking of predicting the next measurement.
So was I. Again: why is the ability to (linearly) predict the next measurement dependent on the sample rate?

You aren't alluring to the common misconception that the "linear predictability" of a signal depends on its bandwidth, are you? There are many examples of non-bandlimited functions that, when sampled, yield perfectly predictable sequences: exponentially damped sinusoids, non-bandlimited periodic functions with period a multiple of the sampling period, etc.
> My background is in controlling things that move or change temperature.
> Sometimes, when samples are infrequent, it can be helpful to guess what
> is to come and act on that before it arrives. Think of it as a sort of
> feed-forward scheme.
I can imagine that predictors can come in handy in control systems.
> I was in a different play pen.
I don't think so :-). Regards, Andor
Andor wrote:
> > Jerry Avins wrote:
> >> Andor wrote: ...
> >>
> >>> Why would that depend on the sample rate? The ability to linearly
> >>> predict the next sample given the last p samples in a row depends
> >>> only on how much the samples are linearly correlated, and not at all
> >>> on the sample rate.
> >> I wasn't thinking of correcting a guess and recording the result. I was
> >> thinking of predicting the next measurement.
>
> So was I. Again: why is the ability to (linearly) predict the next
> measurement dependent on the sample rate?
No, but when updates are relatively infrequent, predicting a probable intermediate guess is more urgent.
> You aren't alluring to the common misconception that the "linear
> predictability" of a signal depends on its bandwidth, are you? There
> are many examples of non-bandlimited functions that, when sampled,
> yield perfectly predictable sequences: exponentially damped sinusoids,
> non-bandlimited periodic functions with period a multiple of the
> sampling period, etc.
I like that typo. I find it alluring :-) But anyway, every legitimately sampled waveform must be bandlimited, no?

...

Jerry

Jerry Avins wrote:
> Andor wrote:
> ...
> > So was I. Again: why is the ability to (linearly) predict the next
> > measurement dependent on the sample rate?
> No, but when updates are relatively infrequent, predicting a probable
> intermediate guess is more urgent.
Upon re-reading your post, this is what I thought you had meant as well. Thanks for clarifying.
> > You aren't alluring to the common misconception that the "linear
> > predictability" of a signal depends on its bandwidth, are you? There
> > are many examples of non-bandlimited functions that, when sampled,
> > yield perfectly predictable sequences: exponentially damped sinusoids,
> > non-bandlimited periodic functions with period a multiple of the
> > sampling period, etc.
> I like that typo. I find it alluring :-) But anyway, every legitimately
> sampled waveform must be bandlimited, no?
It's not a typo. Let's say you multiply a sinusoid with the function

    f(t) = exp(-t / tau) u(t),

where u(t) is the step function. Thus you get an exponentially damped sinusoid, switched on at time t=0. Due to the modulation with f(t), the damped sinusoid is not bandlimited (it's very simple to compute the FT of the product f(t) cos(w t), 0 < w < pi). However, the result of sampling this non-bandlimited function at the positive integers is a sequence of numbers that is linearly predictable (of order 2, just like the undamped sinusoid). Given any two consecutive values from that sequence, one can compute the rest of the sequence, because the sequence satisfies a linear second-order recurrence with constant coefficients. The same goes for non-bandlimited periodic functions.

I wasn't talking about reconstruction, but about the linear predictability of sequences. In that context, bandlimiting prior to sampling is not necessary.

Regards,
Andor
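The damped-sinusoid example is easy to check numerically: the samples of exp(-t/tau) cos(w t) at t = 0, 1, 2, ... satisfy the second-order recurrence x[n] = 2 r cos(w) x[n-1] - r^2 x[n-2] with r = exp(-1/tau), so two consecutive samples determine all the rest. A sketch (Python/NumPy; the values of tau and w are arbitrary):

```python
import numpy as np

tau, w = 20.0, 0.7            # damping constant and angular frequency (assumed values)
r = np.exp(-1.0 / tau)        # per-sample damping factor

n = np.arange(50)
x = r**n * np.cos(w * n)      # samples of exp(-t/tau) * cos(w t) at t = 0, 1, 2, ...

# Second-order linear recurrence satisfied exactly by the samples:
#   x[n] = 2 r cos(w) x[n-1] - r^2 x[n-2]
pred = 2 * r * np.cos(w) * x[1:-1] - r**2 * x[:-2]
print(np.max(np.abs(x[2:] - pred)))  # at machine-precision level: perfectly predictable
```

The recurrence coefficients come from the characteristic roots r exp(+-iw) of the damped oscillation, exactly as for the undamped sinusoid (r = 1), even though the continuous-time signal is not bandlimited.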
Andor wrote:
> > Jerry Avins wrote:
> > ...
> >> I like that typo. I find it alluring :-) But anyway, every legitimately
> >> sampled waveform must be bandlimited, no?
>
> It's not a typo. Let's say you multiply a sinusoid with the function
>
>     f(t) = exp(-t / tau) u(t),
>
> where u(t) is the step function. ...
>
> I wasn't talking about reconstruction, but about the linear
> predictability of sequences. In that context, bandlimiting prior to
> sampling is not necessary.
Andor, I'm abashed that I put you to that extra work. You wrote "alluring to" when you meant "alluding to". That was the alluring typo I alluded to. Sorry!

Jerry

Jerry Avins wrote:
> ...
> Andor, I'm abashed that I put you to that extra work. You wrote
> "alluring to" when you meant "alluding to". That was the alluring typo I
> alluded to. Sorry!
Oh well, I hope at least my tirade was readable. :-)