Started by David Bonnell, February 27, 2007
```
My terminology may be wrong, which might explain why I am having such
difficulty finding literature on the subject.

I have a symmetric LPF FIR filter (Type II) with N taps.  I take K
samples (K >> N), pause for a period of time, and then take K more
samples...this process repeats indefinitely.

The discontinuity in sampling worries me.  When sampling resumes, I
figure that my options are to:
(1) Simply wait (discard) N samples to process the output data

Is there any literature out there that discusses this problem of
discontinuous sampling?  A colleague has informed me that it is
possible to take advantage of FIR symmetry to reduce the delay to N/2
(with reasonably accurate results), but I don't quite grasp how this
would be accomplished.

I expect that much depends on implementation and context, but I
figured that I would start with the basics to point me in the right
direction.  Any help would be appreciated.

Cheers,
Dave

```
```
Hi,

David Bonnell <dbonnell@gmail.com> wrote:
> Is there any literature out there that discusses this problem of discontinuous sampling?

I don't see a problem here.

> A colleague has informed me that it is possible to take advantage of FIR
> symmetry to reduce the delay to N/2 (with reasonably accurate results),
> but I don't quite grasp how this would be accomplished.

What your colleague probably meant was that in case your impulse response
is symmetric (which is required for linear phase FIR filters), you can
spare half of the multiplications because you can simplify x * h + y * h to
(x + y) * h. This has nothing to do with the delay, however.
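For concreteness, here is a small Python sketch of that folding trick (toy coefficients, not from the thread): because the taps of a symmetric even-length filter come in equal pairs, the two input samples sharing a coefficient can be added before the single multiplication.

```python
# Direct vs. folded evaluation of a symmetric (Type II, even-length) FIR.

def fir_direct(x, h):
    """Direct form: N multiplications per output sample."""
    N = len(h)
    return [sum(h[k] * x[n - k] for k in range(N) if 0 <= n - k < len(x))
            for n in range(len(x) + N - 1)]

def fir_folded(x, h):
    """Folded form for symmetric h, even N: N/2 multiplications per sample."""
    N = len(h)
    assert N % 2 == 0 and h == h[::-1]
    def xs(i):                       # zero-padded input access
        return x[i] if 0 <= i < len(x) else 0.0
    out = []
    for n in range(len(x) + N - 1):
        acc = 0.0
        for k in range(N // 2):      # taps k and N-1-k share coefficient h[k]
            acc += h[k] * (xs(n - k) + xs(n - (N - 1 - k)))
        out.append(acc)
    return out

h = [0.1, 0.2, 0.2, 0.1]             # toy symmetric Type II filter
x = [1.0, 2.0, 3.0, 4.0, 5.0]
assert all(abs(a - b) < 1e-12
           for a, b in zip(fir_direct(x, h), fir_folded(x, h)))
```

Both forms produce the same output; only the multiplication count differs, which is why the trick saves arithmetic but not delay.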

> I expect that much depends on implementation and context, but I
> figured that I would start with the basics to point me in the right
> direction.  Any help would be appreciated.

Unfortunately, you forgot to mention the actual problem.

Regards,
Clemens
```
```
David Bonnell wrote:
> My terminology may be wrong, which might explain why I am having such
> difficulty finding literature on the subject.
>
> I have a symmetric LPF FIR filter (Type II) with N taps.  I take K
> samples (K >> N), pause for a period of time, and then take K more
> samples...this process repeats indefinitely.
>
> The discontinuity in sampling worries me.

Why? What's your purpose for filtering?

> When sampling resumes, I
> figure that my options are to:
> (1) Simply wait (discard) N samples to process the output data
>
> Is there any literature out there that discusses this problem of discontinuous sampling?

If you convolve a signal x of length K with a signal h of length N,
you get a new signal y of length K + N -1. Why does that worry you?

> A colleague has informed me that it is
> possible to take advantage of FIR symmetry to reduce the delay to N/2
> (with reasonably accurate results), but I don't quite grasp how this
> would be accomplished.

Linear phase FIRs have a constant, frequency independent delay of
(N-1)/2 samples. So the part in y that "corresponds" to the unfiltered
signal x is

y( (N-1)/2 : (N-1)/2 + K -1 )

(using Matlab convention but assuming zero based indexing, and N odd).
Non-linear phase filters have a frequency dependent delay - so some
components can be delayed by a different time with respect to other
components.
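A small Python sketch of that slice (toy 3-tap filter and step input, chosen purely for illustration):

```python
def convolve(x, h):
    """Full convolution: output length is len(x) + len(h) - 1."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(x)):
        for k in range(len(h)):
            y[n + k] += x[n] * h[k]
    return y

h = [0.25, 0.5, 0.25]                # symmetric 3-tap LPF, delay (3-1)/2 = 1
x = [0.0, 0.0, 1.0, 1.0, 1.0, 1.0]   # a step at index 2
y = convolve(x, h)                   # length K + N - 1 = 8
K, N = len(x), len(h)
d = (N - 1) // 2
aligned = y[d : d + K]               # y((N-1)/2 : (N-1)/2 + K - 1), zero-based
# The half-amplitude point of the smoothed step in `aligned` coincides
# with the edge in x, confirming the alignment.
```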

> I expect that much depends on implementation and context, but I
> figured that I would start with the basics to point me in the right
> direction.  Any help would be appreciated.

Regards,
Andor

```
```
Andor wrote:

...

> If you convolve a signal x of length K with a signal h of length N,
> you get a new signal y of length K + N - 1. Why does that worry you?

...

This is the crux of what worries (and will disappoint) Mr. Bonnell. Of
that number, the first and last N - 1 (assuming N to be the filter
length) will contain transients because of incomplete overlap. The
useful length is only K - N + 1 when samples before the first are
assumed zero. Bonnell wants to know if some values other than zero --
say, what's left in the buffer from last time -- improve matters.

In general, no. These "edge effects" are inevitable, in 1D, 2D, digital,
analog, and any other instance that can be imagined. In some cases it is
possible to find heuristics that improve the cosmetics. Usually, if the
unknown data can be predicted well enough, there's no need to have
measured anything.
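A small Python sketch makes the counts concrete (moving-average filter and constant block chosen for illustration):

```python
def convolve(x, h):
    """Full convolution: output length is len(x) + len(h) - 1."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(x)):
        for k in range(len(h)):
            y[n + k] += x[n] * h[k]
    return y

N = 5
h = [1.0 / N] * N                 # moving average, unity DC gain
K = 20
x = [1.0] * K                     # constant block: steady-state output is 1.0
y = convolve(x, h)                # K + N - 1 = 24 outputs

steady = y[N - 1 : K]             # the K - N + 1 = 16 fully-overlapped outputs
assert all(abs(v - 1.0) < 1e-12 for v in steady)
# y[0 : N-1] ramps up and y[K :] ramps down -- the edge transients.
```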

Jerry
--
Engineering is the art of making what you want from things you can get.
```
```
> > in case your impulse response is symmetric (which is required for linear phase FIR filters), you can spare half of the multiplications
> > This has nothing to do with the delay, however.

No arguments here.

> Unfortunately, you forgot to mention the actual problem.

You asked for it  :)  I will, however, try to be brief.

Not so much of a problem as a question.  I have two 64-tap filters (A,
B) in cascade.  The output of A (an anti-aliasing filter) feeds into
B (an LPF intended to determine the DC offset).  I cannot reduce the
actual delay through the system (i.e. 64 + 64 samples).  However, I am
wondering if there are methods to compensate for the delay by
wondering if there are methods to compensate for the delay by

I want to AC-couple the signal by subtracting the output of B (DC
value) from the output of A.  Because B is delayed by 64 samples, I
need to buffer the output of A to sync the two signals.  Conceptually
pretty simple.

In this system, roughly 6000 samples are taken, followed by a 1000
sample 'break'.  This cycle repeats.  When resuming sampling (i.e.
after the 1000 sample break), I currently discard 128 samples (or 2%
of my data).  I also am unable to process the last 64 data points (at
the end of the sampling interval) because of the sync issue between A
and B.

Having said all that, I figured I could discard fewer samples by pre-
loading the filter(s) prior to start of the sampling interval.  For
example, at the start of a sampling interval I could load the filters
with the average value obtained during the previous sampling
interval.  Clearly, there will be some error in the first 64 + 64
filter outputs, but my question would be 'how much error'?
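One way to get a feel for "how much error" is to simulate it. Below is a Python sketch with a made-up signal, a moving-average stand-in for the real filters, and the assumption that the underlying signal continues smoothly through the break (all illustration values, not numbers from the actual system):

```python
import math

N = 64
h = [1.0 / N] * N                    # 64-tap moving average as a stand-in LPF

def signal(n):                       # hypothetical underlying signal: DC + ripple
    return 3.0 + 0.1 * math.sin(0.2 * n)

block = [signal(n) for n in range(300)]                  # samples after the break
true_state = [signal(n) for n in range(-1, -N - 1, -1)]  # unmeasured pre-break samples, newest first
mean_seed = sum(true_state) / N                          # "previous block's average"

def run(x, state):
    """Stream x through the FIR starting from a pre-filled delay line."""
    out = []
    for s in x:
        state = [s] + state[:-1]
        out.append(sum(c * v for c, v in zip(h, state)))
    return out

ideal = run(block, true_state)       # as if sampling had never paused
cold  = run(block, [0.0] * N)        # zero-initialized delay line
warm  = run(block, [mean_seed] * N)  # seeded with the previous average

err_cold = max(abs(a - b) for a, b in zip(cold[:N], ideal[:N]))
err_warm = max(abs(a - b) for a, b in zip(warm[:N], ideal[:N]))
assert err_warm < err_cold                 # seeding removes the large DC transient
assert cold[N:] == warm[N:] == ideal[N:]   # after N samples the seed is flushed
```

With this setup the zero-initialized run has a startup error on the order of the DC level itself, while the seeded run is off only by however far the signal wanders from the seed value during the first N samples.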

I believe my colleague was stating that if I take this approach, I
could 'safely' discard the first 32 samples (instead of 64) and use
the next 32 samples as valid data, even though there would be some
error in the result.

That prompted me to look through my textbooks and search online (where
I found no information), and subsequently led me to post here for some
guidance.

HTH,
Dave

```
```
> > The discontinuity in sampling worries me.

> Why? What's your purpose for filtering?

I have outlined this in another response.

> If you convolve a signal x of length K with a signal h of length N,
> you get a new signal y of length K + N -1. Why does that worry you?

Worry is perhaps a bit strong.  I want to avoid discarding samples
when sampling starts.  If sampling was continuous this wouldn't be a
problem, but I sample a block of data, pause, and repeat.  So I have
to discard samples at the beginning of every block.

> Linear phase FIRs have a constant, frequency independent delay of
> (N-1)/2 samples. So the part in y that "corresponds" to the unfiltered
> signal x is
>
> y( (N-1)/2 : (N-1)/2 + K -1 )
>

I believe this is what I'm failing to grasp.  I am using linear-phase
FIRs, but I am not sure why the delay is (N-1)/2 instead of N-1.  Given
a 101-tap filter, I'm not sure why/how the delay would be only 50
samples.

Cheers,
Dave

```
```
> This is the crux of what worries (and will disappoint) Mr. Bonnell. Of
> that number, the first and last N - 1 (assuming N to be the filter
> length) will contain transients because of incomplete overlap. The
> useful length is only K - N + 1 when samples before the first are
> assumed zero. Bonnell wants to know if some values other than zero --
> say, what's left in the buffer from last time -- improve matters.

Yes, that's pretty much what I'm after  :)

> In general, no. These "edge effects" are inevitable, in 1D, 2D, digital,
> analog, and any other instance that can be imagined. In some cases it is
> possible to find heuristics that improve the cosmetics. Usually, if the
> unknown data can be predicted well enough, there's no need to have
> measured anything.
>

Arggh.  I guess there's a reason I couldn't find much literature on
the subject!

Cheers,
Dave

```
```
David Bonnell wrote:
>>> in case your impulse response is symmetric (which is required for linear phase FIR filters), you can spare half of the multiplications
>>> This has nothing to do with the delay, however.
>
> No arguments here.
>
>> Unfortunately, you forgot to mention the actual problem.
>
> You asked for it  :)  I will, however, try to be brief.
>
> Not so much of a problem as a question.  I have two 64 tap filters (A,
> B)in cascade.  The output of A (an anti-aliasing filter) feeds into
> the B (LPF intended to determine DC offset).  I cannot reduce actual
> delay through the system (i.e 64 + 64 samples).  However, I am
> wondering if there are methods to compensate for the delay by

You are already off on the wrong foot. An anti-alias filter generally is
a low-pass filter, and it must come _before_ sampling takes place. It
will therefore necessarily be an analog filter.

> I want to AC-couple the signal by subtracting the output of B (DC
> value) from the output of A.  Because B is delayed by 64 samples, I
> need to buffer the output of A to sync the two signals.  Conceptually
> pretty simple.

A high-pass filter is even simpler. Subtracting the output of a low-pass
filter from the original signal is a roundabout way to high-pass filter.
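For a linear-phase low-pass h of odd length N, delaying the input by (N-1)/2 samples and subtracting the low-pass output is the same as filtering once with the complementary high-pass g[k] = delta[k - (N-1)/2] - h[k]. A Python sketch with toy coefficients (not from the thread):

```python
def convolve(x, h):
    """Full convolution: output length is len(x) + len(h) - 1."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(x)):
        for k in range(len(h)):
            y[n + k] += x[n] * h[k]
    return y

h = [0.25, 0.5, 0.25]                  # toy linear-phase LPF, N = 3
N = len(h)
d = (N - 1) // 2                       # group delay of h
g = [(1.0 if k == d else 0.0) - h[k] for k in range(N)]  # complementary HPF

x = [1.0, 2.0, 0.5, -1.0, 3.0]
lp = convolve(x, h)

def xs(i):                             # zero-padded input access
    return x[i] if 0 <= i < len(x) else 0.0

two_step = [xs(n - d) - lp[n] for n in range(len(lp))]   # delay and subtract
one_step = convolve(x, g)              # one high-pass filter, same result
assert all(abs(a - b) < 1e-12 for a, b in zip(one_step, two_step))
```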

> In this system, roughly 6000 samples are taken, followed by a 1000
> sample 'break'.  This cycle repeats.  When resuming sampling (i.e.
> after the 1000 sample break), I currently discard 128 samples (or 2%
> of my data).  I also am unable to process the last 64 data points (at
> the end of the sampling interval) because of the sync issue between A
> and B.
>
> Having said all that, I figured I could discard fewer samples by pre-
> loading the filter(s) prior to start of the sampling interval.  For
> example, at the start of a sampling interval I could load the filters
> with the average value obtained during the previous sampling
> interval.  Clearly, there will be some error in the first 64 + 64
> filter outputs, but my question would be 'how much error'?

The better you do at matching fiction to reality, the nicer your results
will look. Whatever
you do, justifying the validity will be hard.

> I believe my colleague was stating that if I take this approach, I
> could 'safely' discard the first 32 samples (instead of 64) and use
> the next 32 samples as valid data, even though there would be some
> error in the result.

In some weather, I can safely jump off my boat without a life jacket.
Weather can change.

> That prompted me to look through my textbooks and search online (where
> I found no information), and subsequently led me to post here for some
> guidance.

You asked for it   :)   I will, however, try to be brief.

You should low-pass your data before it's sampled. That filter can run
continuously, so whatever delay it imposes will not contribute to end
effects. The function of an anti-alias filter is removing those
frequencies that would alias if they were sampled. It is not possible to
separate aliases created by sampling after sampling takes place. Aliases
occupy the same frequency range as useful data. That's why they're
called aliases.

You will likely remove DC more efficiently with an IIR high-pass filter.
Google this group's archives for "DC Blocker" and "robert
bristow-johnson". Although not linear phase, the delay in the passband
will be small and nearly constant.
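The usual form of that DC blocker is y[n] = x[n] - x[n-1] + p*y[n-1], with the pole radius p just under 1 (closer to 1 gives a narrower notch at DC). A Python sketch; the value of p here is an illustrative choice, not a number from the thread:

```python
def dc_blocker(x, p=0.995):
    """y[n] = x[n] - x[n-1] + p * y[n-1]: a notch at DC, pole at z = p."""
    y, x_prev, y_prev = [], 0.0, 0.0
    for s in x:
        y_cur = s - x_prev + p * y_prev
        y.append(y_cur)
        x_prev, y_prev = s, y_cur
    return y

# A pure-DC input decays toward zero at the output:
out = dc_blocker([1.0] * 2000, p=0.99)
assert abs(out[-1]) < 1e-6
```

Being a one-pole IIR, it carries none of the (N-1)/2-sample bulk delay of a long FIR, at the cost of exact linear phase.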

Jerry
```
```"David Bonnell" <dbonnell@gmail.com> wrote in message
>> > The discontinuity in sampling worries me.
>
>> Why? What's your purpose for filtering?
>
> I have outlined this in another response.
>
>> If you convolve a signal x of length K with a signal h of length N,
>> you get a new signal y of length K + N -1. Why does that worry you?
>
> Worry is perhaps a bit strong.  I want to avoid discarding samples
> when sampling starts.  If sampling was continuous this wouldn't be a
> problem, but I sample a block of data, pause, and repeat.  So I have
> to discard samples at the beginning of every block.
>
>> Linear phase FIRs have a constant, frequency independent delay of
>> (N-1)/2 samples. So the part in y that "corresponds" to the unfiltered
>> signal x is
>>
>> y( (N-1)/2 : (N-1)/2 + K -1 )
>>
>
> I believe this is what I'm failing to grasp.  I am using linear-phase
> FIRs, but I am not sure why the delay is (N-1)/2 instead of N-1.  Given
> a 101-tap filter, I'm not sure why/how the delay would be only 50
> samples.
>
> Cheers,
> Dave

Dave,

"seeding" the filter simply means you fill it with data that proposes to
represent what *may have* been in it.  It's like guessing what your real
input data looked like *before* it started.  That seems risky.

It appears the trade is whether you just need to get something out of
the filter or whether you need to get something sorta accurate out of the
filter.  If the purpose of the filter is to generate some kind of noise then
the first objective may be yours.  If the purpose of the filter is to do
data analysis with some accuracy then it's a dangerous thought, I should
think.

That said, there can be special cases.
What if the real data has a large constant component?  Then, it may actually
be helpful to load the filter up with that constant value so that higher
frequency details might be seen "sooner".  This would be highly data
dependent and application dependent of course.

Fred

```
```
David Bonnell wrote:
>>> The discontinuity

...

>> Linear phase FIRs have a constant, frequency independent delay of
>> (N-1)/2 samples. So the part in y that "corresponds" to the unfiltered
>> signal x is
>>
>> y( (N-1)/2 : (N-1)/2 + K -1 )
>>
>
> I believe this is what I'm failing to grasp.  I am using linear-phase
> FIRs, but I am not sure why the delay is (N-1)/2 instead of N-1.  Given
> a 101-tap filter, I'm not sure why/how the delay would be only 50
> samples.

A sample affects the output as soon as it is taken. The effect of equal
samples increases with their number. Analyze the step response of a
simple filter and you will see that the center of the output step occurs
when the first sample reaches the middle tap.

Consider a filter of length 3 and take as output the sum of the taps*.
After a long string of zeros, put in ones: 0 0 ... 0 1 1 1 1 .... What
is the output?

In  ... 0 1 1 1 1 1 1
Out ... 0 1 2 3 3 3 3

The middle of the output rise occurs one sample after the first 1 hits
the filter. (3 - 1)/2 = 1. Try it with other numbers. Even lengths will
have non-integer delays.
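The same experiment, sketched in Python (sum-of-taps boxcar, no scaling):

```python
def boxcar(x, N=3):
    """Output the sum of the last N input samples (no scaling)."""
    out = []
    state = [0.0] * N
    for s in x:
        state = [s] + state[:-1]
        out.append(sum(state))
    return out

x = [0, 0, 0, 1, 1, 1, 1, 1]         # step arrives at index 3
y = boxcar(x)                        # [0, 0, 0, 1, 2, 3, 3, 3]
# The input crosses half-amplitude between indices 2 and 3; the output
# crosses half its final value (1.5) between indices 3 and 4 --
# a delay of (3 - 1)/2 = 1 sample.
```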

Jerry
____________________________________
* This is called a boxcar averager.
```