Does the reliability of band-pass filter output for a finite-time signal vary with frequency?
For example: say I have a 60-second sample of experimental data. I apply a band-pass filter to yield the 0.2-0.25 Hz components and then a similar one to yield the 5-5.5 Hz components. Is one band "more reliable" in any way? Specifically, is the higher-frequency band more reliable because of the higher number of cycles in the dataset?
You're going to need to define or clarify what you mean by "reliable". Filter implementations are generally deterministic in that you'll know ahead of time exactly how it will treat each frequency. As long as the filter coefficients don't change, that response won't change, so they're 100% reliable that way.
If you mean something different, you'll need to clarify.
When one uses the term "reliability" with regard to the output of a bandpass filter, one is not typically referring to the constancy with which a fixed mathematical procedure will produce a specific output in response to a specific input. One is typically referring to how *reliably* the filter's output indicates the nature of the component of the input that spans the filter's stated frequency range.

Given that this user stated a 60-second span of input samples and a frequency bin of 0.20 - 0.25 Hz, one's attention should quickly be drawn to the fact that the entire input sequence contains only 12 cycles of the lowest-frequency component that will make it past the BPF. This is not a concern *unless* one is feeding the BPF output to an adaptive filter of some sort. An adaptive filter typically needs a large number of cycles of the lowest frequency component of interest in its input sequence to adapt properly, and more cycles are always better. Granted, I am particularly sensitive to this point, since my current work involves adaptive IIR filters.
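To illustrate the adaptation point, here is a minimal LMS sketch, not the poster's actual filter: the 100 Hz sample rate, the two-tap predictor, and the step size are all assumptions chosen for illustration.

```python
import numpy as np

# LMS sketch: a 2-tap linear predictor adapting to a slow 0.2 Hz
# sinusoid.  A 60 s record at an assumed fs = 100 Hz contains only
# 12 cycles of this component, which limits how far adaptation gets.
fs = 100.0
t = np.arange(0, 60, 1 / fs)
x = np.sin(2 * np.pi * 0.2 * t)

w = np.zeros(2)                  # adaptive tap weights
mu = 0.05                        # LMS step size (assumed)
err = np.empty(len(t) - 2)
for n in range(2, len(t)):
    u = x[n - 2:n][::-1]         # two most recent input samples
    e = x[n] - w @ u             # one-step prediction error
    w += mu * e * u              # LMS weight update
    err[n - 2] = e

# Prediction error shrinks as the filter adapts over the record.
print(np.abs(err[:500]).max(), np.abs(err[-500:]).max())
```

The slower the lowest component of interest, the more samples the weight updates need before the error settles, which is exactly why a short record is a concern for adaptive filters.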
Slartibartfast is correct. Your use of the word "reliable" is puzzling. Does your word "reliable" mean "correct", or maybe "valid"?
I'll stick my neck out here and assume your "reliable" means "valid" in the following sense: For a digital filter's output sequence to be valid (meaningful) the time-duration of the filter's input sequence must be, say, five or ten times greater than the impulse response duration of the filter. (You need an input sequence that is many times longer than the transient-response of the filter.)
I once worked with a guy who was applying a 50-sample input sequence to a narrowband lowpass FIR filter whose impulse response duration was 500 samples! I failed to convince this guy that his filter's output sequence was not "valid." (His filter output sequence never reached a "steady state"!)
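A quick numerical sketch of that anecdote, using NumPy; the windowed-sinc design below is an assumed stand-in for the actual filter:

```python
import numpy as np

# 500-tap narrowband lowpass FIR: windowed sinc, cutoff ~0.01 cycles/sample
n = np.arange(500)
h = np.sinc(0.01 * (n - 249.5)) * np.hamming(500)
h /= h.sum()                     # normalize for unity gain at DC

x = np.ones(50)                  # only 50 input samples (a DC "signal")
y = np.convolve(h, x)[:50]       # filter output over the input's duration

# For a DC input the steady-state output is h.sum() == 1.0, but after
# 50 samples the output is still deep inside the filter's transient.
print(y[-1])
```

With the input barely a tenth of the impulse-response length, the output never climbs out of the start-up transient, which is the sense in which it is not "valid."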
Thank you for taking a stab at my novice language. "Valid" is what I'm looking for, as in not affected by noise, aliasing, or other artifacts.
What about the case of using an IIR filter such as a Butterworth?
If it is not a decimating filter then it won't cause aliasing. Generally "noise" comes from external sources and not the filter, although if one isn't careful in the design process quantization noise may be an issue. This is under the control of the designer, though. Basically, a filter doesn't distinguish the source of energy, and frequency dependence of the output is described by the frequency response of the filter.
Likewise, a filter doesn't distinguish between a continuously-running sequence and a sequence with a time window applied (noting the subtleties of IIR filters, though). In other words, the behavior of a short time sequence might be better explained by the effects of the applied time window than by the response of the filter.
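That point can be sketched numerically (the moving-average filter and 0.2 Hz test tone here are assumptions): excerpting 60 s from a filtered long record and filtering the 60 s excerpt directly agree everywhere except within one filter length of the window edges.

```python
import numpy as np

fs = 100.0
t = np.arange(0, 120, 1 / fs)
x = np.sin(2 * np.pi * 0.2 * t)          # long-running 0.2 Hz signal

h = np.ones(200) / 200                   # 2 s moving-average filter

# (a) filter the full record, then excerpt the middle 60 s
a = np.convolve(x, h, mode="same")[3000:9000]
# (b) excerpt the middle 60 s first, then filter the short record
b = np.convolve(x[3000:9000], h, mode="same")

d = np.abs(a - b)
# The difference is a pure edge effect of the applied time window:
# large near the ends, negligible in the interior.
print(d[:200].max(), d[3000])
```

Away from the edges the two outputs match to machine precision, so the short record's oddities live entirely in the window transients, not in the filter.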
As a first approximation, they will be the same, in spite of the different number of cycles in the dataset.
However, there are a bunch of second-order effects that could arise in any one particular application. For instance, 1/f noise in the signal will contaminate the lower frequency bins more than the higher ones. Conversely, aliasing in the data acquisition will likely corrupt the higher frequency bins more. Also, the bandpass filters, depending on which variety you use, may have unexpected differences at low, middle, and high frequencies.
You probably want to design the system, run computer generated white noise through it, and see if the output is flat and well behaved. If it isn't, you will need to dig in and see what is going on.
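A sketch of that sanity check, using the two bands from the original question; the FFT brick-wall bandpass and the 100 Hz sample rate are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, T = 100.0, 60.0                  # assumed sample rate; 60 s record
x = rng.standard_normal(int(fs * T))

def bandpass(x, fs, f_lo, f_hi):
    """Brick-wall bandpass via FFT: zero all bins outside [f_lo, f_hi]."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    X[(f < f_lo) | (f > f_hi)] = 0
    return np.fft.irfft(X, len(x))

low  = bandpass(x, fs, 0.20, 0.25)   # 0.05 Hz wide
high = bandpass(x, fs, 5.0, 5.5)     # 0.5 Hz wide

# For white noise, output power scales with bandwidth, so the wider
# band should carry roughly 10x the variance of the narrow one.
print(np.var(high) / np.var(low))
```

Note how noisy this ratio is for the narrow band: at 1/60 Hz resolution the 0.05 Hz band spans only a handful of frequency bins, which is itself a symptom of the short record.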
Yes, the term "reliable" is appropriate in this case, for the following reason: since your sample sequence spans only 60 seconds, a signal component at 0.20 Hz (the lower edge of your 0.20 - 0.25 Hz band) completes only 12 cycles within it.

If you are using an FIR filter, the output will be a more reliable indication of the 0.20 - 0.25 Hz components of your input when the time span of the tap weights is a small fraction of the span of your input samples. If, on the other hand, you have more tap weights in your filter than you have actual input samples, then the filter output will indicate the 0.20 - 0.25 Hz components of a signal which consists of your actual signal followed by 0's.
If you apply an N-sample input sequence to a tapped-delay line FIR filter having 3N coefficients, your output sequence will have N+3N-1 = 4N-1 nonzero-valued samples. And those output samples will definitely NOT be a filtered version of the N-sample input sequence.
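That tally is easy to confirm with NumPy's full convolution (N = 50 is an arbitrary choice):

```python
import numpy as np

N = 50
x = np.ones(N)                   # N-sample input sequence
h = np.ones(3 * N) / (3 * N)     # FIR filter with 3N taps, unity DC gain

y = np.convolve(x, h)            # full convolution
print(len(y))                    # N + 3N - 1 = 4N - 1 = 199

# The output never reaches the filter's DC gain of 1.0: at most N of
# the 3N taps ever overlap the input, so y peaks at N/(3N) = 1/3.
print(y.max())
```

The peak of 1/3 instead of 1.0 is the concrete sense in which the output is not a filtered version of the input: the filter is effectively processing the input padded with zeros.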