
IIR filter Implementation

Started by Par May 3, 2005
Par wrote:
>> You may learn whether that frequency was strong in the interval that
>> the samples represent. You won't learn where within the interval, and
>> you won't know about dominance unless you know the strengths of the
>> other frequencies and can compare.
>>
>> Jerry
>
> I don't want to compare the strength of that freq. with the other
> freqs. I am not concerned about the other freqs. at all. I just want
> to see if there are any particular portions within the interval of the
> samples where the freq. is stronger than at other portions within that
> interval.
If your interval is the time to take 100 samples, then you can't localize within it. Your indication of strength will be an average over the entire 100-sample interval. When you wrote "dominant", what meaning did you have in mind?
> As I do not have any information about the signal outside that
> interval, I understand I am very restricted as far as the freqs. I can
> identify, particularly the low freqs. that spread out far beyond the
> range of the samples to complete a full cycle. However, this is the
> problem with any already-digitized sequence. If we do not know at
> exactly what freq. the original signal was sampled, then the best we
> can do is to assume a normalized sampling freq. to take a DFT, I think.
Using a normalized frequency is what you do in any case. The sampling
frequency is merely a scale factor.

Your project is similar to some of the pseudo-science that market
analysts dupe themselves into believing. The prime (and saving)
difference is that your assumptions are right out in the open where they
can be examined. (Theirs are usually buried under so much mathematical
mumbo-jumbo that they aren't found.) Whether that examination will lead
to clarifying or debunking is yet to be known.

Jerry
--
Engineering is the art of making what you want from things you can get.
> If your interval is the time to take 100 samples, then you can't
> localize within it. Your indication of strength will be an average
> over the entire 100-sample interval. When you wrote "dominant", what
> meaning did you have in mind?
>
> Jerry
Why do you think I cannot localize within the interval of samples?
Forgetting about filters for a moment, localizing in the time domain is
what the STFT is about, right?

In the STFT, we apply a sliding window and take the DFT of each section.
If I want to monitor the evolution of a single freq. using the STFT, say
N/3, I can simply compute the DFT at N/3, slide the window further,
compute the DFT at N/3 again, and so forth. Then, I will get an idea of
how N/3 evolves along the time domain.

This would be equivalent to filtering out the single frequency using a
digital filter, followed by decimation. The decimation factor will
depend on how many samples I slide the window through in the STFT. So,
after filtering, I can compute the square of the time-domain signal to
see the strength of the freq. along the time axis.

---
Par
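
For concreteness, here is a minimal sketch of the scheme described
above, in Python with NumPy: compute one DFT bin per window position and
track its squared magnitude over time. The test signal, window length,
hop size, and function name are illustrative choices, not anything
specified in the thread.

import numpy as np

def track_bin(x, win_len, hop, k):
    """Squared magnitude of DFT bin k for each position of a
    length-win_len window sliding through x in steps of hop."""
    n = np.arange(win_len)
    probe = np.exp(-2j * np.pi * k * n / win_len)  # single-bin DFT kernel
    starts = range(0, len(x) - win_len + 1, hop)
    return np.array([abs(np.dot(x[s:s + win_len], probe)) ** 2
                     for s in starts])

# Example: a tone at bin k = win_len/3 that switches on halfway through.
L = 1000
win_len, k = 30, 10
x = np.zeros(L)
t = np.arange(L // 2, L)
x[L // 2:] = np.cos(2 * np.pi * (k / win_len) * t)
power = track_bin(x, win_len, hop=10, k=k)
print(power.round(1))  # near zero early, then jumps when the tone starts
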
Par wrote:
>> If your interval is the time to take 100 samples, then you can't
>> localize within it. Your indication of strength will be an average
>> over the entire 100-sample interval. When you wrote "dominant", what
>> meaning did you have in mind?
>>
>> Jerry
>
> Why do you think I cannot localize within the interval of samples?
> Forgetting about filters for a moment, localizing in the time domain
> is what the STFT is about, right?
>
> In the STFT, we apply a sliding window and take the DFT of each
> section. If I want to monitor the evolution of a single freq. using
> the STFT, say N/3, I can simply compute the DFT at N/3, slide the
> window further, compute the DFT at N/3 again, and so forth. Then, I
> will get an idea of how N/3 evolves along the time domain.
>
> This would be equivalent to filtering out the single frequency using
> a digital filter, followed by decimation. The decimation factor will
> depend on how many samples I slide the window through in the STFT.
> So, after filtering, I can compute the square of the time-domain
> signal to see the strength of the freq. along the time axis.
A sliding DFT will come closest to providing what you want (recall that
I recommended it). A sample doesn't represent a frequency, but rather an
amplitude. There is also a sliding Goertzel, by the way.

You will not get temporal localization within 100 samples if you use all
100 samples in the same DFT. If you use, say, 10 at a time, then slide
along until they're all exhausted, you will have done three things: only
the middle 80 results will be completely valid; you will have made the
temporal resolution 10 times better; and you will have made the
frequency resolution 10 times coarser. For a given number of samples,
the product of temporal and frequency resolutions is fixed.

Jerry
--
Engineering is the art of making what you want from things you can get.
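
The recursive form mentioned here can be sketched as follows; this is a
minimal illustration of the standard sliding-DFT update, with all names
and parameters chosen for the example rather than taken from the thread.
It updates one bin in O(1) work per incoming sample instead of
recomputing the whole window.

import numpy as np

def sliding_dft_bin(x, N, k):
    """Bin k of the length-N DFT of each length-N window of x, updated
    recursively, one output per sample once the first window has filled.
    In long runs or fixed point the recursion can drift numerically; a
    damping factor slightly below 1 on the twiddle is the usual remedy."""
    w = np.exp(2j * np.pi * k / N)   # per-sample twiddle factor
    Xk = 0.0 + 0.0j
    out = []
    for n, xn in enumerate(x):
        oldest = x[n - N] if n >= N else 0.0
        Xk = w * (Xk + xn - oldest)  # slide the window forward one sample
        if n >= N - 1:
            out.append(Xk)
    return np.array(out)
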
When your signal only has two frequency components, then in principle
you only need 6 data points to extract them, so comparing different
zero-paddings is kind of pointless...

In general, if you know a priori that your signal consists of some
small number << N of sinusoids (possibly exponentially decaying), then
there are much better ways to extract the frequencies/amplitudes than
a DFT with naive peak identification. (Such knowledge arises
naturally in many physical systems where you know that a signal stems
from a bounded number of resonant modes.) See e.g.
http://ab-initio.mit.edu/harminv for one such method.
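
As a concrete (if much simpler) relative of that idea, here is a
bare-bones Prony's-method sketch in Python. It assumes a clean sum of p
complex exponentials, does no noise handling or model-order selection,
and is meant only to illustrate model-based extraction, not the
algorithm harminv itself uses.

import numpy as np

def prony_freqs(x, p):
    """Prony's method: model x[n] as a sum of p complex exponentials and
    recover their frequencies (rad/sample) and decay rates. Requires
    len(x) >= 2*p; assumes negligible noise."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    # Linear prediction: x[n] = -(a1*x[n-1] + ... + ap*x[n-p])
    A = np.column_stack([x[p - k: N - k] for k in range(1, p + 1)])
    a, *_ = np.linalg.lstsq(A, -x[p:], rcond=None)
    roots = np.roots(np.concatenate(([1.0 + 0j], a)))
    return np.angle(roots), np.log(np.abs(roots))

# Two real sinusoids = 4 complex exponentials, so p = 4:
n = np.arange(12)
x = np.cos(2 * np.pi * 0.21 * n) + 0.5 * np.cos(2 * np.pi * 0.34 * n)
freqs, _ = prony_freqs(x, p=4)
print(np.sort(np.abs(freqs)) / (2 * np.pi))  # ~ [0.21, 0.21, 0.34, 0.34]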

(Of course, it's not clear whether this helps the original poster,
since s/he didn't specify much about the nature of the signal to be
analyzed.)

Regards,
Steven G. Johnson

stev...@alum.mit.edu wrote:
> When your signal only has two frequency components, then in principle
> you only need 6 data points to extract them, so comparing different
> zero-paddings is kind of pointless...
>
> In general, if you know a priori that your signal consists of some
> small number << N of sinusoids (possibly exponentially decaying), then
> there are much better ways to extract the frequencies/amplitudes than
> a DFT with naive peak identification. (Such knowledge arises naturally
> in many physical systems where you know that a signal stems from a
> bounded number of resonant modes.)
Agreed. I'd just want to add that the word "know" really means "know" in
this particular context. "Has reason to believe" is not good enough, nor
is "assume". If the properties of the signal do not completely comply
with the requirements of the signal processing algorithm, the algorithm
fails, and the user might not be able to detect the failure.

Consequently, model-based frequency estimators are most useful in
interactive applications confined to the lab, where a skilled user can
inspect the data before he decides whether his knowledge of the data is
sufficiently good to use the model-based method.

Rune
stev...@alum.mit.edu wrote:

> When your signal only has two frequency components, then in principle
> you only need 6 data points to extract them, so comparing different
> zero-paddings is kind of pointless...
Except that, with _any_ noise on the signal, the more points you have,
the better the expected variance of the estimator of your frequencies.
Six points just won't cut it when the SNR is below -20 dB.

Ciao,

Peter K.
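
This point is easy to check numerically. A rough Monte-Carlo sketch
follows; the tone frequency, noise level, padding factor, and trial
count are all arbitrary choices for illustration. The spread of a
simple FFT-peak frequency estimate shrinks as the record grows.

import numpy as np

rng = np.random.default_rng(0)
f0 = 0.123                    # true tone frequency, cycles/sample
for N in (16, 64, 256, 1024):
    est = []
    for _ in range(500):
        n = np.arange(N)
        x = np.cos(2 * np.pi * f0 * n) + 2.0 * rng.standard_normal(N)
        X = np.fft.rfft(x, 8 * N)       # zero-pad for a finer grid
        est.append(np.argmax(np.abs(X)) / (8 * N))
    print(N, round(np.std(est), 5))     # spread falls as N grows
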
stevenj@alum.mit.edu wrote:
> When your signal only has two frequency components, then in principle
> you only need 6 data points to extract them, so comparing different
> zero-paddings is kind of pointless...
>
> In general, if you know a priori that your signal consists of some
> small number << N of sinusoids (possibly exponentially decaying), then
> there are much better ways to extract the frequencies/amplitudes than
> a DFT with naive peak identification.
I agree. However, the other poster was claiming that zero-extending a
signal allowed the DFT to separate otherwise inseparable peaks.

Erik
--
+-----------------------------------------------------------+
Erik de Castro Lopo    nospam@mega-nerd.com (Yes it's valid)
+-----------------------------------------------------------+
"There are two kinds of large software systems: those that evolved from
small systems and those that don't work."
-- Seen on slashdot.org, then quoted by amk
Erik wrote:

> stev...@alum.mit.edu wrote:
>> In general, if you know a priori that your signal consists of some
>> small number << N of sinusoids (possibly exponentially decaying),
>> then there are much better ways to extract the frequencies/
>> amplitudes than a DFT with naive peak identification.
>
> I agree. However, the other poster was claiming that zero-extending a
> signal allowed the DFT to separate otherwise inseparable peaks.
Erik, if by "other poster" you mean my posts from May 4, go back and
read them again. This is not what I claimed, but I'm not going to repeat
myself a third time.

Regards,
Andor
Ronald H. Nicholson Jr. wrote:

> In article <4278BE7F.E498F49B@mega-nerd.com>,
> Erik de Castro Lopo <nospam@mega-nerd.com> wrote:
>
>> The resolution of the FFT is limited by the length of the data. Zero
>> padding the data to extend its length does not provide more
>> resolution.
>
> Not true. If the noise level is low enough then the new bins provided
> by zero padding the input to a larger FFT will increase the
> measurement accuracy of single frequencies, and perhaps even pairwise
> resolution of multiple frequencies. I, and at least one other poster
> in comp.dsp, have tested this on simulated data in a noiseless
> background; and it works. Realistically, the limit to resolution is
> eventually bounded by the quantization noise of your data
> representation or acquisition.
>
> The reason this works is that zero padding is a computationally
> efficient way of sinc-interpolating the frequency data; and sinc
> interpolation (with a wide enough window) is more correct than linear
> or quadratic interpolation.
It seems to me you are recovering information that would otherwise have been lost due to an insufficient number of bits in doing the FFT. Otherwise, there is no more information.
> The sinc function is "wrinkled" enough so that after interpolation
> using a wide enough window, you might find more local maxima/minima
> than in just the FFT result bins alone. But the new "wrinkles" are
> small enough that they are easily overwhelmed by any noise.
There is a whole science of non-linear deconvolution. My favorite book
on that subject is by Jansson. As an example, applying deconvolution to
absorption or emission spectra, which can never be negative, the
constraint of non-negativity implies a non-linear process.

The FFT, and the DFT in general, has periodic boundary conditions, which
allow only discrete frequency solutions. If you use the FFT as a
substitute for an infinite, but discrete, time transform, it seems to me
that you are doing a non-linear deconvolution problem. There are many
equally good (mathematically) solutions to the problem, and you are
selecting among them based on physical constraints. Sinc interpolation
is based on the solution when the input is non-zero over a finite range
of points.

Consider a CD as a set of samples of an audio signal in a concert hall
over 60 minutes. You have no information about the audio signal in that
hall before and after the concert. There is no reason to assume that it
is zero. It is only by making that assumption that you can do sinc
interpolation.

-- glen

(P.S. Consider a CD recorded with a perfect, extremely sharp
anti-aliasing filter and with much more than 16-bit samples. If you are
careful with your transforms, you should be able to extract some of the
signal from the concert before and after the one recorded on the CD.)
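
Both halves of this disagreement are easy to demonstrate numerically.
In the sketch below (all numbers arbitrary), zero-padding only samples
the same underlying spectrum on a denser grid: it steadily refines the
peak-location estimate for one isolated off-bin tone, yet never splits
two tones spaced closer than about 1/N cycles/sample.

import numpy as np

N = 64
n = np.arange(N)

# (1) One off-bin tone: padding refines the peak-location estimate.
single = np.cos(2 * np.pi * 0.2047 * n)
for pad in (1, 4, 16):
    X = np.abs(np.fft.rfft(single, pad * N))
    print(pad, np.argmax(X) / (pad * N))    # approaches 0.2047

# (2) Two tones 0.01 apart (< 1/N ~ 0.0156): they stay merged no matter
# how much padding is applied, because padding adds no new information.
pair = np.cos(2 * np.pi * 0.200 * n) + np.cos(2 * np.pi * 0.210 * n)
for pad in (1, 4, 16):
    X = np.abs(np.fft.rfft(pair, pad * N))
    interior = X[1:-1]
    peaks = np.sum((interior > X[:-2]) & (interior > X[2:])
                   & (interior > X.max() / 2))
    print(pad, peaks)                       # always one merged peak
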
In article <2JWdnXURT7pyX-LfRVn-3w@comcast.com>,
glen herrmannsfeldt  <gah@ugcs.caltech.edu> wrote:
> The FFT, and the DFT in general, has periodic boundary conditions,
> which allow only discrete frequency solutions.
An FFT is a computationally efficient transform for mapping N points
into another set of N points in a different orthogonal basis. No
assumption of periodicity is required to transform a complex vector from
one basis to another. The assumption of periodicity is only needed for
certain interpretations of the results. And, in fact, in the real world,
the probability that some unknown sinusoidal signal is exactly
synchronized to an arbitrary sampling window, thus providing the
periodic boundary condition, is small, but possibly much larger than the
probability of some sinc-shaped distribution of exact FFT bin
frequencies.

By making some a priori assumptions about the data (e.g. some finite
number of very narrow-band signals above a certain noise floor) one can
interpret the results of an FFT to produce more useful results (given
that the a priori assumptions are likely met) than by assuming that only
the discrete FFT bin frequencies are present.

A sinusoidal tone burst between FFT bins will nevertheless leave a clear
signature in the FFT spectrum. One is often able to calculate bounds, or
statistically measure the likelihood using other means, that this same
between-the-bins FFT spectrum signature could be produced by noise in
the system under consideration. One can then make a decision based on
comparative probabilities as to which interpretation to prefer.

And my earlier comments about being able to more precisely measure
frequency between bins were based on the a priori assumption of
"sufficiently" low noise compared to the signal(s) of interest.

IMHO. YMMV.

--
Ron Nicholson    rhn AT nicholson DOT com    http://www.nicholson.com/rhn/
#include <canonical.disclaimer>   // only my own opinions, etc.
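
As a small illustration of that between-the-bins signature (the signal
and bin choices here are arbitrary): an on-bin tone excites a single
bin, while a tone halfway between bins leaks into its neighbours in the
characteristic skirt described above.

import numpy as np

N = 32
n = np.arange(N)
on  = np.cos(2 * np.pi * (8.0 / N) * n)   # exactly on bin 8
off = np.cos(2 * np.pi * (8.5 / N) * n)   # halfway between bins 8 and 9
for name, x in (("on-bin ", on), ("off-bin", off)):
    print(name, np.abs(np.fft.rfft(x))[6:12].round(2))
# on-bin : one spike at bin 8; off-bin: energy spread across nearby bins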