I have a rather preliminary question about filter design. Am I right in saying that an implicit assumption in filter design is that the sampling is uniform? Just like in the FFT, where data has to be uniformly sampled. I have data which is non-uniformly sampled, and I have to develop a second-order HPF that can work on the data to reject low-frequency components. Is there a specific way to proceed with this?

Thank you
Dinkar
Designing Filters with non-uniform sampling
Started by ●July 27, 2004
Reply by ●July 27, 2004
"dnb" <dbhat2@yahoo.com> wrote in message news:fa0deac8.0407271249.36ef54f9@posting.google.com...> [...] > I have data which is non-uniformly sampled, and I have to develop a > second-order HPF that can work on the data to reject low-freq. > components. Is there a specific way to proceed with this?The most practical approach would probably be to resample the data uniformly before filtering. If you really need to, you can regenerate the samples at non-uniform abscissas when you're done. This isolates the filter design from the major decisions you need to make about how to reconstruct the original signal from the non-uniform samples. -- Matt
Reply by ●July 28, 2004
"Matt Timmermans" <mt0000@sympatico.nospam-remove.ca> wrote in message news:<mpENc.13316$BU4.690073@news20.bellglobal.com>...> "dnb" <dbhat2@yahoo.com> wrote in message > news:fa0deac8.0407271249.36ef54f9@posting.google.com... > > [...] > > I have data which is non-uniformly sampled, and I have to develop a > > second-order HPF that can work on the data to reject low-freq. > > components. Is there a specific way to proceed with this? > > The most practical approach would probably be to resample the data uniformly > before filtering. If you really need to, you can regenerate the samples at > non-uniform abscissas when you're done. This isolates the filter design > from the major decisions you need to make about how to reconstruct the > original signal from the non-uniform samples.The data input is inherently non-uniformly sampled (I dont have access to the source). I can interpolate if that is what you mean. If so, can you kindly mention type of interpolation you would recommed? Thank you Dinkar
Reply by ●July 28, 2004
Dinkar,

I believe that the method you use should depend on what kind of data you have. For example, linear interpolation will lead to a case where the second derivative may have very high values or even be undefined. Depending on how your HPF works, this could be a big problem. I also suspect that if you shared more of the details of your project with the group, you could get a better answer. By any chance, is this related to the stock market?

Bob Sherry

"dnb" <dbhat2@yahoo.com> wrote in message news:fa0deac8.0407280822.5f8a3cf4@posting.google.com...
> "Matt Timmermans" <mt0000@sympatico.nospam-remove.ca> wrote in message
> news:<mpENc.13316$BU4.690073@news20.bellglobal.com>...
> > "dnb" <dbhat2@yahoo.com> wrote in message
> > news:fa0deac8.0407271249.36ef54f9@posting.google.com...
> > > [...]
> > > I have data which is non-uniformly sampled, and I have to develop a
> > > second-order HPF that can work on the data to reject low-freq.
> > > components. Is there a specific way to proceed with this?
> >
> > The most practical approach would probably be to resample the data uniformly
> > before filtering. If you really need to, you can regenerate the samples at
> > non-uniform abscissas when you're done. This isolates the filter design
> > from the major decisions you need to make about how to reconstruct the
> > original signal from the non-uniform samples.
>
> The data input is inherently non-uniformly sampled (I don't have access to the
> source). I can interpolate if that is what you mean. If so, can you
> kindly mention the type of interpolation you would recommend?
>
> Thank you
> Dinkar
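Bob's point about linear interpolation is easy to see numerically: a piecewise-linear signal has zero second derivative inside each segment, but the second derivative is undefined (a delta-like spike in any discrete estimate) at the knots. A tiny illustration of my own, not from the thread:

```python
def second_diff(y, dt):
    # Central second-difference estimate of the second derivative.
    return [(y[i - 1] - 2.0 * y[i] + y[i + 1]) / dt ** 2
            for i in range(1, len(y) - 1)]

dt = 0.01
t = [i * dt for i in range(-10, 11)]
y = [abs(ti) for ti in t]        # piecewise-linear, with a kink at t = 0
d = second_diff(y, dt)
# Inside each linear segment the estimate is ~0; at the kink it is 2/dt = 200,
# and it grows without bound as dt shrinks -- bad news for a filter that
# effectively differentiates its input.
```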
Reply by ●July 29, 2004
"dnb" <dbhat2@yahoo.com> wrote in message news:fa0deac8.0407280822.5f8a3cf4@posting.google.com...> The data input is inherently non-uniformly sampled (I dont have access tothe> source). I can interpolate if that is what you mean.Yes.> If so, can you > kindly mention type of interpolation you would recommed?Ah, I see. I was leaving that decision to you, because it depends entirely on your application. When we use uniform sampling in DSP, we almost always ensure or assume that anti-aliasing filters have been applied to the signal before sampling, and that the sample rate has been chosen according the Nyquist criterion. These conditions imply the correct way to reconstruct the signal (i.e., interpolate) mathematically, tells us how the samples relate to the signal in the real world, and gives physical meaning to processes like high-pass filtering. There are no such common assumptions that usually apply to applications of non-uniform sampling, so you will have to tell us about the process that generates the samples before we could determine how interpolation would best be accomplished.
Reply by ●July 29, 2004
"Matt Timmermans" <mt0000@sympatico.nospam-remove.ca> wrote in message news:<CSZNc.675$U_3.103756@news20.bellglobal.com>...> "dnb" <dbhat2@yahoo.com> wrote in message > news:fa0deac8.0407280822.5f8a3cf4@posting.google.com... > > The data input is inherently non-uniformly sampled (I dont have access to > the > > source). I can interpolate if that is what you mean. > > Yes. > > > If so, can you > > kindly mention type of interpolation you would recommed? > > Ah, I see. I was leaving that decision to you, because it depends entirely > on your application. When we use uniform sampling in DSP, we almost always > ensure or assume that anti-aliasing filters have been applied to the signal > before sampling, and that the sample rate has been chosen according the > Nyquist criterion. These conditions imply the correct way to reconstruct > the signal (i.e., interpolate) mathematically, tells us how the samples > relate to the signal in the real world, and gives physical meaning to > processes like high-pass filtering. > > There are no such common assumptions that usually apply to applications of > non-uniform sampling, so you will have to tell us about the process that > generates the samples before we could determine how interpolation would best > be accomplished.Bob and Matt, Thanks much for the reply. Bob - this is not a project for Wall Street, and I do not work in the financial world! I work in the digital video domain. Clock samples are sent in the stream by an encoder, but they are non-uniformly spaced. The samples may deviate from an ideal clock, and I am trying to measure how far the samples deviate from the ideal clock. Only the high-frequency components in the deviation from the ideal clock are a problem (hence the HPF). One constraint is that successive samples of the clock do not exceed a certain time (100ms). In other words, I can have samples as: a(t=0), a(t=50ms), a(t=75ms), a(t=99ms), a(t=120ms). The next sample cannot be a(t=225)ms. 
One other fact: one can find a good linear fit to the samples.

Dinkar.
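Given that a good linear fit exists, one way to extract the deviation signal before any filtering at all is to fit the ideal (linear) clock by least squares and subtract it. This is my own sketch under an assumed data layout (arrival times `t`, clock values `v`), not something prescribed in the thread:

```python
def linear_fit(t, v):
    """Ordinary least-squares fit v ~ a + b*t; returns (a, b)."""
    n = len(t)
    st, sv = sum(t), sum(v)
    stt = sum(ti * ti for ti in t)
    stv = sum(ti * vi for ti, vi in zip(t, v))
    b = (n * stv - st * sv) / (n * stt - st * st)
    a = (sv - b * st) / n
    return a, b

def clock_deviation(t, v):
    """Deviation of each clock sample from the best-fit ideal clock.

    The fitted slope b estimates the true clock rate; the residuals are the
    jitter signal one would then high-pass or analyze further.
    """
    a, b = linear_fit(t, v)
    return [vi - (a + b * ti) for ti, vi in zip(t, v)]
```

Note the residuals are still non-uniformly spaced in time; the fit only removes the ideal-clock trend (a crude high-pass in itself), so the interpolation question raised earlier in the thread still applies to any further filtering.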
Reply by ●July 29, 2004
Hi Dinkar,

If you really want to do filtering, then you probably won't go too far wrong by using a simple cubic spline interpolation, resampling at a higher, uniform rate, and filtering normally. (I would normally find you a good reference for the type of interpolation I mean, but I'm on dial-up access this week, and that makes the internet no fun.)

I don't know that the conclusions you would draw from the resulting data would be terribly useful, however. The trouble is that the nature of the highest-frequency components in the interpolated signal will depend a lot on the method of interpolation, and there doesn't seem to be anything known about this non-ideal clock that would give you physical justification for choosing one particular method, so there's a good chance that you'll be analyzing information you invented instead of what's really in the signal.

You would probably be better off using a low-pass filter on the interpolated data, with a cutoff frequency below 5 Hz (derived from your 100 ms constraint), and then analyzing the differences between each sample and the low-passed signal. The low-pass filter will remove most of the artifacts introduced by the choice of interpolation method.

If your final application is trying to estimate buffering requirements based on packet jitter, or something like that, the above technique matches the application -- there is some part of the system with a low-pass response to the clock, leaving you with a requirement to handle the high-frequency residual. If that is the case, then you want the low-pass filter above to match the one in the system under test.

If I'm way off about what you're trying to do, then you probably want to google for "jitter analysis".

-- Matt
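The low-pass-and-residual idea above might look roughly like this in code. It is an illustrative sketch only: the one-pole smoother stands in for whatever low-pass filter matches the system under test, and the 5 Hz cutoff is just the value derived from the 100 ms constraint in the thread.

```python
import math

def one_pole_lpf(x, fs, fc):
    """Simple one-pole low-pass; coefficient from the usual RC analogy.

    A stand-in for the system's actual low-pass response -- in practice this
    filter should be chosen to match the system under test.
    """
    a = 1.0 - math.exp(-2.0 * math.pi * fc / fs)
    y, state = [], x[0]
    for xn in x:
        state += a * (xn - state)
        y.append(state)
    return y

def hf_residual(x, fs, fc=5.0):
    """High-frequency residual: each sample minus its low-passed version.

    Applied to uniformly resampled clock-deviation data, this isolates the
    jitter components the downstream system cannot absorb.
    """
    low = one_pole_lpf(x, fs, fc)
    return [xi - li for xi, li in zip(x, low)]
```

Because interpolation artifacts live mostly at high frequencies in the smooth (spline) case, Matt's caveat still applies: inspect the residual with the interpolation method in mind.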
Reply by ●July 30, 2004
In article <fa0deac8.0407290559.66cc0911@posting.google.com>, dnb <dbhat2@yahoo.com> wrote:
> Thanks much for the reply. Bob - this is not a project for Wall
> Street, and I do not work in the financial world!

I've been lurking here for a while, and even posted a few times. I am curious about the statement above, because I have seen similar sentiments expressed in other posts. What is it about DSP applied to financial analysis that seems to turn folks off here? Me, my background is physics and I work in the defense industry, but I do find it interesting to invent filters and apply them to market data problems; in fact, much of what I learned about digital filters for tracking targets in a military context came from looking at such filters applied to market movements.

So what's the problem? Shed some light please.

-A
Reply by ●July 30, 2004
axlq@spamcop.net (axlq) wrote in message news:<cecpt1$f99$1@blue.rahul.net>...
> In article <fa0deac8.0407290559.66cc0911@posting.google.com>,
> dnb <dbhat2@yahoo.com> wrote:
> > Thanks much for the reply. Bob - this is not a project for Wall
> > Street, and I do not work in the financial world!
>
> I've been lurking here for a while, and even posted a few times. I
> am curious about the statement above, because I have seen similar
> sentiments expressed in other posts. What is it about DSP applied
> to financial analysis that seems to turn folks off here? Me, my
> background is physics and I work in the defense industry, but I do
> find it interesting to invent filters and apply them to market data
> problems; in fact much of what I learned about digital filters for
> tracking targets in a military context came from looking at such
> filters applied to market movements.
>
> So what's the problem? Shed some light please.
>
> -A

The problem, as I see it, is that financial data are completely random, while "classical" DSP problems (i.e., applications in communications or physics) have to obey some underlying system. One knows, for instance, that a communications link will have certain statistical properties and that these properties will be more or less stable while the link is in use. One knows that certain targets will stay in a combat theatre, and move smoothly under the laws of physics, until they leave or are destroyed. A 688-class sub will move differently than an F-22 Raptor, but both move within their respective physical limits, which are constant.

Compare this to, say, a stock market heavily influenced by Enron shares. The market index is susceptible to manipulation and random crashes. There is no way of knowing whether a stock price or a market index is manipulated or not. There is no way of predicting news bulletins a la Enron that completely change the situation.
There is no way of predicting and accounting for some political decision to initiate or remove various tax schemes. The main difference between "classical" DSP and financial data is that it is virtually impossible to use past financial data in the design of future DSP systems. DSP terminology and concepts can be used to describe past data, but I don't see how DSP tools can be useful for making predictions on financial data.

Rune
Reply by ●July 30, 2004
axlq wrote:
> In article <fa0deac8.0407290559.66cc0911@posting.google.com>,
> dnb <dbhat2@yahoo.com> wrote:
> > Thanks much for the reply. Bob - this is not a project for Wall
> > Street, and I do not work in the financial world!
>
> I've been lurking here for a while, and even posted a few times. I
> am curious about the statement above, because I have seen similar
> sentiments expressed in other posts. What is it about DSP applied
> to financial analysis that seems to turn folks off here? Me, my
> background is physics and I work in the defense industry, but I do
> find it interesting to invent filters and apply them to market data
> problems; in fact much of what I learned about digital filters for
> tracking targets in a military context came from looking at such
> filters applied to market movements.
>
> So what's the problem? Shed some light please.
>
> -A

I happen to think that the fate of companies and the worth of their stock depend far more on how they are managed and on other real-world events than on the past history of ups and downs in price. "Technical analysis" tries to predict future prices by finding trends in past prices, with little or no attention to "outside" variables. I think it's a crock.

That's not why I suspect DSP analyses of stock prices, though. The techniques of DSP analysis apply to bandlimited data. In order for a continuous signal to be validly sampled, it must be established that the signal contains no frequencies as high as half the sample rate. In some cases, that is known a priori from the nature of the signal. In all other cases, the continuous signal must be filtered to remove high frequencies before sampling takes place. In the market analyses I've seen, no such filtering is done. Indeed, the way the data are collected, it couldn't be. No allowance for this lapse, nor any estimate of its effects, has ever come to my attention. The Nyquist criterion is routinely violated. Data collected once a day are used to predict daily trends.
Jerry
--
Engineering is the art of making what you want from things you can get.
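Jerry's Nyquist point can be demonstrated in a few lines: without an anti-alias filter in front of the sampler, a component at 0.9 cycles per sample produces exactly the same sample values as one at 0.1 cycles per sample. A small illustration of my own (the frequencies are arbitrary):

```python
import math

n = 32
# Two sinusoids, sampled once per unit interval (think: once per day).
fast = [math.cos(2 * math.pi * 0.9 * i) for i in range(n)]  # 0.9 cycles/sample
slow = [math.cos(2 * math.pi * 0.1 * i) for i in range(n)]  # 0.1 cycles/sample
# cos(2*pi*0.9*i) = cos(2*pi*i - 2*pi*0.1*i) = cos(2*pi*0.1*i):
# after sampling, the fast component is indistinguishable from the slow one.
worst = max(abs(f - s) for f, s in zip(fast, slow))
```

This is why the continuous signal must be bandlimited below half the sample rate before sampling; once the samples are taken, no processing can separate the aliased component from a genuine slow one.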