DSPRelated.com
Forums

Basis for digital filters

Started by Crandles October 1, 2012
So I was wondering the other day: why is it that recombining past input
samples (and outputs) with different weights ends up creating a digital
filter? I vaguely remember asking a prof this once, and his response was
something along the lines of "constructive and destructive interference".
Could anyone elaborate on this for me (or provide me a link to read)? Don't
be afraid to get too mathematical either. I figured this is kind of
important, as it's the basis for a lot of work I do :)

Thanks,
Graham
On Mon, 01 Oct 2012 10:54:47 -0500, Crandles wrote:

> So I was wondering the other day why is it that recombining past input
> samples (and output)with different weights ends up creating a digital
> filter? I vaguely remember asking a prof this once and his response was
> something along the lines of "constructive and destructive
> interference". Could anyone elaborate on this for me (or provide me a
> link to read)? Don't be afraid to get too mathematical either. I figured
> this is kind of important as it's the basis for a lot of work I do :)
>
> Thanks,
> Graham
The classic "link" is "Signals and Systems", by Oppenheim and Willsky
(sometimes others, too). This is _not_ a web-page-length subject: you need
a book. I honestly couldn't tell you if you could learn by yourself with
Oppenheim -- but it's the one that the majority of US engineers of a
certain age learned out of in school (EE-250, last quarter of sophomore
year).

"Understanding Digital Signal Processing" by Lyons may be helpful, but --
at least in the second edition -- it's not obvious to me that he ever
backs off and says "this is what a filter does". He kind of assumes that
you already know. So I wouldn't tag it as a "start from nothing and get
all you need" resource. But then, I'm not sure that "Signals and Systems"
does, either: they tell you all about the math, and leave it to you to
infer what's actually going on in real-world terms.

"Constructive and destructive interference" does cover it, but from a
long, long ways away. Basically, your filter is a linear system that
amplifies (or attenuates) signals at some frequencies more than others;
as long as it is linear, it works the same on any given signal component
no matter what the other signal components are doing, so you can use
linear systems analysis to figure out exactly how it's going to behave.

You use the above property along with Fourier's work showing that any
signal can be broken down into a (possibly infinite) set of sinusoids.
Because you can (relatively) easily figure out what a given filter does
to any given sine wave, and because you can figure out what your signal
is composed of (or at least talk about what it is likely to be composed
of), you can analyze what that given filter will do to your signal.

Here's a link to a web page that is both inadequate (because you do need
a book-length work) and not on your topic (because it's about designing
control systems, not about designing filters).
Still, it may be a help:
http://www.wescottdesign.com/articles/zTransform/z-transforms.html

--
My liberal friends think I'm a conservative kook. My conservative friends
think I'm a liberal kook. Why am I not happy that they have found common
ground?

Tim Wescott, Communications, Control, Circuits & Software
http://www.wescottdesign.com
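The linearity property described above is easy to check numerically: a weighted-sum filter applied to a mixture of two sinusoids gives exactly the sum of the filter applied to each component alone. A minimal sketch (the three-tap weights here are arbitrary, chosen only for illustration):

```python
import numpy as np

n = np.arange(256)

# Two signal "components" at different frequencies.
x1 = np.sin(2 * np.pi * 0.02 * n)
x2 = np.sin(2 * np.pi * 0.30 * n)

# An arbitrary FIR filter: a weighted combination of recent input samples.
h = np.array([0.25, 0.5, 0.25])

def fir(x):
    # Truncate the full convolution so output length matches input length.
    return np.convolve(x, h)[:len(x)]

# Linearity: filtering the mixture equals filtering each component
# separately and adding the outputs.
together = fir(x1 + x2)
separately = fir(x1) + fir(x2)
print(np.allclose(together, separately))  # True
```

Because of this, whatever the filter does to each sinusoid individually is exactly what it does to that sinusoid inside any mixture.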
One keyword to search for is "autocorrelation". The math may look a bit
scary, but the concept itself is rather straightforward:

For a lowpass signal (for example), if I know the value at time t, I also
know at least something about nearby values (the more so, the closer they
are to the one I've got). The reason is that a "lowpass" signal cannot
change totally randomly.

Another topic to look up is the "Wiener-Khinchin theorem".
Again, crazy math on the surface, and hard to spell, but a simple idea
behind it:
"A narrow-band signal changes slowly, thus its autocorrelation is wide."
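That statement can be seen numerically: lowpass-filter some white noise and compare sample autocorrelations. A rough sketch (the moving-average lowpass and the lag value are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
white = rng.standard_normal(100_000)

# A lowpass ("narrow-band" toward DC) version: a long moving average.
N = 32
low = np.convolve(white, np.ones(N) / N, mode="valid")

def acf(x, lag):
    # Normalized sample autocorrelation at a given lag.
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

# White noise decorrelates immediately; the lowpass signal stays
# correlated over many samples -- its autocorrelation is "wide".
print(acf(white, 8))  # near 0
print(acf(low, 8))    # clearly positive (about 0.75 for N = 32)
```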

This isn't meant to be a complete answer to your question, just some leads
to figure out the "constructive interference" you mentioned.

A simple example with a sine wave and a delay comes to mind, too.

As you mentioned "prof", you'll probably need those concepts sooner or
later in any case.
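The sine-wave-and-delay example mentioned above is worth writing out. The simplest possible "recombine past samples" filter, y[n] = x[n] + x[n-D], cancels or reinforces a sinusoid depending on how the delay compares to the period. A sketch (the period and delay values are arbitrary):

```python
import numpy as np

n = np.arange(4000)
P = 40                       # period of the test sine, in samples
x = np.sin(2 * np.pi * n / P)

def add_delayed(x, D):
    # y[n] = x[n] + x[n - D]: add the signal to a delayed copy of itself.
    y = x.copy()
    y[D:] += x[:-D]
    return y[D:]             # skip the start-up transient

# Delay of a full period: the copies line up -> constructive interference.
print(np.max(np.abs(add_delayed(x, P))))       # ~2.0
# Delay of half a period: the copies are in antiphase -> cancellation.
print(np.max(np.abs(add_delayed(x, P // 2))))  # ~0.0
```

The same two-tap structure attenuates some frequencies and boosts others, which is the "constructive and destructive interference" picture in its smallest form.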
Crandles wrote:
> So I was wondering the other day why is it that recombining past input
> samples (and output)with different weights ends up creating a digital
> filter? I vaguely remember asking a prof this once and his response was
> something along the lines of "constructive and destructive interference".
> Could anyone elaborate on this for me (or provide me a link to read)? Don't
> be afraid to get too mathematical either. I figured this is kind of
> important as it's the basis for a lot of work I do :)
>
> Thanks,
> Graham
Analog filters made of inductors and capacitors also delay (components
of) signals. The "why" of a filter is "that is how they add back
together." The phase and magnitude elements of the FFT vectors of the
two signals just add in that way -- and that's something you can
demonstrate with FFTW and Excel. If it's a single-sine signal, you can
even derive the phasor diagram this way. I figure that's not a bad
approach, since the derivation of the Fourier transform isn't too rough.
This isn't exactly first principles, but it's *a* way.

--
Les Cargill
I was going to answer somewhat on the lines of Les Cargill.
If you are OK with analog filters with R,L,C components and have a feel 
for how they work and how they are designed then you likely understand 
that they are described with differential equations and Laplace transforms.

If that works for you, then you might consider things like
switched-capacitor filters, which are similar to sampled data but with
analog summation.

If so, then it's not too huge a leap to switch to difference equations 
for sampled data and z-transforms.

If that works for you, then you might be able to see how a digital filter
works ... because it's based on difference equations or expressions.

Admittedly this is a rather arm-waving set of statements without any 
mathematical rigor but if it helps push you in the right direction then 
.....

Another thing to consider is this:

Consider a transversal filter made up of analog summations over a bunch
of equal-length delay lines.  The data is continuous and the filter is
"discrete" in the sense that the delay lines are discrete.  With this
structure, you are halfway to a finite-impulse-response (FIR) digital
filter.
Now if you periodically sample-hold the input at intervals equal to the 
delay line delays you are closer to discrete time.
Now if you sample the output you are even closer to discrete time again.
Now if you A/D the output you have discrete time and discrete amplitude 
and are looking for all intents and purposes at the output of a FIR filter.
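The transversal structure translates almost line-for-line into code: a delay line that shifts one slot per sample, plus a weighted sum over the taps. A sketch (the tap weights are arbitrary, and np.convolve is used only as a cross-check):

```python
import numpy as np

def fir_direct(x, taps):
    """Transversal (direct-form) FIR: a delay line plus a weighted sum."""
    delay = np.zeros(len(taps))      # contents of the delay line
    y = np.empty(len(x))
    for i, sample in enumerate(x):
        delay = np.roll(delay, 1)    # shift everything one slot down the line
        delay[0] = sample            # newest sample enters the line
        y[i] = np.dot(taps, delay)   # weighted sum of the tap outputs
    return y

taps = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
x = np.random.default_rng(2).standard_normal(200)

# The sample-by-sample delay-line form agrees with plain convolution.
print(np.allclose(fir_direct(x, taps), np.convolve(x, taps)[:len(x)]))  # True
```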

I will leave similar descriptions of an IIR filter as an exercise for 
the student.

Fred




On 10/1/12 1:23 PM, Les Cargill wrote:
> Crandles wrote:
>> So I was wondering the other day why is it that recombining past input
>> samples (and output)with different weights ends up creating a digital
>> filter? I vaguely remember asking a prof this once and his response was
>> something along the lines of "constructive and destructive interference".
>> Could anyone elaborate on this for me (or provide me a link to read)?
...
>
> Analog filters made of inductors and capacitors also delay (components
> of) signals as well. The "why" of a filter is "that is how they add
> back together."
here is my philosophical spin on this:

LTI analog filters are made up of 3 building blocks:

1. Adders (signals are added together)
2. Scalers (signals are multiplied by a constant value)
3. Integrators (by use of capacitors)

LTI digital filters are made up of these 3 building blocks:

1. Adders (signals are added together)
2. Scalers (signals are multiplied by a constant value)
3. Delays

you can see that 1. and 2. are the same in analog and digital filters.
you can also see that neither 1. nor 2. does anything different for
different frequencies. they do not discriminate between frequencies.
this is because they are "memoryless". so you cannot make something
that discriminates between frequencies with only memoryless components.

if you view it on a time scale that is normalized to the period of a
sinusoid, when low-frequency or high-frequency signals are added or
scaled, there is no difference to their size nor phase whether they were
low or high frequency. but this is not the case if they are integrated
(the high-frequency sinusoids will come out smaller than the low) nor
when they are delayed (the high-frequency sinusoid will experience a
greater phase shift than the low).

so, in order to make a linear, time-invariant *filter* that
discriminates between sinusoids of different frequencies, you need a
component inside that filter that discriminates, and in an analog filter
it's the integrator (usually made outa a capacitor) and in a digital
filter it's a delay element, even a one-sample delay.

--
r b-j  rbj@audioimagination.com

"Imagination is more important than knowledge."
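The building-block argument above can be checked directly on the frequency axis: a network of adders and scalers alone has a flat magnitude response, while inserting even a single one-sample delay makes the response frequency-dependent. A sketch:

```python
import numpy as np

w = np.linspace(0, np.pi, 5)   # digital frequencies from DC to Nyquist

# Memoryless network (adders and scalers only), e.g. y[n] = 2*x[n]:
# H(w) = 2 at every frequency -- flat, no discrimination.
H_memoryless = 2 * np.ones_like(w)

# Add one delay element: y[n] = x[n] + x[n-1], so H(w) = 1 + e^{-jw}.
H_delay = np.abs(1 + np.exp(-1j * w))   # = 2|cos(w/2)|

print(H_memoryless)  # flat: [2. 2. 2. 2. 2.]
print(H_delay)       # falls from 2 at DC to 0 at Nyquist
```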
Tim Wescott <tim@seemywebsite.com> wrote:
> On Mon, 01 Oct 2012 10:54:47 -0500, Crandles wrote:
>> So I was wondering the other day why is it that recombining past input
>> samples (and output)with different weights ends up creating a digital
>> filter? I vaguely remember asking a prof this once and his response was
>> something along the lines of "constructive and destructive
>> interference". Could anyone elaborate on this for me (or provide me a
>> link to read)? Don't be afraid to get too mathematical either. I figured
>> this is kind of important as it's the basis for a lot of work I do :)
It is an interesting question. Well, the constructive and destructive
interference explanation is used often in optics, and, yes, it does work
here, too.

For me, I had a pretty easy time understanding Fourier series, and, after
not so long, Fourier (continuous) transforms. (I still remember not
understanding Fourier transforms as explained by my physics TA at 9:00 AM
one Monday (or Wednesday) morning, which is why I do remember finally
understanding them.)

Still, even understanding that, the idea of FIR and IIR filters seems a
little less obvious. It shouldn't be so hard to understand enough that
the result is a filter, but it takes a little more to see that it is the
filter that you want.
> The classic "link" is "Signals and Systems", by Oppenheim and Wilsky
> (sometimes others, too). This is _not_ a web-page length subject: you
> need a book. I honestly couldn't tell you if you could learn by yourself
> with Oppenheim -- but it's the one that the majority of US engineers of a
> certain age learned out of in school (EE-250, last quarter of sophomore
> year).
> "Understanding Digital Signal Processing" by Lyons may be helpful, but --
> at least in the second edition -- it's not obvious to me that he ever
> backs off and says "this is what a filter does". He kind of assumes that
> you already know. So I wouldn't tag it as a "start from nothing and get
> all you need" resource.
Well, I was wondering not so long ago, even understanding linear systems,
how it is that a single amplifier can amplify a bunch of different
signals, such as the whole TV band, without getting them mixed up. I
mean, even though one understands the theory, it can still be surprising
sometimes.

To see it another way, consider writing down all the aerodynamics
equations you can, and then looking out the window of an airplane at
30,000 feet. You know that the equations apply, but can still be
surprised that they keep you up. (Or look at a 747 on the ground.)
> But then, I'm not sure that "Signals and Systems" does, either: they tell
> you all about the math, and leave it to you to infer what's actually
> going on in real-world terms.
> "Constructive and destructive interference" does cover it, but from a
> long, long ways away. Basically, your filter is a linear system that
> amplifies (or attenuates) signals at some frequencies more than others;
> as long as it is linear it works the same on any given signal component
> no matter what the other signal components are doing, so you can
> use linear systems analysis to figure out exactly how it's going to
> behave.
One way to start is to consider a moving average filter. It is at least slightly more obvious that it should start to attenuate as the frequency increases.
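The moving-average case can be made concrete by driving it with sinusoids at a few frequencies and measuring the steady-state output amplitude. A sketch (N = 8 and the test frequencies are arbitrary choices):

```python
import numpy as np

N = 8
h = np.ones(N) / N              # N-point moving average
n = np.arange(8000)

def gain(f):
    # Steady-state output amplitude for a unit-amplitude sine at
    # frequency f (cycles/sample), measured after the transient.
    y = np.convolve(np.sin(2 * np.pi * f * n), h)[N:len(n)]
    return y.max()

print(gain(0.01))   # close to 1: a slow sine passes almost unchanged
print(gain(0.3))    # heavily attenuated at high frequency
print(gain(1 / N))  # ~0: a full period averages to exactly zero
```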
> You use the above property along with Fourier's work showing that any
> signal can be broken down into a (possibly infinite) set of sinusoids;
> because you can (relatively) easily figure out what a given filter does
> to any given sine wave and because you can figure out what your signal is
> composed of (or at least talk about what it is likely to be composed of),
> you can analyze what that given filter will do to your signal.
(snip)

--
glen
Tim Wescott <tim@seemywebsite.com> wrote:
> On Mon, 01 Oct 2012 10:54:47 -0500, Crandles wrote:
> (snip)
>
> The classic "link" is "Signals and Systems", by Oppenheim and Wilsky
> (sometimes others, too). This is _not_ a web-page length subject: you
> need a book.
>
> (snip)
And if you are an owner of an i-device, you can find Oppenheim's
"Signals and Systems" lectures on iTunes U, too. If you don't own Apple
hardware, I believe you can use iTunes on Windows, too.

--
Gruss,
Mark
robert bristow-johnson wrote:
> On 10/1/12 1:23 PM, Les Cargill wrote:
> (snip)
>
> so, in order to make a linear, time-invariant *filter* that
> discriminates between sinusoids of different frequencies, you need a
> component inside that filter that discriminates and in an analog filter
> it's the integrator (usually made outa a capacitor) and in a digital
> filter it's a delay element, even a one-sample delay.
Interesting. Still, there's a time-constant delay for this "integrator"
-- they even specified emphasis filters in tape and LP processing in
units of time. Indeed, I've thought about "fraction of a sample" digital
delay as a model for capacitors.

I'll have to think about that one.

--
Les Cargill
On Mon, 01 Oct 2012 18:55:57 -0500, Les Cargill wrote:

> robert bristow-johnson wrote:
> (snip)
>
> Interesting. Still, there's a time constant delay for this "integrator"
> - they even specified emphasis filters in tape and LP processing by
> units of time. Indeed, I've thought about "fraction of a sample" digital
> delay as a model for capacitors.
>
> I'll have to think about that one.
I suspect you're confusing a 1st-order lowpass filter's "time constant"
with integral action. If you view an integrator as a 1st-order lowpass
filter, its time constant is infinite. A filter's time constant means
something -- but it's different from a delay.

--
My liberal friends think I'm a conservative kook. My conservative friends
think I'm a liberal kook. Why am I not happy that they have found common
ground?

Tim Wescott, Communications, Control, Circuits & Software
http://www.wescottdesign.com
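The distinction can be illustrated with a discrete one-pole lowpass, y[n] = a*y[n-1] + (1-a)*x[n]. Its time constant is roughly -1/ln(a) samples (where the step response passes about 63% of its final value, in analogy with the continuous-time case), and as a approaches 1 the pole moves to the unit circle: the "lowpass" becomes an integrator and the time constant diverges. A sketch (the pole values are arbitrary):

```python
import numpy as np

def one_pole_step(a, n_steps):
    # y[n] = a*y[n-1] + (1 - a)*x[n], driven by a unit step.
    y, out = 0.0, []
    for _ in range(n_steps):
        y = a * y + (1.0 - a)
        out.append(y)
    return np.array(out)

a = 0.9
tau = -1.0 / np.log(a)            # time constant in samples (~9.5 here)
step = one_pole_step(a, 200)

# Around one time constant in, the step response is near ~63% of its
# final value -- that is what "time constant" means for this filter.
print(step[int(round(tau)) - 1])  # ~0.6

# As a -> 1 the filter approaches an integrator: infinite time constant.
print(-1.0 / np.log(0.9999))      # ~10,000 samples and growing
```

A pure delay, by contrast, shifts the step without any exponential approach at all, which is why the two notions should not be conflated.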