DSPRelated.com
Forums

Digitally compensating a low-pass (integrator)

Started by Peter Mairhofer September 24, 2010
Hi,

I have a signal f(t) with Nyquist rate W, i.e. its maximum frequency is W/2.

This signal is filtered with an integrator (simulated in Simulink) the 
following way:

f_I(t) = \int_t^{t+1/W} f(\tau) d\tau

In words: I integrate the signal over a period of 1/W, then the
integrator is reset. Obviously the signal won't be the same afterwards;
however, I only integrate for 1/(Nyquist rate). So I think it should be
possible to compensate for this filter in the digital domain.

The equation above should be the same as the following block diagram:

f(t)              +------------+
 >-+---------------+ Integrator +--------------+----> f_I(t)
   |               +------------+              ^ -
   |                                           |
   |  +-----------------+   +------------+     |
   +--+ exp(-j*Omega*T) +---+ Integrator +-----+
      +-----------------+   +------------+

Now I want to compensate the effect of this analog filter in the digital
domain (e.g. using MATLAB and filter()).

Using the equivalent circuit and the bilinear transform as a tool I got
something like H(z) = T/2 * (z+1)/z

So the inverse is 2/T * 1/(1 + z^-1)

Clearly this filter is unstable and therefore it does not work.
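
A quick way to see the problem with that inverse (a minimal MATLAB
sketch; T and the test length are arbitrary): the denominator 1 + z^-1
puts a pole right on the unit circle at z = -1, so the impulse response
from filter() alternates forever and never decays, and anything in the
input near W/2 is amplified without bound.

  % Inverse 2/T * 1/(1 + z^-1): the pole sits at z = -1, on the unit circle.
  T = 1;                               % arbitrary normalization
  b = 2/T;                             % numerator of the inverse
  a = [1 1];                           % denominator 1 + z^-1
  h = filter(b, a, [1 zeros(1, 49)]);  % impulse response of the inverse
  stem(0:49, h)                        % alternates +2/T, -2/T, ... forever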


How could I compensate for this filter digitally when I have the Nyquist 
samples of f_I(t)?

Thank you very much in advance.

Regards,
   Peter


PS: The whole thing should be equivalent to oversampling f(t) by a 
factor of e.g. 100, that is, the sampling rate is W*100, and afterwards 
summing up 100 consecutive samples in the digital domain ... At least I 
get the same "distortion" in this case as with the Simulink simulation.



Peter Mairhofer wrote:

> Hi,
>
> I have a signal f(t) with Nyquist rate W, i.e. its maximum frequency
> is W/2.
>
> This signal is filtered with an integrator (simulated in Simulink) the
> following way:
>
> f_I(t) = \int_t^{t+1/W} f(\tau) d\tau
>
> [...]
>
> Using the equivalent circuit and the bilinear transform as a tool I got
> something like H(z) = T/2 * (z+1)/z
>
> So the inverse is 2/T * 1/(1 + z^-1)
>
> Clearly this filter is unstable and therefore it does not work.
>
> How could I compensate for this filter digitally when I have the
> Nyquist samples of f_I(t)?
It is not possible to invert this filter. However, it is possible to approximate the inverse to any given accuracy.
> Thank you very much in advance.
OK. Send me $1000 in advance.
> PS: The whole thing should be equivalent to oversampling f(t) by a
> factor of e.g. 100, that is, the sampling rate is W*100, and afterwards
> summing up 100 consecutive samples in the digital domain ... At least I
> get the same "distortion" in this case as with the Simulink simulation.
What is the goal of this exercise?

VLV
On 09/24/2010 02:12 PM, Peter Mairhofer wrote:
> Hi,
>
> I have a signal f(t) with Nyquist rate W, i.e. its maximum frequency
> is W/2.
Nyquist did not say that: http://www.wescottdesign.com/articles/Sampling/sampling.html
> This signal is filtered with an integrator (simulated in Simulink) the
> following way:
>
> f_I(t) = \int_t^{t+1/W} f(\tau) d\tau
>
> In words: I integrate the signal over a period of 1/W, then the
> integrator is reset. Obviously the signal won't be the same afterwards;
> however, I only integrate for 1/(Nyquist rate). So I think it should be
> possible to compensate for this filter in the digital domain.
Yes and no. The filter has a zero at W/2, and so kills any information
that may be in the signal at that frequency -- but that's also the alias
frequency for DC, so killing that signal may be a good thing.

Certainly you can correct for some of it.
> The equation above should be the same as the following block diagram:
>
> [block diagram: f(t) feeds an integrator directly, and also a delayed
>  copy (exp(-j*Omega*T)) through a second integrator whose output is
>  subtracted to give f_I(t)]
>
> Now I want to compensate the effect of this analog filter in the
> digital domain (e.g. using MATLAB and filter()).
Hopefully using something real, at some point.
> Using the equivalent circuit and the bilinear transform as a tool I got
> something like H(z) = T/2 * (z+1)/z
>
> So the inverse is 2/T * 1/(1 + z^-1)
>
> Clearly this filter is unstable and therefore it does not work.
Well, it's metastable, and it won't work. You're also using the bilinear
transform for behavior at a frequency well beyond where the approximation
holds.
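
To put a number on that (a rough sketch, taking the H(z) = T/2 * (1 + z^-1)
from above at face value and comparing it against the exact response of a
1/W-long integration window): near W/2 the bilinear-derived curve drops all
the way to zero, while the exact response is still at about 0.64*T.

  % Exact magnitude of integrating over T = 1/W versus the bilinear-derived H(z).
  W  = 1;  T = 1/W;
  f  = linspace(0, W/2, 512);              % 0 .. W/2 (Nyquist of the rate-W samples)
  Hc = T*abs(sin(pi*f*T)./(pi*f*T));       % |(1 - exp(-j*2*pi*f*T))/(j*2*pi*f)|
  Hc(1) = T;                               % limit at f = 0
  Hb = T*abs(cos(pi*f*T));                 % |T/2 * (1 + exp(-j*2*pi*f*T))|
  plot(f, Hc, f, Hb); legend('exact integrator window', 'bilinear-derived')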
> How could I compensate for this filter digitally when I have the
> Nyquist samples of f_I(t)?
"Nyquist sample?" Is this term defined in writing anywhere in the universe other than your post just now?
> PS: The whole thing should be equivalent to oversampling f(t) by a
> factor of e.g. 100, that is, the sampling rate is W*100, and afterwards
> summing up 100 consecutive samples in the digital domain ... At least I
> get the same "distortion" in this case as with the Simulink simulation.
Well, no. The whole thing can be _approximated_ by oversampling, then
doing a sum-and-dump instead of an integrate-and-dump, but it isn't the
_same_.

What are you really trying to _do_? Following an integrate-and-dump
stage with a sampler is something that I might have done in a digital
control loop ten or fifteen years ago, as a way of notching out all the
noise that can alias down to DC without doing great violence to my low
frequency amplitude or phase accuracy. But I don't see much utility in
it anywhere else, and these days it's better to sample extra fast and do
a sum-and-dump in the digital world.

I don't see much point in doing this with, say, an audio signal -- it's
a lot of work for a rather pedestrian anti-alias filter for that sort of
signal. If it _is_ audio then (a) you need to learn more about the
Nyquist-Shannon theorem and its implications, (b) it's probably not a
good method, and (c) you can approximate an inverse filter, up to the
actual frequency content of your signal, to whatever precision you're
willing to choke out of the software and to add delay for.

If it's a control loop, then you neither want to nor do you need to
compensate for the effect explicitly -- just model it as part of your
plant behavior, and tune around it. That'll be enough compensation right
there.

If it's something else yet again -- it depends.

So, again, in harmony: what are you _really_ doing?

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Do you need to implement control loops in software?
"Applied Control Theory for Embedded Systems" was written for you.
See details at http://www.wescottdesign.com/actfes/actfes.html
Hi,

Thank you for your reply!

On 25.09.2010 01:04, Vladimir Vassilevsky wrote:
> Peter Mairhofer wrote:
>> I have a signal f(t) with Nyquist rate W, i.e. its maximum frequency
>> is W/2.
>>
>> This signal is filtered with an integrator (simulated in Simulink) the
>> following way:
>>
>> f_I(t) = \int_t^{t+1/W} f(\tau) d\tau
>> [...]
>
> It is not possible to invert this filter. However, it is possible to
> approximate the inverse to any given accuracy.
Do you have a pointer on "how to"?
>> Thank you very much in advance.
>
> OK. Send me $1000 in advance.
Sorry, I do not understand ;-) Is this sentence bad English? (I am not a native speaker)
> [...]
> What is the goal of this exercise?
The goal is simply that I have an analog filter of this type and I want
to "correct" its distortion. And OK, [1] is what I am trying out.

If I build this algorithm solely in MATLAB and have f(t) in sampled
form, the result is correct. However, as soon as f(t) is oversampled (or
simulated in Simulink with integrate-and-dump) I get a distortion which
looks very much like a low-pass characteristic: [2]. The blue line is
f(t) - random frequencies of amplitude 1. The green line is the
reconstructed signal according to [1] - or alternatively f_I(t).

Regards,
Peter

[1] http://arxiv.org/PS_cache/arxiv/pdf/0902/0902.0026v2.pdf
[2] http://img826.imageshack.us/img826/3772/lowpasscharacter.png
Dear Tim,

Thank you for taking the time to reply.

On 25.09.2010 01:40, Tim Wescott wrote:
> On 09/24/2010 02:12 PM, Peter Mairhofer wrote:
>> Hi,
>>
>> I have a signal f(t) with Nyquist rate W, i.e. its maximum frequency
>> is W/2.
>
> Nyquist did not say that:
> http://www.wescottdesign.com/articles/Sampling/sampling.html
Thank you for this link, I am going to read it shortly.

For now, I do not understand what is wrong. If it is "a signal with
Nyquist rate W", OK, correction: "I have a signal f(t) whose highest
frequency is less than W/2".
>> This signal is filtered with an integrator (simulated in Simulink) the
>> following way:
>>
>> f_I(t) = \int_t^{t+1/W} f(\tau) d\tau
>>
>> In words: I integrate the signal over a period of 1/W, then the
>> integrator is reset. [...] So I think it should be possible to
>> compensate for this filter in the digital domain.
>
> Yes and no. The filter has a zero at W/2, and so kills any information
> that may be in the signal at that frequency -- but that's also the
> alias frequency for DC, so killing that signal may be a good thing.
>
> Certainly you can correct for some of it.
Hmm, ok. According to my equivalent circuit I derived the transfer
function

H(s) = 1/s * (1 - exp(-s*W/2))

for this filter. Is this correct? Yes, you are right, it is zero when
s = W/2.
>> [...]
>> Now I want to compensate the effect of this analog filter in the
>> digital domain (e.g. using MATLAB and filter()).
>
> Hopefully using something real, at some point.
What do you mean by something "real"?
> [...]
> Well, it's metastable, and it won't work. You're also using the
> bilinear transform for behavior at a frequency well beyond where the
> approximation holds.
Yes, I thought maybe I need to use a different transform than the
bilinear?

The bilinear transform maps [0, \infty] to [0, \pi] ...
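
That mapping is the usual frequency-warping relation of the bilinear
transform, w_d = 2*atan(w_a*T/2); a quick illustrative sketch (T
normalized to 1) shows how everything above roughly 2/T gets squashed up
against pi:

  % Frequency mapping of the bilinear transform: w_d = 2*atan(w_a*T/2).
  T   = 1;                         % sample period (normalized)
  w_a = logspace(-2, 3, 500);      % analog frequencies, rad/s
  w_d = 2*atan(w_a*T/2);           % digital frequencies, rad/sample
  semilogx(w_a, w_d)               % saturates at pi: [0, inf) -> [0, pi)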
>> How could I compensate for this filter digitally when I have the
>> Nyquist samples of f_I(t)?
>
> "Nyquist sample?" Is this term defined in writing anywhere in the
> universe other than your post just now?
Probably not - sorry ;-)

"... if I have samples of f_I(t) at the Nyquist rate"
>> PS: The whole thing should be equivalent to oversampling f(t) by a
>> factor of e.g. 100, that is, the sampling rate is W*100, and afterwards
>> summing up 100 consecutive samples in the digital domain ... At least I
>> get the same "distortion" in this case as with the Simulink simulation.
>
> Well, no. The whole thing can be _approximated_ by oversampling, then
> doing a sum-and-dump instead of an integrate-and-dump, but it isn't the
> _same_.
Ok, I see. I observed that the "distortion" becomes clearer with a higher
oversampling rate. Either way, it looks like [2].
> [...]
> If it's something else yet again -- it depends.
>
> So, again, in harmony: what are you _really_ doing?
Yes, it is something different. As also mentioned in my reply [3], I am
playing around with an algorithm described in [1]. The key point of this
algorithm is that I do NOT want to oversample, but rather to sample at a
rate as low as possible.

In principle it works well, except for the distortion introduced by the
integrator. For example, you can see it in [2]: the blue bins are
frequencies randomly chosen in [0, W/2] and the green bins are the
reconstructed signal (or alternatively f_I(t)).

So, in conclusion, I would like to compensate this distortion - or maybe
that does not even work?

Regards,
Peter

[1] http://arxiv.org/PS_cache/arxiv/pdf/0902/0902.0026v2.pdf
[2] http://img826.imageshack.us/img826/3772/lowpasscharacter.png
[3] <i7kptg$6ha$1@news.albasani.net>
On 25.09.2010 14:48, Peter Mairhofer wrote:
> On 25.09.2010 01:40, Tim Wescott wrote:
>> On 09/24/2010 02:12 PM, Peter Mairhofer wrote:
>>> [...]
>>> I have a signal f(t) with Nyquist rate W, i.e. its maximum frequency
>>> is W/2.
>>
>> Nyquist did not say that:
>> http://www.wescottdesign.com/articles/Sampling/sampling.html
>
> Thank you for this link, I am going to read it shortly.
>
> For now, I do not understand what is wrong. [...]
Ok, now I think I know what you meant: "[...] What the Nyquist theorem -
absolutely and positively - does not say, is that you can design your
system to operate right at the Nyquist rate, at least not with any
reasonable chance of success."

For sure, this is clear to me. However, at this stage I have to make
certain assumptions (as can also be seen in my cited paper): in my case
f(t) *is* perfectly BL to W/2 and all frequencies are integers (i.e. no
leakage).

BTW: The cited theorem is not the Nyquist theorem but the WKS theorem
(or at least the Nyquist-Shannon theorem). Nyquist did not say anything
about sampling [1] - he "only" "determined that the number of independent
pulses that could be put through a telegraph channel per unit time is
limited to twice the bandwidth of the channel [...]. This rule is
essentially a dual of what is now known as the Nyquist-Shannon sampling
theorem" [2]. The sampling theorem was first stated by Kotelnikov (1933)
and later by Shannon (1949).

Regards,
Peter

[1] http://www.astro.ufrgs.br/med/imagens/nyquist.pdf
[2] http://en.wikipedia.org/wiki/Harry_Nyquist
Hi,

On 25.09.2010 14:48, Peter Mairhofer wrote:
> On 25.09.2010 01:40, Tim Wescott wrote:
>> On 09/24/2010 02:12 PM, Peter Mairhofer wrote:
>> [...]
>>> This signal is filtered with an integrator (simulated in Simulink)
>>> the following way:
>>>
>>> f_I(t) = \int_t^{t+1/W} f(\tau) d\tau
>>> [...]
>>
>> Yes and no. The filter has a zero at W/2, and so kills any information
>> that may be in the signal at that frequency -- but that's also the
>> alias frequency for DC, so killing that signal may be a good thing.
>>
>> Certainly you can correct for some of it.
>
> Hmm, ok. According to my equivalent circuit I derived the transfer
> function
>
> H(s) = 1/s * (1 - exp(-s*W/2))
>
> for this filter. Is this correct? Yes, you are right, it is zero when
> s = W/2.
Sorry, nonsense. It is zero when exp(...) = 1. Besides that, it should
be exp(-s/W) ...

Once again, this should be correct, shouldn't it?

f_I(t) = \int_{t-1/W}^t f(\tau) d\tau

f(t)              +-----+
 >-+--------------+ 1/s +--------------+----> f_I(t)
   |              +-----+              ^ -
   |                                   |
   |  +-----------+      +-----+       |
   +--+ exp(-s/W) +------+ 1/s +-------+
      +-----------+      +-----+
          ^^^^^^^^^
          Delay 1/W

So:

H(s) = 1/s - exp(-s/W)*1/s = 1/s * (1 - exp(-s/W))

But no zero at W/2 ... *confused*
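
One way to check is simply to evaluate that H(s) along the jw axis (a
minimal MATLAB sketch, with W normalized to 1); as stated above, |H| only
goes to zero where exp(-j*2*pi*f/W) = 1:

  % Evaluate H(s) = (1 - exp(-s/W))/s on the imaginary axis.
  W = 1;
  f = linspace(1e-6, 2*W, 2000);   % start just above 0 to avoid the 0/0
  s = 1j*2*pi*f;
  H = (1 - exp(-s/W))./s;
  plot(f/W, abs(H)); xlabel('f / W'); ylabel('|H|')
  % zeros occur where exp(-j*2*pi*f/W) = 1, i.e. at f = W, 2W, ...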
> [...]
Regards, Peter
On 09/25/2010 05:48 AM, Peter Mairhofer wrote:
> Dear Tim,
>
> [... full quote of the previous message snipped ...]
>
> In principle it works well, except for the distortion introduced by
> the integrator. For example, you can see it in [2]: the blue bins are
> frequencies randomly chosen in [0, W/2] and the green bins are the
> reconstructed signal (or alternatively f_I(t)).
>
> So, in conclusion, I would like to compensate this distortion - or
> maybe that does not even work?
>
> [1] http://arxiv.org/PS_cache/arxiv/pdf/0902/0902.0026v2.pdf
> [2] http://img826.imageshack.us/img826/3772/lowpasscharacter.png
> [3] <i7kptg$6ha$1@news.albasani.net>
So choose an upper limit to the frequencies you want to correct that's
short of W/2, and make a filter that's the reciprocal of the filtering
introduced by the integrator. The closer you get to W/2 the sharper your
cutoff will have to be, meaning that your filter will have to be longer.
But aside from that, you should be set.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Do you need to implement control loops in software?
"Applied Control Theory for Embedded Systems" was written for you.
See details at http://www.wescottdesign.com/actfes/actfes.html
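
For what it's worth, a sketch of that reciprocal filter in MATLAB
(hedged: this assumes fir2 from the Signal Processing Toolbox, and the
0.9*(W/2) limit and the filter order are arbitrary choices, not anything
Tim specified):

  % FIR equalizer for the 1/W integrate-and-dump droop, valid up to 0.9*(W/2).
  % Relative to ideal rate-W samples, the droop is sinc(f/W) = sin(pi*f/W)/(pi*f/W)
  % (plus an overall gain of 1/W and a half-sample delay).
  fn    = linspace(0, 1, 256);           % normalized frequency, 1 = W/2
  droop = sin(pi*fn/2)./(pi*fn/2);       % sinc(f/W) with f = fn*W/2
  droop(1) = 1;                          % limit at f = 0
  gain  = 1./droop;                      % desired equalizer magnitude
  gain(fn > 0.9) = 0;                    % give up short of W/2, as suggested above
  b_eq  = fir2(64, fn, gain);            % order-64 linear-phase FIR
  % usage (fI = rate-W samples of f_I(t), scaled by W to undo the 1/W gain):
  %   f_hat = filter(b_eq, 1, W*fI);     % delayed by 32 samples plus 1/2 sample

The brick-wall target at 0.9 is crude; a smoother roll-off (or a longer
filter) reduces the ripple, and pushing the limit closer to W/2 makes the
filter longer, exactly as described above.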
On 09/25/2010 06:33 AM, Peter Mairhofer wrote:
> On 25.09.2010 14:48, Peter Mairhofer wrote:
> [...]
>
> Ok, now I think I know what you meant: "[...] What the Nyquist theorem -
> absolutely and positively - does not say, is that you can design your
> system to operate right at the Nyquist rate, at least not with any
> reasonable chance of success."
>
> For sure, this is clear to me. However, at this stage I have to make
> certain assumptions (as can also be seen in my cited paper): in my case
> f(t) *is* perfectly BL to W/2 and all frequencies are integers (i.e. no
> leakage).
Part of what's sticking sideways when I try to swallow your premise is
just that perfection. If f(t) is _perfectly_ bandlimited to W/2 then it
is of infinite extent in time. That's just a given. So either you sample
it for an infinite amount of time, then correct (in which case I don't
care because I'll be dust before you get any results), or you sample it
for a _finite_ amount of time, which munges your _perfectly_ bandlimited
signal into something that is no longer perfect.

Since you're ignoring the necessary (unless you're immortal, and have a
computer with infinite memory) approximation, I can't tell if you're
going to luck out and get a system that works OK, or if the approximation
is going to stomp on you. Ditto the "all frequencies are integers" claim.

This all makes me feel like I'm in a room with smoke and mirrors, and I
want to know what you're _really_ doing -- and telling me the shape of
one or two of the mirrors isn't what I mean.
> BTW: The cited theorem is not the Nyquist theorem but the WKS theorem
> (or at least the Nyquist-Shannon theorem). Nyquist did not say anything
> about sampling [1] - he "only" "determined that the number of
> independent pulses that could be put through a telegraph channel per
> unit time is limited to twice the bandwidth of the channel [...]. This
> rule is essentially a dual of what is now known as the Nyquist-Shannon
> sampling theorem" [2]. The sampling theorem was first stated by
> Kotelnikov (1933) and later by Shannon (1949).
There's a corrected version of that page somewhere, but apparently it's
not on my site!!!

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Do you need to implement control loops in software?
"Applied Control Theory for Embedded Systems" was written for you.
See details at http://www.wescottdesign.com/actfes/actfes.html