DSPRelated.com
Forums

Magnitude Frequency Response

Started by maxyp February 26, 2005
Hi,

I need to plot the magnitude frequency response of a filter without using
the DFT method.

The input to the filter (2nd order IIR) is a sequence of the form cos(nw)
(n and w vary), and I want to plot the response as a function of the
frequency (w).

How would I go about this?
Thanks
Max


		
This message was sent using the Comp.DSP web interface on
www.DSPRelated.com
in article -9Odne7iy4Whp7zfRVn-1A@giganews.com, maxyp at akky_84@hotmail.com
wrote on 02/26/2005 22:10:

> I need to plot the magnitude frequency response of a filter without using
> the DFT method.
>
> The input to the filter (2nd order IIR) is a sequence of the form cos(nw)
> (n and w vary), and I want to plot the response as a function of the
> frequency (w).
>
> How would I go about this?
i presume you can use double-precision floating point variables, no?  are
you coding this in C/C++ or similar?

mathematically, you have

                b0 + b1*z^-1 + b2*z^-2
        H(z) = ------------------------
                a0 + a1*z^-1 + a2*z^-2

you have those coefficients, correct?  you might find that a0 has been
normalized to 1, but it doesn't have to be.

you have to set  z = exp(j*2*pi*f/Fs)  where Fs is your sampling frequency.
you plug that into H(z) and then compute the magnitude, which is |H(z)|.
that means:

                         b0 + b1*exp(-j*2*pi*f/Fs) + b2*exp(-j*4*pi*f/Fs)
  H(exp(j*2*pi*f/Fs)) = --------------------------------------------------
                         a0 + a1*exp(-j*2*pi*f/Fs) + a2*exp(-j*4*pi*f/Fs)

                          |b0 + b1*exp(-j*2*pi*f/Fs) + b2*exp(-j*4*pi*f/Fs)|
  |H(exp(j*2*pi*f/Fs))| = ----------------------------------------------------
                          |a0 + a1*exp(-j*2*pi*f/Fs) + a2*exp(-j*4*pi*f/Fs)|

now consider the square of either just the numerator or denominator:

  |a0 + a1*exp(-j*2*pi*f/Fs) + a2*exp(-j*4*pi*f/Fs)|^2

    = |    a0 + a1*cos(2*pi*f/Fs) + a2*cos(4*pi*f/Fs)
        - j*( a1*sin(2*pi*f/Fs) + a2*sin(4*pi*f/Fs) ) |^2

    =   ( a0 + a1*cos(2*pi*f/Fs) + a2*cos(4*pi*f/Fs) )^2
      + ( a1*sin(2*pi*f/Fs) + a2*sin(4*pi*f/Fs) )^2

you can apply some more trig simplifications and square root when you're
done.  same for the numerator.  now if your result is magnitude in dB, you
can apply the log() to the magnitude squared of the numerator and subtract
the log() of the magnitude squared of the denominator.

for numerical reasons, i like to express this in terms of 1 - cos(2*pi*f/Fs)
and (1 - cos(2*pi*f/Fs))^2 since cos(2*pi*f/Fs) gets so damn close to 1 when
f << Fs, which is the case for the bottom 3 or 4 octaves in audio.
> Thanks
FWIW.

--
r b-j  rbj@audioimagination.com

"Imagination is more important than knowledge."
"maxyp" <akky_84@hotmail.com> wrote in message 
news:-9Odne7iy4Whp7zfRVn-1A@giganews.com...
> Hi,
>
> I need to plot the magnitude frequency response of a filter without using
> the DFT method.
>
> The input to the filter (2nd order IIR) is a sequence of the form cos(nw)
> (n and w vary), and I want to plot the response as a function of the
> frequency (w).
>
> How would I go about this?
> Thanks
> Max
The frequency response of the filter is its transfer function evaluated at
as many frequency points as you desire.  The DFT has not much to do with it.
In fact, a Discrete Fourier Transform isn't an appropriate tool because you
would have to truncate the impulse response of the filter - treating a
general IIR filter as FIR.

I think you'll find that the Laplace transform evaluated at points s=jw is
more to the point than the Fourier Transform (of any type).  Of course,
they are closely related.  An IIR filter will have a z transform which, if
evaluated at points z=e^jw, i.e. |z|=1, will give similar results but with
a frequency response that is periodic, as you'd expect with a discrete-time
filter.

Homework?

Fred
"robert bristow-johnson" <rbj@audioimagination.com> wrote in message
news:BE46B342.4CED%rbj@audioimagination.com...
> in article -9Odne7iy4Whp7zfRVn-1A@giganews.com, maxyp at akky_84@hotmail.com
> wrote on 02/26/2005 22:10:
>
> i presume you can use double-precision floating point variables, no?  are
> you coding this in C/C++ or similar?
>
> mathematically, you have
>
>                 b0 + b1*z^-1 + b2*z^-2
>         H(z) = ------------------------
>                 a0 + a1*z^-1 + a2*z^-2
>
<snip derivation>
>
> now consider the square of either just the numerator or denominator:
>
>   |a0 + a1*exp(-j*2*pi*f/Fs) + a2*exp(-j*4*pi*f/Fs)|^2
>
>     = |    a0 + a1*cos(2*pi*f/Fs) + a2*cos(4*pi*f/Fs)
>         - j*( a1*sin(2*pi*f/Fs) + a2*sin(4*pi*f/Fs) ) |^2
>
>     =   ( a0 + a1*cos(2*pi*f/Fs) + a2*cos(4*pi*f/Fs) )^2
>       + ( a1*sin(2*pi*f/Fs) + a2*sin(4*pi*f/Fs) )^2
>
> you can apply some more trig simplifications and square root when you're
> done.  same for the numerator.  now if your result is magnitude in dB, you
> can apply the log() to the magnitude squared of the numerator and subtract
> the log() of the magnitude squared of the denominator.
Have you cranked through all the algebra/trig and come up with a simplified formula you could post? It might be a handy formula to have in the toolbox.
in article 38hi3kF5minutU1@individual.net, Jon Harris at
goldentully@hotmail.com wrote on 02/28/2005 16:52:

> "robert bristow-johnson" <rbj@audioimagination.com> wrote in message
> news:BE46B342.4CED%rbj@audioimagination.com...
>> in article -9Odne7iy4Whp7zfRVn-1A@giganews.com, maxyp at akky_84@hotmail.com
>> wrote on 02/26/2005 22:10:
>>
<snip derivation>
>>
>> you can apply some more trig simplifications and square root when you're
>> done.  same for the numerator.  now if your result is magnitude in dB, you
>> can apply the log() to the magnitude squared of the numerator and subtract
>> the log() of the magnitude squared of the denominator.
>
> Have you cranked through all the algebra/trig and come up with a simplified
> formula you could post?  It might be a handy formula to have in the toolbox.
20*log10[|H(e^jw)|] =

   10*log10[ (b0+b1+b2)^2 - 4*(b0*b1 + 4*b0*b2 + b1*b2)*phi + 16*b0*b2*phi^2 ]

 - 10*log10[ (a0+a1+a2)^2 - 4*(a0*a1 + 4*a0*a2 + a1*a2)*phi + 16*a0*a2*phi^2 ]

where phi = sin^2(w/2).  so, with single precision floating point, you don't
have a problem when w is in the bottom few octaves.  i guarantee there are
problems for cos(w) at the bottom octaves because it gets so close to one
that the difference from one (the salient information) has so few decent
bits.

--
r b-j  rbj@audioimagination.com

"Imagination is more important than knowledge."
"robert bristow-johnson" <rbj@audioimagination.com> wrote in message
news:BE49F9AD.4DA7%rbj@audioimagination.com...
>> Have you cranked through all the algebra/trig and come up with a simplified
>> formula you could post?  It might be a handy formula to have in the toolbox.
>
> 20*log10[|H(e^jw)|] =
>
>    10*log10[ (b0+b1+b2)^2 - 4*(b0*b1 + 4*b0*b2 + b1*b2)*phi + 16*b0*b2*phi^2 ]
>
>  - 10*log10[ (a0+a1+a2)^2 - 4*(a0*a1 + 4*a0*a2 + a1*a2)*phi + 16*a0*a2*phi^2 ]
>
> where phi = sin^2(w/2).  so, with single precision floating point, you don't
> have a problem when w is in the bottom few octaves.  i guarantee there are
> problems for cos(w) at the bottom octaves because it gets so close to one
> that the difference from one (the salient information) has so few decent
> bits.
Thanks Robert--works great!
robert bristow-johnson wrote:

> ... i guarantee there are
> problems for cos(w) at the bottom octaves because it gets so close to one
> that the difference from one (the salient information) has so few decent
> bits.
In another context, I've used the identity cos(w) = 1 - 2sin^2(w/2) to
keep numerical accuracy with small w. (For very small w, that becomes
1 - w^2/2. Taylor wins again!) Will that work here?

Jerry
--
Engineering is the art of making what you want from things you can get.
in article a4OdneQJabrNS7nfRVn-vw@rcn.net, Jerry Avins at jya@ieee.org wrote
on 03/01/2005 15:53:

> robert bristow-johnson wrote:
>
>> ... i guarantee there are
>> problems for cos(w) at the bottom octaves because it gets so close to one
>> that the difference from one (the salient information) has so few decent
>> bits.
>
> In another context, I've used the identity cos(w) = 1 - 2sin^2(w/2) to
> keep numerical accuracy with small w. (For very small w, that becomes
> 1 - w^2/2. Taylor wins again!) Will that work here?
it illustrates the problem, Jerry, but does not solve it (unless you use
that identity and fanagle the whole thing to be a function of sin^2(w/2)).

you can see why cos(w) is a problem because, even for floating-point, there
is a real loss of precision when w << pi.  that 2sin^2(w/2) contains all of
the information of what w is, but those bits get clipped when you subtract
it from 1.

--
r b-j  rbj@audioimagination.com

"Imagination is more important than knowledge."
"robert bristow-johnson" <rbj@audioimagination.com> wrote in message
news:BE4A4440.4DD8%rbj@audioimagination.com...
> in article a4OdneQJabrNS7nfRVn-vw@rcn.net, Jerry Avins at jya@ieee.org wrote
> on 03/01/2005 15:53:
>
>> robert bristow-johnson wrote:
>>
>>> ... i guarantee there are
>>> problems for cos(w) at the bottom octaves because it gets so close to one
>>> that the difference from one (the salient information) has so few decent
>>> bits.
>>
>> In another context, I've used the identity cos(w) = 1 - 2sin^2(w/2) to
>> keep numerical accuracy with small w. (For very small w, that becomes
>> 1 - w^2/2. Taylor wins again!) Will that work here?
>
> it illustrates the problem, Jerry, but does not solve it (unless you use
> that identity and fanagle the whole thing to be a function of sin^2(w/2)).
>
> you can see why cos(w) is a problem because, even for floating-point, there
> is a real loss of precision when w << pi.  that 2sin^2(w/2) contains all of
> the information of what w is, but those bits get clipped when you subtract
> it from 1.
Yes, at issue is that with floating-point, the least precision you have is
with values just below 1.0.  So if you can finagle (good word) your system
to use "1.0-value" instead of "value", you gain a lot of resolution.  This
works in other cases too, even where the values have nothing to do with
sines/cosines.

Example: with IEEE 32-bit floats, the largest representable value below 1.0
is ~0.9999999404, or 1 - 5.96e-8.  However, by using the "1.0-value" trick,
the closest you can get to 1.0 is something like 1 - 1.0e-37, or if you
allow "denormals", 1 - 1.401e-45.
in article 38k5shF5n13v1U1@individual.net, Jon Harris at
goldentully@hotmail.com wrote on 03/01/2005 16:42:

> "robert bristow-johnson" <rbj@audioimagination.com> wrote in message
> news:BE4A4440.4DD8%rbj@audioimagination.com...
>> in article a4OdneQJabrNS7nfRVn-vw@rcn.net, Jerry Avins at jya@ieee.org wrote
>> on 03/01/2005 15:53:
>>
>>> robert bristow-johnson wrote:
>>>
>>>> ... i guarantee there are
>>>> problems for cos(w) at the bottom octaves because it gets so close to one
>>>> that the difference from one (the salient information) has so few decent
>>>> bits.
>>>
>>> In another context, I've used the identity cos(w) = 1 - 2sin^2(w/2) to
>>> keep numerical accuracy with small w. (For very small w, that becomes
>>> 1 - w^2/2. Taylor wins again!) Will that work here?
>>
>> it illustrates the problem, Jerry, but does not solve it (unless you use
>> that identity and fanagle the whole thing to be a function of sin^2(w/2)).
Jerry, i misread what you said. the answer is "yes, it does work here."
>> you can see why cos(w) is a problem because, even for floating-point, there
>> is a real loss of precision when w << pi.  that 2sin^2(w/2) contains all of
>> the information of what w is, but those bits get clipped when you subtract
>> it from 1.
>
> Yes, at issue is that with floating-point, the least precision you have is
> with values just below 1.0.  So if you can finagle (good word) your system
> to use "1.0-value" instead of "value", you gain a lot of resolution.  This
> works in other cases too, even where the values have nothing to do with
> sines/cosines.
>
> Example: with IEEE 32-bit floats, the largest representable value below 1.0
> is ~0.9999999404, or 1 - 5.96e-8.
what is the value of w if cos(w) is that?  it's

   2*sin^2(w/2) = 5.96e-8
or
   sin^2(w/2) = 2.98e-8
or
   sin(w/2) = 0.000172 ~= w/2
or
   w ~= 0.00035 ~= 0.0001 * pi

so there is *total* loss of precision when you are at 13 octaves below
Nyquist, and not a helluva lot of precision when you are even 10 or 8
octaves below Nyquist.  in today's world of 96 kHz or 192 kHz sampling
rates, this problem is even more pronounced.
> However, by using the "1.0-value" trick, the closest you can get to 1.0 is
> something like 1 - 1.0e-37, or if you allow "denormals", 1 - 1.401e-45.
there are some round-off issues even with this trick.  perhaps a better
representation is

20*log10[|H(e^jw)|] =

   10*log10[ (b0+b1+b2)^2 + ( b0*b2*phi - (b1*(b0+b2) + 4*b0*b2) )*phi ]

 - 10*log10[ (a0+a1+a2)^2 + ( a0*a2*phi - (a1*(a0+a2) + 4*a0*a2) )*phi ]

where phi = 4*sin^2(w/2)  (note difference)

this might be better.  you can tell where the precision issues are when
terms with phi are added to terms without phi.

--
r b-j  rbj@audioimagination.com

"Imagination is more important than knowledge."