DSPRelated.com
Forums

How can a filter impulse response be interpolated?

Started by fl August 21, 2017
robert bristow-johnson  <rbj@audioimagination.com> wrote:

>On Friday, August 25, 2017 at 3:15:00 PM UTC-4, Steve Pope wrote:
>> Yes, this is why I suggested a four-point Lagrangian as a first shot.
>another polynomial interpolation family is Hermite polynomials. the
>Lagrange polynomials only guarantee continuity to the zeroth derivative.
>Hermite gives you more continuous derivatives. a 3rd-order Hermite is
>continuous to the 0th and 1st derivative. a 5th-order Hermite is
>continuous to the 2nd derivative.
I've looked at Hermitian interpolation, but it requires the derivatives at each point as inputs. So ... I'm not sure how it applies to a typical DSP interpolation problem in which you just have a sampled signal.

Steve
On Sunday, August 27, 2017 at 8:47:59 AM UTC-4, SG wrote:
> On Saturday, August 26, 2017 at 9:03:43 PM UTC+2, robert bristow-johnson wrote:
> >
> > another polynomial interpolation family is Hermite polynomials. the
> > Lagrange polynomials only guarantee continuity to the zeroth derivative.
> > Hermite gives you more continuous derivatives. a 3rd-order Hermite is
> > continuous to the 0th and 1st derivative. a 5th-order Hermite is
> > continuous to the 2nd derivative.
>
> When it's about smooth piece-wise polynomial interpolations, I'm quite
> a fan of B-splines. There's a super efficient way of doing that in the
> case of equidistant samples (like your usual digital signal):
>
> http://bigwww.epfl.ch/publications/thevenaz9901.pdf
SG, can you help me out a bit?

in Figure 9 on page 20 of your cited pdf, there are two function plots. i understand the one on the left as a B-spline (it is a rectangular pulse convolved with itself 3 times to make a piecewise cubic function that is always non-negative).

how does the mathematical function on the right of Figure 9 (the "equivalent interpolant") come about?

r b-j
On Sunday, August 27, 2017 at 4:14:52 PM UTC-4, Steve Pope wrote:
> robert bristow-johnson <rbj@audioimagination.com> wrote:
>
> >On Friday, August 25, 2017 at 3:15:00 PM UTC-4, Steve Pope wrote:
> >> Yes, this is why I suggested a four-point Lagrangian as a first shot.
>
> >another polynomial interpolation family is Hermite polynomials. the
> >Lagrange polynomials only guarantee continuity to the zeroth derivative.
> >Hermite gives you more continuous derivatives. a 3rd-order Hermite is
> >continuous to the 0th and 1st derivative. a 5th-order Hermite is
> >continuous to the 2nd derivative.
>
> I've looked at Hermitian interpolation, but it requires the
> derivatives at each point as inputs. So ... I'm not sure how
> it applies to a typical DSP interpolation problem in which you
> just have a sampled signal.
for a 3rd-order Hermite, the derivatives at the splice points are derived from the two neighboring points (which are shared by adjacent Hermite polynomial splines, so the derivatives match at the splice). check this quarter-century old paper out and see if it's helpful: https://www.researchgate.net/publication/266675823_Performance_of_Low-Order_Polynomial_Interpolators_in_the_Presence_of_Oversampled_Input r b-j
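A minimal Python sketch of this (my own illustration, not code from the thread; estimating each endpoint slope from the two neighboring samples is the Catmull-Rom flavor of a 3rd-order Hermite):

```python
import numpy as np

def hermite3(y_m1, y0, y1, y2, t):
    """3rd-order Hermite segment between y0 and y1 (Catmull-Rom style).

    The slopes at the splice points are derived from the neighboring
    samples, so adjacent segments share slopes and the spline is
    continuous in the 1st derivative. t is the fraction in [0, 1).
    """
    m0 = 0.5 * (y1 - y_m1)           # slope at the left splice point
    m1 = 0.5 * (y2 - y0)             # slope at the right splice point
    h00 = (1 + 2*t) * (1 - t)**2     # Hermite basis functions
    h10 = t * (1 - t)**2
    h01 = t**2 * (3 - 2*t)
    h11 = t**2 * (t - 1)
    return h00*y0 + h10*m0 + h01*y1 + h11*m1

# exact on a straight line, and hits the samples at t = 0 and t = 1
ys = [1.0, 2.0, 3.0, 4.0]
print(hermite3(*ys, 0.25))   # linear data -> 2.25
```

Note that only the four surrounding samples go in, which answers the "where do the derivatives come from" question for the usual sampled-signal case.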
Am Montag, 28. August 2017 19:22:24 UTC+2 schrieb robert bristow-johnson:
> On Sunday, August 27, 2017 at 8:47:59 AM UTC-4, SG wrote:
> >
> > When it's about smooth piece-wise polynomial interpolations, I'm quite
> > a fan of B-splines. There's a super efficient way of doing that in the
> > case of equidistant samples (like your usual digital signal):
> >
> > http://bigwww.epfl.ch/publications/thevenaz9901.pdf
>
> SG, can you help me out a bit?
Sure!
> in Figure 9 on page 20 of your cited pdf, there are two function plots. i understand the one on the left as a B-spline (it is a rectangular pulse convolved with itself 3 times to make a piecewise cubic function that is always non-negative).
Correct.
> how does the mathematical function on the right of Figure 9, (the "equivalent interpolant") come about?
Cubic B-splines belong to the 2nd category of interpolation: "generalized interpolation". Instead of the actual values f_k of the function you want to interpolate you have a bunch of weighting coefficients c_k in equation (4) for the base functions phi(x-k), which in this case are the cubic B-splines. What's important is that

  phi(-3) = 0
  phi(-2) = 0
  phi(-1) = 1/6
  phi( 0) = 4/6
  phi( 1) = 1/6
  phi( 2) = 0
  phi( 3) = 0

So, phi itself is not an "interpolant". "Interpolants" evaluate to one at x=0 and to zero at any other non-zero integer. This is why we can't use the actual function values f_k as weights for interpolation. So, f_k != c_k. But the f_ks and c_ks are connected via a convolution:

  f_k = 1/6 * c_{k-1} + 4/6 * c_k + 1/6 * c_{k+1}

This is a linear equation system you can solve for the weighting coefficients c_k. The equation system has a special structure we can exploit. We basically need to invert the convolution with the kernel [1 4 1]/6. This can be done via bidirectional IIR filtering (see Matlab's filtfilt routine). The kernel just has to be factored into its minimum-phase and maximum-phase components, which in this case looks like this:

  a = [1 2-sqrt(3)]; b = sum(a);
  c = filtfilt(b,a,f);

If you do all this to interpolate the values [... 0 0 0 1 0 0 0 ...] you'll get the "equivalent interpolant", which has infinite support due to the bidirectional IIR filter. In this special case you can plot the "equivalent interpolant" in Matlab like this:

  x = -20:20;  % this radius ought to be enough
  y = double(x==0);
  pp = spline(x,y);
  xx = -10:1/32:10;
  plot(xx,ppval(pp,xx));

(because spline does cubic spline interpolation and cubic B-splines are just a compact and convenient way to represent such a thing.)

One nice property is that the resulting curve is twice continuously differentiable. Splines are the "most smooth" possible way to interpolate something using piece-wise polynomial functions.
Another interesting property is that with increasing order of the B-splines this equivalent interpolant converges to a sinc. :-) Cheers! sg
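SG's prefiltering step can be sketched in Python as follows (my own translation, not code from the thread; scipy.signal.filtfilt stands in for Matlab's filtfilt, and the random test signal is only an illustration). The check at the end confirms that the computed weights, convolved with [1 4 1]/6, reproduce the original samples away from the block edges:

```python
import numpy as np
from scipy.signal import filtfilt

rng = np.random.default_rng(0)
f = rng.standard_normal(200)               # the sample values f_k

# minimum-phase factor of the kernel [1 4 1]/6, normalized to unity DC gain
a = np.array([1.0, 2.0 - np.sqrt(3.0)])
b = a.sum()
c = filtfilt([b], a, f)                    # B-spline weights c_k

# check: the weights convolved with [1 4 1]/6 give back the samples
f_rec = np.convolve(c, [1/6, 4/6, 1/6], mode='same')
err = np.max(np.abs(f_rec[10:-10] - f[10:-10]))
```

The residual `err` is tiny in the interior; only near the block ends does the bidirectional IIR's edge handling leave a small transient, which is exactly the block-boundary issue discussed later in the thread.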
On Monday, August 28, 2017 at 9:41:38 PM UTC-4, SG wrote:
> On Monday, August 28, 2017 at 7:22:24 PM UTC+2, robert bristow-johnson wrote:
> > On Sunday, August 27, 2017 at 8:47:59 AM UTC-4, SG wrote:
> > >
> > > When it's about smooth piece-wise polynomial interpolations, I'm quite
> > > a fan of B-splines. There's a super efficient way of doing that in the
> > > case of equidistant samples (like your usual digital signal):
> > >
> > > http://bigwww.epfl.ch/publications/thevenaz9901.pdf
> >
> > SG, can you help me out a bit?
> >
> > in Figure 9 on page 20 of your cited pdf, there are two function plots.
> > i understand the one on the left as a B-spline (it is a rectangular
> > pulse convolved with itself 3 times to make a piecewise cubic function
> > that is always non-negative).
>
> Correct.
>
> > how does the mathematical function on the right of Figure 9 (the
> > "equivalent interpolant") come about?
>
> Cubic B-splines belong to the 2nd category of interpolation:
> "generalized interpolation". Instead of the actual values f_k of the
> function you want to interpolate you have a bunch of weighting
> coefficients c_k in equation (4) for the base functions phi(x-k)
> which in this case are the cubic B-splines. What's important is that
>
> phi(-3) = 0
> phi(-2) = 0
> phi(-1) = 1/6
> phi( 0) = 4/6
> phi( 1) = 1/6
> phi( 2) = 0
> phi( 3) = 0
so that's a LPF that looks like what you get when you convolve a rectangular pulse against itself 3 times.
> So, phi itself is not an "interpolant". "Interpolants" evaluate to > one at x=0 and to zero at any other non-zero integer.
as do Lagrange polynomials and Hermite polynomials.
> This is why
> we can't use the actual function values f_k as weights for
> interpolation. So, f_k != c_k.
>
> But the f_ks and c_ks are connected via a convolution:
>
> f_k = 1/6 * c_{k-1} + 4/6 * c_k + 1/6 * c_{k+1}
>
> This is a linear equation system you can solve for the weighting
> coefficients c_k. The equation system has a special structure we
> can exploit. We basically need to invert the convolution with the
> kernel [1 4 1]/6. This can be done via bidirectional IIR filtering
> (see Matlab's filtfilt routine). The kernel just has to be factored
> into its minimum-phase and maximum-phase components, which in this
> case looks like this:
>
> a = [1 2-sqrt(3)]; b = sum(a);
> c = filtfilt(b,a,f);
>
> If you do all this to interpolate the values [... 0 0 0 1 0 0 0 ...]
> you'll get the "equivalent interpolant" which has infinite support
> due to the bidirectional IIR filter.
well, you normally don't do filtfilt() on real-time data because it's always FIFO. why not just define this as a polynomial and express the polynomial explicitly?
> In this special case you can plot the "equivalent interpolant" in
> Matlab like this:
>
> x = -20:20; % this radius ought to be enough
> y = double(x==0);
> pp = spline(x,y);
> xx = -10:1/32:10;
> plot(xx,ppval(pp,xx));
>
> (because spline does cubic spline interpolation and cubic B-splines
> are just a compact and convenient way to represent such a thing.)
>
> One nice property is that the resulting curve is twice continuously
> differentiable.
how is that different from a 3rd-order Hermite?
> Splines are the "most smooth" possible way to
> interpolate something using piece-wise polynomial functions.
> Another interesting property is that with increasing order of the
> B-splines this equivalent interpolant converges to a sinc.
how is that different from a Hermite polynomial?

i understand what the B-spline "function" (not the "interpolant") is good for. it is explained in the 20 year old paper that Duane Wise and i wrote for an AES convention. it's particularly good for content that has most of its energy at low frequencies. that means, in the ideal sampled data, most of the energy in the images is around the integer multiples of the sampling frequency. each rect() function you convolve with is a sinc() function in the frequency domain with the zeros happening at integer multiples of the sample rate. so if you convolve the rect() with itself N times, you get sinc^(N+1) in the frequency domain, and the width of those notches gets wider and wider (with increasing N), thus eliminating those images more and more. (the energy of those images folds back into the baseband when the interpolated function is resampled.)

so because the B-spline functions LPF the baseband worse than the other interpolation kernels, then HPF compensation is used in the digital domain before interpolating with B-spline functions. is that right? if the compensating HPF is an IIR or a long FIR, that makes the net interpolation kernel very long also, even if the B-spline was short (like 4 sample periods in width).

is that it? prefilter the discrete-time data and then apply the piecewise polynomial that is a rect() convolved against itself N times to get an Nth order interpolation kernel?

r b-j
Am Dienstag, 29. August 2017 08:10:11 UTC+2 schrieb r b-j:
>
> well, you normally don't do filtfilt() on real-time data because
> it's always FIFO.
True, although the IIR filter's impulse response decays pretty quickly, so one does not need a large look-ahead or look-back buffer.
> why not just define this as a polynomial and express the
> polynomial explicitly?
You mean why not specify the equivalent interpolant of a cubic B-spline directly? Well, its length is infinite, like the sinc. I'd say doing the pre-filtering followed by convolution with the cubic B-spline is computationally rather cheap for what it offers in terms of quality.
> how is that different from a 3rd-order Hermite?
If "3rd-order Hermite" means what I think it means, then it suffers from jump discontinuities in its 2nd-order derivative while cubic B-spline interpolation does not (I'll refer to that one as "simple Hermite" from here on).

Anyhow, I would look into how the approaches compare with respect to the following two properties:

(1) speed (e.g. number of FLOPs necessary for a specific use-case)
(2) their respective interpolant's Fourier transform (like you did in your paper "Performance...Oversampled Input").

In the cubic B-spline case the Fourier transform of the interpolant can be plotted like this:

  f = 0:0.01:16;
  h = freqz([0 6 0],[1 4 1],f,2) .* sinc(f*0.5).^4;
  plot(f,20*log10(abs(h)));
  axis([0 16 -120 0]);

If I compare this curve with your figures 9 and 12 from your paper, this B-spline thing here does a better job of rejecting the image frequencies (stronger suppression of the side lobes). This "2nd-order osculating interpolation" doesn't give you jump discontinuities either, but its side lobes are still higher than those of the cubic B-spline case from the above plot.

IMHO it's also interesting to check out the pass band (0<=2f/fs<=1) with a linear Y axis to see how much "detail" the interpolant is preserving. This is not easy to see in your graphs, unfortunately. If you still have the code to plot your curves, you could add this B-spline thingy into the mix for comparison. I'd be interested in the results, including linear pass-band plots.

As for computational complexity, cubic B-spline interpolation is basically the cost of the prefiltering step (5 FLOPs per sample) plus the B-spline evaluation, which is as cheap as in the "simple Hermite" case.
> > Splines are the "most smooth" possible way to
> > interpolate something using piece-wise polynomial functions.
> > Another interesting property is that with increasing order of the
> > B-splines this equivalent interpolant converges to a sinc.
>
> how is that different from a Hermite polynomial?
I'm not familiar with that family except for the simplest case, where you end up with a piece-wise cubic polynomial function with jump discontinuities in its 2nd-order derivative. So, I can't answer this one right now.
> so because the B-spline functions LPF the baseband worse than the
> other interpolation kernels, then HPF compensation is used in the
> digital domain before interpolating with B-spline functions. is
> that right? if the compensating HPF is an IIR or a long FIR, that
> makes the net interpolation kernel very long also, even if the
> B-spline was short (like 4 sample periods in width).
>
> is that it? prefilter the discrete-time data and then apply the
> piecewise polynomial that is a rect() convolved against itself N
> times to get an Nth order interpolation kernel?
Yup. Pretty much.

Now, B-spline interpolation is, of course, not limited to equidistant points on the time or space axis. You might have to solve very different equation systems to get the B-spline weights. But in this simple case it basically boils down to some kind of "pre filter" which amplifies higher frequencies to compensate for the intrinsic lowpass effect of order-N B-splines for N>1. And if you convolve this pre-filter's impulse response with the B-spline you get the "equivalent interpolant". This filtering step can be done efficiently in-place and with O(1) additional memory.

Cheers!
sg
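The whole recipe (prefilter for the weights, then sum shifted B-splines at the desired positions) can be sketched in Python like this — my own illustration, not code from the thread, with the cubic B-spline written out directly. The check confirms the curve really does pass through the original samples:

```python
import numpy as np
from scipy.signal import filtfilt

def bspline3(x):
    """Centered cubic B-spline: a rect convolved with itself 3 times."""
    ax = np.abs(np.asarray(x, dtype=float))
    return np.where(ax < 1.0, 2/3 - ax**2 + ax**3 / 2,
                    np.where(ax < 2.0, (2.0 - ax)**3 / 6.0, 0.0))

def spline_interp(f, xq):
    """Prefilter for the weights c_k, then sum shifted B-splines at xq."""
    a = np.array([1.0, 2.0 - np.sqrt(3.0)])   # min-phase factor of [1 4 1]/6
    c = filtfilt([a.sum()], a, f)             # B-spline weights c_k
    k = np.arange(len(f))
    return np.array([np.dot(c, bspline3(x - k)) for x in np.atleast_1d(xq)])

f = np.sin(0.3 * np.arange(40))
mid = np.arange(8, 32)                        # stay away from block edges
err = np.max(np.abs(spline_interp(f, mid) - f[mid]))
```

Evaluating at a fractional `xq` gives the interpolated value; at the integers the sum reduces to the [1 4 1]/6 convolution of the weights, which is why the interpolation property holds.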
On Friday, August 25, 2017 at 1:55:36 PM UTC-4, Eric Jacobsen wrote:
> ...
> This technique often works, but not always. The interpolated impulse
> response should be checked for frequency response. If the response
> is not what is desired but close, then hand-tweaking the results may
> be sufficient.
>
> But apparently I'm a bit of a weirdo for hand-tweaking coefficients,
> so YMMV.
it's what makes this an art. r b-j
On Tuesday, August 29, 2017 at 7:25:18 AM UTC-4, SG wrote:
>
> If "3rd-order Hermite" means what I think it means then, it suffers
> from jump discontinuities in its 2nd order derivative while cubic
> B-spline interpolation does not (I'll refer to that one as "simple
> Hermite" from here on).
>
> Anyhow, I would look into how the approaches compare with respect
> to the following two properties:
>
> (1) speed (e.g. number of FLOPs necessary for a specific
> use-case)
if speed is the issue (and memory is cheap), then basic polyphase upsampling (with an upsample factor of at least 256 or 512) followed by linear interpolation (which is a 1st-order B-spline in addition to 1st-order Hermite or 1st-order Lagrange) does just fine. for a 32-tap FIR, that is 8K coefficients, 64 MACs (two adjacent points), and one linear interpolation per sample. if it's a 16-tap FIR, it's 4K coefficients, 32 MACs, and one linear interpolation. the number of coefficients is halved further if 256 polyphases are used instead of 512.

in my opinion, filtfilt() will not help you much with speed in a real-time situation. you must make your blocks long enough to make the IIR filter die out sufficiently before flipping it around and filtering backwards. (theoretically, you should use truncated IIR, which is an implementation of FIR, and not just an ordinary IIR.) doing all that messing around with filtfilt() and blocks is not worth it for the purpose of interpolation.
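The table-plus-linear-interpolation scheme can be sketched in Python (a hedged illustration of mine: scipy.signal.resample_poly stands in for the polyphase upsampler, and the oversampling factor and test signal are arbitrary choices):

```python
import numpy as np
from scipy.signal import resample_poly

L = 256                                    # number of phases / upsample factor
x = np.sin(2 * np.pi * 0.01 * np.arange(1000))
xu = resample_poly(x, L, 1)                # polyphase upsampling by L

def frac_read(xu, t, L=256):
    """Read the oversampled signal at fractional input-sample time t
    by linear interpolation between the two nearest oversampled points."""
    q = t * L
    i = int(q)
    g = q - i
    return (1.0 - g) * xu[i] + g * xu[i + 1]

# reading at an integer time approximately recovers the original sample
dev = abs(frac_read(xu, 500.0) - x[500])
```

In a real implementation one would store only the polyphase coefficient table and compute each output on the fly instead of materializing `xu`, but the arithmetic per output sample is the same: one short FIR per neighboring phase plus one linear interpolation.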
> (2) their respective interpolant's Fourier transform (like you
> did in your paper "Performance...Oversampled Input").
what you want to do is put in deep and wide notches at integer multiples of the sample rate (except for the baseband image at 0 times Fs). for an interpolation polynomial of Nth order, there will be a sinc^(N+1) term, and with the B-spline, there will *only* be the sinc^(N+1) term, which makes the deepest and widest notches. with any other Nth-order polynomial, there will be other sinc() terms of less order that will add to your nice sinc^(N+1) term and make it a poorer notch.
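A quick numerical check of that notch claim (my own illustration in Python, with the sample rate normalized to 2 as in the Matlab snippets elsewhere in the thread, so the first image sits at f = 2):

```python
import numpy as np

# frequencies just off the first image at f = 2 (the sample rate in these units)
f = np.array([1.9, 2.1])

bspline_mag = np.sinc(f * 0.5)**4   # order-3 B-spline kernel: pure sinc^4
linear_mag  = np.sinc(f * 0.5)**2   # linear interpolation:    pure sinc^2

ratio = linear_mag / bspline_mag    # how much deeper the sinc^4 notch is
```

Near the image the sinc^4 kernel suppresses the residual by a couple of extra orders of magnitude relative to sinc^2, which is the "deepest and widest notches" property in numbers.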
> In the cubic B-spline case the Fourier transform of the interpolant
> can be plotted like this:
>
> f = 0:0.01:16;
> h = freqz([0 6 0],[1 4 1],f,2) .* sinc(f*0.5).^4;
the sinc(f*0.5).^4 is the spectrum of the piecewise-cubic which is the rect() convolved with itself 3 times. i get that. the freqz([0 6 0],[1 4 1],f,2) is just a simple 2nd-order IIR filter which, i presume, compensates a little bit for the drop in gain in the baseband that 4 sinc() functions will do to you. so it *is* as i said, interpolating with a piecewise cubic function that has a lot of low-pass filtering going on and trying to compensate that a little with an HPF in the baseband. but that is *not* your filtfilt() is it?
> plot(f,20*log10(abs(h)));
> axis([0 16 -120 0]);
>
> If I compare this curve with your figures 9 and 12 from your paper,
> this B-spline thing here does a better job of rejecting the image
> frequencies (stronger suppression of the side lobes).
we know that. we did the B-spline (function, not the "interpolant") too. the notches for B-spline are better than anyone else's of the same order, and the reason is that the B-spline function is only the convolution of the rect() against itself N times for an Nth-order spline. (but the low-pass filtering in the baseband is severe, and i think we even mentioned HPF filtering to compensate, but i did not consider all of that to be comparable in cost to other polynomial interpolators like Lagrange or Hermite. and we were sorta trying to compare apples to apples.)
> IMHO it's also interesting to check out the pass band (0<=2f/fs<=1)
> with a linear Y axis to see how much "detail" the interpolant is
> preserving. This is not easy to see in your graphs, unfortunately.
> If you still have the code to plot your curves, you could add this
> B-spline thingy into the mix for comparison. I'd be interested in
> the results including linear pass-band plots.
hey, maybe you and me and/or Duane can do an updated paper. but Olli Niemitalo has already done a more detailed paper also about two decades ago. i call it his "Pink Elephant paper". have you seen that?
> As for computational complexity, cubic B-spline interpolation is
> basically the cost of the prefiltering step (5 FLOPs per sample)
you're not doing filtfilt() with 5 FLOPs per sample.
> plus the B-spline evaluation which is as cheap as the "simple
> Hermite" case.
>
> > > Splines are the "most smooth" possible way to
> > > interpolate something using piece-wise polynomial functions.
> > > Another interesting property is that with increasing order of
> > > the B-splines this equivalent interpolant converges to a sinc.
> >
> > how is that different from a Hermite polynomial?
>
> I'm not familiar with that family except for the simplest case where
> you end up with a piece-wise cubic polynomial function with jump
> discontinuities in its 2nd order derivative. So, I can't answer this
> one right now.
for a given number of points and a given order (the number of input points is one more than the order), Hermite will only match the value and as many derivatives as possible at the two splice points on the left and right. Lagrange wants his polynomial to go through more of the input points on the left and right, but Hermite says "who cares". what is more important is that the spliced interpolated functions are smooth. (but what's more salient than the Hermite concern is that the images in the frequency domain are killed.)
> > so because the B-spline functions LPF the baseband worse than the
> > other interpolation kernels, then HPF compensation is used in the
> > digital domain before interpolating with B-spline functions. is
> > that right? if the compensating HPF is an IIR or a long FIR, that
> > makes the net interpolation kernel very long also, even if the
> > B-spline was short (like 4 sample periods in width).
> >
> > is that it? prefilter the discrete-time data and then apply the
> > piecewise polynomial that is a rect() convolved against itself N
> > times to get an Nth order interpolation kernel?
>
> Yup. Pretty much.
that's what i thought.
> Now, B-spline interpolation is, of course, not
> limited to equidistant points on the time or space axis.
for audio resampling, i am not worried about the non-equidistant case. i just want a clean and cheap method to do sample rate conversion (that's when we want to kill the images) or precision delay (that's when we want an unchanging magnitude response w.r.t. fractional delay and we want the negative of phase to advance linearly with increasing delay).

if i want to be anal-retentive about doing this interpolation and i have some memory for a coefficient table, the best solution is an optimally designed polyphase resampling table for equidistant fractional delays and linearly interpolating between that. but sometimes i have an application where the client will not grant me a 4K table of coefficients (and has plenty of MIPS to burn), and then a *pure* polynomial interpolation is best. and B-spline with a little bit of HPF compensation with the input data is probably the best. but that HPF compensation is not cheaply nor simply done with filtfilt() and sample blocking in a real-time case.
> You might
> have to solve very different equation systems to get the B-spline
> weights. But in this simple case it basically boils down to some
> kind of "pre filter" which amplifies higher frequencies to compensate
> for the intrinsic lowpass effect of order-N B-splines for N>1.
there's even some lowpass effect for N=1 or N=0. those are linear interpolation and "drop-sample" interpolation.
> And if you convolve this pre-filter's impulse response with the B-spline
> you get the "equivalent interpolant". This filtering step can be done
> efficiently in-place and with O(1) additional memory.
it should be (for efficiency) an IIR HPF (or high-shelf) and no filtfilt() because the blocking and reversing is a pain in the ass. i would think a 4th-order optimally designed IIR HPF would be good enough for a 3rd-order or 5th-order B-spline "function" interpolating. not sure how to optimally design that IIR HPF. i think a judgement call would be made on that.

and without doing filtfilt(), your prefiltering would not be phase linear, but it might be decently phase linear for the low frequencies. the higher frequencies in the baseband might be messed up a little regarding phase (and also amplitude, as the HPF deviates from exactly what you need to compensate the sinc^(N+1)).

send me an email if you wanna discuss any of this offline.

bestest,

r b-j
On Thursday, August 31, 2017 at 6:00:06 PM UTC+2, robert bristow-johnson wrote:
> On Tuesday, August 29, 2017 at 7:25:18 AM UTC-4, SG wrote:
> > [...]
> > In the cubic B-spline case the Fourier transform of the interpolant
> > can be plotted like this:
> >
> > f = 0:0.01:16;
> > h = freqz([0 6 0],[1 4 1],f,2) .* sinc(f*0.5).^4;
> [...]
> so it *is* as i said, interpolating with a piecewise cubic function
> that has a lot of low-pass filtering going on and trying to
> compensate that a little with an HPF in the baseband. but that is
> *not* your filtfilt() is it?
But it is. freqz([0 6 0],[1 4 1]) describes exactly what the filtfilt step from before did. freqz just doesn't care about causality. :)
> [...]
> but sometimes i have an application where the client will not grant
> me a 4K table of coefficients (and has plenty of MIPS to burn), and
> then a *pure* polynomial interpolation is best. and B-spline with a
> little bit of HPF compensation with the input data is probably the
> best. but that HPF compensation is not cheaply nor simply done with
> filtfilt() and sample blocking in a real-time case.
> [...]
> it should be (for efficiency) an IIR HPF (or high-shelf) and no
> filtfilt() because the blocking and reversing is a pain in the ass.
> i would think a 4th-order optimally designed IIR HPF would be good
> enough for a 3rd-order or 5th-order B-spline "function"
> interpolating. not sure how to optimally design that IIR HPF. i
> think a judgement call would be made on that. and without doing
> filtfilt(), your prefiltering would not be phase linear, but it
> might be decently phase linear for the low frequencies. the higher
> frequencies in the baseband might be messed up a little regarding
> phase (and also amplitude as the HPF deviates from exactly what you
> need to compensate the sinc^(N+1)).
This was a thread about interpolating a bunch of FIR coefficients. In this setting filtfilt would work. For low-latency real-time applications one could use the causal version of this filter at the cost of a nonlinear phase response:

                  1.6077
   H(z) = -----------------------------
          1 + 0.5359 z^-1 + 0.0718 z^-2

Another alternative would be to use a low-order FIR filter instead of the above IIR filter. For example, the following 3-tap filter

   [-1 8 -1] ./ 6

compensates for the negative curvature of sinc(f*0.5)^4 at f=0 when applied in the baseband. This results in a "more flat" response, at least for the lower frequencies.
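As a sanity check on that causal IIR (a Python sketch of mine using scipy.signal.freqz; the tolerance reflects the four-decimal rounding of the posted coefficients): its magnitude response cancels the [1 4 1]/6 smoothing kernel exactly, so only the phase differs from the filtfilt version.

```python
import numpy as np
from scipy.signal import freqz

# the causal compensation filter, coefficients as posted (rounded)
b, a = [1.6077], [1.0, 0.5359, 0.0718]

w = np.linspace(0.0, np.pi, 64)
_, h = freqz(b, a, worN=w)

# magnitude of the [1 4 1]/6 smoothing kernel on the unit circle
kernel_mag = (4.0 + 2.0 * np.cos(w)) / 6.0

# the product of the two magnitudes should be 1 at every frequency
max_dev = np.max(np.abs(np.abs(h) * kernel_mag - 1.0))
```

The denominator is (1 + (2-sqrt(3)) z^-1)^2, i.e. the minimum-phase factor of the kernel squared, which is why the magnitudes cancel while the phase becomes nonlinear.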
> > IMHO it's also interesting to check out the passband
> > with a linear Y axis to see how much "detail" the interpolant is
> > preserving. This is not easy to see in your graphs, unfortunately.
> > If you still have the code to plot your curves, you could add this
> > B-spline thingy into the mix for comparison. I'd be interested in
> > the results including linear pass-band plots.
>
> hey, maybe you and me and/or Duane can do an updated paper. but
> Olli Niemitalo has already done a more detailed paper also about two
> decades ago. i call it his "Pink Elephant paper". have you seen
> that?
I wasn't aware, but thanks for pointing it out! Seems very interesting! He even covers "pre-emphasis" in its own chapter (8), which is pretty much the same as what I was doing with filtfilt or the above FIR. The filtfilt thing basically falls out of the problem of making the B-spline curve go exactly through your data points. And I can't take any credit for it. Thévenaz et al brought this to my attention with their image interpolation paper.
> > I'm not familiar with that family except for the simplest case
> > where you end up with a piece-wise cubic polynomial function with
> > jump discontinuities in its 2nd order derivative. So, I can't
> > answer this one right now.
>
> for a given number of points and a given order (the number of input
> points is one more than the order), Hermite will only match the
> value and as many derivatives as possible at the two splice points
> on the left and right. [...]
Thanks for clearing that up! Cheers! sg
Hi.  If your original FIR coefficients are B, then try the following in MATLAB:

   B1 = resample(B,5,4);

and see how well the B1 coefficients work for you. The B1 coeffs will have a gain greater than unity. So, if you wish, you can compute unity-gain coeffs using:

   B2 = B1/sum(B1);
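The same two steps might look like this in Python (my own translation: scipy.signal.resample_poly stands in for MATLAB's resample, and the firwin lowpass is a hypothetical stand-in for your actual coefficients B):

```python
import numpy as np
from scipy.signal import firwin, resample_poly

B = firwin(33, 0.4)                 # hypothetical original FIR, unity DC gain
B1 = resample_poly(B, 5, 4)         # interpolate the impulse response by 5/4
B2 = B1 / np.sum(B1)                # rescale back to unity gain at DC
```

As the post notes, the resampled coefficients come out with a DC gain above unity (roughly the 5/4 length change), which the final normalization removes.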