Optimizing FIR filters as lower order IIR's

Started by ●November 28, 2008

I am currently tasked with optimizing FIR filters as lower order IIR filters so that a micro-controller can perform the necessary DSP. This is performed in the time domain rather than the frequency domain, as the computation time for the number of FFTs required would be too long, preventing us from doing the other required processing.

I am aware of certain optimization techniques to perform the task, such as treating the optimization as a conic or quadratic programming problem, and constrained and unconstrained reduction techniques based on the BMT method. I am not a DSP expert, and so the maths in these techniques is difficult to understand and apply to my problem. Therefore, is anyone aware of any software or subset of software that can perform this task?

Many thanks in advance for any help you may give.
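As a rough illustration of why this reduction matters on a micro-controller, here is a sketch of per-sample multiply counts for a direct-form FIR versus an IIR realized as cascaded biquads. The helper names and the 128-tap / two-biquad figures are illustrative assumptions, not numbers from the thread:

```python
def fir_mults_per_sample(num_taps):
    """Direct-form FIR: one multiply per tap for each output sample."""
    return num_taps

def iir_mults_per_sample(num_biquads):
    """Cascade of second-order sections: five multiplies per biquad
    (b0, b1, b2, a1, a2)."""
    return 5 * num_biquads

# Example: a 128-tap FIR versus a 4th-order IIR (two biquads).
print(fir_mults_per_sample(128))  # 128 multiplies per sample
print(iir_mults_per_sample(2))    # 10 multiplies per sample
```

If the IIR approximation is acceptable, the per-sample work drops by an order of magnitude, which is the whole motivation for the question.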
Reply by ●November 28, 2008
On 28 Nov, 16:40, "luke999" <luke_hollingwo...@hotmail.com> wrote:

> I am currently tasked with optimizing FIR filters as lower order IIR
> filters so that a micro-controller can perform the necessary DSP. This is
> performed in the time domain rather than the frequency domain as the
> computation time for the number of FFT's required would be too long,
> preventing us from doing the other required processing.

I'm a bit confused. If you want to optimize the filters 'on the fly' as you process the data, that's what is known as an 'adaptive filter', and there is a vast literature on those. The term 'optimize a filter' is used for the task of designing a fixed-coefficient filter, which will not change during operation. Since the optimization of filter coefficients is part of the system design, there are no constraints on computing power; you can use your desktop computer to find the coefficients. So you might want to clarify which of these contexts you are working in.

> I am aware of certain optimization techniques to perform the task such as
> treating the optimization as a Conic or Quadratic programming problem and
> constrained and unconstrained reduction techniques based on the BMT
> method.
>
> I am not a DSP expert and so the maths in these techniques is difficult to
> understand and apply to my problem. Therefore is anyone aware of any
> software or subset of software that can perform this task.

The 'easy-to-get' software for filter design would be Matlab with the Filter Design Toolbox, which would set you back some $2000-$2500. There might be Matlab clones out there that have similar functionality, but I have no personal experience with those. If you are into adaptive filters you might have no choice but to deal with the problem from scratch. Or hire somebody who can do that for you.

Rune
Reply by ●November 28, 2008
luke wrote:

> I am currently tasked with optimizing FIR filters as lower order IIR
> filters so that a micro-controller can perform the necessary DSP.

I guess you mean that you want to approximate (not optimize) FIR filters with IIR filters. I further assume that by IIR filter you really mean filters with a rational transfer function (delayed sinc filters are also IIR, but cannot be expressed as a rational transfer function of finite order). This is always possible (consider that purely transversal filters are a subset of filters with rational transfer functions). However, whether you can actually reduce the number of coefficients depends on how good you want the approximation to be. This is essentially a filter design problem.

For the solution I would suggest FDLS (search the archives) modified to minimize the l_infinity error (as opposed to the l_2 error). The only difference lies in how you solve the linear regression equation. For l_2 error minimization, you solve the normal equations. For l_infinity minimization, you rewrite the regression equations into a linear programming problem. Check out any book on robust regression on how to do this.

If you want to search around a bit, try "order reduction" and "iir filter design".

Regards,
Andor
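The l_2 version of this idea can be sketched as an equation-error least-squares fit of a rational B(z)/A(z) to a sampled target response. Note this is the standard equation-error linearization, not Berchin's FDLS verbatim; numpy/scipy and the function name `fit_iir_ls` are my assumptions:

```python
import numpy as np
from scipy.signal import freqz

def fit_iir_ls(h_target, w, nb, na):
    """Fit an IIR filter B(z)/A(z) (numerator order nb, denominator
    order na, a[0] fixed at 1) to a target frequency response h_target
    sampled at frequencies w (radians/sample), by minimizing the
    equation error || H(w)*A(w) - B(w) ||_2 with ordinary least squares."""
    # z[:, k] = e^{-j w k}, the basis for both polynomials.
    z = np.exp(-1j * np.outer(w, np.arange(max(nb, na) + 1)))
    B_cols = z[:, :nb + 1]                        # multiply b[0..nb]
    A_cols = -h_target[:, None] * z[:, 1:na + 1]  # multiply a[1..na]
    M = np.hstack([B_cols, A_cols])
    # Stack real and imaginary parts so lstsq works over the reals.
    Mr = np.vstack([M.real, M.imag])
    rr = np.concatenate([h_target.real, h_target.imag])
    x, *_ = np.linalg.lstsq(Mr, rr, rcond=None)
    return x[:nb + 1], np.concatenate([[1.0], x[nb + 1:]])

# Sanity check: recover a known first-order IIR from its own response.
w = np.linspace(0.01, np.pi - 0.01, 64)
_, h = freqz([0.2, 0.3], [1.0, -0.5], worN=w)
b, a = fit_iir_ls(h, w, 1, 1)  # recovers b ~ [0.2, 0.3], a ~ [1.0, -0.5]
```

For the l_infinity variant Andor describes, the same regression matrix would feed a linear program instead of `lstsq`.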
Reply by ●November 28, 2008
Andor <andor.bariska@gmail.com> writes:

> [...]
> For the solution I would suggest FDLS (search the archives) modified
> to minimize the l_infinity error (as opposed to the l_2 error). The
> only difference lies in how you solve the linear regression equation.
> For l_2 error minimization, you solve the normal equations. For
> l_infinity minimization, you rewrite the regression equations into a
> linear programming problem. Check out any book on robust regression on
> how to do this.

Hi Andor,

Would you please cite a reference or two that presents the theory of l_infinity and l_2 minimization? My understanding is weak to non-existent in this area and I'd like to bone up.

-- 
%  Randy Yates           % "...the answer lies within your soul
%% Fuquay-Varina, NC     %  'cause no one knows which side
%%% 919-577-9882         %  the coin will fall."
%%%% <yates@ieee.org>    % 'Big Wheels', *Out of the Blue*, ELO
http://www.digitalsignallabs.com
Reply by ●November 28, 2008
On 28 Nov., 17:47, Randy Yates <ya...@ieee.org> wrote:

> Andor <andor.bari...@gmail.com> writes:
>> For the solution I would suggest FDLS (search the archives) modified
>> to minimize the l_infinity error (as opposed to the l_2 error). The
>> only difference lies in how you solve the linear regression equation.
>> For l_2 error minimization, you solve the normal equations. For
>> l_infinity minimization, you rewrite the regression equations into a
>> linear programming problem. Check out any book on robust regression on
>> how to do this.
>
> Hi Andor,
>
> Would you please cite a reference or two that presents the theory of
> l_infinity and l_2 minimization? My understanding is weak to
> non-existent in this area and I'd like to bone up.

Hi Randy

A nice article available online is [1]. It discusses numerical aspects of l_1 and l_2 minimization (for small n, l_1 is actually faster than l_2!). l_1 minimization is solved by reformulating the regression equation as a linear programming problem. Finding the l_infinity solution is done in exactly the same manner, except that the linear programming reformulation is slightly different. I expanded on this here some time ago:

http://groups.google.ch/group/comp.dsp/browse_frm/thread/d8aaa0438e620cf2/151a77cd29dc12c9?#151a77cd29dc12c9

Regards,
Andor

[1] S. Portnoy, R. Koenker, "The Gaussian hare and the Laplacian tortoise: computability of squared-error versus absolute-error estimators", Statistical Science, Vol. 12, No. 4, pp. 279-300, 1997.

Online: http://projecteuclid.org/DPubS?verb=Display&version=1.0&service=UI&handle=euclid.ss/1030037960&page=record
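The linear-programming reformulation for the Chebyshev (l_infinity) case can be sketched as follows: minimize t subject to -t <= A x - y <= t. This assumes scipy's `linprog`; the helper name `linf_regression` is mine:

```python
import numpy as np
from scipy.optimize import linprog

def linf_regression(A, y):
    """Minimize || A x - y ||_inf by linear programming.

    Decision variables are [x (n entries), t (1 entry)]; we minimize t
    subject to  A x - y <= t  and  -(A x - y) <= t."""
    m, n = A.shape
    c = np.zeros(n + 1)
    c[-1] = 1.0  # objective: minimize t
    ones = np.ones((m, 1))
    A_ub = np.vstack([np.hstack([A, -ones]),    #  A x - t <= y
                      np.hstack([-A, -ones])])  # -A x - t <= -y
    b_ub = np.concatenate([y, -y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n + [(0, None)])
    return res.x[:n], res.x[-1]

# Fit a constant to {0, 1, 2}: the minimax solution is x = 1, error t = 1.
x, t = linf_regression(np.ones((3, 1)), np.array([0.0, 1.0, 2.0]))
```

For the filter problem, A and y would be the (real-stacked) regression matrix and target from the FDLS setup; x then holds the filter coefficients and t the worst-case fit error.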
Reply by ●November 28, 2008
On Fri, 28 Nov 2008 08:32:12 -0800 (PST), Andor <andor.bariska@gmail.com> wrote:

> For the solution I would suggest FDLS (search the archives) modified
> to minimize the l_infinity error (as opposed to the l_2 error). The
> only difference lies in how you solve the linear regression equation.
> For l_2 error minimization, you solve the normal equations. For
> l_infinity minimization, you rewrite the regression equations into a
> linear programming problem. Check out any book on robust regression on
> how to do this.

I sure do wish that you'd publish this, for those of us who don't know diddly about linear programming.

Greg
Reply by ●November 28, 2008
Andor <andor.bariska@gmail.com> writes:

> On 28 Nov., 17:47, Randy Yates <ya...@ieee.org> wrote:
>> Would you please cite a reference or two that presents the theory of
>> l_infinity and l_2 minimization? My understanding is weak to
>> non-existent in this area and I'd like to bone up.
>
> Hi Randy
>
> A nice article available online is [1]. It discusses numerical aspects
> of l_1 and l_2 minimization (for small n, l_1 is actually faster than
> l_2!). l_1 minimization is solved by reformulating the regression
> equation as a linear programming problem. Finding the l_infinity
> solution is done in exactly the same manner, except that the linear
> programming reformulation is slightly different. I expanded on this
> here some time ago:
>
> http://groups.google.ch/group/comp.dsp/browse_frm/thread/d8aaa0438e620cf2/151a77cd29dc12c9?#151a77cd29dc12c9
>
> [1] S. Portnoy, R. Koenker, "The Gaussian hare and the Laplacian
> tortoise: computability of squared-error versus absolute-error
> estimators", Statistical Science, Vol. 12, No. 4, pp. 279-300, 1997.

Thanks Andor! I'll have to take some time to digest this, so nothing intelligent to respond with for now...

-- 
%  Randy Yates           % "So now it's getting late,
%% Fuquay-Varina, NC     %  and those who hesitate
%%% 919-577-9882         %  got no one..."
%%%% <yates@ieee.org>    % 'Waterfall', *Face The Music*, ELO
http://www.digitalsignallabs.com
Reply by ●November 28, 2008
Greg Berchin wrote:

(snip)

> I sure do wish that you'd publish this, for those of us who don't know
> diddly about linear programming.

Teaching linear programming doesn't seem very popular lately. My daughter is learning it in high school, but I don't know how many other schools are teaching it.

-- glen
Reply by ●November 28, 2008
On Fri, 28 Nov 2008 16:12:10 -0700, Glen Herrmannsfeldt <gah@ugcs.caltech.edu> wrote:

> Teaching linear programming doesn't seem very popular lately.
> My daughter is learning it in high school, but I don't know how
> many other schools are teaching it.

All the programming I know, I taught myself. I may have to do the same with linear programming.

Greg