Reply by Rune Allnor April 18, 2005
David Kirkland wrote:

> I played with Jake Janovetz's code. I modified it to work in double
> precision. In comparing it to the matlab results, when the filters you
> require get longer, then the differences show up.
>
> For Jake's code I was able to design filters around 1000 taps. Using
> matlab I was able to get filters around 1600-1800 taps.
>
> If you are able to find an older version of matlab the low level design
> function is implemented as an m file. At the start of version 6 they put
> it in as a built in function.
In Matlab 7 the remez function is tagged as "obsolete". Don't be surprised if it is replaced by something in the Signal Processing Toolbox when Matlab 8 is released.
> Even in matlab there are differences in different versions. On an older
> version I wouldn't get convergence - on the newer version it appears to
> converge just fine.
I checked out the recent Parks/McClellan article Stan mentioned. The authors
are very clear that they used an interpolation routine they claim to be
particularly robust in the first place, and that they fiddled with it a bit
to ensure convergence. You should check the paper out. It's available
through IEEE Xplore.

Rune
Reply by Fred Marshall April 15, 2005
"cford" <cameron.p.ford@L-3com.com> wrote in message 
news:hMCdnYudQojWYsLfRVn-tA@giganews.com...
> All,
> Once again thanks for your help. I have been able to duplicate the
> coefficients almost perfectly for the few small Low Pass filters that I
> have tried so far. It actually was quite surprising to see that it
> duplicates Matlab so closely. I thought there would be some variation.
> Maybe I will see differences as I try more complicated filters? When I
> first posted I was seeing quite a deviation (which is why I posted), but
> that was due to a bug where the Lagrange interpolation was being done on
> one too many points as well as trying to use the same coefficients for the
> delta calculation and the interpolation calculation. I finally realized
> that these coefficient equations look identical except that the delta
> coefficients equation goes one step further than the interpolation
> coefficient equation.
Don't be surprised. In principle they should all agree exactly. It's not as
if the approach is different or the basis functions were different. There is
one and only one best minimax solution to the problem that is posed. This
follows from the proof that the method converges and that the solution gets
better at each iteration. So, any variations have to come from roundoff
errors and the stopping criterion - still heading toward the same unique
solution.

Fred
Reply by David Kirkland April 15, 2005
Fred Marshall wrote:
> "Rune Allnor" <allnor@tele.ntnu.no> wrote in message
> news:1113473399.503871.135950@o13g2000cwo.googlegroups.com...
>
>>cford wrote:
>>
>>>Fred and Stan,
>>>Thanks for your replies, they have been helpful. The filters that I
>>>am looking at creating do not need to be extremely long, but the program
>>>needs to be able to create filters equivalent to Matlab's "firpm" for
>>>a variety of different kinds of filters. I will be working on it for
>>>the next several days, but I just wanted to respond to let you know I
>>>appreciate your kind responses.
>>
>>You should be aware that there are plenty of reasons why you would not
>>get the exact numbers from your own program:
>>
>>- The number formats may be different (float/double)
>>- Some functions like sin, cos, exp, ... could give
>>  slightly different results if different libraries are used
>>- Search algorithms may be different
>>- Even if they aren't, error limits and iteration
>>  limits may be different
>>
>>etc.
>>
>>It may be that duplicating the functionality is a sufficiently good
>>result.
>>
>>Rune
>
> Rune,
>
> Good points all.
> If the search algorithms are different - but still Remez or minimax - then
> the results have to be the same subject to the other variables - that is,
> unless there's a difference in how the objective function is formulated
> (transition bands are skipped or not skipped is an example of this).
>
> The error limits would *probably* be adequate to yield very similar
> results. So, unless the error limits are sloppy the results should be very
> close. (The OP has still not told us what his "similarity" criterion is -
> so how close is close enough?)
>
> The iteration limit should not come into the picture. But, of course it
> may. My attitude is to set the iteration limit so that it is never reached
> and rely on the test for numerical problems to stop the search with an
> error message. If the search is still perking along without numerical
> errors then why not let it continue? (Well, it can happen that you have
> perfectly acceptable results and then run into numerical problems). The
> trick there is to make sure the program saves the results anyway. But, the
> number of iterations needed in practice is pretty small. So, you run some
> test cases and set the iteration limit to be beyond anything you've ever
> seen. Most of the time, numerical problems happen soon enough if they're
> going to happen. One of these days I want to get some code that makes the
> numerical issues less frequent.
>
> Fred
I played with Jake Janovetz's code. I modified it to work in double
precision. In comparing it to the matlab results, when the filters you
require get longer, the differences show up. For Jake's code I was able to
design filters around 1000 taps. Using matlab I was able to get filters
around 1600-1800 taps.

If you are able to find an older version of matlab the low level design
function is implemented as an m file. At the start of version 6 they put it
in as a built-in function. Even in matlab there are differences between
versions. On an older version I wouldn't get convergence - on the newer
version it appears to converge just fine.

Cheers,
David
Reply by cford April 15, 2005
All,
Once again thanks for your help. I have been able to duplicate the
coefficients almost perfectly for the few small Low Pass filters that I
have tried so far. It actually was quite surprising to see that it
duplicates Matlab so closely. I thought there would be some variation.
Maybe I will see differences as I try more complicated filters? When I
first posted I was seeing quite a deviation (which is why I posted), but
that was due to a bug where the Lagrange interpolation was being done on
one too many points as well as trying to use the same coefficients for the
delta calculation and the interpolation calculation. I finally realized
that these coefficient equations look identical except that the delta
coefficients equation goes one step further than the interpolation
coefficient equation.

Interestingly, I went back to the original Parks-McClellan paper to try to
get something to work, and the coefficient equations there have a (-1)^k in
front, which the Proakis DSP textbook does not have. I tried using the
original equations from the paper and couldn't get it to work. It was only
when I used the equations as shown in the textbook that I got it right. I'm
not sure why there is a difference between the paper and the textbook.
Maybe it was a later refinement?

Anyway, thanks,
Cameron

		
This message was sent using the Comp.DSP web interface on
www.DSPRelated.com
Reply by Fred Marshall April 14, 2005
"Rune Allnor" <allnor@tele.ntnu.no> wrote in message 
news:1113473399.503871.135950@o13g2000cwo.googlegroups.com...
> cford wrote:
>
>> Fred and Stan,
>> Thanks for your replies, they have been helpful. The filters that I
>> am looking at creating do not need to be extremely long, but the program
>> needs to be able to create filters equivalent to Matlab's "firpm" for
>> a variety of different kinds of filters. I will be working on it for
>> the next several days, but I just wanted to respond to let you know I
>> appreciate your kind responses.
>
> You should be aware that there are plenty of reasons why you would not
> get the exact numbers from your own program:
>
> - The number formats may be different (float/double)
> - Some functions like sin, cos, exp, ... could give
>   slightly different results if different libraries are used
> - Search algorithms may be different
> - Even if they aren't, error limits and iteration
>   limits may be different
>
> etc.
>
> It may be that duplicating the functionality is a sufficiently good
> result.
>
> Rune
Rune,

Good points all.
If the search algorithms are different - but still Remez or minimax - then
the results have to be the same subject to the other variables - that is,
unless there's a difference in how the objective function is formulated
(transition bands are skipped or not skipped is an example of this).

The error limits would *probably* be adequate to yield very similar results.
So, unless the error limits are sloppy the results should be very close.
(The OP has still not told us what his "similarity" criterion is - so how
close is close enough?)

The iteration limit should not come into the picture. But, of course it may.
My attitude is to set the iteration limit so that it is never reached and
rely on the test for numerical problems to stop the search with an error
message. If the search is still perking along without numerical errors then
why not let it continue? (Well, it can happen that you have perfectly
acceptable results and then run into numerical problems). The trick there
is to make sure the program saves the results anyway. But, the number of
iterations needed in practice is pretty small. So, you run some test cases
and set the iteration limit to be beyond anything you've ever seen. Most of
the time, numerical problems happen soon enough if they're going to happen.
One of these days I want to get some code that makes the numerical issues
less frequent.

Fred
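[Fred's "save the results anyway" advice might look roughly like this in C. Everything here is a hypothetical sketch - `fir_design`, `design_loop`, and the toy iteration step are invented stand-ins, not anyone's actual code. The point is only the shape of the loop: keep the best coefficients seen so far, so a late numerical failure still leaves usable results.]

```c
#define MAX_ITER 40   /* set beyond anything seen in practice, per Fred */

typedef struct { double h[256]; int ntaps; } fir_design;

/* One exchange iteration: hypothetical signature. Returns nonzero on
 * numerical trouble, otherwise writes the current ripple spread,
 * e.g. (max - min)/max over the extremal errors. */
static double g_spread = 0.01;  /* toy state for the demo below */
static int toy_iterate(fir_design *d, double *spread)
{
    (void)d;
    g_spread *= 0.05;           /* pretend the spread shrinks each pass */
    *spread = g_spread;
    return 0;
}

/* Returns 0 on convergence, 1 on numerical failure (best-so-far kept),
 * 2 if the iteration cap was hit. */
int design_loop(fir_design *best,
                int (*iterate)(fir_design *, double *))
{
    fir_design trial = *best;
    double spread;
    for (int it = 0; it < MAX_ITER; it++) {
        if (iterate(&trial, &spread) != 0)
            return 1;           /* numerical trouble: `best` still valid */
        *best = trial;          /* save the results so far, per Fred */
        if (spread < 0.0001)    /* Jake's tolerance, as quoted elsewhere */
            return 0;
    }
    return 2;                   /* cap reached - should "never" happen */
}
```

[With the toy iteration the loop converges on the second pass; a real exchange step would recompute the extremal set and error between saves.]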
Reply by Rune Allnor April 14, 2005
cford wrote:

> Fred and Stan,
> Thanks for your replies, they have been helpful. The filters that I
> am looking at creating do not need to be extremely long, but the program
> needs to be able to create filters equivalent to Matlab's "firpm" for
> a variety of different kinds of filters. I will be working on it for
> the next several days, but I just wanted to respond to let you know I
> appreciate your kind responses.
You should be aware that there are plenty of reasons why you would not
get the exact numbers from your own program:

- The number formats may be different (float/double)
- Some functions like sin, cos, exp, ... could give slightly different
  results if different libraries are used
- Search algorithms may be different
- Even if they aren't, error limits and iteration limits may be different

etc.

It may be that duplicating the functionality is a sufficiently good
result.

Rune
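[Rune's first point is easy to demonstrate. A small sketch (`precision_gap` is just an illustrative name): the same accumulation carried out in float and in double drifts apart, and for long sums the float result is off by enough to move a borderline extremal-search or convergence test one way or the other.]

```c
/* Accumulate 0.1 n times at two precisions; return the disagreement.
 * 0.1 is not exactly representable in binary, and the float sum also
 * loses low-order bits as the running total grows. */
double precision_gap(int n)
{
    float  sf = 0.0f;
    double sd = 0.0;
    for (int i = 0; i < n; i++) {
        sf += 0.1f;
        sd += 0.1;
    }
    return (double)sf - sd;
}
```

[For ten terms the gap is down in the 1e-7 range; for a million terms the float sum is off by hundreds, which is the kind of single-vs-double difference that can separate two otherwise identical implementations.]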
Reply by Fred Marshall April 13, 2005
"cford" <cameron.p.ford@L-3com.com> wrote in message 
news:NNmdnTeeaoh18cDfRVn-uA@giganews.com...
>>I used 20.
>>PM use 25.
>>Jake uses 40.
>>
>>Since Jake's convergence criterion is around 20x more stringent than
>>mine, it appears he found 40 a better bounding number of iterations.
>>
>>I find that most optimizations take 3 iterations or so and 10 would be a
>>lot! I guess some long filters might take more. Much of the time they're
>>going to quit with recognized numerical problems anyway if the number of
>>iterations gets too large.
>>
>>Any of these numbers should be set so that "good" optimizations can
>>finish and that "bad" ones simply don't take too much time before they
>>quit. It should not imply a better result with a larger number - although
>>that would be possible if the number were set to be too small.
>>
>>Fred
>
> Fred and Stan,
> Thanks for your replies, they have been helpful. The filters that I am
> looking at creating do not need to be extremely long, but the program
> needs to be able to create filters equivalent to Matlab's "firpm" for a
> variety of different kinds of filters. I will be working on it for the
> next several days, but I just wanted to respond to let you know I
> appreciate your kind responses.
>
> Thanks,
> Cameron
Cameron,

You're welcome. When you say "equivalent to" I perhaps should have said
that *all* the approaches should yield the "same" results as long as
they're well-implemented. The underlying principles are all the same. Then
I suggested some things that might make them different, but I don't expect
those to be what you find.

Fred
Reply by cford April 13, 2005
>I used 20.
>PM use 25.
>Jake uses 40.
>
>Since Jake's convergence criterion is around 20x more stringent than
>mine, it appears he found 40 a better bounding number of iterations.
>
>I find that most optimizations take 3 iterations or so and 10 would be a
>lot! I guess some long filters might take more. Much of the time they're
>going to quit with recognized numerical problems anyway if the number of
>iterations gets too large.
>
>Any of these numbers should be set so that "good" optimizations can
>finish and that "bad" ones simply don't take too much time before they
>quit. It should not imply a better result with a larger number - although
>that would be possible if the number were set to be too small.
>
>Fred
Fred and Stan,
Thanks for your replies, they have been helpful. The filters that I am
looking at creating do not need to be extremely long, but the program
needs to be able to create filters equivalent to Matlab's "firpm" for a
variety of different kinds of filters. I will be working on it for the
next several days, but I just wanted to respond to let you know I
appreciate your kind responses.

Thanks,
Cameron
Reply by Fred Marshall April 12, 2005
I used 20.
PM use 25.
Jake uses 40.

Since Jake's convergence criterion is around 20x more stringent than mine, 
it appears he found 40 a better bounding number of iterations.

I find that most optimizations take 3 iterations or so and 10 would be a 
lot!  I guess some long filters might take more.  Much of the time they're 
going to quit with recognized numerical problems anyway if the number of 
iterations gets too large.

Any of these numbers should be set so that "good" optimizations can finish 
and that "bad" ones simply don't take too much time before they quit.  It 
should not imply a better result with a larger number - although that would 
be possible if the number were set to be too small.

Fred 


Reply by Fred Marshall April 12, 2005
"cford" <cameron.p.ford@L-3com.com> wrote in message 
news:JfmdnRo1jZ5lYsbfRVn-2Q@giganews.com...
>I am in the process of trying to write C code to duplicate the output of
> the Matlab function firpm for a work project. I have already found the C
> code written by Jake Janovetz which has been an extremely useful
> reference. However, due to the restrictive parameters of my project I am
> in need of being able to more fully reproduce the functionality of firpm.
> I need to be able to create C code that can take the same inputs as firpm
> and generate coefficients that are very close, if not exact duplicates,
> to the coefficients generated by firpm. Since the iterative search part
> of firpm is embedded in the remezmex.dll file I do not know the
> following:
>
> 1. I do not know how remezmex implements the search algorithm to be able
> to mimic it.
> 2. I do not know what maximum allowable iteration firpm uses to decide to
> give up.
> 3. I don't know how firpm determines that convergence has occurred.
> Jake's code uses a formula involving the minimum and maximum extremal
> frequency errors. (i.e. if (((max - min)/max) < 0.0001) ). In looking at
> the original Parks/McClellan code I can't see that the same criteria was
> used, so I am not sure if firpm uses the same criteria, and if they do,
> whether the number is 0.0001 or something else.
>
> If anyone has any information that can help me out with this, I would be
> extremely grateful.
>
> Thanks,
> Cameron Ford
Cameron,

How long are the filters you need to design? Are you finding that the two
methods don't agree? By how much? How close do they have to be, and why?
They should be very close indeed.

As you probably know, the extremal frequency values are supposed to end up
all being equal. Numerical accuracy and the coarseness of the frequency
grid are all that limit this - plus the stopping criterion. It becomes a
trade between reaching the best solution and having too many cases that
appear to fail to converge.

It appears that the original PM looks for the error to decrease on an
absolute basis. When it fails to *equal or decrease*, the iterations stop.
Perhaps I'm reading the code incorrectly, or perhaps this works because of
roundoff errors. It doesn't seem very robust.... so I'm probably reading it
incorrectly.

In my own code, the check is something like (with *magnitudes* implied):

  (first peak - each other peak) < 0.001*first peak

and as soon as the test fails, there's another iteration. This may be
estimated roughly as:

  (max+min)/2 - min < 0.001*(max+min)/2
  (max+min) - 2*min < 0.001*(max+min)
  (max - min) < 0.002*max        [taking max+min ~ 2*max]

Whereas Jake's is:

  (max - min) < 0.0001*max

So Jake's is around 20x more sensitive. I rather doubt this makes much
difference in practice - but I don't understand your situation.

Normally one looks for the maximum of the extrema to stop decreasing or the
minimum to stop increasing. So, one takes a difference. What might that
mean if the extrema are computed in double precision? At what level does
the difference fall into the numerical noise? Is the computation of the
approximation that good, or is it somehow worse? In the end, this is the
limiting factor. A particular program might stop sooner than absolutely
necessary in order to appear robust in convergence. In that case the
results could vary from one program to another, I suppose.
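[The two stopping tests being compared can be written out as C predicates. The function names are invented here, `max` and `min` are the largest and smallest extremal-error magnitudes on the current grid, and the 0.002 figure is Fred's own rough estimate of his criterion, not an exact transcription of his code.]

```c
#include <stdbool.h>

/* Jake Janovetz's test, as quoted in this thread */
bool converged_jake(double max, double min)
{
    return ((max - min) / max) < 0.0001;
}

/* Fred's test, in the approximate (max - min) < 0.002*max form
 * he derives above */
bool converged_approx_fred(double max, double min)
{
    return (max - min) < 0.002 * max;
}
```

[A ripple spread of 0.1% of max satisfies Fred's test but not Jake's, which is the ~20x sensitivity difference being discussed; in practice both tolerances sit well above double-precision noise for short filters.]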
Actually, there is another thing that might make a big difference, but if
the approach is strictly according to what PM did then it shouldn't matter.
That is: how the objective function is stated. In the PM approach, the
passband and stopband edges are defined and the transition bands in between
are "don't care" in a particular sense. All of the peaks of the error will
be in the passbands and stopbands and there will be no inflections in the
transition bands. This property was recognized in Rabiner & Gold. So, if
for some reason an approach is defined that actually looks at the error in
the transition bands, the results could be different - and not actually
minimax optimal in the passbands and stopbands, but rather minimax optimal
over a different objective.

Another thing that could make a difference is how the band weights are
computed and applied. One program may be different from another in some
way.

I suspect that none of these latter things are what you're concerned with -
but I mention them to be complete.

Fred