DSPRelated.com
Forums

Seeking contractor to design some FIR or IIR filters

Started by NugenAudio October 12, 2011
On Oct 13, 5:27 pm, spop...@speedymail.org (Steve Pope) wrote:
> Tim Wescott <t...@seemywebsite.com> wrote:
>
> >On Thu, 13 Oct 2011 14:11:49 -0700, Fred Marshall wrote:
> >> A group of engineers and managers were meeting with me to review a
> >> system design. The purpose of the review was probably to decide if the
> >> project was worth pursuing, whether the project team knew what they
> >> were doing, etc. The system had some fairly intricate logic in doing
> >> things like mode switching, etc.
> >> So, I asked them how they were going to perform one or two key
> >> functions and their answer was: "there's a computer inside". So, I
> >> pushed a bit harder and got the same answer once again. Needless to
> >> say that I decided they didn't have the foggiest idea what they were
> >> going to do, much less understood my questions as pointing to some
> >> very real issues.
> >
> >> Now, mind you, they were supposed to be the "experts" in this system
> >> field and I was just an experienced system developer - so they
> >> probably should have been able to wax eloquently on the subject. Sad
> >> they couldn't.
> >
> >What, they couldn't just look you in the eye and say "magic"?
> >
> >This "there's a computer inside" stuff can be so frustrating -- it seems
> >that when you can't convince them that the computer makes some intricate
> >bit of logic possible, you've got some yahoo who thinks that the
> >computer can synthesize answers from no data at all.
>
> Fred's story is reminiscent of Bernie Madoff's answer to regulators who
> asked him, "who is making the trades in your accounts?" His reply
> was "Traders. Under the supervision of Supervisors."
>
> Of course no such traders, or trades, existed. But the regulators
> did not press for a better answer.
>
> Steve
I think it all depends on the complexity of the project.
Steve Pope <spope33@speedymail.org> wrote:
> Tim Wescott <tim@seemywebsite.com> wrote:
(snip)
>>I think it definitely _helps_ to know how to do all the sub-tasks, but I
>>think that one can definitely do a good job without.

>>Or perhaps the obverse: if you're only willing to manage projects where
>>you can personally do all the sub-tasks, then you're not being
>>adventuresome enough.
> On reflection, you're basically correct on this.
My general rule is that someone should have a good understanding one
level below the one that they are working on, but don't need to know
below that.

Software engineers should understand computer hardware.

Computer hardware engineers should understand transistor physics.

Transistor designers should know solid state physics.

Continuing, it isn't so obvious, but maybe:

Solid state physicists/engineers should know nuclear physics.

Nuclear engineers should know high energy/particle physics.

-- glen
On 10/13/2011 5:20 PM, Tim Wescott wrote:
> On Thu, 13 Oct 2011 14:11:49 -0700, Fred Marshall wrote:
>
>> On 10/13/2011 1:10 PM, Steve Pope wrote:
>>> Tim Wescott <tim@seemywebsite.com> wrote:
>>>
>>>> On Thu, 13 Oct 2011 18:59:59 +0000, Steve Pope wrote:
>>>
>>>>> I think you're using a reduction-to-absurdity argument which perhaps
>>>>> proves my statement above isn't absolutely valid; but I still think
>>>>> my statement is generally valid.
>>>>
>>>> I think it definitely _helps_ to know how to do all the sub-tasks, but
>>>> I think that one can definitely do a good job without.
>>>>
>>>> Or perhaps the obverse: if you're only willing to manage projects
>>>> where you can personally do all the sub-tasks, then you're not being
>>>> adventuresome enough.
>>>
>>> On reflection, you're basically correct on this.
>>>
>>> I am most comfortable managing projects where the technical work is
>>> something that I could (if necessary) personally perform, given enough
>>> time, and anything else that needs to be purchased is a pure commodity.
>>> Whether that is unadventurous is a subjective question. When the
>>> deliverables of a project depend upon unknown or unknowable research
>>> results, and I make the assumption that someone else knows how to get
>>> these results even though I could not, then that is pretty troubling.
>>>
>>> (This is not to say I'm opposed to adventurous research projects; but
>>> the deliverable should be something deterministically possible, like
>>> say "a final report".)
>>>
>>> S.
>>
>> Well, it's somewhere in between isn't it?
>>
>> There are great stories in the reverse: where the manager understands
>> things better than the prospective workers (and has no choice in the
>> matter). Here's one from my own experience:
>>
>> A group of engineers and managers were meeting with me to review a
>> system design. The purpose of the review was probably to decide if the
>> project was worth pursuing, whether the project team knew what they were
>> doing, etc. The system had some fairly intricate logic in doing things
>> like mode switching, etc.
>> So, I asked them how they were going to perform one or two key functions
>> and their answer was: "there's a computer inside". So, I pushed a bit
>> harder and got the same answer once again. Needless to say that I
>> decided they didn't have the foggiest idea what they were going to do,
>> much less understood my questions as pointing to some very real issues.
>>
>> Now, mind you, they were supposed to be the "experts" in this system
>> field and I was just an experienced system developer - so they probably
>> should have been able to wax eloquently on the subject. Sad they
>> couldn't.
>
> What, they couldn't just look you in the eye and say "magic"?
>
> This "there's a computer inside" stuff can be so frustrating -- it seems
> that when you can't convince them that the computer makes some intricate
> bit of logic possible, you've got some yahoo who thinks that the computer
> can synthesize answers from no data at all.
It's even worse when the computer comes up with nonsense, and they say
"The computer did it so it must be right."

Jerry
-- 
Engineering is the art of making what you want from things you can get.
On Oct 13, 6:30 pm, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
> Steve Pope <spop...@speedymail.org> wrote:
> > Tim Wescott <t...@seemywebsite.com> wrote:
> > (snip)
>
> >>I think it definitely _helps_ to know how to do all the sub-tasks, but I
> >>think that one can definitely do a good job without.
>
> >>Or perhaps the obverse: if you're only willing to manage projects where
> >>you can personally do all the sub-tasks, then you're not being
> >>adventuresome enough.
i know i can't do shit for user interfaces nor for how to plug into some really crappy API. but i would want to keep abreast of how the basic science and algorithm works. i would not want that to be too "low-level" for me.
> > On reflection, you're basically correct on this.
>
> My general rule is that someone should have a good understanding
> one level below the one that they are working on, but don't need
> to know below that.
>
> Software engineers should understand computer hardware.
>
> Computer hardware engineers should understand transistor physics.
nothing between computer hardware and the physics of transistors??
nothing about circuits? i might think that Kirchhoff's laws and
volt-amp characteristics might come in between.
> Transistor designers should know solid state physics. >
i would put the two at the same level. i cannot imagine a transistor
device designer that doesn't know solid state physics. i would think
that these guys should know the Ebers-Moll equations, how they're
derived, and how they apply to BJT geometries.
> solid state physicists/engineers should know nuclear physics.
>
> nuclear engineers should know high energy/particle physics.
r b-j
On 10/13/2011 2:50 PM, brent wrote:
> I think it all depends on the complexity of the project.
For the same person perhaps. But in general you seem to imply that there is no possibility of "dumb, dumber, dumbest" :-)
Steve Pope wrote:

> Andreas Huennebeck <acmh@gmx.de> wrote:
>
>>NugenAudio wrote:
>
>>> We are seeking a contractor to provide the coefficients for a set of EQ
>>> filters with specific frequency responses, at given sample rates.
>>> [..]
>>
>>Looking at the products you advertise on your homepage nugenaudio.com,
>>it's hard to believe that you do not have the knowledge to solve this task
>>on your own.
>
> Good. A project leader should have the knowledge to do every sub-task,
> even if he/she doesn't have the time to do every sub-task.
This would be optimal but it is not necessary as long as the company has
at least one employee who has this knowledge. When I wrote 'you' in my
posting above I meant the company, not the poster himself.

bye
Andreas
-- 
Andreas Hünnebeck | email: acmh@gmx.de
----- privat ---- | www : http://www.huennebeck-online.de
Fax/Anrufbeantworter: 0721/151-284301
GPG-Key: http://www.huennebeck-online.de/public_keys/andreas.asc
PGP-Key: http://www.huennebeck-online.de/public_keys/pgp_andreas.asc
This is the ITU-R 468 noise weighting filter, but the table in the TASA 
standard is offset by a constant gain of 5.6 dB for some reason.

http://en.wikipedia.org/wiki/ITU-R_468_noise_weighting

This shows a passive LC ladder filter implementation.

I happen to have worked out some realizations as a favor for a friend, so 
here are some hints:

The s-plane poles and zeros of the LC ladder filter are:

POLE ZERO ANALYSIS OF V1  
NETWORK ZEROS
    REAL PART        IMAG.PART           ZERO FREQ.         ZERO Q
      (HZ)             (HZ)                (HZ) 
  0.00000000D+00   0.00000000D+00      0.00000000D+00  -5.00000000D-01  
NETWORK POLES
    REAL PART        IMAG.PART           POLE FREQ.         POLE Q
      (HZ)             (HZ)                (HZ) 
 -2.98315994D+03  -9.94084265D+03      1.03788051D+04   1.73956565D+00  
 -2.98315994D+03   9.94084265D+03      1.03788051D+04   1.73956565D+00  
 -9.97506312D+03   0.00000000D+00      9.97506312D+03   5.00000000D-01  
 -3.75852916D+03  -5.79004234D+03      6.90297992D+03   9.18308681D-01  
 -3.75852916D+03   5.79004234D+03      6.90297992D+03   9.18308681D-01  
 -4.12270207D+03   0.00000000D+00      4.12270207D+03   5.00000000D-01

This can be broken into a first-order highpass section cascaded with a 6th-
order lowpass section.
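As a quick sanity check (mine, not from the original post), the pole-frequency and pole-Q columns in the table above follow directly from the real/imaginary parts: for an s-plane pole p given in Hz, f0 = |p| and Q = |p| / (-2 * Re(p)). A short sketch:

```python
# Re-derive the "POLE FREQ." and "POLE Q" columns of the analysis above
# from the tabulated real/imaginary parts (all values in Hz).
import math

poles_hz = [  # (real, imag) pairs from the pole/zero listing above
    (-2.98315994e+03, -9.94084265e+03),
    (-2.98315994e+03,  9.94084265e+03),
    (-9.97506312e+03,  0.0),
    (-3.75852916e+03, -5.79004234e+03),
    (-3.75852916e+03,  5.79004234e+03),
    (-4.12270207e+03,  0.0),
]

# f0 = |p| (pole frequency in Hz), Q = |p| / (-2 * Re(p))
results = [(math.hypot(re, im), math.hypot(re, im) / (-2.0 * re))
           for re, im in poles_hz]

for f0, q in results:
    print(f"f0 = {f0:12.4f} Hz   Q = {q:10.7f}")
```

Note that the two real poles come out with Q = 0.5, the usual convention for a first-order (real) pole.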

I only worked out a digital filter for 48 kHz sample rate.

The z-plane poles/zeros of the highpass section are worked out as:

 SUPPLY POLES:                                                             
     pole # 1 = -9.97506312D+03         +/-j 0
 SUPPLY ZEROS:                                                             
     zero # 1 = 0                       +/-j 0


         Zero #            Real                 Imag.
           1         -0.3385920              0.000000    
           2          0.9999964              0.000000    
         Pole #            Real                 Imag.
           1         -0.3540664              0.000000    
           2          0.2695075              0.000000    

 MAXIMUM ERROR FROM      1.00 Hz TO  20000.00 Hz IS  0.0016023dB

[note: you probably should change the second zero to 1.00000, which would 
make the function fully DC-blocking.]
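For illustration (a sketch of mine, not from the post; the post gives no section gains, so the section is left unnormalized), here is how those highpass poles/zeros turn into direct-form coefficients, with the second zero moved to exactly 1.0 as suggested so the section truly blocks DC:

```python
# Build the second-order highpass section from its z-plane poles/zeros
# and confirm the DC gain is exactly zero once the zero near 1 is
# snapped to 1.0. Gain normalization is arbitrary here.
import numpy as np

zeros = np.array([-0.3385920, 1.0])       # 0.9999964 -> 1.0 for a true DC block
poles = np.array([-0.3540664, 0.2695075])

b = np.poly(zeros)   # numerator coefficients of B(z) in descending powers
a = np.poly(poles)   # denominator coefficients of A(z)

def h(z):
    """Evaluate H(z) = B(z)/A(z) at a point on the z-plane."""
    return np.polyval(b, z) / np.polyval(a, z)

print("gain at DC (z = 1):      ", abs(h(1.0)))
print("gain at Nyquist (z = -1):", abs(h(-1.0)))
```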

The z-plane poles/zeros of the lowpass section are:

         Zero #            Real                 Imag.
           1         -0.7801920              0.000000    
           2         -0.5738120              0.000000    
           3         -0.1907084             0.9879871E-01
           4         -0.1907084            -0.9879871E-01
           5          0.6368181E-01         0.1277262    
           6          0.6368181E-01        -0.1277262    
         Pole #            Real                 Imag.
           1         -0.7081976              0.000000    
           2          0.5642261              0.000000    
           3          0.6763973             0.8303727E-01
           4          0.6763973            -0.8303727E-01
           5          0.4432977             0.4196167    
           6          0.4432977            -0.4196167    
 MAXIMUM ERROR FROM      0.00 Hz TO  20000.00 Hz IS  0.0007492dB

This was computed using a proprietary program that uses optimization 
techniques. I don't believe that there are any closed-form solutions to this 
problem that yield good results, but you might want to try the matched-z 
transform.
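For reference, the matched-z transform mentioned above maps each s-plane pole or zero p to z = exp(p*T). A sketch (mine, not from the post) applying it to the real pole of the highpass section at 48 kHz lands reasonably close to the optimized pole quoted earlier (0.2695075), which illustrates why it is a sensible starting point:

```python
# Matched-z transform of the real s-plane pole at -9975.06312 Hz for
# fs = 48 kHz: z = exp(2*pi*p_hz / fs), since the table gives poles in Hz.
import math

fs = 48000.0
pole_hz = -9.97506312e+03          # s-plane pole from the ladder analysis above
z_pole = math.exp(2.0 * math.pi * pole_hz / fs)
print(z_pole)                      # close to the optimized pole 0.2695075
```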



In article <25GdncqZaZZqSQjTnZ2dnUVZ_qKdnZ2d@giganews.com>, 
paul@n_o_s_p_a_m.nugenaudio.com says...
>Hi,
>
>We are seeking a contractor to provide the coefficients for a set of EQ
>filters with specific frequency responses, at given sample rates.
>
>The filters we need the coefficients for are the frequency weighting
>equalizer for LEQ(m) as given by
>http://www.tasatrailers.org/TASAStandard.pdf section 1.4.2 at various
>standard audio sample rates (eg- 44.1kHz, 48kHz, 96kHz, 192kHz), and
>preferably the matlab code (or whatever was used to generate the
>coefficients) so that we can generate the filters for other sample rates
>if necessary in the future.
>
>Please email me at paul at nugenaudio.com for more details
>
>Thanks
>
>Paul
>
>Dr. Paul Tapper,
>Technical Director,
>NUGEN Audio
On Wed, 26 Oct 2011 17:53:19 -0800, Robert Orban <spambucket2413@earthlink.net>
wrote:

>This was computed using a proprietary program that uses optimization
>techniques.
Robert, I encourage you to publish this technique. In cases where phase
response can be neglected, it matches magnitude response better than FDLS.

Greg Berchin
Thank you very much for all the responses, both in this thread and by
email.

We have now placed the job with a contractor, so, please ... no more offers
to fulfil the job.

Thanks.

Paul
In article <egjia7l4tdl5m0e6vfc82ug7tb8c45fkha@4ax.com>, 
gjberchin@chatter.net.invalid says...
>On Wed, 26 Oct 2011 17:53:19 -0800, Robert Orban
><spambucket2413@earthlink.net> wrote:
>
>>This was computed using a proprietary program that uses optimization
>>techniques.
>
>Robert, I encourage you to publish this technique. In cases where phase
>response can be neglected, it matches magnitude response better than FDLS.
I believe that I posted a brief description of the technique in comp.dsp 
a few years ago. This time, I'll get a bit more detailed.

The algorithm is a grid-based technique that uses a highly oversampled 
set of frequency points. First we frequency-warp the desired z-plane 
magnitude function so that it can be optimized as a quotient of two 
polynomials with no trig terms. Basically, the frequency warp uses an 
"inverse bilinear transformation." In other words, this creates an 
s-plane magnitude response that will produce the desired z-plane 
function once the normal bilinear transformation is applied to the 
s-plane poles and zeros that we find in the optimization. For our 
purposes here, we call the s-plane frequency variable 'w', assuming the 
usual s -> jw transformation to compute the complex frequency response. 
This first warp maps Nyquist to infinity on the warped frequency axis.

Next, we make a second warp of the frequency axis by changing the 
frequency variable from w^2 to u, which exploits the fact that all 
magnitude functions are even functions of frequency (and pure real), so

  a0 + a1 w^2 + a2 w^4 + ...  becomes  a0 + a1 u + a2 u^2 + ...

Nyquist is still at infinity. We have now transformed the problem into a 
curve fit to a ratio of polynomials

  (a0 + a1 u + a2 u^2 + ...) / (1 + b1 u + b2 u^2 + ...)

There are, of course, many published techniques for doing this. We 
choose to fit the ratio of polynomials to the doubly-warped discrete 
grid in a least-squares sense by using singular value decomposition.

Next, the least-squares rational approximation is made minimax-error by 
adding correction terms to it, as computed by Remez's second algorithm, 
which is described in Forman Acton's "Numerical Methods That Work." 
(This is not the same Remez algorithm used in the famous 
Parks/McClellan/Rabiner FIR design program; Prof. Remez published more 
than one algorithm, and the one I use works for ratios of polynomials.)
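The double warp and the least-squares rational fit can be sketched in a few lines. This is my own toy reconstruction, not the poster's program: it uses the magnitude-squared response of a one-pole analog prototype H(s) = 1/(1+s) as the target, so M = 1/(1+w^2) = 1/(1+u) and a degree-(0,1) fit in u should recover a0 = b1 = 1 exactly. The linearized "equation-error" form a0 - M*b1*u = M is solved with numpy's SVD-based least squares, standing in for the SVD step described above.

```python
# Sketch of the two warps plus a linearized least-squares rational fit.
import numpy as np

# Oversampled z-plane frequency grid, excluding DC and Nyquist
omega = np.linspace(0.01, np.pi - 0.01, 400)

# Warp 1: inverse bilinear transform; Nyquist maps to infinity
w = np.tan(omega / 2.0)

# Warp 2: magnitude-squared functions are even in w, so substitute u = w^2
u = w * w

# Desired magnitude-squared on the doubly-warped grid (toy target)
M = 1.0 / (1.0 + u)

# Fit M ~ a0 / (1 + b1*u), linearized as a0 - M*u*b1 = M
A = np.column_stack([np.ones_like(u), -M * u])
(a0, b1), *_ = np.linalg.lstsq(A, M, rcond=None)
print(a0, b1)   # both come out ~1 for this exact-fit target
```

For real targets like the ITU-R 468 curve the fit is of course not exact, and this is where the Remez correction step described next earns its keep.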
Starting up the Remez algorithm requires that the least-squares solution 
provide the correct number of inflection points in the error vs. 
frequency plot so that the Remez algorithm can progressively refine 
these, so this is one point of potential failure for the algorithm. In 
addition, this particular algorithm is not guaranteed to be globally 
convergent.

The reason that the optimization is performed by adding correction terms 
to a least-squares solution is that this improves the numerical 
stability of the algorithm, because only the correction terms are being 
computed and these tend to be small compared to the terms in the 
least-squares solution. The Remez update equations are often quite 
ill-conditioned, so having a good numerical strategy is important. 
Therefore, I use SVD to get the roots of the update equations because it 
is admirably stable and because it automatically produces a "condition 
number," which allows for easy diagnosis of problems due to 
ill-conditioning.

The rest of the algorithm just requires taking the resulting ratio of 
polynomials and frequency-mapping the poles and zeros inversely to the 
way the original mapping was done. This is done in two steps: the first 
replaces u by -s^2 (keeping the left-half-plane square roots, since 
u = w^2 and s = jw), and the second applies the normal bilinear 
transform to the poles and zeros in s. This gets us to the z plane and 
the final poles and zeros of our optimization.

There are some functions that have true minimax solutions in the u-plane 
but cannot be transformed to the z plane because the result would create 
non-conjugate poles or zeros having non-zero imaginary parts. There are 
usually suboptimal solutions for these cases that are not truly minimax 
but that are still very useful. More specifically, if the u-plane 
solution has poles on the positive real axis, this will produce 
unrealizable results. A useful suboptimal strategy is to constrain these 
poles to 0 and to do a constrained optimization on the remaining poles 
and zeros.
Because the computation is based on an oversampled frequency grid (i.e. 
where the number of frequency samples is much larger than the number of 
z-plane poles and zeros in the output), the number of poles and zeros in 
the z-plane can be chosen freely without regard to the number of poles 
and zeros in the original analog function being approximated. The only 
thing that the program does with the original analog poles and zeros is 
use them to compute the magnitude value corresponding to each point on 
the frequency grid.

All of this was written in Fortran 90 (of course, I used canned routines 
where possible) and if the algorithm is going to converge, the solution 
appears very quickly. (The nice thing about Remez is that it is 
quadratically convergent.) On a modern desktop computer, the results 
appear almost as soon as the Enter key is pressed. The only things that 
the user needs to supply are the analog poles and zeros, the sample rate 
of the approximation, the number of z-plane poles/zeros desired, and the 
frequency range over which the approximation should be minimax.

I will consider publishing a more comprehensive write-up sometime in the 
future. (Not now.) However, the write-up that I have uses quite a few 
ugly-looking equations that don't transform well into text 
representation. The program that implements my technique belongs to my 
employer, so I can't make that publicly available.
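The back-mapping from the u-plane to the z-plane can be sketched concisely (my own illustration, not code from the post), under the convention u = w^2 = -s^2 with s = jw. A real negative u-plane root u0 maps to the left-half-plane root s = -sqrt(-u0), and the normalized bilinear transform z = (1 + s)/(1 - s) then gives the z-plane root. For a fitted denominator 1 + u (root u0 = -1, as a one-pole prototype H(s) = 1/(1+s) produces), this recovers s = -1 and z = 0:

```python
# Map a u-plane root back to the z-plane: u -> s (left-half-plane branch)
# followed by the normalized bilinear transform.
import cmath

def u_root_to_z(u0):
    s = -cmath.sqrt(-u0)          # pick the left-half-plane square root
    return (1 + s) / (1 - s)      # normalized bilinear transform

print(abs(u_root_to_z(-1.0)))     # -> 0.0 (pole at the z-plane origin)
```

Complex-conjugate u-plane roots are handled the same way, taking both square roots and keeping the left-half-plane pair so the z-plane roots come out in conjugate pairs.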