Reply by Greg Berchin November 3, 2011
On Wed, 02 Nov 2011 16:21:18 -0800, Robert Orban <spambucket2413@earthlink.net>
wrote:

>I have not experimented with other mappings, although I suppose that you
>could use any of the common ones like impulse-invariant, step-invariant, or
>matched-z (as you said).
The trouble with impulse- and step-invariance is that their goal is to match time-domain response without regard to frequency-domain response, so I am not certain that they will produce acceptable results with your technique. Matched-pole-zero (matched-z) does not explicitly address frequency response, but then neither does bilinear-transform. It might make for an interesting comparison of results. On the other hand, it's hard to argue with success, and your bilinear-transform technique obviously works very well.
>One thing that I didn't mention in my post is that although in principle, the
>number of z-plane poles and zeros can be different in the optimization, the
>technique is mainly useful if the number of z-plane poles and zeros is forced
>to be the same because this allows the magnitude response at Nyquist to be
>something other than zero or infinite after the final bilinear unwarp. This
>makes life much easier on the optimizer because it prevents a very large
>magnitude change from happening in the frequency range between Nyquist and
>the upper frequency range of the optimization.
(On yet another hand, bilinear-transform has its own problems that might be mitigated in a different mapping.)
>For example, I have used it automatically inside a loop to create tables of
>pole-zero values for parametric equalizers that model the magnitude responses
>of classic analog second-order bell-curve equalizers. Each table contains
>several hundred values that correspond to w0 and Q pairs and takes less than
>a minute to compute with a reasonably modern PC.
How I love these modern computers! My initial implementation of FDLS was on a VAX-11/780. I would launch the program and have enough time to get a cup of coffee before it completed (if there were multiple users logged in, I could go to lunch). Then the guys at LLNL implemented it on a Cray 2 (this was 1985), which shortened but did not eliminate the wait. Nowadays, by the time I release the "Return" key on the keyboard, the program is finished. Amazing.

Thanks for the overview. I hope that you get the opportunity to publish the details in the future.

Greg Berchin
Reply by Robert Orban November 2, 2011
In article <o4una7l1d9mser16dp1el1vp4ernmi5726@4ax.com>, 
gjberchin@chatter.net.invalid says...
>Thanks, Robert. Very interesting. I have some questions and comments:
>
>>First we frequency-warp the desired z-plane
>>magnitude function so that it can be optimized as a quotient of two
>>polynomials with no trig terms. Basically, the frequency warp uses an
>>"inverse bilinear transformation." In other words, this creates an s-plane
>>magnitude response that will produce the desired z-plane function once
>>the normal bilinear transformation is applied to the s-plane poles and
>>zeros that we find in the optimization.
>
>So you perform this initial "unwarping" with the knowledge that your final step
>will be warped by a bilinear transform. Have you experimented with other s->z
>mapping methods and their attendant warping (if any)? For example, if you know
>that your final step will be a "matched pole-zero" mapping, then IIRC there will
>be no warping at all.
I have not experimented with other mappings, although I suppose that you could use any of the common ones like impulse-invariant, step-invariant, or matched-z (as you said).

One nice thing about the inverse bilinear warp is that if you uniformly frequency-sample the frequency axis after the warp, it automatically concentrates more grid points close to what will be Nyquist after the final "unwarp," which is where the optimizer tends to have the most trouble. Another nice thing is that if the inner optimization has a minimax error curve, the minimax property will be preserved in the final result because of the properties of the bilinear transform -- it doesn't change the magnitudes; it just moves them to different frequencies.

The other main goal of the warp is to allow the optimizer to operate on a ratio of polynomials containing no trig or exponential functions, which allows one to choose from a wide range of well-established optimization techniques without gumming up the mathematics. In addition, I suspect that it speeds up the optimizer because it doesn't need to calculate trig functions.

One thing that I didn't mention in my post is that although in principle, the number of z-plane poles and zeros can be different in the optimization, the technique is mainly useful if the number of z-plane poles and zeros is forced to be the same because this allows the magnitude response at Nyquist to be something other than zero or infinite after the final bilinear unwarp. This makes life much easier on the optimizer because it prevents a very large magnitude change from happening in the frequency range between Nyquist and the upper frequency range of the optimization.
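The grid-concentration property described above is easy to see numerically. Below is a minimal Python sketch (my illustration, not the proprietary program) of the inverse bilinear warp W = tan(w/2): sampling the warped axis uniformly and unwarping with w = 2*atan(W) piles grid points up near Nyquist, where the optimizer needs them most. The grid size and the finite stand-in for the infinite warped axis are arbitrary choices.

```python
import math

# Inverse bilinear frequency warp: a z-plane frequency w (radians/sample,
# Nyquist = pi) maps to the warped s-plane frequency W = tan(w/2), which
# sends Nyquist to infinity.  Sampling the warped axis uniformly and
# unwarping with w = 2*atan(W) concentrates grid points near Nyquist.
def unwarp(W):
    """Map a warped (s-plane) frequency back to a z-plane frequency."""
    return 2.0 * math.atan(W)

N = 1000
W_max = 100.0  # arbitrary finite stand-in for the infinite warped axis
warped_grid = [k * W_max / (N - 1) for k in range(N)]
digital_grid = [unwarp(W) for W in warped_grid]

# Grid spacing shrinks monotonically toward Nyquist...
spacings = [b - a for a, b in zip(digital_grid, digital_grid[1:])]
assert all(s2 < s1 for s1, s2 in zip(spacings, spacings[1:]))

# ...and most of the points land in the top 10% of the digital band.
near_nyquist = sum(1 for w in digital_grid if w > 0.9 * math.pi)
assert near_nyquist > N // 2
```

The same mechanism is why a uniform grid in the doubly-warped u domain automatically gives the optimizer fine resolution just below Nyquist after the final unwarp.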
>>Next, we make a second warp of the frequency axis by changing the
>>frequency variable from w^2 to u,
>
>To state an implicit step explicitly, I think you're saying that you compute the
>magnitude-squared expression and then apply the change of variables.
Exactly correct.
>Again, I wonder: if you used a non-warping s->z mapping method, AND you knew the
>exact Laplace domain transfer function for the prototype frequency response,
>could you simply use the magnitude-squared form of that transfer function at
>this point?
The main point of the change of variables is to minimize the degree of the polynomials in the optimization. Rootfinding occurs as part of the Remez update (in fact, the rootfinding has to be applied to a mildly nonlinear equation, so there is an "inner loop" in the Remez update that deals with this; see the Acton reference for more detail), so I thought that minimizing the degree of the polynomials would help the rootfinder. There are some "interesting" numerical stability issues that arise in the Remez update, so it is important to choose a good rootfinder (I used SVD) and to give it all of the help one can.
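The degree reduction from the w^2 -> u substitution can be illustrated with the magnitude-squared function of a normalized 3rd-order Butterworth, 1/(1 + w^6), which becomes 1/(1 + u^3). This is my own sketch, not part of the posted algorithm: the rootfinder only has to handle a cubic in u, and each u-root expands back to a +/- pair of w-roots.

```python
import cmath

# |H(jw)|^2 for a normalized 3rd-order Butterworth is 1/(1 + w^6): an even
# polynomial in w.  Substituting u = w^2 gives 1/(1 + u^3), halving the
# degree the rootfinder must handle.
den_w = [1.0, 0, 0, 0, 0, 0, 1.0]   # 1 + w^6, ascending powers of w
den_u = [1.0, 0, 0, 1.0]            # 1 + u^3, ascending powers of u

def polyval(coeffs, x):
    """Evaluate a polynomial given ascending-order coefficients."""
    return sum(c * x**i for i, c in enumerate(coeffs))

# Roots of 1 + u^3 = 0 are the three cube roots of -1.
u_roots = [cmath.exp(1j * cmath.pi * (2 * k + 1) / 3) for k in range(3)]

for u in u_roots:
    assert abs(polyval(den_u, u)) < 1e-12
    # Each u-root yields a +/- pair of w-roots of the degree-6 polynomial:
    w = cmath.sqrt(u)
    assert abs(polyval(den_w, w)) < 1e-12
    assert abs(polyval(den_w, -w)) < 1e-12
```

A degree-n rootfinding problem in u in place of a degree-2n problem in w is exactly the kind of help an ill-conditioned Remez update benefits from.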
><snip minimax implementation details>
>
>>The rest of the algorithm just requires taking the resulting ratio of
>>polynomials and frequency-mapping the poles and zeros inversely to
>>the way the original mapping was done. This is done in two steps: the
>>first replaces u by s^2, and the second applies the normal bilinear
>>transform to the poles and zeros in s. This gets us to the z plane and the
>>final poles and zeroes of our optimization.
>
>My suspicion is that, if you used a non-warping mapping method, you could apply
>your technique directly to a *known* Laplace domain transfer function. Thus you
>would have a direct s->z magnitude function approximation.
I would have to think this through; you may be correct. Because it is based on a bilinear warp, my technique preserves the minimax error in the optimization through the two steps referenced in my quote. So if you want a true minimax error curve, my method is a surefire way to obtain it. I'm not sure if this would be true of a non-warping method; again, it's something I would have to think through.
>(Repeating the example cited above, your method would produce different results
>than simple "matched pole-zero" mapping, because they are optimizing different
>parameters. Substituting "z=exp(sT)" is a naive mapping of critical points,
>without regard to the effect upon magnitude response. Your method is an explicit
>minimization of the differences between the prototype and model magnitude
>responses, without regard to the effects upon locations of poles and zeroes.)

I am sure that there are other ways of doing this kind of thing, and for all I know, one or more may be better in terms of stability, accuracy, or computational efficiency. But my method has proven very useful to me because it can produce results that are optimum in a minimax error sense over a specified frequency range (and even if the results are not fully minimax because of numerical issues, they are usually close), it is computationally efficient, it can produce extremely accurate results with smooth curves like RIAA and the like (often to within 1E-6 dB), and it almost always converges when used to model classic analog magnitude responses required in audio.

For example, I have used it automatically inside a loop to create tables of pole-zero values for parametric equalizers that model the magnitude responses of classic analog second-order bell-curve equalizers. Each table contains several hundred values that correspond to w0 and Q pairs and takes less than a minute to compute with a reasonably modern PC.

It is interesting to look at the IIR coefficients in these tables because they are a bit chaotic. (A 3D surface plot of a given IIR coefficient vs. frequency vs. Q isn't particularly smooth.) I originally thought that it would be possible to do some sort of linear interpolation between values, but it turns out that big jumps occur. This is evidently a consequence of the minimax error criterion, but I didn't expect it before I saw it.

Here's an interesting example -- matching a 1 kHz 3rd-order analog Butterworth filter from 0 to 21 kHz at a 44.1 kHz sample rate. (I used rounded-off s-plane poles: -1, -0.5 +/- j0.866.) The target curve has an 80 dB magnitude range, give or take. This is also challenging because the upper frequency bound is 95.2% of Nyquist. Here is the result of a 6th-order optimization:
         Zero #            Real                 Imag.
           1         -0.8840140              0.000000
           2         -0.7325760              0.000000
           3         -0.5384182              0.000000
           4         -0.3346849              0.000000
           5         -0.5219714E-01          0.000000
           6          0.3407240E-02          0.000000
         Pole #            Real                 Imag.
           1         -0.8733325              0.000000
           2         -0.6922994              0.000000
           3         -0.4365723              0.000000
           4          0.8672077              0.000000
           5          0.9241607              0.1146088
           6          0.9241607             -0.1146088

 MAXIMUM ERROR FROM      0.00 Hz TO  21000.00 Hz IS  0.0000871dB

What is interesting here is what the algorithm does with the "extra" three z-plane poles and zeros that it is given to work with. It seems to interlace them on the real axis in the z-plane (see the first three poles and zeros above). There is nothing explicit in the algorithm that forces this to happen; it seems to happen naturally as part of the optimization process.

When the algorithm is allowed only three poles and zeros, it still gives good results. It is interesting to compare the lower three poles and zeros in the 6th-order optimization with the poles and zeros produced by the third-order optimization:

         Zero #            Real                 Imag.
           1         -0.6030164              0.000000
           2          0.2847928E-01          0.2376939
           3          0.2847928E-01         -0.2376939
         Pole #            Real                 Imag.
           1          0.8486569              0.000000
           2          0.9263526              0.1100756
           3          0.9263526             -0.1100756

 MAXIMUM ERROR FROM      0.00 Hz TO  21000.00 Hz IS  0.1393570dB

This one has complex zeros, which the 6th-order optimization did not.

A 4th-order optimization does not converge. A 5th-order optimization converges but is not quite minimax because of ill-conditioning in the Remez root finder. Nevertheless, it greatly improves the error compared to the third-order solution:

         Zero #            Real                 Imag.
           1         -0.8485930              0.000000
           2         -0.6440151              0.000000
           3         -0.3913983              0.000000
           4         -0.1923411E-01          0.2816178E-01
           5         -0.1923411E-01         -0.2816178E-01
         Pole #            Real                 Imag.
           1         -0.8286350              0.000000
           2         -0.5579574              0.000000
           3          0.8671565              0.000000
           4          0.9241636              0.1145946
           5          0.9241636             -0.1145946

 MAXIMUM ERROR FROM      0.00 Hz TO  21000.00 Hz IS  0.0004787dB

A 7th-order solution is the best the algorithm can find.
It is not quite minimax because it encounters a root on the positive real axis (a forbidden condition, as explained in my earlier post on the optimizer):

         Zero #            Real                 Imag.
           1         -0.9072050              0.000000
           2         -0.8028597              0.000000
           3         -0.6672917              0.000000
           4         -0.4965467              0.000000
           5         -0.3171957              0.000000
           6         -0.5621694E-01          0.000000
           7          0.5741816E-02          0.000000
         Pole #            Real                 Imag.
           1         -0.9011459              0.000000
           2         -0.7853810              0.000000
           3         -0.6211117              0.000000
           4         -0.3974577              0.000000
           5          0.8672086              0.000000
           6          0.9241608              0.1146089
           7          0.9241608             -0.1146089

 MAXIMUM ERROR FROM      0.00 Hz TO  21000.00 Hz IS  0.0000147dB

The algorithm produces no improvement beyond 7th order, but 0.0000147 dB error is good enough for most folks :-)

As far as common closed-form transformations go, the step-invariant transform seems to be best:

         Zero #            Real                 Imag.
           1         -3.473398               0.000000
           2         -0.2496715              0.000000
         Pole #            Real                 Imag.
           1          0.8672085              0.000000
           2          0.9241609              0.1146089
           3          0.9241609             -0.1146089

It provides +/- 1.1 dB accuracy over this frequency range. The result of the direct calculation of the step-invariant transform is non-minimum phase but could easily be corrected by reflecting the -3.473398 zero around the unit circle. However, the zeros shown above provide the best phase match to the analog prototype after a fixed delay is subtracted from the group delay to match the group delay of the analog and digital filters as much as possible.

The phase match of the minimax solution to the analog filter is actually much poorer than that of the step-invariant transform. Even though the minimax approximation is minimum-phase (as is the analog filter), the phase response is strongly affected by the magnitude response outside of the approximation band. The group delay of the minimax digital filter becomes negative above about 3.5 kHz. The group delay of the analog prototype is positive at all frequencies, as is the group delay of the step-invariant transform.
So this is an interesting example of a case where a non-minimum-phase digital approximation matches the group delay of a minimum-phase analog filter better than a minimum-phase approximation with the same magnitude response.

Matched-z and bilinear are basically useless, giving over 10 dB of error. The next best is impulse-invariant (below), which gives about +/- 3.5 dB of error.

         Zero #            Real                 Imag.
           1         -0.9094103              0.000000
           2          0.000000               0.000000
         Pole #            Real                 Imag.
           1          0.8672085              0.000000
           2          0.9241609              0.1146089
           3          0.9241609             -0.1146089

--Bob Orban
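The 6th-order result quoted above can be checked independently. The following Python sketch (my verification, not the proprietary program) evaluates the quoted z-plane poles and zeros against the rounded-off analog prototype. The digital filter's arbitrary overall gain is fixed by matching at DC rather than minimax-optimally, so the measured error can be up to roughly twice the quoted 0.0000871 dB.

```python
import cmath
import math

# Check the quoted 6th-order z-plane fit against the rounded-off analog
# 3rd-order Butterworth prototype (s-plane poles -1, -0.5 +/- j0.866,
# normalized to the 1 kHz cutoff), over 0..21 kHz at Fs = 44.1 kHz.
fs = 44100.0
f0 = 1000.0
zeros = [-0.8840140, -0.7325760, -0.5384182, -0.3346849,
         -0.5219714e-01, 0.3407240e-02]
poles = [-0.8733325, -0.6922994, -0.4365723, 0.8672077,
         complex(0.9241607, 0.1146088), complex(0.9241607, -0.1146088)]
s_poles = [-1.0, complex(-0.5, 0.866), complex(-0.5, -0.866)]

def mag_db_digital(f):
    z = cmath.exp(2j * math.pi * f / fs)
    h = 1.0
    for q in zeros:
        h *= (z - q)
    for p in poles:
        h /= (z - p)
    return 20.0 * math.log10(abs(h))

def mag_db_analog(f):
    s = 1j * f / f0  # frequency normalized to the 1 kHz cutoff
    h = 1.0
    for p in s_poles:
        h /= (s - p)
    return 20.0 * math.log10(abs(h))

# Fix the digital filter's arbitrary gain by matching at DC, then scan.
offset = mag_db_analog(0.0) - mag_db_digital(0.0)
grid = [k * 21000.0 / 2000 for k in range(2001)]
errs = [abs(mag_db_digital(f) + offset - mag_db_analog(f)) for f in grid]
max_err_db = max(errs)  # expected to be on the order of 1e-4 dB or less
```

Over an 80 dB magnitude range, reproducing the target to a small fraction of a millidecibel from 7-digit printed coefficients is a good sanity check that the tables above are self-consistent.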
Reply by Greg Berchin October 29, 2011
Thanks, Robert. Very interesting. I have some questions and comments:

>First we frequency-warp the desired z-plane
>magnitude function so that it can be optimized as a quotient of two
>polynomials with no trig terms. Basically, the frequency warp uses an
>"inverse bilinear transformation." In other words, this creates an s-plane
>magnitude response that will produce the desired z-plane function once
>the normal bilinear transformation is applied to the s-plane poles and
>zeros that we find in the optimization.
So you perform this initial "unwarping" with the knowledge that your final step will be warped by a bilinear transform. Have you experimented with other s->z mapping methods and their attendant warping (if any)? For example, if you know that your final step will be a "matched pole-zero" mapping, then IIRC there will be no warping at all.
>Next, we make a second warp of the frequency axis by changing the
>frequency variable from w^2 to u,
To state an implicit step explicitly, I think you're saying that you compute the magnitude-squared expression and then apply the change of variables.

Again, I wonder: if you used a non-warping s->z mapping method, AND you knew the exact Laplace domain transfer function for the prototype frequency response, could you simply use the magnitude-squared form of that transfer function at this point?

<snip minimax implementation details>
>The rest of the algorithm just requires taking the resulting ratio of
>polynomials and frequency-mapping the poles and zeros inversely to
>the way the original mapping was done. This is done in two steps: the
>first replaces u by s^2, and the second applies the normal bilinear
>transform to the poles and zeros in s. This gets us to the z plane and the
>final poles and zeroes of our optimization.
My suspicion is that, if you used a non-warping mapping method, you could apply your technique directly to a *known* Laplace domain transfer function. Thus you would have a direct s->z magnitude function approximation.

(Repeating the example cited above, your method would produce different results than simple "matched pole-zero" mapping, because they are optimizing different parameters. Substituting "z=exp(sT)" is a naive mapping of critical points, without regard to the effect upon magnitude response. Your method is an explicit minimization of the differences between the prototype and model magnitude responses, without regard to the effects upon locations of poles and zeroes.)
>The program that implements my technique belongs to my employer, so
>I can't make that publicly available. I will consider publishing a more
>comprehensive paper sometime in the future.
I look forward to that. And from the response I got with FDLS, I suspect that others do, too.

Greg Berchin
Reply by Robert Orban October 28, 2011
In article <egjia7l4tdl5m0e6vfc82ug7tb8c45fkha@4ax.com>, 
gjberchin@chatter.net.invalid says...
>On Wed, 26 Oct 2011 17:53:19 -0800, Robert Orban
><spambucket2413@earthlink.net> wrote:
>
>>This was computed using a proprietary program that uses optimization
>>techniques.
>
>Robert, I encourage you to publish this technique. In cases where phase response
>can be neglected, it matches magnitude response better than FDLS.
I believe that I posted a brief description of the technique in comp.dsp a few years ago. This time, I'll get a bit more detailed.

The algorithm is a grid-based technique that uses a highly oversampled set of frequency points. First we frequency-warp the desired z-plane magnitude function so that it can be optimized as a quotient of two polynomials with no trig terms. Basically, the frequency warp uses an "inverse bilinear transformation." In other words, this creates an s-plane magnitude response that will produce the desired z-plane function once the normal bilinear transformation is applied to the s-plane poles and zeros that we find in the optimization. For our purposes here, we call the s-plane frequency variable 'w', assuming the usual s -> jw transformation to compute the complex frequency response. This first warp maps Nyquist to infinity on the warped frequency axis.

Next, we make a second warp of the frequency axis by changing the frequency variable from w^2 to u, which exploits the fact that all magnitude functions are even functions of frequency (and pure real), so a0 + a1 w^2 + a2 w^4... becomes a0 + a1 u + a2 u^2... Nyquist is still at infinity.

We have now transformed the problem into a curve fit to a ratio of polynomials [(a0 + a1 u + a2 u^2...)/(1 + b1 u + b2 u^2...)]. There are, of course, many published techniques for doing this. We choose to fit the ratio of polynomials to the doubly-warped discrete grid in a least-squares sense by using singular value decomposition.

Next, the least-squares rational approximation is made minimax-error by adding correction terms to it, as computed by Remez's second algorithm, which is described in Forman Acton's "Numerical Methods That Work." (This is not the same Remez algorithm used in the famous Parks/McClellan/Rabiner FIR design program; Prof. Remez published more than one algorithm, and the one I use works for ratios of polynomials.)
Starting up the Remez algorithm requires that the least-squares solution provide the correct number of inflection points in the error vs. frequency plot so that the Remez algorithm can progressively refine these, so this is one point of potential failure for the algorithm. In addition, this particular algorithm is not guaranteed to be globally convergent.

The reason that the optimization is performed by adding correction terms to a least-squares solution is that this improves the numerical stability of the algorithm, because only the correction terms are being computed and these tend to be small compared to the terms in the least-squares solution. The Remez update equations are often quite ill-conditioned, so having a good numerical strategy is important. Therefore, I use SVD to get the roots of the update equations because it is admirably stable and because it automatically produces a "condition number," which allows for easy diagnosis of problems due to ill-conditioning.

The rest of the algorithm just requires taking the resulting ratio of polynomials and frequency-mapping the poles and zeros inversely to the way the original mapping was done. This is done in two steps: the first replaces u by s^2, and the second applies the normal bilinear transform to the poles and zeros in s. This gets us to the z plane and the final poles and zeroes of our optimization.

There are some functions that have true minimax solutions in the u-plane but cannot be transformed to the z plane because the result would create non-conjugate poles or zeros having non-zero imaginary parts. There are usually suboptimal solutions for these cases that are not truly minimax but that are still very useful. More specifically, if the u-plane solution has poles on the positive real axis, this will produce unrealizable results. A useful suboptimal strategy is to constrain these poles to 0 and to do a constrained optimization on the remaining poles and zeros.
Because the computation is based on an oversampled frequency grid (i.e. where the number of frequency samples is much larger than the number of z-plane poles and zeros in the output), the number of poles and zeros in the z-plane can be chosen freely without regard to the number of poles and zeros in the original analog function being approximated. The only thing that the program does with the original analog poles and zeros is that it uses them to compute the magnitude value corresponding to each point on the frequency grid.

All of this was written in Fortran 90 (of course, I used canned routines where possible) and if the algorithm is going to converge, the solution appears very quickly. (The nice thing about Remez is that it is quadratically convergent.) On a modern desktop computer, the results appear almost as soon as the Enter key is pressed. The only things that the user needs to supply are the analog poles and zeros, the sample rate of the approximation, the number of z-plane poles/zeros desired, and the frequency range over which the approximation should be minimax.

The program that implements my technique belongs to my employer, so I can't make that publicly available. I will consider publishing a more comprehensive paper sometime in the future. (Not now.) However, the write-up that I have uses quite a few ugly-looking equations that don't transform well into a text representation.
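For readers who want to experiment, here is a toy version of the inner least-squares rational fit. It uses a linearized (Levy-style) formulation solved through plain normal equations and Gaussian elimination rather than the SVD described above, and the polynomial degrees and test function are my own arbitrary choices, so treat it as a sketch of the idea only: the fit a0 + a1*u - M(u)*(b1*u + b2*u^2) = M(u) is linear in the unknown coefficients.

```python
# Linearized ("Levy-style") least-squares starting fit for
#   M(u) ~ (a0 + a1*u) / (1 + b1*u + b2*u^2),
# posed as the linear system  a0 + a1*u - M*(b1*u + b2*u^2) = M
# and solved over a grid of u samples.
def solve(A, y):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    aug = [row[:] + [y[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(aug[r][c]))
        aug[c], aug[piv] = aug[piv], aug[c]
        for r in range(c + 1, n):
            f = aug[r][c] / aug[c][c]
            for k in range(c, n + 1):
                aug[r][k] -= f * aug[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (aug[r][n] - sum(aug[r][k] * x[k]
                                for k in range(r + 1, n))) / aug[r][r]
    return x

def target(u):
    """A known rational 'magnitude-squared' function to recover."""
    return (2.0 + u) / (1.0 + 0.5 * u + 0.25 * u * u)

grid = [0.1 * k for k in range(41)]
# Design-matrix rows [1, u, -M*u, -M*u^2] for unknowns [a0, a1, b1, b2]:
rows = [[1.0, u, -target(u) * u, -target(u) * u * u] for u in grid]
rhs = [target(u) for u in grid]

# Normal equations (A^T A) x = A^T y:
AtA = [[sum(r[i] * r[j] for r in rows) for j in range(4)] for i in range(4)]
Aty = [sum(r[i] * v for r, v in zip(rows, rhs)) for i in range(4)]
a0, a1, b1, b2 = solve(AtA, Aty)  # recovers approximately (2, 1, 0.5, 0.25)
```

Because the sample data here is exactly rational with the fitted degrees, the linearized fit recovers the coefficients essentially exactly; on real magnitude data it only provides the starting point that the Remez correction terms then refine, and on ill-conditioned problems the SVD route is the safer solver.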
Reply by NugenAudio October 27, 2011
Thank you very much to all the responses, both in this thread, and by
email.

We have now placed the job with a contractor, so, please ... no more offers
to fulfil the job.

Thanks.

Paul
Reply by Greg Berchin October 27, 2011
On Wed, 26 Oct 2011 17:53:19 -0800, Robert Orban <spambucket2413@earthlink.net>
wrote:

>This was computed using a proprietary program that uses optimization
>techniques.
Robert, I encourage you to publish this technique. In cases where phase response can be neglected, it matches magnitude response better than FDLS.

Greg Berchin
Reply by Robert Orban October 26, 2011
This is the ITU-R 468 noise weighting filter, but the table in the TASA 
standard is offset by a constant gain of 5.6 dB for some reason.

http://en.wikipedia.org/wiki/ITU-R_468_noise_weighting

This shows a passive LC ladder filter implementation.

I happen to have worked out some realizations as a favor for a friend, so 
here are some hints:

The s-plane poles and zeros of the LC ladder filter are:

POLE ZERO ANALYSIS OF V1  
NETWORK ZEROS
    REAL PART        IMAG.PART           ZERO FREQ.         ZERO Q
      (HZ)             (HZ)                (HZ) 
  0.00000000D+00   0.00000000D+00      0.00000000D+00  -5.00000000D-01  
NETWORK POLES
    REAL PART        IMAG.PART           POLE FREQ.         POLE Q
      (HZ)             (HZ)                (HZ) 
 -2.98315994D+03  -9.94084265D+03      1.03788051D+04   1.73956565D+00  
 -2.98315994D+03   9.94084265D+03      1.03788051D+04   1.73956565D+00  
 -9.97506312D+03   0.00000000D+00      9.97506312D+03   5.00000000D-01  
 -3.75852916D+03  -5.79004234D+03      6.90297992D+03   9.18308681D-01  
 -3.75852916D+03   5.79004234D+03      6.90297992D+03   9.18308681D-01  
 -4.12270207D+03   0.00000000D+00      4.12270207D+03   5.00000000D-01

This can be broken into a first-order highpass section cascaded with a 6th-
order lowpass section.
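The "FREQ." and "Q" columns in the table above appear to follow the usual second-order conventions, f = sqrt(re^2 + im^2) and Q = f/(2|re|). A quick consistency check on the first complex pole pair (my sketch, with all values taken directly from the table):

```python
import math

# Verify the FREQ./Q columns of the pole table from the real/imag parts:
# pole frequency f_p = sqrt(re^2 + im^2), quality factor Q = f_p / (2*|re|).
re, im = -2.98315994e+03, -9.94084265e+03  # first complex pole pair (Hz)

f_p = math.hypot(re, im)
q = f_p / (2.0 * abs(re))

# Matches the tabulated 1.03788051D+04 Hz and Q of 1.73956565:
assert abs(f_p - 1.03788051e+04) < 1.0
assert abs(q - 1.73956565) < 1e-4
```

The same check works for the other pole rows; real-axis poles come out at Q = 0.5, exactly as tabulated.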

I only worked out a digital filter for 48 kHz sample rate.

The z-plane poles/zeros of the highpass section are worked out as:

 SUPPLY POLES:                                                             
     pole # 1 = -9.97506312D+03         +/-j 0
 SUPPLY ZEROS:                                                             
     zero # 1 = 0                       +/-j 0


         Zero #            Real                 Imag.
           1         -0.3385920              0.000000    
           2          0.9999964              0.000000    
         Pole #            Real                 Imag.
           1         -0.3540664              0.000000    
           2          0.2695075              0.000000    

 MAXIMUM ERROR FROM      1.00 Hz TO  20000.00 Hz IS  0.0016023dB

[note: you probably should change the second zero to 1.00000, which would 
make the function fully DC-blocking.]

The z-plane poles/zeros of the lowpass section are:

         Zero #            Real                 Imag.
           1         -0.7801920              0.000000    
           2         -0.5738120              0.000000    
           3         -0.1907084             0.9879871E-01
           4         -0.1907084            -0.9879871E-01
           5          0.6368181E-01         0.1277262    
           6          0.6368181E-01        -0.1277262    
         Pole #            Real                 Imag.
           1         -0.7081976              0.000000    
           2          0.5642261              0.000000    
           3          0.6763973             0.8303727E-01
           4          0.6763973            -0.8303727E-01
           5          0.4432977             0.4196167    
           6          0.4432977            -0.4196167    
 MAXIMUM ERROR FROM      0.00 Hz TO  20000.00 Hz IS  0.0007492dB

This was computed using a proprietary program that uses optimization 
techniques. I don't believe that there are any closed-form solutions to this 
problem that yield good results, but you might want to try the matched-z 
transform.
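As a starting point for the matched-z experiment suggested above, each s-plane critical point s (here expressed in Hz, as in the ladder-filter table) maps to z = exp(2*pi*s/Fs). This is a generic sketch of the mapping, not a full filter design, since matched-z also requires choosing an overall gain (e.g. matching at a reference frequency) after the poles and zeros are mapped:

```python
import cmath

# Matched-z transform: an s-plane pole or zero at s_hz (in Hz, so the
# rad/s value is 2*pi*s_hz) maps to z = exp(s*T) = exp(2*pi*s_hz/Fs).
fs = 48000.0

def matched_z(s_hz):
    """Map an s-plane critical point (in Hz) to the z-plane via z = exp(sT)."""
    return cmath.exp(2.0 * cmath.pi * s_hz / fs)

# First complex pole pair of the LC ladder, from the table above:
s_pole = complex(-2.98315994e+03, -9.94084265e+03)
z_pole = matched_z(s_pole)

# A stable s-plane pole (negative real part) lands inside the unit circle.
assert abs(z_pole) < 1.0
```

Mapping every pole and zero of the weighting filter this way and then fixing the overall gain at a reference frequency gives the closed-form approximation to try; whether its accuracy is acceptable has to be checked against the target response, just as the optimization-based tables above were.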



In article <25GdncqZaZZqSQjTnZ2dnUVZ_qKdnZ2d@giganews.com>, 
paul@n_o_s_p_a_m.nugenaudio.com says...
>Hi,
>
>We are seeking a contractor to provide the coefficients for a set of EQ
>filters with specific frequency responses, at given sample rates.
>
>The filters we need the coefficients for are the frequency weighting
>equalizer for LEQ(m) as given by
>http://www.tasatrailers.org/TASAStandard.pdf section 1.4.2 at various
>standard audio sample rates (eg- 44.1kHz, 48kHz, 96kHz, 192kHz), and
>preferably the matlab code (or whatever was used to generate the
>coefficients) so that we can generate the filters for other sample rates if
>necessary in the future.
>
>Please email me at paul at nugenaudio.com for more details
>
>Thanks
>
>Paul
>
>Dr. Paul Tapper,
>Technical Director,
>NUGEN Audio
Reply by Andreas Huennebeck October 14, 2011
Steve Pope wrote:

> Andreas Huennebeck <acmh@gmx.de> wrote:
>
>>NugenAudio wrote:
>
>>> We are seeking a contractor to provide the coefficients for a set of EQ
>>> filters with specific frequency responses, at given sample rates.
>>> [..]
>>
>>Looking at the products you advise on your homepage nugenaudio.com,
>>it's hard to believe that you do not have the knowledge to solve this task
>>on your own.
>
> Good. A project leader should have the knowledge to do every sub-task,
> even if he/she doesn't have the time to do every sub-task.
This would be optimal but it is not necessary as long as the company has at least one employee who has this knowledge. When I wrote 'you' in my posting above I meant the company, not the poster himself.

bye
Andreas
--
Andreas Hünnebeck | email: acmh@gmx.de
----- privat ---- | www : http://www.huennebeck-online.de
Fax/Anrufbeantworter: 0721/151-284301
GPG-Key: http://www.huennebeck-online.de/public_keys/andreas.asc
PGP-Key: http://www.huennebeck-online.de/public_keys/pgp_andreas.asc
Reply by Fred Marshall October 14, 2011
On 10/13/2011 2:50 PM, brent wrote:
> I think it all depends on the complexity of the project.
For the same person perhaps. But in general you seem to imply that there is no possibility of "dumb, dumber, dumbest" :-)
Reply by robert bristow-johnson October 14, 2011
On Oct 13, 6:30 pm, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
> Steve Pope <spop...@speedymail.org> wrote:
>> Tim Wescott <t...@seemywebsite.com> wrote:
>> (snip)
>
>>>I think it definitely _helps_ to know how to do all the sub-tasks, but I
>>>think that one can definitely do a good job without.
>>>Or perhaps the obverse: if you're only willing to manage projects where
>>>you can personally do all the sub-tasks, then you're not being
>>>adventuresome enough.
i know i can't do shit for user interfaces nor for how to plug into some really crappy API. but i would want to keep abreast of how the basic science and algorithm works. i would not want that to be too "low-level" for me.
>> On reflection, you're basically correct on this.
>
> My general rule is that someone should have a good understanding
> one level below the one that they are working on, but don't need
> to know below that.
>
> Software engineers should understand computer hardware.
>
> Computer hardware engineers should understand transistor physics.
nothing between computer hardware and the physics of transistors?? nothing about circuits? i might think that Kirchhoff's laws and volt-amp characteristics might come in between.
> Transistor designers should know solid state physics.
i would put the two at the same level. i cannot imagine a transistor device designer that doesn't know solid state physics. i would think that these guys should know the Ebers-Moll equations, how it's derived, and how it applies to BJT geometries.
> solid state physicists/engineers should know nuclear physics.
>
> nuclear engineers should know high energy/particle physics.
r b-j