
Finding the Best Optimum

Tim Wescott, November 4, 2013

When I was in school learning electrical engineering I owned a large mental pot, full of simmering resentment against the curriculum as it was being taught.

It really started in my junior year, when we took Semiconductor Devices, or more accurately "how to build circuits using transistors". I had been seduced by the pure mathematics of sophomore EE courses, where all the circuit elements (resistors, capacitors, coils and -- oh the joy -- dependent sources) are ideally modeled, and the labs were all carefully constructed to not push against the limits of the models (coils never live up to their models -- somehow we didn't do any labs with coils). This was capped off by the usual Signals & Systems course with the book by Oppenheim, Willsky & Young.

All of this idealism in teaching led to some unrealistic expectations about design, which were put to test when I first started to apply what I was learning in my Semiconductor Devices class.

In this class, they teach you about the 'small signal' model of a transistor, where you take this physical device, a little plastic button with metal legs, and you reduce it to a subcircuit consisting of some resistors, maybe some capacitors, and a dependent source. Then you use the model so obtained to make predictions about circuit behavior.
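
Just to put numbers to that recipe, here's a quick back-of-the-envelope sketch in Python (my numbers, not anything from a real datasheet or from this class): the hybrid-pi flavor of the small-signal model predicts the gain of a common-emitter stage with a fully bypassed emitter resistor as roughly -gm*RC.

    # Hypothetical operating point and parts -- purely for illustration.
    I_C = 1e-3        # collector bias current, 1 mA
    V_T = 0.026       # thermal voltage at room temperature, ~26 mV
    R_C = 4.7e3       # collector resistor, ohms

    gm = I_C / V_T                 # transconductance of the dependent source
    gain = -gm * R_C               # predicted small-signal voltage gain (no load)
    print(f"gm = {gm * 1000:.1f} mS, predicted gain = {gain:.0f}")

That prediction (about -181 here) only holds for inputs of a few millivolts -- which is exactly the "small" in small-signal that the next paragraphs run into.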

Now, I had been an electronics hobbyist long before I started college, so it didn't take me long before I decided to build up a circuit using my new-found knowledge. So, during break I built an amplifier circuit. It had a Thevenin bias circuit with a nice big cap on the emitter resistor for lots of amplification, and lots of promise. The only problem was -- it didn't work for beans. My nice sinusoidal input came out as a series of spikes. What was wrong? The problem was that I was trying to build a power amplifier using small-signal rules, and no one had told me just how small these 'small signals' really were.

Being young, I responded to this by getting offended. Didn't these people want me to be able to do something with what I learned? Why teach me useless knowledge?

After that I was on guard, ready to get ticked off at any apparent shortcoming in my education. So when I started graduate school and started learning about optimal solutions to systems problems I was ready and waiting to feel resentment, and I'd found a perfect vehicle.

Systems engineers love the idea of optimization, and I'm no different. The notion is that you construct a problem statement that includes some cost to be minimized or some benefit to be maximized. Then you frame this problem statement in a way that lets you find the partial derivatives of the cost with respect to about a gazillion different variables. Then you just turn the crank, and voila! Out pops a vector containing the World's Best Values for the gazillion different variables. If you're really lucky, out will pop a whole system description that you need only implement to have the Best Possible Solution to Your Problem.

There are one or two problems with this approach, however. The first problem is that, in general, finding the global optimum of a multivariate problem can be tremendously difficult. It is easy to find a local optimum, but finding a global optimum is akin to wandering in the Oregon Cascades on a foggy day, equipped only with an altimeter, trying to find the summit of Mount Hood. You know when you're going uphill, and when you get to a point where every direction you turn is 'down' you know you've found a local optimum -- but you can't tell whether you've reached the highest possible point or whether you're just standing on a smallish boulder.
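
Here's a little Python sketch of that blind mountaineering (the terrain, step size, and starting points are all made up for illustration): walk uphill until the altimeter stops improving, and notice that where you end up depends entirely on where you started.

    import numpy as np

    def altitude(x):
        # Two hills: a smallish boulder near x = -1.5, Mount Hood near x = 2.0
        return 1.0 * np.exp(-(x + 1.5)**2) + 3.0 * np.exp(-0.5 * (x - 2.0)**2)

    def climb(x, step=0.01, iters=5000):
        # Walk uphill using a numerical gradient -- all the altimeter tells you.
        for _ in range(iters):
            grad = (altitude(x + 1e-6) - altitude(x - 1e-6)) / 2e-6
            x += step * grad
        return x

    for start in (-3.0, 0.5, 4.0):
        top = climb(start)
        print(f"start {start:+.1f} -> stop at x = {top:+.2f}, altitude {altitude(top):.2f}")

Start on the wrong side of the valley and you faithfully climb to the top of the boulder, altitude 1, never knowing that altitude 3 was available a little further on.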

The second problem with optimization is that, very often, the way that you've framed your problem statement doesn't match reality well enough.  Consequently the optimal solution that the math coughs up does not match, in any meaningful way, the optimal solution to your real problem.

Take a classic problem from communication systems, for example. Say you want to encode some data in some known way and you want to find the Very Best signal processing to do on that data to extract it. The nice thing about comm systems is that if you are building both ends of the link you get to choose how to frame the problem. So if you're lucky you can make darn sure that the transmitted symbols don't overlap each other. Then, if you assume a wide-open, linear communication channel and additive white Gaussian noise, you find out that -- praise the math! -- the Very Best way to demodulate this data is with a linear system. Furthermore, you find out that this linear system is a matched filter (this is why we like matched filters, by the way).
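
If you want to see that result in action, here's a bare-bones Python sketch (the pulse shape, noise level, and bit count are arbitrary choices of mine, not anything canonical): non-overlapping rectangular pulses, additive white Gaussian noise, and a matched filter followed by one sample per symbol.

    import numpy as np

    rng = np.random.default_rng(0)
    sps = 8                                   # samples per symbol
    bits = rng.integers(0, 2, 200)            # random data
    symbols = 2.0 * bits - 1.0                # map {0, 1} -> {-1, +1}
    pulse = np.ones(sps)                      # rectangular transmit pulse

    tx = np.repeat(symbols, sps)              # non-overlapping symbols: no ISI
    rx = tx + 0.7 * rng.standard_normal(tx.size)   # linear channel, AWGN

    # Matched filter: convolve with the time-reversed pulse, then sample once
    # per symbol at the instant of maximum pulse energy.
    mf = np.convolve(rx, pulse[::-1])
    samples = mf[sps - 1::sps][:bits.size]
    decisions = (samples > 0).astype(int)

    print("bit errors:", np.count_nonzero(decisions != bits), "out of", bits.size)

Under exactly these assumptions, nothing nonlinear or fancier will beat that little linear filter. The catch is in the assumptions.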

This ointment, however, is very attractive to flies.

First, noise is very rarely Gaussian. It's often very close to being Gaussian, but most real-world noise has excessive energy in the 'tails', and in some media one simply can't model the noise as a stationary Gaussian process (such as the atmosphere at 300 kHz: http://www.wescottdesign.com/articles/MSK/mskTop.html). Next, our communications channels often have transmission characteristics that promote intersymbol interference in spite of our best efforts. Finally, we often find that the channels are nonlinear, and distort the signal in ways that are not taken into account in our hypothetical analysis leading to a matched filter.
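
To put a number on "excessive energy in the tails", here's a crude Python sketch comparing Gaussian noise to a simple two-component Gaussian mixture with the same variance (the mixture is a stand-in I made up for illustration, not a model of real atmospheric noise):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000

    gaussian = rng.standard_normal(n)
    # 1% of the samples come from a component with 10x the standard deviation.
    impulsive = np.where(rng.random(n) < 0.01,
                         10.0 * rng.standard_normal(n),
                         rng.standard_normal(n))
    impulsive /= impulsive.std()          # normalize to unit variance

    for name, x in (("Gaussian ", gaussian), ("impulsive", impulsive)):
        excess_kurtosis = np.mean(x**4) / x.var()**2 - 3.0
        print(f"{name}: variance {x.var():.2f}, worst sample {np.abs(x).max():5.1f}, "
              f"excess kurtosis {excess_kurtosis:.1f}")

Same average power, wildly different worst-case spikes -- and it's the spikes, not the average power, that flip your bits.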

One runs into similar problems in control systems -- the subject of optimal control is a mature one, and there are some fine results out there. If you're not careful, however, you'll find yourself designing a controller that only works when the plant exactly matches the model -- and that never happens. You can alleviate this problem somewhat by using robust control to model the plant and its uncertainties, then finding the best controller for that model, but you're still using a model that doesn't really match reality.
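
Here's a toy Python example of that trap (a one-state plant with made-up numbers, standing in for an aggressive model-based design rather than any particular optimal-control recipe): a gain tuned to give the nominal model its best possible response can turn sluggish -- or flat-out unstable -- when the real plant's gain drifts away from the model.

    # Nominal model: x[k+1] = a*x[k] + b*u[k], with feedback u[k] = -K*x[k].
    a, b_model = 2.0, 1.0          # unstable open-loop plant, modeled input gain
    K = a / b_model                # puts the nominal closed-loop pole at zero

    def simulate(b_actual, steps=10, x=1.0):
        """State after each step when the model-based gain meets the actual plant."""
        history = []
        for _ in range(steps):
            x = a * x - b_actual * K * x   # closed-loop pole = a - b_actual * K
            history.append(round(x, 3))
        return history

    print("plant matches model:", simulate(1.0))   # settles in one step
    print("actual gain 40% low:", simulate(0.6))   # pole at 0.8 -- sluggish
    print("actual gain 60% low:", simulate(0.4))   # pole at 1.2 -- unstable!

The "optimal" response only exists on paper; the real plant gets whatever pole its mismatch dictates.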

You can, in fact, beat your brains out trying to gather all the information you need to find a 'true optimum' solution, and never even get to the step where you put a blindfold on and try to find the top of the mountain. I have seen this sort of effort have a very negative effect on an engineering project -- someone spends so much time looking for the "best" solution to a problem that the whole project slips. Less bad, but still not good, you can spend a lot of engineering time (and your employer's or customer's money) looking for incremental improvements.

At one point in my career I suffered a great deal of angst, because I was torn between finding the 'best' solution to a problem (meaning the best possible technical solution), and just getting things done and out the door. Perhaps I'm slow, but it took me years to hit upon what should have been obvious from the start: it is my job, as a design engineer, to optimize for money. This means that if I'm working on a product that is going to have a production run of 1000 and I can take $10 out of the cost of the system, then I should spend less than $10,000 to do it (actually I should spend much less -- most companies want to see the engineering paid off in a year or two, so that's the production figure you can use to amortize your costs). On the other hand, if I'm working on a product that is going to ship 100,000 units a month, I can spend three months of my time taking a dime out of the system cost and be a True Engineering Hero.
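
In Python, the whole "optimize for money" budget calculation fits in a few lines (the amortization assumption is the one sketched above, and the numbers are the hypothetical ones from this paragraph):

    def engineering_budget(saving_per_unit, units_amortized_over):
        """Upper bound on engineering spend: total cost saved over however many
        units your company uses to amortize the effort -- often a year or two
        of production, not the whole product lifetime."""
        return saving_per_unit * units_amortized_over

    # Small run: $10 saved on 1000 units -> spend (much) less than $10,000.
    print(engineering_budget(10.00, 1_000))
    # High volume: a dime saved on 100,000 units/month, amortized over a year.
    print(engineering_budget(0.10, 100_000 * 12))   # $120,000 -- three months is cheap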

You can go far with this view, but you have to be careful. Optimizing for money usually doesn't mean that you should do slap-dash work. You shouldn't do slap-dash work because it will usually rise up and inflict pain and expense on someone down the road. You can get away with it if you work for a fly-by-night outfit (and, presumably, if you are not burdened by a conscience), but otherwise it's generally a bad idea.

Taking the "optimize for money" view does mean that you should think about how well something works, and how much better you can make it, and how much time and resources it will take to make that happen. When I am doing this I try to categorize things simply (because time is money and I'm optimizing for money). If I can, I put system features into three bins: "good", "bad" and "adequate". Sometimes I'll slip in "excellent" and "really sucks". Then I look at how long it'll take to make bad things adequate, and adequate things good (the effort to make good things bad is, somehow, always too easy). Then I either decide on my own what needs to be changed, or I make a report and recommendation to my client and we decide together what should be done.

The important thing to remember here is that when you start thinking of how to make the "best" system, algorithm, or whatever, you need to be careful about what you mean by "best", and choose a definition of "best" that will, in the end, be truly beneficial to you, to the people you work for, and to your (or their) customers.

