In classical feedback control, i.e. single-loop control, stability is well studied and is analyzed using Nyquist or Bode plots. One can tune the open-loop response so that it never comes close to the critical (-1, 0) point in the Nyquist plot.
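(To make the single-loop case concrete, here is a rough sketch of checking how close the loop gets to (-1, 0) numerically. The plant L(s) = 2 / (s (s+1) (0.5s+1)) is made up purely for illustration.)

```python
import numpy as np

# Hypothetical open-loop transfer function L(s) = 2 / (s*(s+1)*(0.5*s+1)),
# evaluated on a dense frequency grid.
w = np.logspace(-2, 2, 2000)          # rad/s
s = 1j * w
L = 2.0 / (s * (s + 1) * (0.5 * s + 1))

# Closest approach of the Nyquist curve to the critical point (-1, 0).
# A small distance means the loop is fragile even if nominally stable.
dist = np.min(np.abs(1 + L))
print(f"closest approach to -1: {dist:.3f}")

# Phase margin: phase above -180 deg at the gain-crossover frequency,
# found here as the grid point where |L| crosses 1.
idx = np.argmin(np.abs(np.abs(L) - 1.0))
pm = 180.0 + np.degrees(np.angle(L[idx]))
print(f"phase margin: {pm:.1f} deg")
```

For this made-up plant the phase margin comes out small (on the order of 15 degrees), which is exactly the kind of marginal tuning the margins are there to warn you about.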
But for MIMO systems, what is the stability criterion? I did some searching, and it seems H-infinity control is what is used for such systems. So my questions are:
1) Many practical systems tend to be multiloop, or rather MIMO, systems. Are they all designed using H-infinity control?
2) How does the Nyquist criterion translate to MIMO loops? Is there a similar Nyquist-like criterion for them?
The old traditional way of implementing a multi-loop controller is to start with the innermost loop, get it stable and behaving well, then treat it as a fixed plant, move out to the next loop, and repeat.
This works, and particularly in situations where the various innies and outies can be decoupled it works quite well. In fact, I can't think of a single system I've put into production with multiple loops where I did not use this technique. You can do your Bode plots and test the Nyquist criterion at each step. However, there is no nicely formulaic way to design for optimality, and keeping the loops robustly stable is more a matter of intuition and experience than of hard-and-fast mathematics.
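The loop-at-a-time procedure can be sketched numerically. All the numbers below are made up: a first-order discrete-time velocity plant with a proportional inner loop, which is then frozen and wrapped in a proportional position loop.

```python
import numpy as np

# Inner loop: motor velocity v[k+1] = a*v[k] + b*u[k], proportional gain Ki.
# (Toy plant and gains, chosen only to illustrate the procedure.)
a, b, Ki = 0.9, 0.1, 4.0
inner_pole = a - b * Ki               # closed inner loop: v[k+1] = 0.5*v[k] + ...
assert abs(inner_pole) < 1            # step 1: inner loop stable on its own

# Outer loop: position p[k+1] = p[k] + T*v[k]; command v_ref = Ko*(p_ref - p).
# Step 2: treat the closed inner loop as a fixed plant and close the outer loop.
T, Ko = 0.1, 1.0
A = np.array([[inner_pole, -b * Ki * Ko],   # combined state [v, p]
              [T,           1.0       ]])
eigs = np.linalg.eigvals(A)
print("outer-loop eigenvalues:", eigs)      # all inside the unit circle -> stable
```

At each step you can also do the Bode/Nyquist check on the loop you are currently closing, exactly as in the single-loop case.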
As JOS said, the way that you check directly for stability is to find the eigenvalues of the state transition matrix (or, if you like thinking in continuous time, the eigenvalues of the state evolution matrix). As long as they're in the stability region (magnitude less than 1 or real part less than zero, respectively), then your system is stable. Even if you're using the method I outline above, it's not a bad idea to double-check your math by doing so, particularly if you need to check for stability and performance in the face of parameter variation.
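That check is a one-liner in practice. The matrices below are placeholders; in a real design they would be your closed-loop system matrices.

```python
import numpy as np

# Hypothetical closed-loop matrices, just to show the check itself.
A_disc = np.array([[0.5, 0.2],
                   [0.0, 0.8]])           # discrete-time state-transition matrix
A_cont = np.array([[-1.0,  3.0],
                   [ 0.0, -0.5]])         # continuous-time state-evolution matrix

# Discrete time: every eigenvalue must lie strictly inside the unit circle.
stable_disc = np.all(np.abs(np.linalg.eigvals(A_disc)) < 1)

# Continuous time: every eigenvalue must lie in the open left half-plane.
stable_cont = np.all(np.real(np.linalg.eigvals(A_cont)) < 0)

print(stable_disc, stable_cont)
```

For checking robustness to parameter variation, you can re-run the same eigenvalue test in a loop over the extreme (or sampled) values of the uncertain parameters.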
H-infinity control is one way of directly designing a controller for a MIMO system. As I understand it, the technique itself does not lend itself well to designing robust controllers, nor does it make it easy to deal with nonlinearities. There are various nifty theoretical methods for doing so, but they're all fairly mathematically intense, and I do not know what's prevalent these days -- the good ol' divide and conquer method of doing each loop individually has never failed me yet.
Thanks for your reply. So if all the loops are stable, then the complete closed loop is stable. I am of the same opinion. But I think this is only a sufficient condition for overall stability: in MIMO loops, even if some individual loops are unstable, the complete system can still be stable.
So I don't know what criterion to check for the optimum solution. I am looking for help in this area.
Each loop being stable going out is, indeed, a sufficient but not necessary condition. But in many cases (mostly when there's a 'natural' progression of loops: an example would be a velocity loop around a motor, followed by a position loop around a valve, with, finally, a fluid-level loop around a tank full of reactants) doing so gets you a pretty good solution -- particularly when there are significant nonlinearities associated with each step.
"Optimum", in this context, is a loaded word. It'll mean something significantly different to someone steeped in control theory (who will be thinking optimal in the H-infinity or quadratic sense) than to a 'civilian' (who will be expecting the system to work as best as can be for whatever real-world criteria they have in mind). If you're thinking optimal in a control systems sense, then go on a safari through the controls literature -- but remember that their "optimal" isn't necessarily optimal for whoever signs the checks.
And note, too, that a typical way to use such 'optimal' control design strategies is to select cost functions which you know, a priori, will result in outcomes that are more robust to real-world variations, match real-world performance desires, or both.
And, in this case, the way that an engineer develops this a-priori knowledge is through seat-of-the-pants experience -- which brings you full circle to just doing the whole damned thing intuitively anyway, only with new big words*.
* (OK. I've tipped over into being cynical. C'est la vie. I was once challenged, in a project meeting, after I'd made some humorously disparaging comment about Sales, with the question of what the difference is between a realist and a cynic. As I was floundering for a comeback, a normally quiet and retiring coworker saved me with "around here, not much!" I think it applies to many things in this imperfect world.)
In my experience, one finds the eigenvalues of the state-transition matrix (in the state-space representation) and checks that they all have magnitude less than 1.
Thanks for your reply. Is there a criterion for how large the eigenvalues can be for a particular level of performance? What I mean is: for continuous-time single-loop systems, it is common to say that the phase margin (PM) should be > 60 degrees for robust performance. A PM of 20 degrees is also stable, but the resulting closed-loop response will be totally unacceptable.
The question is, where should one keep the eigenvalues? Eigenvalues having magnitude less than 1 (or, for continuous time, in the left half-plane) is fine, but for a particular response (a PM-like criterion), what should they look like?
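One common way to translate a PM-like target into a pole location is through the equivalent damping ratio of the dominant pole pair: the classical rule of thumb is PM (in degrees) roughly equals 100 times zeta. A rough sketch, with an assumed sample time and a made-up pole:

```python
import numpy as np

Ts = 0.01                                  # sample time (assumed)

def damping_of_discrete_pole(z, Ts):
    """Map a discrete pole z = exp(s*Ts) back to s and read off zeta."""
    s = np.log(z) / Ts                     # equivalent continuous-time pole
    return -s.real / abs(s)                # zeta = -Re(s)/|s|

# A lightly damped pole: |z| close to 1 with a significant angle -> low zeta.
z = 0.95 * np.exp(1j * 0.3)
zeta = damping_of_discrete_pole(z, Ts)
print(f"zeta = {zeta:.2f}, rough PM equivalent = {100*zeta:.0f} deg")
```

So the analogue of "PM > 60 degrees" is roughly "keep the dominant eigenvalues at a damping ratio of 0.6 or better"; the pole above works out to zeta of about 0.17, i.e. roughly the unacceptable 20-degree-PM regime you mention.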