State Space Models

Equations of motion for any physical system may be conveniently formulated in terms of the state of the system [330]:

$\displaystyle \underline{{\dot x}}(t) = f_t[\underline{x}(t),\underline{u}(t)]$ (1.6)

Here, $ \underline{x}(t)$ denotes the state of the system at time $ t$, $ \underline{u}(t)$ is a vector of external inputs (typically forces), and the general vector function $ f_t$ specifies how the current state $ \underline{x}(t)$ and inputs $ \underline{u}(t)$ cause a change in the state at time $ t$ by affecting its time derivative $ \underline{{\dot x}}(t)$. Note that the function $ f_t$ may itself be time varying in general. The model of Eq.$ \,$(1.6) is extremely general for causal physical systems. Even the functionality of the human brain is well cast in such a form.

Equation (1.6) is diagrammed in Fig.1.4.

Figure: Continuous-time state-space model $ \underline{{\dot x}}(t) = f_t[\underline{x}(t),\underline{u}(t)]$.
\includegraphics{eps/statespaceanalog}

The key property of the state vector $ \underline{x}(t)$ in this formulation is that it completely determines the system at time $ t$, so that future states depend only on the current state and on any inputs at time $ t$ and beyond. In particular, all past states and the entire input history are ``summarized'' by the current state $ \underline{x}(t)$. Thus, $ \underline{x}(t)$ must include all ``memory'' of the system.

Forming Outputs

Any system output is some function of the state, and possibly the input (directly):

$\displaystyle \underline{y}(t) \isdef o_t[\underline{x}(t),\underline{u}(t)]$

The general case of output extraction is shown in Fig.1.5.

Figure: Continuous-time state-space model with output vector $ \underline{y}(t) = o_t[\underline{x}(t),\underline{u}(t)]$.
\includegraphics{eps/statespaceanalogwo}

The output signal (vector) is most typically a linear combination of state variables and possibly the current input:

$\displaystyle \underline{y}(t) \isdefs C\underline{x}(t) + D\underline{u}(t)$

where $ C$ and $ D$ are constant matrices of linear-combination coefficients.
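As a minimal sketch in Python/NumPy (with purely illustrative $ C$ and $ D$ matrices and state/input values), forming such an output is just a matrix-vector product:

```python
import numpy as np

# Hypothetical 2-state system with 1 input and 1 output (values for illustration only).
C = np.array([[1.0, 0.0]])   # read out the first state variable
D = np.array([[0.0]])        # no direct feed-through from input to output

x = np.array([0.5, -1.2])    # current state vector x(t)
u = np.array([2.0])          # current input vector u(t)

y = C @ x + D @ u            # output y(t) = C x(t) + D u(t)
print(y)                     # -> [0.5]
```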


State-Space Model of a Force-Driven Mass

Figure 1.6: Ideal mass $ m$ on frictionless surface driven by force $ f(t)$.
\includegraphics{eps/forcemassintrosimp}

For the simple example of a mass $ m$ driven by external force $ f$ along the $ x$ axis, we have the system of Fig.1.6. A natural choice of state variable is the velocity $ v={\dot x}$, so that Newton's $ f=ma$ yields

$\displaystyle \dot{v} \eqsp \frac{1}{m} f.$

This is a first-order system (no vector needed). We'll look at a simple vector example below in §1.3.7.
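As a concrete sketch (in Python, with an assumed mass of $ m=1$ kg for illustration), this one-state model can be written directly in the general form of Eq.$ \,$(1.6):

```python
m = 1.0  # mass in kg (illustrative value)

def f(v, force):
    """State derivative for the force-driven mass: dv/dt = force / m."""
    return force / m

# A constant 2 N force yields a constant acceleration of 2 m/s^2:
print(f(v=0.0, force=2.0))   # -> 2.0
```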


Numerical Integration of General State-Space Models

An approximate discrete-time numerical solution of Eq.$ \,$(1.6) is provided by

$\displaystyle \underline{x}(t_n+T_n) \eqsp \underline{x}(t_n) + T_n\,f_{t_n}[\underline{x}(t_n),\underline{u}(t_n)], \quad n=0,1,2,\ldots\,.$ (1.7)

Let

$\displaystyle g_{t_n}[\underline{x}(t_n),\underline{u}(t_n)] \isdefs \underline{x}(t_n) + T_n\,f_{t_n}[\underline{x}(t_n),\underline{u}(t_n)].$

Then we can diagram the time-update as in Fig.1.7. In this form, it is clear that $ g_{t_n}$ predicts the next state $ \underline{x}(t_n+T_n)$ as a function of the current state $ \underline{x}(t_n)$ and current input $ \underline{u}(t_n)$. In the field of computer science, computations having this form are often called finite state machines (or simply state machines), as they compute the next state given the current state and external inputs.

Figure 1.7: Discrete-time state-space model viewed as a state predictor, or finite state machine.
\includegraphics{eps/statemachineg}

This is a simple example of numerical integration for solving an ODE, where in this case the ODE is given by Eq.$ \,$(1.6) (a very general, potentially nonlinear, vector ODE). Note that the initial state $ \underline{x}(t_0)$ is required to start Eq.$ \,$(1.7); it provides the boundary conditions for the ODE at time zero. The time sampling interval $ T_n$ may be fixed for all time as $ T_n=T$ (as it normally is in linear, time-invariant digital signal processing systems), or it may vary adaptively according to how fast the system is changing (as is often needed for nonlinear and/or time-varying systems). Further discussion of nonlinear ODE solvers is taken up in §7.4, but for most of this book, linear, time-invariant systems will be emphasized.
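As an illustration of this time-update, here is a short Python/NumPy sketch of Eq.$ \,$(1.7) applied to an assumed nonlinear plant (a damped, driven pendulum with made-up parameter values); the function names and step size are choices for this example only:

```python
import numpy as np

def f(t, x, u):
    """Illustrative nonlinear state derivative: a damped pendulum driven by torque u.
    State x = [angle, angular velocity]; the constants are made-up values."""
    theta, omega = x
    return np.array([omega, -9.81 * np.sin(theta) - 0.1 * omega + u[0]])

def euler_step(t, x, u, T):
    """One step of Eq. (1.7): x(t + T) = x(t) + T * f_t[x(t), u(t)]."""
    return x + T * f(t, x, u)

T = 0.001                    # fixed sampling interval (T_n = T)
x = np.array([0.5, 0.0])     # initial state x(t_0) -- the boundary condition at time zero
for n in range(10000):       # simulate 10 seconds with zero input torque
    x = euler_step(n * T, x, u=np.array([0.0]), T=T)
print(x)                     # approximate state after 10 seconds
```

Higher-order solvers (e.g., Runge-Kutta methods) follow the same state-predictor pattern, differing only in how they form a more accurate $ g_{t_n}$.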

Note that for handling switching states (such as op-amp comparators and the like), the discrete-time state-space formulation of Eq.$ \,$(1.7) is more conveniently applicable than the continuous-time formulation in Eq.$ \,$(1.6).


State Definition

In view of the above discussion, it is perhaps plausible that the state $ \underline{x}(t) = [x_1(t), \ldots, x_N(t)]^T$ of a physical system at time $ t$ can be defined as a collection of state variables $ x_i(t)$, wherein each state variable $ x_i(t)$ is a physical amplitude (pressure, velocity, position, $ \ldots$) corresponding to a degree of freedom of the system. We define a degree of freedom as a single dimension of energy storage. The net result is that it is possible to compute the stored energy in any degree of freedom (the system's ``memory'') from its corresponding state-variable amplitude.

For example, an ideal mass $ m$ can store only kinetic energy $ E_m \eqsp \frac{1}{2}m\, v^2$, where $ v={\dot x}$ denotes the mass's velocity along the $ x$ axis. Therefore, velocity is the natural choice of state variable for an ideal point-mass. Coincidentally, we reached this conclusion independently above by writing $ f=ma$ in state-space form $ \dot{v}=(1/m)f$. Note that a point mass that can move freely in 3D space has three degrees of freedom and therefore needs three state variables $ (v_x,v_y,v_z)$ in its physical model. In typical models from musical acoustics (e.g., for the piano hammer), masses are allowed only one degree of freedom, corresponding to being constrained to move along a 1D line, like an ideal spring. We'll study the ideal mass further in §7.1.2.

Another state-variable example is provided by the ideal spring, described by Hooke's law $ f=kx$ (§B.1.3), where $ k$ denotes the spring constant, and $ x$ denotes the spring displacement from rest. Springs thus contribute a force proportional to displacement in Newtonian ODEs. Such a spring can only store the physical work (force times distance) expended to displace it, in the form of potential energy $ E_k \eqsp \frac{1}{2}k\, x^2$. Thus, spring displacement is the most natural choice of state variable for a spring. More about ideal springs will be discussed in §7.1.3.
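As a small numerical illustration (Python, with made-up element values), the energy stored in each degree of freedom follows directly from its state-variable amplitude:

```python
m, k = 0.5, 200.0        # mass (kg) and spring constant (N/m) -- illustrative values
v, x = 1.5, 0.02         # state variables: mass velocity (m/s), spring displacement (m)

E_mass   = 0.5 * m * v**2    # kinetic energy stored in the mass
E_spring = 0.5 * k * x**2    # potential energy stored in the spring
print(E_mass, E_spring)      # -> 0.5625 0.04
```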

In so-called RLC electrical circuits (consisting of resistors $ R_i$, inductors $ L_i$, and capacitors $ C_i$), the state variables are typically defined as all of the capacitor voltages (or charges) and inductor currents. We will discuss RLC electrical circuits further below.

There is no state variable for each resistor current in an RLC circuit because a resistor dissipates energy but does not store it--it has no ``memory'' like capacitors and inductors. The state (current $ I$, say) of a resistor $ R$ is determined by the voltage $ V$ across it, according to Ohm's law $ V=IR$, and that voltage is supplied by the capacitors, inductors, voltage sources, etc., to which it is connected. Analogous remarks apply to the dashpot, the mechanical analog of the resistor--we do not assign state variables to dashpots. (If we do so by mistake, we will obtain state variables that are linearly dependent on other state variables, and the order of the system will appear larger than it really is. This does not normally cause problems, and there are many numerical methods for later ``pruning'' the state down to its proper order.)

Masses, springs, dashpots, inductors, capacitors, and resistors are examples of so-called lumped elements. Perhaps the simplest distributed element is the continuous ideal delay line. Because it carries a continuum of independent amplitudes, the order (number of state variables) is infinity for a continuous delay line of any length! However, in practice, we often work with sampled, bandlimited systems, and in this domain, delay lines have a finite number of state variables (one for each delay element). Networks of lumped elements thus yield finite-order state-space models, while including even one distributed element raises the order to infinity until the system is bandlimited and sampled.
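To illustrate the sampled case, here is a small Python/NumPy sketch (class name and length are arbitrary) of a delay line whose complete state is simply its buffer contents, one state variable per unit-sample delay:

```python
import numpy as np

class DelayLine:
    """Sampled delay line of N samples: the buffer holds all N state variables."""
    def __init__(self, N):
        self.state = np.zeros(N)        # one state variable per unit-sample delay
    def tick(self, x_in):
        y_out = float(self.state[-1])   # oldest sample comes out
        self.state = np.roll(self.state, 1)
        self.state[0] = x_in            # newest sample goes in
        return y_out

d = DelayLine(4)
print([d.tick(x) for x in [1, 2, 3, 4, 5]])   # -> [0.0, 0.0, 0.0, 0.0, 1.0]
```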

In summary, a state variable may be defined as a physical amplitude for some energy-storing degree of freedom. In models of mechanical systems, a state variable is needed for each ideal spring and point mass (times the number of dimensions in which it can move). For RLC electric circuits, a state variable is needed for each capacitor and inductor. If there are any switches, their state is also needed in the state vector (e.g., as boolean variables). In discrete-time systems such as digital filters, each unit-sample delay element contributes one (continuous) state variable to the model.
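Tying these points together, the following Python/NumPy sketch (with illustrative parameter values) writes a force-driven mass-spring oscillator in the form of Eq.$ \,$(1.6); it requires exactly two state variables, one for the spring displacement and one for the mass velocity:

```python
import numpy as np

m, k = 0.1, 40.0                  # mass (kg) and spring constant (N/m) -- illustrative

def f(x, u):
    """State derivative for a force-driven mass-spring system.
    State x = [spring displacement, mass velocity]; input u = [external force]."""
    pos, vel = x
    acc = (u[0] - k * pos) / m    # Newton: m * acc = f_ext - k * pos
    return np.array([vel, acc])

# One forward-Euler step (Eq. (1.7)) from rest, driven by a 1 N force:
T = 1e-4
x = np.array([0.0, 0.0])
x = x + T * f(x, np.array([1.0]))
print(x)                          # -> [0.    0.001]
```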

