More General Finite-Difference Methods

The FDA and bilinear transform of the previous sections can be viewed as first-order conformal maps from the analog $s$ plane to the digital $z$ plane. These maps are one-to-one and therefore non-aliasing. The FDA performs well at low frequencies relative to the sampling rate, but it introduces artificial damping at high frequencies. The bilinear transform preserves the frequency axis exactly, though over a warped frequency scale. Being first order, both maps preserve the number of poles and zeros in the model.
We may only think in terms of mapping the $s$ plane to the $z$ plane for linear, time-invariant systems, because Laplace-transform analysis is not defined for nonlinear and/or time-varying differential equations (there is no $s$ plane). Such systems are instead digitized by some form of numerical integration, producing solutions that are ideally sampled versions of the continuous-time solutions. It is often necessary to work at sampling rates much higher than the desired audio sampling rate, due to the bandwidth-expanding effects of nonlinear elements in the continuous-time system. A tutorial review of numerical solutions of Ordinary Differential Equations (ODEs), including nonlinear systems, with examples in the realm of audio effects (such as a diode clipper), is given in . Finite-difference schemes specifically designed for nonlinear discrete-time simulation, such as the energy-preserving ``Kirchhoff-Carrier nonlinear string model'' and ``von Karman nonlinear plate model'', are discussed in . The remaining sections here summarize a few of the more elementary techniques discussed in .

In state-space form (§1.3.7),8.7 a general class of $N$th-order Ordinary Differential Equations (ODEs) can be written as
$$\dot{x}(t) = f[t, x(t), u(t)] \qquad\hbox{(7.8)}$$
where $t$ denotes time in seconds, $x(t)$ denotes a vector of $N$ state variables at time $t$, $\dot{x}(t) \triangleq dx(t)/dt$ denotes its time derivative, and $u(t)$ denotes a vector of external input signals at time $t$.
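As a concrete illustration of the state-space form $\dot{x} = f(t,x,u)$, consider a damped mass-spring system (a hypothetical example chosen here for illustration; the system and all parameter values are not from the text). The second-order equation $m\ddot{q} + r\dot{q} + kq = u(t)$ becomes first-order by collecting position and velocity into the state vector:

```python
import numpy as np

# Hypothetical mass-spring-damper:  m*q'' + r*q' + k*q = u(t),
# rewritten in state-space form xdot = f(t, x, u) with x = [q, q'].
m, r, k = 1.0, 0.5, 4.0   # illustrative parameter values

def f(t, x, u):
    q, v = x                                   # position, velocity
    return np.array([v, (u - r * v - k * q) / m])

# Sanity check: at rest with unit displacement and no input,
# the acceleration is -k*q/m = -4.
xdot = f(0.0, np.array([1.0, 0.0]), 0.0)
```

Any such function $f$ can then be handed to the integration methods discussed below.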
In the linear, time-invariant (LTI) case, standard methods for converting a filter from continuous to discrete time may be used, such as the FDA (§7.3.1) and the bilinear transform (§7.3.2).8.8
Using the finite-difference approximation (Eq.(7.2)) with the derivative evaluated at time $n$ yields the forward Euler method of numerical integration:
$$\hat{x}(n+1) = \hat{x}(n) + T\,f[n, \hat{x}(n), u(n)] \qquad\hbox{(7.10)}$$
where $\hat{x}(n)$ denotes the approximation to $x(nT)$ computed by the forward Euler method. Note that the ``driving function'' $f$ is evaluated at time $n$, not $n+1$. As a result, given the initial state $\hat{x}(0)$ and the input vector $u(n)$ for all $n\ge 0$, Eq.(7.10) can be iterated forward in time to compute $\hat{x}(n)$ for all $n>0$. Since $f$ is an arbitrary function, we have a solver that is applicable to nonlinear, time-varying ODEs of the form Eq.(7.8). Because each iteration of the forward Euler method depends only on past quantities, it is termed an explicit method. In the LTI case, an explicit method corresponds to a causal digital filter. Methods that depend on current and/or future solution samples (i.e., $\hat{x}(n+j)$ for $j\ge 0$) are called implicit methods. When a nonlinear numerical-integration method is implicit, each step forward in time typically uses some number of iterations of Newton's method (see §7.4.5 below).
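The iteration can be sketched as follows, using a hypothetical scalar nonlinear ODE $\dot{x} = -x^3 + u(t)$ (the system, step size, and duration are illustrative choices, not from the text):

```python
import numpy as np

# Forward Euler:  x[n+1] = x[n] + T * f(n, x[n], u[n])
# Hypothetical test system (not from the text):  dx/dt = -x^3 + u(t).
def f(n, x, u):
    return -x**3 + u

T = 1e-3                 # step size in seconds (illustrative)
N = 3000                 # number of steps (3 seconds of simulated time)
x = np.zeros(N + 1)      # state trajectory; x[0] = 0 is the initial condition
u = np.ones(N)           # constant unit input

for n in range(N):
    x[n + 1] = x[n] + T * f(n, x[n], u[n])
# With constant input u = 1, x approaches the equilibrium where x^3 = u,
# i.e., x -> 1 from below.
```

Note that each step uses only already-computed quantities, which is exactly what makes the method explicit.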
The backward Euler method is given by
$$\hat{x}(n+1) = \hat{x}(n) + T\,f[n+1, \hat{x}(n+1), u(n+1)].$$
Because the derivative is now evaluated at time $n+1$ instead of $n$, the backward Euler method is implicit. Notice, however, that if time were reversed, it would become explicit; in other words, backward Euler is implicit in forward time and explicit in reverse time.
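Since the unknown $\hat{x}(n+1)$ appears on both sides, each step requires solving a (generally nonlinear) equation. A minimal sketch, using a few Newton iterations per step on a hypothetical test system $\dot{x} = -x^3 + 1$ (system and parameters are illustrative, not from the text):

```python
# Backward Euler:  x[n+1] = x[n] + T * f(x[n+1]),  an implicit equation
# solved here by Newton iterations on  g(y) = y - x[n] - T*f(y) = 0.
# Hypothetical test system (not from the text):  dx/dt = -x^3 + 1.

def f(x):
    return -x**3 + 1.0

def df_dx(x):
    return -3.0 * x**2

def backward_euler_step(x, T, iters=5):
    """One backward Euler step via Newton's method, starting from y = x."""
    y = x
    for _ in range(iters):
        g = y - x - T * f(y)          # residual of the implicit equation
        dg = 1.0 - T * df_dx(y)       # its derivative with respect to y
        y -= g / dg                   # Newton update
    return y

T, N = 1e-3, 3000
x = 0.0
for n in range(N):
    x = backward_euler_step(x, T)
# As with forward Euler, x approaches the equilibrium x^3 = 1 from below.
```

The inner Newton loop is the per-step cost that distinguishes implicit from explicit methods in the nonlinear case.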
The trapezoidal rule is given by
$$\hat{x}(n+1) = \hat{x}(n) + \frac{T}{2}\bigl\{f[n,\hat{x}(n),u(n)] + f[n+1,\hat{x}(n+1),u(n+1)]\bigr\}. \qquad\hbox{(7.12)}$$
Thus, the trapezoidal rule is driven by the average of the derivative estimates at times $n$ and $n+1$. The method is implicit in either forward or reverse time. The trapezoidal rule gets its name from the fact that it approximates an integral by summing the areas of trapezoids. This can be seen by writing Eq.(7.12) as
$$\hat{x}(n+1) = \hat{x}(0) + T\sum_{m=0}^{n} \frac{f(m) + f(m+1)}{2},$$
where $f(m)$ is shorthand for $f[m,\hat{x}(m),u(m)]$; each term of the sum is the area of a trapezoid of width $T$ with parallel sides $f(m)$ and $f(m+1)$.
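The payoff of averaging the two derivative estimates is second-order accuracy. A minimal sketch on the linear test ODE $\dot{x} = -x$ (a hypothetical example with exact solution $e^{-t}$; for linear $f$ the implicit update can be solved in closed form):

```python
import math

# Trapezoidal rule on dx/dt = -x:
#   x[n+1] = x[n] + (T/2)*(-x[n] - x[n+1])
#   =>  x[n+1] = x[n] * (1 - T/2) / (1 + T/2)
# compared against forward Euler and the exact solution e^{-t}.
T, N = 0.01, 100                      # integrate out to t = 1 second
x_trap, x_euler = 1.0, 1.0            # initial condition x(0) = 1

for n in range(N):
    x_trap *= (1 - T / 2) / (1 + T / 2)   # trapezoidal (closed-form implicit)
    x_euler += T * (-x_euler)             # forward Euler for comparison

exact = math.exp(-1.0)
err_trap = abs(x_trap - exact)        # O(T^2) global error
err_euler = abs(x_euler - exact)      # O(T) global error
```

For this step size the trapezoidal error is orders of magnitude below the forward Euler error, reflecting its higher order of accuracy.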
Newton's Method of Nonlinear Minimization

Newton's method [166, p. 143] finds the minimum of a nonlinear (scalar) function of several variables by locally approximating the function by a quadratic surface, and then stepping to the bottom of that ``bowl'', which generally requires a matrix inversion. Newton's method therefore requires the function to be ``close to quadratic'', and its effectiveness is directly tied to the accuracy of that assumption. For smooth functions, Newton's method gives very rapid quadratic convergence in the last stages of iteration. Quadratic convergence implies, for example, that the number of significant digits in the minimizer approximately doubles each iteration. Newton's method may be derived as follows: Suppose we wish to minimize the real, positive function $J(x)$ with respect to $x$. The vector Taylor expansion of $J$ about $\hat{x}$ gives
$$J(\hat{x}+\Delta) = J(\hat{x}) + \nabla J(\hat{x})^T \Delta + \frac{1}{2}\Delta^T \nabla^2 J(\hat{x})\,\Delta + \cdots.$$
Dropping the higher-order terms and minimizing with respect to $\Delta$ yields the Newton step
$$\Delta = -\left[\nabla^2 J(\hat{x})\right]^{-1} \nabla J(\hat{x}). \qquad\hbox{(7.13)}$$
Applying Eq.(7.13) iteratively, we obtain Newton's method:
$$\hat{x}_{n+1} = \hat{x}_n - \left[\nabla^2 J(\hat{x}_n)\right]^{-1} \nabla J(\hat{x}_n), \qquad n = 0, 1, 2, \ldots,$$
where $\hat{x}_0$ is given as an initial condition. When $J(x)$ is any quadratic form in $x$, the local quadratic approximation is exact, and Newton's method produces the minimizer $x^\ast$ in one iteration; that is, $\hat{x}_1 = x^\ast$ for every $\hat{x}_0$.
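The one-iteration property on quadratics can be checked directly. A minimal sketch (the matrix, vector, and starting point below are arbitrary illustrative choices), minimizing the quadratic form $J(x) = \frac{1}{2}x^T A x - b^T x$, whose gradient is $Ax - b$ and whose Hessian is the constant matrix $A$:

```python
import numpy as np

# Newton step:  x_{k+1} = x_k - [Hessian]^{-1} * gradient.
# For the quadratic J(x) = (1/2) x^T A x - b^T x, one step from ANY
# starting point lands exactly on the minimizer, which solves A x = b.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])          # symmetric positive definite (illustrative)
b = np.array([1.0, 1.0])

def grad_J(x):
    return A @ x - b                # gradient of the quadratic form

def hess_J(x):
    return A                        # constant Hessian

x = np.array([10.0, -7.0])          # arbitrary starting point
x = x - np.linalg.solve(hess_J(x), grad_J(x))   # one Newton iteration
minimizer = np.linalg.solve(A, b)   # exact answer for comparison
```

Note that the step is computed with a linear solve rather than an explicit matrix inverse, which is the standard numerical practice.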
A semi-implicit method is obtained from an implicit method by carrying out only one iteration of Newton's method [354,555] per time step. This effectively converts the implicit method into a corresponding explicit method. Best results are obtained for highly oversampled systems (i.e., when the sampling rate is larger than typical audio sampling rates).
where $\hat{\ddot{x}}(n)$ denotes an estimate of the second time derivative $\ddot{x}(t)$ at time $t = nT$.
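To make the "one Newton iteration" idea concrete, consider the semi-implicit variant of backward Euler on an autonomous system $\dot{x} = f(x)$. Starting the Newton iteration for $g(y) = y - \hat{x}(n) - T f(y) = 0$ at $y_0 = \hat{x}(n)$ and stopping after one step gives the explicit update $\hat{x}(n+1) = \hat{x}(n) + T f(\hat{x}(n)) / (1 - T f'(\hat{x}(n)))$. A sketch on a hypothetical test system (not from the text):

```python
# Semi-implicit backward Euler: exactly ONE Newton iteration per step.
# With initial guess y0 = x[n]:  g(y0) = -T*f(x[n]),  g'(y0) = 1 - T*f'(x[n]),
# so the single Newton step gives the explicit update
#   x[n+1] = x[n] + T*f(x[n]) / (1 - T*f'(x[n])).
# Hypothetical test system (not from the text):  dx/dt = -x^3 + 1.

def f(x):
    return -x**3 + 1.0

def df(x):
    return -3.0 * x**2

T, N = 1e-3, 3000
x = 0.0
for n in range(N):
    x = x + T * f(x) / (1.0 - T * df(x))
# x approaches the equilibrium x^3 = 1, as with the fully implicit method,
# but each step costs no more than a forward Euler step plus one derivative.
```

The division by $1 - T f'(x)$ is what the single Newton iteration adds over plain forward Euler, and it is the source of the improved stability at high sampling rates.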
Semi-Implicit Trapezoidal Rule

The semi-implicit trapezoidal rule is given by
This section introduced the numerical integration of nonlinear ordinary differential equations, covering explicit, implicit, and semi-implicit methods. Specific methods included the explicit forward Euler method (similar to the finite-difference approximation of §7.3.1), the backward Euler method (implicit), the trapezoidal rule (implicit, and equivalent to the bilinear transform of §7.3.2 in the LTI case), and semi-implicit variants of the backward Euler and trapezoidal methods. As demonstrated and discussed further in , implicit methods are generally more accurate than explicit methods for nonlinear systems, with semi-implicit methods (§7.4.6) typically falling somewhere in between. Semi-implicit methods therefore provide a source of improved explicit methods. See  and the references therein for a discussion of the accuracy and stability of such schemes, as well as applied examples.
Other numerical integration methods for ODEs include second-order backward difference formulas (commonly used in circuit simulation ), the fourth-order Runge-Kutta method , and their various explicit, implicit, and semi-implicit variations. See  for further discussion of these and related finite-difference schemes, and for application examples in the virtual analog area (digitization of musically useful analog circuits). Specific digitization problems addressed in  include electric-guitar distortion devices [553,556], the classic ``tone stack''  (an often-used bass, midrange, and treble control circuit in guitar amplifiers), the Moog VCF, and other electronic components of amplifiers and effects. Also discussed in  is the ``K Method'' for nonlinear system digitization, with comparison to nonlinear wave digital filters (see Appendix F for an introduction to linear wave digital filters). The topic of real-time finite-difference schemes for virtual analog systems remains a lively research area [554,338,293,84,264,364,397].

For Partial Differential Equations (PDEs), in which spatial derivatives are mixed with time derivatives, the finite-difference approach remains fundamental. An introduction and summary for the LTI case appear in Appendix D. See  for a detailed development of finite-difference schemes for solving PDEs, both linear and nonlinear, applied to digital sound synthesis. Physical systems considered in  include bars, stiff strings, bow coupling, hammers and mallets, coupled strings and bars, nonlinear strings and plates, and acoustic tubes (voice, wind instruments). In addition to numerous finite-difference schemes, there are chapters on finite-element methods and spectral methods.
Summary of Lumped Modeling
Digitization of Lumped Models