The Digital Waveguide Oscillator

In this section, adapted from [460], a digital sinusoidal oscillator derived from digital waveguide theory is described which has good properties for VLSI implementation. Its main features are no wavetable and a computational complexity of only one multiply per sample when amplitude and frequency are constant. Three additions are required per sample. A piecewise exponential amplitude envelope is available for the cost of a second multiplication per sample, which need not be as expensive as the tuning multiply. In the presence of frequency modulation (FM), the amplitude coefficient can be varied to exactly cancel amplitude modulation (AM) caused by changing the frequency of oscillation.

Additive Synthesis

One of the very first computer music techniques introduced was additive synthesis [379]. It is based on Fourier's theorem which states that any sound can be constructed from elementary sinusoids, such as are approximately produced by carefully struck tuning forks. Additive synthesis attempts to apply this theorem to the synthesis of sound by employing large banks of sinusoidal oscillators, each having independent amplitude and frequency controls. Many analysis methods, e.g., the phase vocoder, have been developed to support additive synthesis. A summary is given in [424].
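As a sketch of the idea (not from the original text), a bank of sinusoidal oscillators with independent amplitude and frequency controls can be simulated directly; the Python function below, with its name `additive_synth` and its parameter layout, is invented here purely for illustration (NumPy assumed):

```python
import numpy as np

def additive_synth(partials, dur, fs=44100):
    """Sum a bank of sinusoidal oscillators (naive additive synthesis).

    partials: list of (amplitude, frequency_hz, phase) triples,
              one per sinusoidal partial.
    dur:      duration in seconds.
    fs:       sampling rate in Hz.
    """
    t = np.arange(int(dur * fs)) / fs
    out = np.zeros_like(t)
    for amp, freq, phase in partials:
        out += amp * np.sin(2 * np.pi * freq * t + phase)
    return out

# e.g., a crude three-partial tone at 220 Hz:
tone = additive_synth([(1.0, 220, 0), (0.5, 440, 0), (0.25, 660, 0)], dur=0.5)
```

Each triple is one oscillator in the bank; in a real additive synthesizer the amplitudes and frequencies would be time-varying envelopes rather than constants.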

While additive synthesis is very powerful and general, it has been held back from widespread usage due to its computational expense. For example, on a single DSP56001 digital signal-processing chip, clocked at 33 MHz, only about $ 60$ sinusoidal partials can be synthesized in real time using non-interpolated, table-lookup oscillators. Interpolated table-lookup oscillators are much more expensive, and when all the bells and whistles are added, and system overhead is accounted for, only around $ 12$ fully general, high-quality partials are sustainable at $ 44.1$ kHz on a 33 MHz DSP56001 (based on analysis of implementations provided by the NeXT Music Kit).

At CD-quality sampling rates, the note A1 on the piano requires $ 22050/55\approx 400$ sinusoidal partials, and at least the low-frequency partials should use interpolated lookups. Assuming a worst-case average of $ 100$ partials per voice, providing 32-voice polyphony requires $ 3200$ partials, or around $ 64$ DSP chips, assuming we can pack an average of $ 50$ partials into each DSP. A more reasonable complement of $ 8$ DSP chips would provide only $ 4$-voice polyphony, which is simply not enough for piano synthesis. However, since DSP chips are getting faster and cheaper, DSP-based additive synthesis looks increasingly viable.

The cost of additive synthesis can be greatly reduced by making special purpose VLSI optimized for sinusoidal synthesis. In a VLSI environment, major bottlenecks are wavetables and multiplications. Even if a single sinusoidal wavetable is shared, it must be accessed sequentially, inhibiting parallelism. The wavetable can be eliminated entirely if recursive algorithms are used to synthesize sinusoids directly.

Digital Sinusoid Generators

In [168], three techniques were examined for generating sinusoids digitally by means of recursive algorithms.C.12 The recursions can be interpreted as implementations of second-order digital resonators in which the damping is set to zero. The three methods considered were (1) the 2D rotation (2DR), or complex multiply (also called the ``coupled form''), (2) the modified coupled form (MCF), or ``magic circle'' algorithm,C.13 which is similar to (1) but has better numerical behavior, and (3) the direct-form, second-order, digital resonator (DFR) with its poles set on the unit circle.

These three recursions may be defined as follows:

(1) & x_1(n) &=& c_n x_1(n-1) + s_n x_2(n-1) & \\
    & x_2(n) &=& -s_n x_1(n-1) + c_n x_2(n-1) & \mbox{(2DR)}\\
(2) & x_1(n) &=& x_1(n-1) + \epsilon\, x_2(n-1) & \\
    & x_2(n) &=& -\epsilon\, x_1(n) + x_2(n-1) & \mbox{(MCF)}\\
(3) & x_1(n) &=& 2 c_n x_1(n-1) - x_2(n-1) & \\
    & x_2(n) &=& x_1(n-1) & \mbox{(DFR)}

where $ c_n\isdef \cos(2\pi f_n T)$, $ s_n\isdef \sin(2\pi f_n T)$, $ f_n$ is the instantaneous frequency of oscillation (Hz) at time sample $ n$, and $ T$ is the sampling period in seconds. The magic circle parameter is $ \epsilon=2\sin(\pi f_n T)$.
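The three recursions can be prototyped directly; the following Python functions (names invented here; NumPy assumed) implement one constant-frequency oscillator of each type:

```python
import numpy as np

def sinusoid_2dr(n, f, fs):
    """2D rotation (complex multiply / coupled form)."""
    c, s = np.cos(2*np.pi*f/fs), np.sin(2*np.pi*f/fs)
    x1, x2 = 1.0, 0.0
    out = []
    for _ in range(n):
        out.append(x1)
        x1, x2 = c*x1 + s*x2, -s*x1 + c*x2
    return np.array(out)

def sinusoid_mcf(n, f, fs):
    """Modified coupled form ("magic circle")."""
    eps = 2*np.sin(np.pi*f/fs)
    x1, x2 = 1.0, 0.0
    out = []
    for _ in range(n):
        out.append(x1)
        x1 = x1 + eps*x2       # note: x2 update uses the *new* x1
        x2 = -eps*x1 + x2
    return np.array(out)

def sinusoid_dfr(n, f, fs):
    """Direct-form resonator with poles on the unit circle."""
    c = np.cos(2*np.pi*f/fs)
    x1, x2 = 1.0, c            # x1(n-1)=cos(0), x2(n-1)=x1(n-2)=cos(-theta)
    out = []
    for _ in range(n):
        out.append(x1)
        x1, x2 = 2*c*x1 - x2, x1
    return np.array(out)
```

All three output sequences satisfy the second-order recurrence $ s(n+1) = 2\cos(2\pi f T)\,s(n) - s(n-1)$, confirming that each oscillates at exactly the design frequency; they differ in coefficient sensitivity and round-off behavior, as discussed below.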

The digital waveguide oscillator appears to have the best overall properties yet seen for VLSI implementation. This structure, introduced in [460], may be derived from the theory of digital waveguides (see Appendix C, particularly §C.9, and [433,464]). Any second-order digital filter structure can be used as a starting point for developing a corresponding sinusoidal signal generator, so in this case we begin with the second-order waveguide filter.

The Second-Order Waveguide Filter

The first step is to make a second-order digital filter with zero damping by abutting two unit-sample sections of waveguide medium, and terminating on the left and right with perfect reflections, as shown in Fig.C.38. The wave impedance in section $ i$ is given by $ R_i=\rho c/A_i$, where $ \rho$ is air density, $ A_i$ is the cross-sectional area of tube section $ i$, and $ c$ is sound speed. The reflection coefficient is determined by the impedance discontinuity via $ k=(R_1-R_2)/(R_1+R_2)$. It turns out that to obtain sinusoidal oscillation, one of the terminations must provide an inverting reflection while the other is non-inverting.

Figure C.38: The second-order, lossless, digital waveguide oscillator, built using two acoustic tube sections.

At the junction between sections $ 1$ and $ 2$, the signal is partially transmitted and partially reflected such that energy is conserved, i.e., we have lossless scattering. The formula for the reflection coefficient $ k$ can be derived from the physical constraints that (1) pressure is continuous across the junction, and (2) there is no net flow into or out of the junction. For traveling pressure waves $ p^\pm(t)$ and volume-velocity waves $ u^\pm(t)$, we have $ p^+(t) = Ru^+(t)$ and $ p^-(t) = -Ru^-(t)$. The physical pressure and volume velocity are obtained by summing the traveling-wave components.

The discrete-time simulation for the physical system of Fig.C.38 is shown in Fig.C.39. The propagation time from the junction to a reflecting termination and back is one sample period. The half sample delay from the junction to the reflecting termination has been commuted with the termination and combined with the half sample delay to the termination. This is a special case of a ``half-rate'' waveguide filter [433].

Figure C.39: The second-order, lossless, waveguide filter.

Since only two samples of delay are present, the digital system is at most second order, and since the coefficients are real, at most one frequency of oscillation is possible in $ (0,\pi)$.

The scattering junction shown in the figure is called the Kelly-Lochbaum junction in the literature on lattice and ladder digital filters [173]. While it is the most natural from a physical point of view, it requires four multiplies and two additions for its implementation.
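As an illustration (a sketch, not from the original text), the four-multiply Kelly-Lochbaum junction can be written out and its losslessness checked numerically; the function name and the particular sign conventions below are one common choice, derived from pressure continuity and volume-velocity conservation:

```python
import numpy as np

def kelly_lochbaum(p1_in, p2_in, k):
    """Scatter incoming traveling pressure waves at a two-port junction.

    k = (R1 - R2)/(R1 + R2) is the reflection coefficient; p1_in travels
    right in section 1 and p2_in travels left in section 2.
    """
    p1_out = -k * p1_in + (1 + k) * p2_in   # reflected back into section 1
    p2_out = (1 - k) * p1_in + k * p2_in    # transmitted into section 2
    return p1_out, p2_out

# Lossless scattering: signal power p^2/R into the junction equals power out.
R1, R2 = 1.0, 3.0
k = (R1 - R2) / (R1 + R2)                   # = -0.5
p1_in, p2_in = 0.9, -0.4
p1_out, p2_out = kelly_lochbaum(p1_in, p2_in, k)
power_in = p1_in**2 / R1 + p2_in**2 / R2
power_out = p1_out**2 / R1 + p2_out**2 / R2
assert np.isclose(power_in, power_out)
```

The four multiplies are visible in the two update lines; the alternative junction forms discussed next trade these multiplies for additions or normalization properties.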

It is well known that lossless scattering junctions can be implemented in a variety of equivalent forms, such as the two-multiply and even one-multiply junctions. However, most have the disadvantage of not being normalized in the sense that changing the reflection coefficient $ k$ changes the amplitude of oscillation. This can be understood physically by noting that a change in $ k$ implies a change in $ R_2/R_1$. Since the signal power contained in a waveguide variable, say $ p_1^+(n)$, is $ \left[p_1^+(n)\right]^2/R_1$, we find that modulating the reflection coefficient corresponds to modulating the signal energy represented by the signal sample in at least one of the two delay elements. Since energy is proportional to amplitude squared, energy modulation implies amplitude modulation.

The well-known normalization procedure is to replace the traveling pressure waves $ p^\pm$ by ``root-power'' pressure waves $ \tilde{p}^\pm = p^\pm/\sqrt{R}$ so that signal power is just the square of a signal sample $ (\tilde{p}^\pm)^2$. When this is done, the scattering junction transforms from the Kelly-Lochbaum or one-multiply form into the normalized ladder junction in which the reflection coefficients are again $ \pm k$, but the forward and reverse transmission coefficients become $ \sqrt{1-k^2}$. Defining $ k=\sin(\theta)$, the transmission coefficients can be seen as $ \cos(\theta)$, and we arrive essentially at the coupled form, or two-dimensional vector rotation considered in [168].

An alternative normalization technique is based on the digital waveguide transformer (§C.16). The purpose of a ``transformer'' is to ``step'' the force variable (pressure in our example) by some factor $ g$ without scattering and without affecting signal energy. Since traveling signal power is proportional to pressure times velocity $ p^+u^+$, it follows that velocity must be stepped by the inverse factor $ 1/g$ to keep power constant. This is the familiar behavior of transformers for analog electrical circuits: voltage is stepped up by the ``turns ratio'' and current is stepped down by the reciprocal factor. Now, since $ p^+ = R u^+$, traveling signal power is equal to $ p^+u^+ = (p^+)^2/R$. Therefore, stepping up pressure through a transformer by the factor $ g$ corresponds to stepping up the wave impedance $ R$ by the factor $ g^2$. In other words, the transformer raises pressure and decreases volume velocity by raising the wave impedance (narrowing the acoustic tube) like a converging cone.
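The transformer relations just described are easy to sanity-check numerically (a trivial sketch; the specific numbers are arbitrary):

```python
# A transformer steps pressure up by g and volume velocity down by 1/g,
# preserving traveling power p*u while scaling wave impedance R by g^2.
g = 2.0
R = 1.5                       # wave impedance before the transformer
p = 0.7                       # traveling pressure wave sample
u = p / R                     # corresponding volume velocity, p = R*u
p2, u2 = g * p, u / g         # after the transformer
assert abs(p * u - p2 * u2) < 1e-12       # power unchanged
assert abs(p2 / u2 - g**2 * R) < 1e-12    # impedance stepped up by g^2
```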

If a transformer is inserted in a waveguide immediately to the left, say, of a scattering junction, it can be used to modulate the wave impedance ``seen'' to the left by the junction without having to use root-power waves in the simulation. As a result, the one-multiply junction can be used for the scattering junction, since the junction itself is not normalized. Since the transformer requires two multiplies, a total of three multiplies can effectively implement a normalized junction, where four were needed before. Finally, in just this special case, one of the transformer coefficients can be commuted with the delay element on the left and combined with the other transformer coefficient. For convenience, the $ -1$ coefficient on the left is commuted into the junction so it merely toggles the signs of inputs to existing summers. These transformations lead to the final form shown in Fig.C.40.

Figure C.40: The transformer-normalized, digital waveguide oscillator.

The ``tuning coefficient'' is given by $ C(n)=\cos(2\pi f_nT)$, where $ f_n$ is the desired oscillation frequency in Hz at sample $ n$ (in the undamped case), and $ T$ is the sampling period in seconds. The ``amplitude coefficient'' is $ G(n) = r_n g_n/g_{n-1}$, where $ r_n$ is the growth or decay factor per sample ( $ r_n\equiv 1$ for constant amplitude),C.14 and $ g_n$ is the normalizing transformer ``turns ratio'' given by

$\displaystyle g_n=\sqrt{\frac{1-C(n)}{1+C(n)}}.$

When both amplitude and frequency are constant, we have $ G(n)\equiv 1$, and only the tuning multiply is operational. When frequency changes, the amplitude coefficient deviates from unity for only one time sample to normalize the oscillation amplitude.

When amplitude and frequency are constant, there is no gradual exponential growth or decay due to round-off error. This happens because the only rounding is at the output of the tuning multiply, and all other computations are exact. Therefore, quantization in the tuning coefficient can only cause quantization in the frequency of oscillation. Note that any one-multiply digital oscillator should have this property. In contrast, the only other known normalized oscillator, the coupled form, does exhibit exponential amplitude drift because it has two coefficients $ c=\cos(\theta)$ and $ s=\sin(\theta)$ which, after quantization, no longer obey $ c^2+s^2=1$ for most tunings.
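Putting the pieces together, here is a sketch of the oscillator of Fig.C.40 in Python (NumPy assumed; the function name `waveguide_osc` is invented here, and the initial state $ x_1(0)=g_0$ is one choice that gives unit output amplitude):

```python
import numpy as np

def waveguide_osc(freqs, fs):
    """Transformer-normalized digital waveguide oscillator (Fig. C.40 sketch).

    freqs: per-sample frequency trajectory in Hz (may be constant).
    Returns the right-delay-element signal x2, whose amplitude stays
    normalized to 1 even while the frequency changes (r_n = 1 here).
    """
    C = np.cos(2 * np.pi * np.asarray(freqs, dtype=float) / fs)
    g = np.sqrt((1 - C) / (1 + C))     # transformer "turns ratio" g_n
    x1, x2 = g[0], 0.0                 # unit-amplitude initial state
    g_prev = g[0]
    y = np.empty(len(C))
    for n in range(len(C)):
        G = g[n] / g_prev              # amplitude coefficient (one multiply)
        x1t = G * x1                   # renormalize state to new impedance
        v = C[n] * (x1t + x2)          # tuning multiply
        x1, x2 = v - x2, x1t + v
        y[n] = x2
        g_prev = g[n]
    return y
```

With a constant frequency, $ G(n)=1$ and only the tuning multiply operates; under an FM sweep, the per-sample factor $ g_n/g_{n-1}$ cancels the amplitude modulation that tuning changes would otherwise cause.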

Application to FM Synthesis

The properties of the new oscillator appear well suited for FM applications in VLSI because of the minimized computational expense. However, in this application there are issues to be resolved regarding conversion from modulator output to carrier coefficients. Preliminary experiments indicate that FM indices less than $ 1$ are well behaved when the output of a modulating oscillator simply adds to the coefficient of the carrier oscillator (bypassing the exact FM formulas). Approximate amplitude normalizing coefficients have also been derived which provide a first-order approximation to the exact AM compensation at low cost. For music synthesis applications, we believe a distortion in the details of the FM instantaneous frequency trajectory and a moderate amount of incidental AM can be tolerated since they produce only second-order timbral effects in many situations.

Digital Waveguide Resonator

Converting a second-order oscillator into a second-order filter requires merely introducing damping and defining the input and output signals. In Fig.C.40, damping is provided by the coefficient $ G(n)$, which we will take to be a constant

$\displaystyle G(n)\equiv g.$

When $ g<1$, the oscillator decays exponentially to zero from any initial conditions. The two delay elements constitute the state of the resonator. Let us denote by $ x_1(n)$ the output of the delay element on the left in Fig.C.40 and let $ x_2(n)$ be the delay-element output on the right. In general, an output signal $ y(n)$ may be formed as any linear combination of the state variables:

$\displaystyle y(n) = c_1 x_1(n) + c_2 x_2(n).$

Similarly, input signals $ u(n)$ may be summed into the state variables $ x_i(n)$ scaled by arbitrary gain factors $ b_i$.

The foregoing modifications to the digital waveguide oscillator result in the so-called digital waveguide resonator (DWR) [304]:

$\displaystyle \tilde{x}_1(n)$ $\displaystyle =$ $\displaystyle g x_1(n)\protect$ (C.127)
$\displaystyle v(n)$ $\displaystyle =$ $\displaystyle c[\tilde{x}_1(n) + x_2(n)]$ (C.128)
$\displaystyle x_1(n+1)$ $\displaystyle =$ $\displaystyle v(n) - x_2(n) + \tilde{b}_1 u(n)$ (C.129)
$\displaystyle x_2(n+1)$ $\displaystyle =$ $\displaystyle \tilde{x}_1(n) + v(n) + b_2 u(n)$ (C.130)
$\displaystyle y(n)$ $\displaystyle =$ $\displaystyle c_1 x_1(n) + c_2 x_2(n) \protect$ (C.131)

where, as derived in the next section, the coefficients are given by
$\displaystyle g$ $\displaystyle =$ $\displaystyle r^2\protect$ (C.132)
$\displaystyle \tilde{b}_1$ $\displaystyle =$ $\displaystyle b_1 \sqrt{\frac{1-c}{1+c}}$ (C.133)
$\displaystyle c$ $\displaystyle =$ $\displaystyle \sqrt{\frac{1}{1 + \frac{\tan^2(\theta)(1+g)^2+(1-g)^2}{4g}}}$ (C.134)
  $\displaystyle \approx$ $\displaystyle 1 - \frac{\tan^2(\theta)(1+g)^2 + (1-g)^2}{8g}.
\protect$ (C.135)

where $ \lambda=r e^{j\theta}$ denotes one desired pole (the other being at $ \overline{\lambda}$). Note that $ c=\cos(\theta)$ when $ g=1$ (undamped case). The DWR requires only two multiplies per sample. As seen earlier, when the decay time is set to $ \infty$ ($ g=1$), one of the multiplies disappears, leaving only one multiply per sample for sinusoidal oscillation.
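A sketch of Eqs.(C.127)-(C.131) in Python (function names invented here; NumPy assumed), together with a check that the coefficient formulas place the poles at $ re^{\pm j\theta}$:

```python
import numpy as np

def dwr_coeffs(r, theta):
    """Map a desired pole r*e^{j*theta} to DWR coefficients g and c."""
    g = r * r
    c = np.sqrt(1.0 / (1.0 + (np.tan(theta)**2 * (1 + g)**2
                              + (1 - g)**2) / (4 * g)))
    return g, c

def dwr(u, r, theta, b1=1.0, b2=0.0, c1=0.0, c2=1.0):
    """Digital waveguide resonator, Eqs. (C.127)-(C.131)."""
    g, c = dwr_coeffs(r, theta)
    b1t = b1 * np.sqrt((1 - c) / (1 + c))    # input scaling
    x1 = x2 = 0.0
    y = np.empty(len(u))
    for n, un in enumerate(u):
        y[n] = c1 * x1 + c2 * x2             # output tap (C.131)
        x1t = g * x1                         # damping multiply (C.127)
        v = c * (x1t + x2)                   # tuning multiply (C.128)
        x1, x2 = v - x2 + b1t * un, x1t + v + b2 * un
    return y

# The state-transition matrix should have eigenvalues r*e^{+/-j*theta}:
r, theta = 0.99, 0.3
g, c = dwr_coeffs(r, theta)
A = np.array([[g * c, c - 1], [g * (1 + c), c]])
lam = np.linalg.eigvals(A)
assert np.allclose(sorted(np.abs(lam)), [r, r])
assert np.allclose(sorted(np.abs(np.angle(lam))), [theta, theta])
```

Only the damping multiply by $ g$ and the tuning multiply by $ c$ act inside the recursion; the input and output scalings lie outside it.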

Figure C.41 shows an overlay of initial impulse responses for the three resonators discussed above. The decay factor was set to $ r=0.99$, and the output of each multiplication was quantized to 16 bits, as were all coefficients. The three waveforms sound and look identical. (There are small differences, however, which can be seen by plotting the differences of pairs of waveforms.)

Figure: Overlay of three resonator impulse responses, with $ \mathbf {B}^T=[1,0]$ and $ \mathbf {C}=[0,1]$, for the (1) complex-multiply resonator (labeled ``2DR'' for ``2D rotation''), (2) modified coupled form (MCF), and (3) second-order digital waveguide resonator (DWR).

Figure C.42 shows the same impulse-response overlay but with $ r=1$ and only 4 significant bits in the coefficients and signals. The complex multiply oscillator can be seen to decay toward zero due to coefficient quantization ( $ x_1^2+y_1^2<1$). The MCF and DWR remain steady at their initial amplitude. All three suffer some amount of tuning perturbation.

Figure: Overlay of three resonator impulse responses, as in Fig.C.41, but with $ r=1$ and quantization of coefficients and signals to 4 significant bits.

State-Space Analysis

We will now use state-space analysisC.15 [449] to determine Equations (C.132-C.135).

From Equations (C.127-C.130), setting $ u(n)=0$,

$\displaystyle x_1(n+1) = c[g x_1(n) + x_2(n)] - x_2(n) = c\,g x_1(n) + (c-1) x_2(n)$


$\displaystyle x_2(n+1) = g x_1(n) + c[g x_1(n) + x_2(n)] = (1+c) g x_1(n) + c\,x_2(n)$

In matrix form, the state time-update can be written

$\displaystyle \left[\begin{array}{c} x_1(n+1) \\ [2pt] x_2(n+1) \end{array}\right] = \left[\begin{array}{cc} cg & c-1 \\ [2pt] (1+c)g & c \end{array}\right] \left[\begin{array}{c} x_1(n) \\ [2pt] x_2(n) \end{array}\right] \isdef \mathbf{A} \left[\begin{array}{c} x_1(n) \\ [2pt] x_2(n) \end{array}\right] \protect$ (C.136)

or, in vector notation,
$\displaystyle \underline{x}(n+1)$ $\displaystyle =$ $\displaystyle \mathbf{A}\underline{x}(n) + \mathbf{B}u(n)$ (C.137)
$\displaystyle y(n)$ $\displaystyle =$ $\displaystyle \mathbf{C}\underline{x}(n)$ (C.138)

where we have introduced an input signal $ u(n)$, which sums into the state vector via the $ 2\times 1$ (or $ 2\times N_u$) vector $ \mathbf{B}$. The output signal is defined as the $ 1\times 2$ vector $ \mathbf{C}$ times the state vector $ \underline{x}(n)$. Multiple outputs may be defined by choosing $ \mathbf{C}$ to be an $ N_y \times 2$ matrix.

A basic fact from linear algebra is that the determinant of a matrix is equal to the product of its eigenvalues. As a quick check, we find that the determinant of $ A$ is

$\displaystyle \det{\mathbf{A}} = c^2g - g(c+1)(c-1) = c^2g - g(c^2-1) = c^2g - gc^2+g = g. \protect$ (C.139)

When the eigenvalues $ {\lambda_i}$ of $ \mathbf{A}$ (system poles) are complex, then they must form a complex conjugate pair (since $ \mathbf{A}$ is real), and we have $ \det{\mathbf{A}} = \vert{\lambda_i}\vert^2 = g$. Therefore, the system is stable if and only if $ \vert g\vert<1$. When making a digital sinusoidal oscillator from the system impulse response, we have $ \vert g\vert=1$, and the system can be said to be ``marginally stable''. Since an undriven sinusoidal oscillator must not lose energy, and since every lossless state-space system has unit-modulus eigenvalues (consider the modal representation), we expect $ \left\vert\det{A}\right\vert=1$, which occurs for $ g=1$.

Note that $ \underline{x}(n) = \mathbf{A}^n\underline{x}(0)$. If we diagonalize this system to obtain $ \tilde{\mathbf{A}}= \mathbf{E}^{-1}\mathbf{A}\mathbf{E}$, where $ \tilde{\mathbf{A}}=$   diag$ [\lambda_1,\lambda_2]$, and $ \mathbf{E}$ is the matrix of eigenvectors of $ \mathbf{A}$, then we have

$\displaystyle \tilde{\underline{x}}(n) = \tilde{\mathbf{A}}^n\,\tilde{\underline{x}}(0) = \left[\begin{array}{cc} \lambda_1^n & 0 \\ [2pt] 0 & \lambda_2^n \end{array}\right] \left[\begin{array}{c} \tilde{x}_1(0) \\ [2pt] \tilde{x}_2(0) \end{array}\right]$

where $ \tilde{\underline{x}}(n) \isdef \mathbf{E}^{-1}\underline{x}(n)$ denotes the state vector in these new ``modal coordinates''. Since $ \tilde{A}$ is diagonal, the modes are decoupled, and we can write

\tilde{x}_1(n) &=& \lambda_1^n\,\tilde{x}_1(0)\\
\tilde{x}_2(n) &=& \lambda_2^n\,\tilde{x}_2(0)

If this system is to generate a real sampled sinusoid at radian frequency $ \omega $, the eigenvalues $ \lambda_1$ and $ \lambda_2$ must be of the form

\lambda_1 &=& e^{j\omega T}\\
\lambda_2 &=& e^{-j\omega T},

(in either order) where $ \omega $ is real, and $ T$ denotes the sampling interval in seconds.

Thus, we can determine the frequency of oscillation $ \omega $ (and verify that the system actually oscillates) by determining the eigenvalues $ \lambda_i$ of $ A$. Note that, as a prerequisite, it will also be necessary to find two linearly independent eigenvectors of $ A$ (columns of $ \mathbf{E}$).


Starting with the defining equation for an eigenvector $ \underline{e}$ and its corresponding eigenvalue $ \lambda$,

$\displaystyle \mathbf{A}\underline{e}_i = {\lambda_i}\underline{e}_i,\quad i=1,2$

we get, using Eq.$ \,$(C.136),

$\displaystyle \left[\begin{array}{cc} gc & c-1 \\ [2pt] gc+g & c \end{array}\right] \left[\begin{array}{c} 1 \\ [2pt] \eta_i \end{array}\right] = {\lambda_i} \left[\begin{array}{c} 1 \\ [2pt] \eta_i \end{array}\right] = \left[\begin{array}{c} {\lambda_i} \\ [2pt] {\lambda_i}\eta_i \end{array}\right]. \protect$ (C.140)

We normalized the first element of $ \underline{e}_i$ to 1, which is permissible because any nonzero scalar multiple of an eigenvector is also an eigenvector. (If there is a missing solution because its first element happens to be zero, we can repeat the analysis normalizing the second element to 1 instead.)

Equation (C.140) gives us two equations in two unknowns:

$\displaystyle gc+\eta_i(c-1)$ $\displaystyle =$ $\displaystyle {\lambda_i}\protect$ (C.141)
$\displaystyle g(1+c) +c\eta_i$ $\displaystyle =$ $\displaystyle {\lambda_i}\eta_i$ (C.142)

Substituting the first into the second to eliminate $ {\lambda_i}$, we get

g+gc+c\eta_i &=& [gc+\eta_i(c-1)]\eta_i = gc\eta_i + \eta_i^2(c-1)\\
\Rightarrow\quad \eta_i &=& -\frac{c(1-g)}{2(1-c)} \pm j\sqrt{\frac{g(1+c)}{1-c} - \frac{c^2(1-g)^2}{4(1-c)^2}}.

As $ g$ approaches $ 1$ (no damping), we obtain

$\displaystyle \eta_i = \pm j\sqrt{\frac{1+c}{1-c}} \qquad \hbox{(when $g=1$)}.$

Thus, we have found both eigenvectors:

\underline{e}_1 &=& \left[\begin{array}{c} 1 \\ [2pt] \eta \end{array}\right], \qquad \underline{e}_2 = \left[\begin{array}{c} 1 \\ [2pt] \overline{\eta} \end{array}\right],\\
\mbox{where}\quad \eta &\isdef& -\frac{c(1-g)}{2(1-c)} + j\sqrt{\frac{g(1+c)}{1-c} - \frac{c^2(1-g)^2}{4(1-c)^2}}

They are linearly independent provided $ \eta\neq0$. In the undamped case ($ g=1$), this holds whenever $ c\neq -1$. The eigenvectors are finite when $ c\neq 1$. Thus, the nominal range for $ c$ is the interval $ c\in(-1,1)$.

We can now use Eq.$ \,$(C.141) to find the eigenvalues:

{\lambda_i} &=& gc + \eta_i(c-1)\\
&=& gc + \frac{(1-g)c}{2} \pm j\sqrt{g(1-c^2) - \left[\frac{c(1-g)}{2}\right]^2}\\
&=& \frac{c(1+g)}{2} \pm j\sqrt{g(1-c^2) - \left[\frac{c(1-g)}{2}\right]^2}
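This closed-form eigenvalue expression can be checked numerically against a general-purpose eigensolver (a quick sketch; the particular $ g$ and $ c$ values are arbitrary):

```python
import numpy as np

g, c = 0.95, 0.6    # arbitrary damping and tuning coefficients
A = np.array([[g * c, c - 1],
              [g * (1 + c), c]])
disc = g * (1 - c**2) - (c * (1 - g) / 2)**2   # > 0 for complex poles
lam_formula = c * (1 + g) / 2 + 1j * np.sqrt(disc)
lam_numeric = np.linalg.eigvals(A)
# numeric eigenvalues form the pair {lam_formula, conj(lam_formula)}:
assert np.allclose(sorted(lam_numeric.real), [lam_formula.real] * 2)
assert np.allclose(sorted(np.abs(lam_numeric.imag)), [lam_formula.imag] * 2)
```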

Damping and Tuning Parameters

The tuning and damping of the resonator impulse response are governed by the relation

$\displaystyle {\lambda_i}= e^{-T/\tau} e^{\pm j\omega T}$

where $ T$ denotes the sampling interval, $ \tau $ is the time constant of decay, and $ \omega $ is the frequency of oscillation in radians per second. The eigenvalues are presumed to be complex, which requires, from the expression for $ {\lambda_i}$ above,

$\displaystyle g(1-c^2) \geq\frac{c^2(1-g)^2}{4} \,\,\Rightarrow\,\,c^2 \leq \frac{4g}{(1+g)^2}.$

To obtain a specific decay time-constant $ \tau $, we must have

e^{-2T/\tau} &=& \left\vert{\lambda_i}\right\vert^2 = \left[\frac{c(1+g)}{2}\right]^2 + g(1-c^2) - \left[\frac{c(1-g)}{2}\right]^2\\
&=& c^2 g + g(1-c^2) \;=\; g

Therefore, given a desired decay time-constant $ \tau $ (and the sampling interval $ T$), we may compute the damping parameter $ g$ for the digital waveguide resonator as

$\displaystyle \zbox{g = e^{-2T/\tau}.}$

Note that this conclusion follows directly from the determinant analysis of Eq.$ \,$(C.139), provided it is known that the poles form a complex-conjugate pair.

To obtain a desired frequency of oscillation, we must solve

\theta \;=\; \omega T &=& \tan^{-1}\left[\frac{\sqrt{g(1-c^2) - [c(1-g)/2]^2}}{c(1+g)/2}\right]\\
\Rightarrow\quad \tan^2{\theta} &=& \frac{g(1-c^2) - [c(1-g)/2]^2}{[c(1+g)/2]^2}

for $ c$, which yields

$\displaystyle \zbox{c = \sqrt{\frac{1}{1 + \frac{\tan^2(\theta)(1+g)^2+(1-g)^2}{4g}}} \approx 1 - \frac{\tan^2(\theta)(1+g)^2 + (1-g)^2}{8g}.}$

Note that this reduces to $ c=\cos(\theta)$ when $ g=1$ (undamped case).
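Together with $ g=e^{-2T/\tau}$ above, these boxed formulas give a complete design procedure, sketched here in Python (NumPy assumed; the numeric values are arbitrary):

```python
import numpy as np

fs = 44100.0
T = 1.0 / fs
tau = 0.010                 # desired 10 ms decay time-constant
f = 440.0                   # desired resonance frequency in Hz
theta = 2 * np.pi * f * T

g = np.exp(-2 * T / tau)    # damping parameter
c = np.sqrt(1 / (1 + (np.tan(theta)**2 * (1 + g)**2 + (1 - g)**2) / (4 * g)))

# Verify the resulting pole has the requested radius and angle:
A = np.array([[g * c, c - 1], [g * (1 + c), c]])
lam = np.linalg.eigvals(A)
pole = lam[np.argmax(lam.imag)]
assert np.isclose(np.abs(pole), np.exp(-T / tau))
assert np.isclose(np.angle(pole), theta)
```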

Eigenvalues in the Undamped Case

When $ g=1$, the eigenvalues reduce to

$\displaystyle {\lambda_i}= c\pm j\sqrt{1-c^2}.$

Assuming $ \left\vert c\right\vert<1$, the eigenvalues can be expressed as

$\displaystyle {\lambda_i}= c\pm j\sqrt{1-c^2} = \cos(\theta) \pm j\sin(\theta) = e^{\pm j\theta} \protect$ (C.143)

where $ \theta=\omega T$ denotes the angular advance per sample of the oscillator. Since $ c\in(-1,1)$ corresponds to the range $ \theta\in(-\pi,\pi)$, we see that $ c$ in this range can produce oscillation at any digital frequency.

For $ \left\vert c\right\vert>1$, the eigenvalues are real, corresponding to exponential growth and/or decay. (The values $ c=\pm 1$ were excluded above in deriving Eq.$ \,$(C.143).)

In summary, the relation between the coefficient $ c$ in the digital waveguide oscillator ($ g=1$) and the frequency of sinusoidal oscillation $ \omega $ is simply

$\displaystyle \fbox{$\displaystyle c= \cos(\omega T)$}.$


This section introduced and analyzed the digital waveguide oscillator and resonator, as well as some related algorithms. As a recursive algorithm for digital sinusoid generation, it has excellent properties for VLSI implementation. It is like the 2D rotation (complex multiply) in that it offers instantaneous amplitude from its state and constant amplitude in the presence of frequency modulation. However, its implementation requires only two multiplies per sample instead of four. When used as a constant-frequency oscillator, it requires only one multiply per sample.

Matlab Sinusoidal Oscillator Implementations

This section provides Matlab/Octave program listings for the sinusoidal resonator/oscillator algorithms discussed above.

The test program computes an impulse response of each resonator, and plots them overlaid for comparison. Next, it optionally plays each one as a sound and plots them individually.

% Filter test program in matlab

%N = 300;   % Number of samples to generate
N = 3000;   % Number of samples to generate
f = 100;   % Desired oscillation frequency (Hz)
fs = 8192; % Audio sampling rate (Hz)
%R = .99;   % Decay factor (1=never, 0=instant)
R = 1;   % Decay factor (1=never, 0=instant)
b1 = 1;    % Input gain to state variable x(n)
b2 = 0;    % Input gain to state variable y(n)

%nd = 16;   % Number of significant digits to use
nd = 4;   % Number of significant digits to use
base = 2;  % Mantissa base (two significant figures)
           % (see 'help chop' in Matlab)

u = [1,zeros(1,N-1)]; % driving input test signal
theta = 2*pi*f/fs; % resonance frequency, rad/sample

% ================================================

x1 = zeros(1,N); % Predeclare saved-state arrays
y1 = zeros(1,N);

x1(1) = 0;      % Initial condition
y1(1) = 0;      % Initial condition

c = chop(R*cos(theta),nd,base); % coefficient 1
s = chop(R*sin(theta),nd,base); % coefficient 2

for n=1:N-1,
  x1(n+1) = chop(   c*x1(n) - s*y1(n) + b1*u(n), nd,base);
  y1(n+1) = chop(   s*x1(n) + c*y1(n) + b2*u(n), nd,base);
end

% ================================================
% (ref: http://ccrma.stanford.edu/~jos/wgo/ )

x2 = zeros(1,N); % Predeclare saved-state arrays
y2 = zeros(1,N);

x2(1) = 0.0;     % Initial condition
y2(1) = 0.0;     % Initial condition

e = chop(2*sin(theta/2),nd,base); % tuning coefficient

for n=1:N-1,
  x2(n+1) = chop(R*(x2(n)-e*y2(n))+b1*u(n),nd,base);
  y2(n+1) = chop(R*(e*x2(n+1)+y2(n))+b2*u(n),nd,base);
end

% ================================================

x3 = zeros(1,N); % Predeclare saved-state arrays
y3 = zeros(1,N);

x3(1) = 0;      % Initial condition
y3(1) = 0;      % Initial condition

g = R*R;        % decay coefficient
t = tan(theta); % toward tuning coefficient
cp = sqrt(g/(g + t^2*(1+g)^2/4 + (1-g)^2/4)); % exact
%cp = 1 - (t^2*(1+g)^2 + (1-g)^2)/(8*g); % good approx
b1n = b1*sqrt((1-cp)/(1+cp)); % input scaling

% Quantize coefficients:
cp = chop(cp,nd,base);
g = chop(g,nd,base);
b1n = chop(b1n,nd,base);

for n=1:N-1,
  gx3 = chop(g*x3(n), nd,base); % mpy 1
  y3n = y3(n);
  temp = chop(cp*(gx3 + y3n), nd,base); % mpy 2
  x3(n+1) = temp - y3n + b1n*u(n);
  y3(n+1) = gx3 + temp + b2*u(n);
end

% ================================================
% playandplot.m

alt = 1; % set alt = (-1).^(0:N-1) to alternate signs for visibility
plot([y1',y2',(alt.*y3)']); grid('on');
title('Impulse Response Overlay');
xlabel('Time (samples)');


if 1
  playandplot(y1,u,fs,'2D rotation',1);
  playandplot(y2,u,fs,'Magic circle',2);
  playandplot(y3,u,fs,'Digital waveguide resonator',3);
end

function playandplot(y,u,fs,ttl,fnum)
% sound(y,fs,16); % uncomment to audition the impulse response
  figure(fnum);
  t = 1000*(0:length(u)-1)/fs; % time axis in milliseconds
  plot(t,y); grid('on');
  title(ttl);
  ylabel('Amplitude');
  xlabel('Time (ms)');
end

Note that the chop utility only exists in Matlab. In Octave, the following ``compatibility stub'' can be used:

function [y] = chop(x,nd,base)
  y = x; % pass through unquantized
end
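For experimenting with these quantization effects outside Matlab/Octave, a rough Python equivalent of chop (rounding to nd significant digits in a given base) might look like the following; this is an approximation of the Matlab utility's behavior, not a port:

```python
import math

def chop(x, nd, base=2):
    """Round x to nd significant digits in the given base
    (rough emulation of Matlab's chop utility)."""
    if x == 0:
        return 0.0
    e = math.floor(math.log(abs(x), base))   # exponent of the leading digit
    scale = float(base) ** (e - nd + 1)      # weight of the last kept digit
    return round(x / scale) * scale
```

For example, chop(3.14159, 4, 2) keeps four significant base-2 digits, yielding 3.25.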

Faust Implementation

The function oscw (file osc.lib) implements a digital waveguide sinusoidal oscillator in the Faust language [453,154,170]. There is also oscr implementing the 2D rotation case, and oscs implementing the modified coupled form (magic circle).
