
Background Fundamentals

Signal Representation and Notation

Below is a summary of various notational conventions used in digital signal processing for representing signals and spectra. For a more detailed presentation, see the elementary introduction to signal representation, sinusoids, and exponentials in [84].

Units

In this book, time $ t$ is always in physical units of seconds (s), while time $ n$ or $ m$ is in units of samples (counting numbers having no physical units). Time $ t$ is a continuous real variable, while discrete-time in samples is integer-valued. The physical time $ t$ corresponding to time $ n$ in samples is given by

$\displaystyle t = nT,$

where $ T$ is the sampling interval in seconds.

For frequencies, we have two physical units: (1) cycles per second and (2) radians per second. The name for cycles per second is Hertz (Hz) (though in the past it was cps). One cycle equals $ 2\pi$ radians, which is 360 degrees ($ ^{\circ}$). Therefore, $ f$ Hz is the same frequency as $ 2\pi f$ radians per second (rad/s). It is easy to confuse the two because both radians and cycles are pure numbers, so that both types of frequency are in physical units of inverse seconds (s$ ^{-1}$).

For example, a periodic signal with a period of $ P$ seconds has a frequency of $ f = (1/P)$ Hz, and a radian frequency of $ \omega = 2\pi/P$ rad/s. The sampling rate, $ f_s$, is the reciprocal of the sampling period $ T$, i.e.,

$\displaystyle f_s = \frac{1}{T}.$

Since the sampling period $ T$ is in seconds, the sampling rate $ f_s=1/T$ is in Hz. It can be helpful, however, to think of $ T$ as ``seconds per sample'' and $ f_s$ as ``samples per second,'' where ``samples'' is a dimensionless quantity (pure number) included for clarity. The amplitude of a signal may be in any arbitrary units, such as volts, sound pressure (SPL), and so on.
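As a quick numerical sanity check of these unit relations, here is a minimal Python sketch; the sampling rate and tone frequency are example values chosen for illustration, not taken from the text:

```python
import math

# Assumed example values: a 44.1 kHz sampling rate and a 440 Hz tone.
fs = 44100.0            # sampling rate f_s in Hz (samples per second)
T = 1.0 / fs            # sampling interval T in seconds (seconds per sample)

n = 100                 # time in samples (a pure number)
t = n * T               # corresponding physical time t = n*T in seconds

f = 440.0               # frequency in Hz (cycles per second)
w = 2.0 * math.pi * f   # the same frequency in rad/s: omega = 2*pi*f
```

Note that `n` carries no physical units; multiplying by `T` (seconds per sample) is what produces a time in seconds.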


Sinusoids

The term sinusoid means a waveform of the type

$\displaystyle A\cos(2\pi ft + \phi) = A \cos(\omega t + \phi).$ (A.1)

Thus, a sinusoid may be defined as a cosine at amplitude $ A$, frequency $ f$, and phase $ \phi$. (See [84] for a fuller development and discussion.) A sinusoid's phase $ \phi$ is in radian units. We may call

$\displaystyle \theta(t) \isdef \omega t + \phi$

the instantaneous phase, as distinguished from the phase offset $ \phi$. Thus, the ``phase'' of a sinusoid typically refers to its phase offset. The instantaneous radian frequency of a sinusoid is defined as the derivative of the instantaneous phase with respect to time (see [84] for more):

$\displaystyle \omega(t) \isdef \frac{d}{dt} \theta(t) = \frac{d}{dt} \left[\omega t + \phi\right] = \omega$

A discrete-time sinusoid is simply obtained from a continuous-time sinusoid by replacing $ t$ by $ nT$ in Eq.$ \,$(A.1):

$\displaystyle A\cos(2\pi f nT + \phi) = A \cos(\omega n T + \phi).$


Spectrum

In this book, we think of filters primarily in terms of their effect on the spectrum of a signal. This is appropriate because the ear (to a first approximation) converts the time-waveform at the eardrum into a neurologically encoded spectrum. Intuitively, a spectrum (a complex function of frequency $ \omega$) gives the amplitude and phase of the sinusoidal signal-component at frequency $ \omega$. Mathematically, the spectrum of a signal $ x$ is the Fourier transform of its time-waveform. Equivalently, the spectrum is the z transform evaluated on the unit circle $ z=e^{j\omega T}$. A detailed introduction to spectrum analysis is given in [84].

We denote both the spectrum and the z transform of a signal by uppercase letters. For example, if the time-waveform is denoted $ x(n)$, its z transform is called $ X(z)$ and its spectrum is therefore $ X(e^{j\omega T})$. The time-waveform $ x(n)$ is said to ``correspond'' to its z transform $ X(z)$, meaning they are transform pairs. This correspondence is often denoted $ x(n)\leftrightarrow X(z)$, or $ x(n)\leftrightarrow X(e^{j\omega T})$. Both the z transform and its special case, the (discrete-time) Fourier transform, are said to transform from the time domain to the frequency domain.

We deal most often with discrete time $ nT$ (or simply $ n$) but continuous frequency $ f$ (or $ \omega=2\pi f$). This is because the computer can represent only digital signals, and digital time-waveforms are discrete in time but may have energy at any frequency. On the other hand, if we were going to talk about FFTs (Fast Fourier Transforms--efficient implementations of the Discrete Fourier Transform, or DFT) [84], then we would have to discretize the frequency variable also in order to represent spectra inside the computer. In this book, however, we use spectra only for conceptual insights into the perceptual effects of digital filtering; therefore, we avoid discrete frequency for simplicity.

When we wish to consider an entire signal as a ``thing in itself,'' we write $ x(\cdot)$, meaning the whole time-waveform ($ x(n)$ for all $ n$), or $ X(\cdot)$, to mean the entire spectrum taken as a whole. Imagine, for example, that we have plotted $ x(n)$ on a strip of paper that is infinitely long. Then $ x(\cdot)$ refers to the complete picture, while $ x(n)$ refers to the $ n$th sample point on the plot.


Complex and Trigonometric Identities

This section gives a summary of some of the more useful mathematical identities for complex numbers and trigonometry in the context of digital filter analysis. For many more, see handbooks of mathematical functions such as Abramowitz and Stegun [2].

The symbol $ \isdef $ means ``is defined as''; $ z$ stands for a complex number; and $ r$, $ \theta$, $ x$, and $ y$ stand for real numbers. The quantity $ t$ is used below to denote $ \tan(\theta/2)$.

Complex Numbers

\begin{displaymath}
\begin{array}{rclrcl}
j &\isdef & \sqrt{-1} & z &\isdef & x + jy \;=\; re^{j\theta}\\
r &\isdef & \left\vert z\right\vert \;=\; \sqrt{x^2+y^2} & \theta &\isdef & \angle z \;=\; \tan^{-1}(y/x)\\
\mbox{re}\left\{z\right\} &=& x & \mbox{im}\left\{z\right\} &=& y\\
\overline{z} &\isdef & x - jy \;=\; re^{-j\theta} & z\overline{z} &=& \left\vert z\right\vert^2 \;=\; x^2+y^2 \;=\; r^2
\end{array}\end{displaymath}
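These rectangular/polar relations can be verified numerically. Here is a small Python sketch using the standard cmath module; the example value of $ z$ is an arbitrary choice:

```python
import cmath

z = complex(3.0, 4.0)              # z = x + jy with x = 3, y = 4 (example values)
r = abs(z)                         # r = |z| = sqrt(x^2 + y^2) = 5
theta = cmath.phase(z)             # theta = angle(z) = atan2(y, x)

z_polar = r * cmath.exp(1j * theta)       # back to rectangular: z = r e^{j theta}
mag_sq = (z * z.conjugate()).real         # z * conj(z) = |z|^2 = x^2 + y^2 = r^2
```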


The Exponential Function

\begin{eqnarray*}
e^x &\isdef & \lim_{n\to\infty}\left(1+\frac{x}{n}\right)^n
\;=\; 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots\\
e^{j\theta} &=& \cos(\theta) + j\sin(\theta) \qquad\hbox{(Euler's identity)}\\
\cos(\theta) &=& \frac{e^{j\theta}+e^{-j\theta}}{2}, \qquad
\sin(\theta) \;=\; \frac{e^{j\theta}-e^{-j\theta}}{2j}\\
e &=& 2.7\,1828\,1828\,4590\,\ldots
\end{eqnarray*}
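Euler's identity and the inverse formulas for cosine and sine are easy to check numerically. A minimal Python sketch (the test angle is an arbitrary choice):

```python
import cmath
import math

theta = 0.7  # an arbitrary test angle in radians

euler = cmath.exp(1j * theta)                         # e^{j theta}
cos_sin = complex(math.cos(theta), math.sin(theta))   # cos(theta) + j sin(theta)

# Inverse Euler formulas: cos and sin recovered from complex exponentials
cos_from_exp = (cmath.exp(1j * theta) + cmath.exp(-1j * theta)) / 2
sin_from_exp = (cmath.exp(1j * theta) - cmath.exp(-1j * theta)) / 2j
```

Both recovered values are real to within rounding error, since the imaginary parts cancel exactly.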


Trigonometric Identities

\begin{eqnarray*}
\sin(-\theta) &=& -\sin(\theta), \qquad \cos(-\theta) \;=\; \cos(\theta)\\
\sin^2(\theta) + \cos^2(\theta) &=& 1\\
\sin(A\pm B) &=& \sin(A)\cos(B) \pm \cos(A)\sin(B)\\
\cos(A\pm B) &=& \cos(A)\cos(B) \mp \sin(A)\sin(B)\\
\sin(A)+\sin(B) &=& 2\sin\left(\frac{A+B}{2}\right)\cos\left(\frac{A-B}{2}\right)\\
\cos(A)+\cos(B) &=& 2\cos\left(\frac{A+B}{2}\right)\cos\left(\frac{A-B}{2}\right)\\
\cos(A)-\cos(B) &=& -2\sin\left(\frac{A+B}{2}\right)\sin\left(\frac{A-B}{2}\right)
\end{eqnarray*}

Trigonometric Identities, Continued

\begin{eqnarray*}
\sin(A)-\sin(B) &=& 2\cos\left(\frac{A+B}{2}\right)\sin\left(\frac{A-B}{2}\right)\\
\sin(A)\sin(B) &=& \frac{1}{2}\left[\cos(A-B) - \cos(A+B)\right]\\
\cos(A)\cos(B) &=& \frac{1}{2}\left[\cos(A-B) + \cos(A+B)\right]\\
\sin(A)\cos(B) &=& \frac{1}{2}\left[\sin(A+B) + \sin(A-B)\right]\\
\tan(A+B) &=& \frac{\tan(A)+\tan(B)}{1-\tan(A)\tan(B)}
\end{eqnarray*}


Half-Angle Tangent Identities

\begin{displaymath}
\begin{array}{rclrcl}
t &\isdef & \tan\left(\frac{\theta}{2}\right) &
\sin(\theta) &=& \frac{2t}{1+t^2}\\
\cos(\theta) &=& \frac{1-t^2}{1+t^2} &
\tan(\theta) &=& \frac{2t}{1-t^2}
\end{array}\end{displaymath}
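A quick numerical check of the half-angle tangent identities in Python (the test angle is an arbitrary choice; angles where $ t^2=1$ or $ \theta=\pi$ must be avoided, since the formulas blow up there):

```python
import math

theta = 1.1                  # arbitrary test angle in radians
t = math.tan(theta / 2.0)    # the half-angle tangent

sin_theta = 2.0 * t / (1.0 + t * t)
cos_theta = (1.0 - t * t) / (1.0 + t * t)
tan_theta = 2.0 * t / (1.0 - t * t)
```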


A Sum of Sinusoids at the Same Frequency is Another Sinusoid at that Frequency

It is an important and fundamental fact that a sum of sinusoids at the same frequency, but different phase and amplitude, can always be expressed as a single sinusoid at that frequency with some resultant phase and amplitude. An important implication, for example, is that

sinusoids are eigenfunctions of linear time-invariant (LTI) systems.

That is, if a sinusoid is input to an LTI system, the output will be a sinusoid at the same frequency, but possibly altered in amplitude and phase. This follows because the output of every LTI system can be expressed as a linear combination of delayed copies of the input signal. In this section, we derive this important result for the general case of $ N$ sinusoids at the same frequency.

Proof Using Trigonometry

We want to show it is always possible to solve

$\displaystyle A\cos(\omega t + \phi) = A_1\cos(\omega t + \phi_1) + A_2\cos(\omega t + \phi_2) + \cdots + A_N\cos(\omega t + \phi_N)$ (A.2)

for $ A$ and $ \phi$, given $ A_i, \phi_i$ for $ i=1,\ldots,N$. For each component sinusoid, we can write
\begin{eqnarray*}
A_i\cos(\omega t + \phi_i)
&=& A_i\cos(\omega t)\cos(\phi_i) - A_i\sin(\omega t)\sin(\phi_i)\\
&=& \left[A_i\cos(\phi_i)\right]\cos(\omega t)
- \left[A_i\sin(\phi_i)\right]\sin(\omega t). \qquad\hbox{(A.3)}
\end{eqnarray*}

Applying this expansion to Eq.$ \,$(A.2) yields

\begin{eqnarray*}
\left[A\cos(\phi)\right]\cos(\omega t)
- \left[A\sin(\phi)\right]\sin(\omega t)
&=& \left[\sum_{i=1}^N A_i\cos(\phi_i)\right]\cos(\omega t)
- \left[\sum_{i=1}^N A_i\sin(\phi_i)\right]\sin(\omega t).
\end{eqnarray*}

Equating coefficients gives

\begin{eqnarray*}
A\cos(\phi) &=& \sum_{i=1}^N A_i\cos(\phi_i) \;\isdef\; x\\
A\sin(\phi) &=& \sum_{i=1}^N A_i\sin(\phi_i) \;\isdef\; y, \qquad\hbox{(A.4)}
\end{eqnarray*}

where $ x$ and $ y$ are known. We now have two equations in two unknowns, which are readily solved by (1) squaring both equations of Eq.$ \,$(A.4) and adding them to eliminate $ \phi$, and (2) dividing the second equation by the first to eliminate $ A$. The results are

\begin{eqnarray*}
A &=& \sqrt{x^2+y^2}\\
\phi &=& \tan^{-1}\left(\frac{y}{x}\right)
\end{eqnarray*}

which determines $ A$ and $ \phi$ uniquely for any values of $ A_i$ and $ \phi_i$ (the quadrant of $ \phi$ is resolved by the signs of $ x$ and $ y$).
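The construction in this proof translates directly into code. The following Python sketch (the amplitudes and phases are example values) computes $ x$, $ y$, $ A$, and $ \phi$ from Eq.$ \,$(A.4), using atan2 to resolve the quadrant, and spot-checks Eq.$ \,$(A.2) at a few time instants:

```python
import math

amps   = [1.0, 2.0, 0.5]    # A_i   (example values)
phases = [0.1, -0.4, 2.0]   # phi_i in radians (example values)

# Eq. (A.4): x = sum A_i cos(phi_i),  y = sum A_i sin(phi_i)
x = sum(A * math.cos(p) for A, p in zip(amps, phases))
y = sum(A * math.sin(p) for A, p in zip(amps, phases))

A   = math.hypot(x, y)      # A = sqrt(x^2 + y^2)
phi = math.atan2(y, x)      # tan^{-1}(y/x), quadrant taken from the signs of x, y

# Spot-check Eq. (A.2) at a few instants for an arbitrary common frequency
w = 2.0 * math.pi * 3.0
err = max(
    abs(sum(Ai * math.cos(w * t + ph) for Ai, ph in zip(amps, phases))
        - A * math.cos(w * t + phi))
    for t in (0.0, 0.123, 0.5)
)
```

The maximum error `err` is at the level of rounding noise, confirming that the sum of sinusoids equals the single resultant sinusoid at every instant.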


Proof Using Complex Variables

To show by means of phasor analysis that Eq.$ \,$(A.2) always has a solution, we can express each component sinusoid as

$\displaystyle A_i\cos(\omega t + \phi_i) = \mbox{re}\left\{A_i e^{j(\omega t + \phi_i)}\right\}.$

Equation (A.2) therefore becomes

\begin{eqnarray*}
\mbox{re}\left\{A e^{j(\omega t + \phi)}\right\}
&=& \sum_{i=1}^N \mbox{re}\left\{A_i e^{j(\omega t + \phi_i)}\right\}\\
&=& \mbox{re}\left\{\sum_{i=1}^N A_i e^{j(\omega t + \phi_i)}\right\}\\
&=& \mbox{re}\left\{e^{j\omega t} \sum_{i=1}^N A_i e^{j\phi_i}\right\}.
\end{eqnarray*}

Thus, equality holds when we define

$\displaystyle A e^{j\phi} \isdef \sum_{i=1}^N A_i e^{j\phi_i}.$ (A.5)

Since $ A e^{j\phi}$ is just the polar representation of a complex number, there is always some value of $ A\geq 0$ and $ \phi\in[-\pi,\pi)$ such that $ A e^{j\phi}$ equals whatever complex number results on the right-hand side of Eq.$ \,$(A.5).
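Computationally, Eq.$ \,$(A.5) is just one complex addition per component sinusoid. A minimal Python sketch (the phasor values are examples):

```python
import cmath

# Phasors A_i e^{j phi_i} for three example sinusoids at a common frequency
phasors = [1.0 * cmath.exp(1j * 0.1),
           2.0 * cmath.exp(-1j * 0.4),
           0.5 * cmath.exp(1j * 2.0)]

# Eq. (A.5): the resultant phasor is simply the complex sum
P = sum(phasors)
A, phi = abs(P), cmath.phase(P)   # polar form: A >= 0, phi in (-pi, pi]
```

Compare this with the trigonometric proof above: the complex sum computes $ x$ and $ y$ of Eq.$ \,$(A.4) simultaneously, as the real and imaginary parts of $ P$.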

As is often the case, we see that the use of Euler's identity and complex analysis gives a simplified algebraic proof which replaces a proof based on trigonometric identities.


Phasor Analysis: Factoring a Complex Sinusoid into Phasor Times Carrier

The heart of the preceding proof was the algebraic manipulation

$\displaystyle \sum_{i=1}^N A_i e^{j(\omega t + \phi_i)} = e^{j\omega t} \sum_{i=1}^N A_i e^{j\phi_i}.$

The carrier term $ e^{j\omega t}$ ``factors out'' of the sum. Inside the sum, each sinusoid is represented by a complex constant $ A_i e^{j\phi_i}$, known as the phasor associated with that sinusoid.

For an arbitrary sinusoid having amplitude $ A$, phase $ \phi$, and radian frequency $ \omega$, we have

$\displaystyle A\cos(\omega t + \phi) = \mbox{re}\left\{\left(A e^{j\phi}\right) e^{j\omega t}\right\}.$

Thus, a sinusoid is determined by its frequency $ \omega$ (which specifies the carrier term) and its phasor $ {\cal A}\isdef A e^{j\phi}$, a complex constant. Phasor analysis is discussed further in [84].

