Filtered White Noise

When a white-noise sequence is filtered, successive samples generally become correlated. Some of these filtered-white-noise signals have standard names, such as the pink noise synthesized below.

More generally, filtered white noise can be termed colored noise or correlated noise. As long as the filter is linear and time-invariant (LTI), and strictly stable (poles inside and not on the unit circle of the $ z$ plane), its output will be a stationary ``colored noise''. We will only consider stochastic processes of this nature.

In the preceding sections, we have looked at two ways of analyzing noise: the sample autocorrelation function in the time or ``lag'' domain, and the sample power spectral density (PSD) in the frequency domain. We now look at these two representations for the case of filtered noise.

Let $ x(n)$ denote a length $ N$ sequence we wish to analyze. Then the Bartlett-windowed acyclic sample autocorrelation of $ x$ is $ x\star x$, and the corresponding smoothed sample PSD is $ \left\vert X(\omega)\right\vert^2$ (§2.3.6).
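This relationship can be checked numerically. The following sketch (in Python/NumPy, rather than the Matlab used later in this section) verifies that the acyclic sample autocorrelation $ x\star x$ and the sample PSD $ \left\vert X(\omega)\right\vert^2$ form a Fourier-transform pair, provided the FFT is zero-padded so that circular correlation coincides with acyclic correlation:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
x = rng.standard_normal(N)

# Acyclic sample autocorrelation x (star) x, at lags -(N-1)..(N-1)
r = np.correlate(x, x, mode='full')

# Same thing via the frequency domain: |X|^2 with zero-padding to 2N-1
# points, so circular correlation equals acyclic correlation
X = np.fft.fft(x, 2 * N - 1)
r_fft = np.fft.ifft(np.abs(X) ** 2).real

# ifft returns lags 0..N-1 followed by the negative lags; rotate to compare
r_rot = np.concatenate([r[N - 1:], r[:N - 1]])
print(np.allclose(r_fft, r_rot))  # True
```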

For filtered white noise, we can write $ x$ as a convolution of white noise $ v$ and some impulse response $ h$:

$\displaystyle x(n) = (h\ast v)(n) \isdef \sum_{m=-\infty}^\infty v(m)h(n-m). $

The DTFT of $ x$ is then, by the convolution theorem (§2.3.5),

$\displaystyle X(\omega) = H(\omega)V(\omega), $

so that

$\displaystyle x\star x \;\longleftrightarrow\; \left\vert X(\omega)\right\vert^2 = \left\vert H(\omega)\right\vert^2\left\vert V(\omega)\right\vert^2 \;\longleftrightarrow\; (h\star h)\ast(v\star v) \;\propto\; h\star h, $

since $ v\star v \propto \sigma_v^2\delta$ for white noise. Thus, we have derived that the autocorrelation of filtered white noise is proportional to the autocorrelation of the impulse response times the variance of the driving white noise.
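As a numerical sanity check, we can filter a long white-noise sequence and compare its measured autocorrelation against $ \sigma_v^2\,(h\star h)$. This is a Python/NumPy sketch; the impulse response $ h$ below is an arbitrary illustrative example, not from the text:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 200_000                    # many samples, so the estimate converges
sigma_v = 1.0
v = rng.standard_normal(M) * sigma_v
h = np.array([1.0, 0.5, 0.25, 0.125])    # example FIR impulse response

x = np.convolve(v, h, mode='full')[:M]   # filtered white noise x = h * v

# Sample autocorrelation at small lags: r_hat(l) = (1/M) sum_n x(n) x(n+l)
lags = np.arange(len(h))
r_hat = np.array([np.dot(x[:M - l], x[l:]) / M for l in lags])

# Theory: r_x(l) = sigma_v^2 * (h star h)(l)
r_theory = sigma_v**2 * np.array(
    [np.dot(h[:len(h) - l], h[l:]) for l in lags])

# Estimates agree with theory to within a small statistical error
print(np.allclose(r_hat, r_theory, atol=0.05))  # True
```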

Let's pin this down more precisely and find the proportionality constant. As the number $ M$ of observed samples of $ x(n) = (h\ast v)(n)$ goes to infinity, the length-$ M$ Bartlett-window bias factor $ M-\vert l\vert$ in the autocorrelation $ x\star x$ converges to the constant scale factor $ M$ at lags $ \vert l\vert\ll M$. Therefore, the unbiased sample autocorrelation can be expressed as

$\displaystyle \hat{r}_x(l) = \frac{1}{M-\vert l\vert}\,(x\star x)(l) \;\to\; \frac{1}{M}\,(x\star x)(l), \quad \hbox{($M\gg \vert l\vert$)}. $

In the limit, we obtain

$\displaystyle \lim_{M\to\infty} \frac{1}{M}\,(x \star x)(l) = r_x(l). $

In the frequency domain we therefore have

$\displaystyle S_x(\omega) \;=\; \lim_{M\to\infty}\frac{1}{M}\left\vert X(\omega)\right\vert^2 \;=\; \left\vert H(\omega)\right\vert^2\lim_{M\to\infty}\frac{1}{M}\left\vert V(\omega)\right\vert^2 \;=\; \left\vert H(\omega)\right\vert^2\sigma_v^2 \;\longleftrightarrow\; \sigma_v^2\,(h\star h). $

In summary, the autocorrelation of filtered white noise $ x=h\ast v$ is

$\displaystyle r_x(l) = \sigma_v^2\cdot(h\star h)(l) \;\longleftrightarrow\; S_x(\omega) = \sigma_v^2 \left\vert H(\omega)\right\vert^2, $

where $ \sigma_v^2$ is the variance of the driving white noise.

In words, the true autocorrelation of filtered white noise equals the autocorrelation of the filter's impulse response times the white-noise variance. (The filter is of course assumed LTI and stable.) In the frequency domain, we have that the true power spectral density of filtered white noise is the squared-magnitude frequency response of the filter scaled by the white-noise variance.

For a finite number of observed samples of a filtered white-noise process, the sample autocorrelation is given by the autocorrelation of the filter's impulse response convolved with the sample autocorrelation of the driving white-noise sequence. For lags $ l$ much smaller than the number of observed samples $ M$, the driver's sample autocorrelation approaches an impulse scaled by the white-noise variance. In the frequency domain, the sample PSD of filtered white noise is the squared-magnitude frequency response of the filter $ \vert H(\omega)\vert^2$ scaled by the sample PSD of the driving noise.

We reiterate that every stationary random process may be defined, for our purposes, as filtered white noise. As we see from the above, all correlation information is embodied in the filter used.

Example: FIR-Filtered White Noise

Let's estimate the autocorrelation and power spectral density of the ``moving average'' (MA) process

$\displaystyle x(n) = v(n) + v(n-1) + \cdots + v(n-7) $

where $ v(n)$ is unit-variance white noise.

Since $ h = [1,1,1,1,1,1,1,1]$,

$\displaystyle h\star h = [8,7,6,5,4,3,2,1,0,\ldots] $

for nonnegative lags ($ l\ge0$). More completely, we can write

$\displaystyle (h\star h)(l) = \left\{\begin{array}{ll} 8-\vert l\vert, & \vert l\vert<8 \\ [5pt] 0, & \vert l\vert\ge 8. \end{array}\right. $

Thus, the autocorrelation of $ h$ is a triangular pulse centered on lag 0. The true (unbiased) autocorrelation is given by

$\displaystyle r_x(l) \isdef {\cal E}\{x(n)x(n+l)\} = \sigma_v^2\,(h\star h)(l). $

The true power spectral density (PSD) is then (using $ \sigma_v^2=1$)

$\displaystyle S_x(\omega) = \hbox{\sc DTFT}_\omega(h\star h) = 8^2\cdot\hbox{asinc}^2_{8}(\omega) = \frac{\sin^2(4\omega)}{\sin^2(\omega/2)}. $
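The triangular shape of $ h\star h$ is easy to confirm numerically; a brief Python/NumPy check:

```python
import numpy as np

h = np.ones(8)                         # impulse response of the 8-tap MA filter
r_h = np.correlate(h, h, mode='full')  # (h star h)(l) for l = -7, ..., 7
print(r_h)  # triangular pulse: [1, 2, ..., 7, 8, 7, ..., 2, 1]
```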

Figure 5.3 shows a collection of measured autocorrelations together with their associated smoothed-PSD estimates.

Figure 5.3: Averaged sample autocorrelations (biased) and their Fourier transforms (smoothed PSD estimates), for FIR-filtered white noise.

Example: Synthesis of 1/F Noise (Pink Noise)

Pink noise, or ``1/f noise,'' is an interesting case because it occurs often in nature [273], is often preferred by composers of computer music, and there is no exact (rational, finite-order) filter which can produce it from white noise. This is because the ideal amplitude response of the filter must be proportional to the irrational function $ 1/\sqrt{f}$, where $ f$ denotes frequency in Hz. However, it is easy enough to generate pink noise to any desired degree of approximation, including perceptually exact.

The following Matlab/Octave code generates pretty good pink noise:

Nx = 2^16;  % number of samples to synthesize
B = [0.049922035 -0.095993537 0.050612699 -0.004408786];
A = [1 -2.494956002   2.017265875  -0.522189400];
nT60 = round(log(1000)/(1-max(abs(roots(A))))); % T60 est.
v = randn(1,Nx+nT60); % Gaussian white noise: N(0,1)
x = filter(B,A,v);    % Apply 1/F roll-off to PSD
x = x(nT60+1:end);    % Skip transient response
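For readers working in Python rather than Matlab/Octave, here is a direct translation of the generator above (a sketch assuming NumPy and SciPy are available; `scipy.signal.lfilter` plays the role of Matlab's `filter`):

```python
import numpy as np
from scipy.signal import lfilter

Nx = 2**16  # number of samples to synthesize
B = [0.049922035, -0.095993537, 0.050612699, -0.004408786]
A = [1, -2.494956002, 2.017265875, -0.522189400]
nT60 = int(round(np.log(1000) / (1 - np.abs(np.roots(A)).max())))  # T60 est.
v = np.random.randn(Nx + nT60)  # Gaussian white noise: N(0,1)
x = lfilter(B, A, v)            # apply 1/f roll-off to the PSD
x = x[nT60:]                    # skip transient response
```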

In the next section, we will analyze the noise produced by the above Matlab code and verify that its power spectrum rolls off at approximately 3 dB per octave.

Example: Pink Noise Analysis

Let's test the pink noise generation algorithm presented in §5.14.2. We might want to know, for example, does the power spectral density really roll off as $ 1/f$? Obviously such a shape cannot extend all the way to dc, so how far does it go? Does it go far enough to be declared ``perceptually equivalent'' to ideal 1/f noise? Can we get by with fewer bits in the filter coefficients? Questions like these can be answered by estimating the power spectral density of the noise generator output.

Figure 5.4 shows a single periodogram of the generated pink noise, and Figure 5.5 shows an averaged periodogram (Welch's method of smoothed power spectral density estimation). Also shown in each log-log plot is the true 1/f roll-off line. We see that indeed a single periodogram is quite random, although the overall trend is what we expect. The more stable smoothed PSD estimate from Welch's method (averaged periodograms) gives us much more confidence that the noise generator makes high quality 1/f noise.
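The roll-off can also be checked programmatically: estimate the PSD with Welch's method and fit a line to the spectrum on log-log axes. A Python/NumPy/SciPy sketch, where the band limits and tolerances are illustrative choices, not from the text:

```python
import numpy as np
from scipy.signal import lfilter, welch

rng = np.random.default_rng(42)
B = [0.049922035, -0.095993537, 0.050612699, -0.004408786]
A = [1, -2.494956002, 2.017265875, -0.522189400]
x = lfilter(B, A, rng.standard_normal(2**18))[10000:]  # skip transient

# Welch PSD estimate, then a least-squares slope fit on log-log axes
f, Pxx = welch(x, fs=1.0, nperseg=4096)
band = (f > 0.002) & (f < 0.2)  # mid-band, away from the dc flattening
slope = np.polyfit(np.log10(f[band]), np.log10(Pxx[band]), 1)[0]
print(round(slope, 2))  # close to -1, i.e. 1/f, about -3 dB per octave
```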

Note that we do not have to test for stationarity in this example, because we know the signal was generated by LTI filtering of white noise. (We trust the randn function in Matlab and Octave to generate stationary white noise.)

Figure 5.4: Periodogram of pink noise.

Figure 5.5: Estimated power spectral density of pink noise.
