Sample-Mean Variance

The simplest case to study first is the sample mean:

$\displaystyle \hat{\mu}_x(n) \triangleq \frac{1}{M}\sum_{m=0}^{M-1}x(n-m)
$

Here we have defined the sample mean at time $n$ as the average of the $M$ successive samples up to and including time $n$, i.e., a ``running average''. The true mean is assumed to be the average over any infinite number of samples, such as

$\displaystyle \mu_x = \lim_{M\to\infty}\hat{\mu}_x(n)
$

or

$\displaystyle \mu_x = \lim_{K\to\infty}\frac{1}{2K+1}\sum_{m=-K}^{K}x(n+m)
\triangleq {\cal E}\left\{x(n)\right\}.
$
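As a quick numerical illustration (not from the text), the running average $\hat{\mu}_x(n)$ can be computed for all valid $n$ at once by convolving with a length-$M$ boxcar; the `sample_mean` helper below is a hypothetical name introduced here:

```python
import numpy as np

def sample_mean(x, M):
    """M-sample running average: mu_hat(n) = (1/M) * sum_{m=0}^{M-1} x(n-m)."""
    # A length-M boxcar convolution forms the running sum; mode="valid"
    # keeps only the outputs n >= M-1 where all M past samples exist.
    return np.convolve(x, np.ones(M), mode="valid") / M

x = np.array([1.0, 2.0, 3.0, 4.0])
print(sample_mean(x, 2))  # averages of successive pairs: [1.5 2.5 3.5]
```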

Now assume $\mu_x=0$, and let $\sigma_x^2$ denote the variance of the process $x(\cdot)$, i.e.,

   Var$\displaystyle \left\{x(n)\right\} \triangleq {\cal E}\left\{[x(n)-\mu_x]^2\right\} = {\cal E}\left\{x^2(n)\right\} = \sigma_x^2.
$

Then the variance of our sample-mean estimator $\hat{\mu}_x(n)$ can be calculated as follows:

\begin{eqnarray*}
\mbox{Var}\left\{\hat{\mu}_x(n)\right\}
&\triangleq& {\cal E}\left\{\left[\hat{\mu}_x(n)-\mu_x\right]^2\right\}
\;=\; {\cal E}\left\{\hat{\mu}_x^2(n)\right\}\\
&=& {\cal E}\left\{\frac{1}{M}\sum_{m_1=0}^{M-1}x(n-m_1)\cdot
\frac{1}{M}\sum_{m_2=0}^{M-1}x(n-m_2)\right\}\\
&=& \frac{1}{M^2}\sum_{m_1=0}^{M-1}\sum_{m_2=0}^{M-1}
{\cal E}\left\{x(n-m_1)\,x(n-m_2)\right\}\\
&=& \frac{1}{M^2}\sum_{m_1=0}^{M-1}\sum_{m_2=0}^{M-1}
r_x(\vert m_1-m_2\vert)
\end{eqnarray*}

where we used the fact that the time-averaging operator ${\cal E}\left\{\cdot\right\}$ is linear, and $r_x(l)$ denotes the unbiased autocorrelation of $x(n)$. If $x(n)$ is white noise, then $r_x(\vert m_1-m_2\vert) = \sigma_x^2\,\delta(m_1-m_2)$, and we obtain

\begin{eqnarray*}
\mbox{Var}\left\{\hat{\mu}_x(n)\right\}
&=& \frac{1}{M^2}\sum_{m_1=0}^{M-1}\sum_{m_2=0}^{M-1}
\sigma_x^2\,\delta(m_1-m_2)\\
&=& \frac{1}{M^2}\sum_{m=0}^{M-1}\sigma_x^2\\
&=& \frac{\sigma_x^2}{M}.
\end{eqnarray*}

We have derived that the variance of the $M$-sample running average of a white-noise sequence $x(n)$ is $\sigma_x^2/M$, where $\sigma_x^2$ denotes the variance of $x(n)$; the variance of the estimate is thus inversely proportional to the number of samples used to form it. This is how averaging reduces variance in general: when averaging $M$ independent (or merely uncorrelated) random variables sharing a common variance, the variance of the average equals that common variance divided by $M$.
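The $\sigma_x^2/M$ result is easy to confirm empirically. The Monte Carlo sketch below (not from the text; the seed, noise level, and trial count are arbitrary choices) averages many independent length-$M$ white-noise records and compares the empirical variance of the sample-mean estimates to the theoretical value:

```python
import numpy as np

# Monte Carlo check that the M-sample mean of zero-mean white noise
# has variance close to sigma_x^2 / M.
rng = np.random.default_rng(0)
sigma_x, M, trials = 2.0, 50, 100_000

# Each row is one independent length-M white-noise record.
x = rng.normal(0.0, sigma_x, size=(trials, M))
mu_hat = x.mean(axis=1)      # one sample-mean estimate per record

print(mu_hat.var())          # empirical variance of the estimator
print(sigma_x**2 / M)        # theoretical value: 4/50 = 0.08
```

With 100,000 trials the empirical variance typically lands within a fraction of a percent of $0.08$, while the raw per-sample variance remains $\sigma_x^2 = 4$, illustrating the factor-of-$M$ reduction.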



Order a Hardcopy of Spectral Audio Signal Processing


About the Author: Julius Orion Smith III
Julius Smith's background is in electrical engineering (BS Rice 1975, PhD Stanford 1983). He is presently Professor of Music and Associate Professor (by courtesy) of Electrical Engineering at Stanford's Center for Computer Research in Music and Acoustics (CCRMA), teaching courses and pursuing research related to signal processing applied to music and audio systems. See http://ccrma.stanford.edu/~jos/ for details.

