
Generating Partially Correlated Random Variables

Harry Commin, March 23, 2019

Introduction

It is often useful to be able to generate two or more signals with specific cross-correlations. Or, more generally, we would like to specify an $\left(N \times N\right)$ covariance matrix, $\mathbf{R}_{xx}$, and generate $N$ signals which will produce this covariance matrix.

There are many applications in which this technique is useful. I discovered a version of this method while analysing radar systems, but the same approach can be used in a very wide range of disciplines.

In case the lengthy discussion below looks daunting, I want to point out that the real magic can be done in just two lines of MATLAB code (which I will explain below):
L = chol(Rxx, 'lower'); % Compute L, the Cholesky decomposition of Rxx.
X = L*randn(N, Nsamps); % Left-multiply N uncorrelated signals by L.

If you prefer another programming language, then free source code for the "chol" (Cholesky Decomposition) is provided in dozens of languages at this Rosetta Code page. However, please note that many of those implementations require a very minor modification if you want to work with complex numbers. Using the C code example as pseudocode, the following modification is required:

// Replace this operation...
s += L[i * n + k] * L[j * n + k];
// ... with this operation
s += L[i * n + k] * conj(L[j * n + k]);

Background: The Simple 2-Signal Case

It is easiest to start by discussing the 2-signal case, before generalising to $N$ signals. However, you could skip this section if you don't need the extra background information.

Let us denote the signals we are designing as $x_1(t)$ and $x_2(t)$ and assume them to be complex-valued (since it is trivial to then convert the result for real-valued signals if required). To keep the discussion simple, we will assume we are interested in zero-mean signals:

$$\textrm{Mean of }x_1(t):\quad \mu_1 \triangleq \mathcal{E}\{x_1(t)\} = 0\tag{1}$$

$$\textrm{Mean of }x_2(t): \quad \mu_2 \triangleq \mathcal{E}\{x_2(t)\} = 0\tag{2}$$

where curly $\mathcal{E}\{\cdot\}$ denotes the expectation operator. (For the record, we assume our signals to be ergodic, so we can just think of these as time averages). So, our task is to design $x_1(t)$ and $x_2(t)$ to have the following second order statistics:

$$\textrm{Power of }x_1(t):\quad P_1 \triangleq \mathcal{E}\{x_1(t)x_1^*(t)\}\tag{3}$$

$$\textrm{Power of }x_2(t):\quad P_2 \triangleq \mathcal{E}\{x_2(t)x_2^*(t)\}\tag{4}$$

$$\textrm{Cross-correlation}:\quad \rho \triangleq \frac{\mathcal{E}\{x_1(t)x_2^*(t)\}}{\sqrt{P_1P_2}}\tag{5}$$

where $^*$ denotes the complex conjugate. NOTE: the above expression for $\rho$ is simplified because we are designing zero-mean signals. If the means are non-zero, they must be subtracted as shown here: [link].
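For completeness, the general definition (of which Equation (5) is the zero-mean special case) subtracts the means and normalises by the resulting variances:

$$\rho \triangleq \frac{\mathcal{E}\{(x_1(t)-\mu_1)(x_2(t)-\mu_2)^*\}}{\sqrt{\mathcal{E}\{|x_1(t)-\mu_1|^2\}\mathcal{E}\{|x_2(t)-\mu_2|^2\}}}$$

This is the standard (complex) Pearson correlation coefficient.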

Although it may not be immediately obvious how to obtain partially correlated signals, there are many ways of producing uncorrelated signals (i.e. where the cross-correlation is zero). Here, we'll use MATLAB's randn function, but you could use any other method if you prefer.

So, let's start by considering two zero-mean, unit-power, Normally distributed complex random variables, $s_1(t)$ and $s_2(t)$, which are uncorrelated such that:

$$\textrm{Power of }s_1(t): \quad \mathcal{E}\{s_1(t)s_1^*(t)\}=1\tag{6}$$

$$\textrm{Power of }s_2(t): \quad \mathcal{E}\{s_2(t)s_2^*(t)\}=1\tag{7}$$

$$\textrm{Cross-correlation}: \quad \mathcal{E}\{s_1(t)s_2^*(t)\}=0\tag{8}$$

To do this in MATLAB (or GNU Octave), we can write:

s1 = randn(1, Nsamps);
s2 = randn(1, Nsamps);
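As a quick sanity check, we can estimate the statistics in Equations (6)-(8) using time averages. These are sample statistics, so we only expect them to be approximately equal to the theoretical values:

Pow1  = mean(s1 .* conj(s1))   % approximately 1 (Equation 6)
Pow2  = mean(s2 .* conj(s2))   % approximately 1 (Equation 7)
rho12 = mean(s1 .* conj(s2))   % approximately 0 (Equation 8)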

A useful property of Normally distributed random variables is that when we add them together, the result is also Normally distributed. Or, as more correctly described in this article [link]:

"the sum of two independent normally distributed random variables is normal, with its mean being the sum of the two means, and its variance being the sum of the two variances"

Therefore, let's start by choosing: $$x_1(t)=\sqrt{P_1}s_1(t)\tag{9}$$

since this satisfies Equation (3). Then we just have to design an $x_2(t)$ which satisfies Equations (4) and (5). Given the quote above, let's consider constructing $x_2(t)$ in the following form: $$x_2(t)=A_1s_1(t)+A_2s_2(t)\tag{10}$$

Then we need to find (complex) values for $A_1$ and $A_2$ which satisfy our remaining requirements (Equations (4) and (5)). So, Equation (4) becomes:

$$\begin{eqnarray}P_2 &=& \mathcal{E}\{x_2(t)x_2^*(t)\} \\ &=&\mathcal{E}\{(A_1s_1(t)+A_2s_2(t))(A_1^*s_1^*(t)+A_2^*s_2^*(t))\} \\ &=&|A_1|^2\mathcal{E}\{s_1(t)s_1^*(t)\} + 0 + 0 +|A_2|^2\mathcal{E}\{s_2(t)s_2^*(t)\} \\ &=& |A_1|^2+|A_2|^2 \tag{11} \end{eqnarray}$$

and Equation (5) becomes:

$$\begin{eqnarray}\rho &=& \frac{\mathcal{E}\{x_1(t)x_2^*(t)\}}{\sqrt{P_1P_2}} \\ &=&\frac{\mathcal{E}\{(\sqrt{P_1}s_1(t))(A_1^*s_1^*(t)+A_2^*s_2^*(t))\}}{\sqrt{P_1P_2}} \\ &=&\frac{\sqrt{P_1}A_1^*\mathcal{E}\{s_1(t)s_1^*(t)\}+0}{\sqrt{P_1P_2}} \\ &=& \frac{A_1^*}{\sqrt{P_2}} \tag{12} \end{eqnarray}$$

An initial observation is that these conditions depend only on the magnitude of $A_2$ (its complex phase appears nowhere), so we may take $A_2$ to be purely real (i.e. $A_2=|A_2|$). From Equation (12), $|A_1| = |\rho|\sqrt{P_2}$, and substituting this into Equation (11) gives $|A_2|^2 = P_2(1-|\rho|^2)$. So, for real-valued $A_2$:

$$A_2 = \sqrt{P_2(1-|\rho|^2)}$$

Finally, we rearrange Equation 12 to get:

$$A_1 = \rho^*\sqrt{P_2}$$

Putting all this together, we get our final result:

$$\begin{eqnarray}x_1(t) &=& \sqrt{P_1}s_1(t) \\ x_2(t) &=& \sqrt{P_2}\left(\rho^*s_1(t) + \sqrt{(1-|\rho|^2)}s_2(t)\right) \tag{13} \end{eqnarray}$$

Or, in MATLAB, we can write:

x1 = sqrt(P1)*s1;
x2 = sqrt(P2)*(conj(rho)*s1 + sqrt(1-abs(rho)^2)*s2);

Here is a full MATLAB code example two_sources.m:

close all; clear all; clc;

% Number of samples to generate
Nsamps = 100000;

% Set the required second order statistics
P1 = 2.3;
P2 = 0.75;
rho = 0.2 - 1j*0.3;

% 2x2 (theoretical) covariance matrix
Rxx = [P1, sqrt(P1*P2)*rho;
       sqrt(P1*P2)*conj(rho), P2]

% Generate zero-mean, unit-power, uncorrelated Gaussian random signals
% (real-valued here; for complex-valued noise, one option is
% (randn(1,Nsamps) + 1j*randn(1,Nsamps))/sqrt(2), which also has unit power)
s1 = randn(1, Nsamps);
s2 = randn(1, Nsamps);
   
% Create partially correlated signals
x1 = sqrt(P1)*s1;
x2 = sqrt(P2)*(conj(rho)*s1 + sqrt(1-abs(rho)^2)*s2);

% 2x2 (practical) sample covariance matrix
Rxx_samp = [x1;x2]*[x1;x2]' / Nsamps

If you run this code, note that we never expect the sample covariance matrix to exactly match the theoretical covariance matrix. With stochastic (random) signals, sample statistics are themselves random variables, having their own probability distributions. If you want to generate signals that exactly match the theoretical covariance matrix, then you should use non-random (but still unit-power, uncorrelated) signals for $s_1(t)$ and $s_2(t)$.
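For example, one simple choice (a sketch, not the only option) is to use complex exponentials at distinct DFT frequencies, which have exactly unit power and are exactly uncorrelated when averaged over Nsamps samples. Substituting these for s1 and s2 in two_sources.m makes Rxx_samp match Rxx to machine precision:

t = 0:(Nsamps-1);
s1 = exp(1j*2*pi*1*t/Nsamps);   % unit power by construction
s2 = exp(1j*2*pi*2*t/Nsamps);   % exactly orthogonal to s1 over Nsamps samples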

A Generalised Method

To summarise the previous section, we saw that - given a first signal $x_1(t)$ - we were able to construct a second signal, $x_2(t)$, with a desired cross-correlation, $\rho$. We constructed $x_2(t)$ in two parts: the $s_1(t)$ part which correlates with $x_1(t)$, and the $s_2(t)$ part which is uncorrelated with $x_1(t)$.

So, if we want to design a third signal, $x_3(t)$, then we will need a third uncorrelated signal, $s_3(t)$, in order to freely choose correlations with $x_1(t)$ and $x_2(t)$. More generally, we expect the $n^\textrm{th}$ signal, $x_n(t)$, to be a function of $n$ uncorrelated signals, $s_1(t), s_2(t)\dots,s_n(t)$. We can write this idea in matrix form:

$$\left[ \begin{array}{c} x_{1}(t) \\ x_{2}(t) \\ x_{3}(t) \\ \vdots \\ x_{n}(t) \end{array} \right] = \left[ \begin{array}{ccccc} a_{1,1} &  &  &  & 0 \\ a_{2,1} & a_{2,2} &  &  &  \\ a_{3,1} & a_{3,2} & \ddots  &  &  \\ \vdots  & \vdots  & \ddots  & \ddots  &  \\ a_{n,1} & a_{n,2} & \cdots  & a_{n,(n-1)} & a_{n,n} \end{array} \right] \left[ \begin{array}{c} s_{1}(t) \\ s_{2}(t) \\ s_{3}(t) \\ \vdots  \\ s_{n}(t) \end{array} \right] \tag{14}$$

or, defining a variable for each bracketed term:

$$\underline{x}(t) =\mathbf{A}\underline{s}(t)\tag{15}$$

where $\mathbf{A}$ is a lower triangular matrix.

In the previous section, we hand-solved for the $n=2$ case, but how can we generalise this for any $n$? One neat (and computationally efficient) way is to use the Cholesky decomposition of the desired theoretical covariance matrix.

To clarify, let's say we want our signals to have some $\left(N \times N \right)$ covariance matrix $\mathbf{R}_{xx}$:

$$\mathbf{R}_{xx} \triangleq \mathcal{E}\{\underline{x}(t)\underline{x}^H(t)\}\tag{16}$$

where $^H$ denotes the conjugate transpose. Then, since all covariance matrices are Hermitian and positive semidefinite, there must exist a lower triangular matrix, $\mathbf{L}$, such that:

$$\mathbf{R}_{xx} = \mathbf{L}\mathbf{L}^H\tag{17}$$

Equation 17 is called the Cholesky decomposition (see the "Limitations" section below for a discussion on practical implementations). We can rewrite this as:

$$\begin{eqnarray} \mathbf{R}_{xx} &=& \mathbf{L}\mathbf{I}\mathbf{L}^H \\ &=& \mathbf{L}\mathcal{E}\{\underline{s}(t)\underline{s}^H(t)\}\mathbf{L}^H \end{eqnarray} \tag{18}$$

where $\mathbf{I}$ is the $\left(N \times N\right)$ identity matrix, which is exactly the covariance matrix of $N$ uncorrelated, unit-power signals $\underline{s}(t)$. So, comparing this with Equation (16), we can simply choose $\mathbf{A} = \mathbf{L}$ in Equation (15), so that:

$$\underline{x}(t) =\mathbf{L}\underline{s}(t) \tag{19}$$

Or, in MATLAB, we can write:

L = chol(Rxx,'lower');
X = L*randn(N,Nsamps);
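As a quick consistency check, applying the lower Cholesky decomposition to the $2 \times 2$ covariance matrix from the previous section gives:

$$\mathbf{L} = \left[ \begin{array}{cc} \sqrt{P_1} & 0 \\ \sqrt{P_2}\rho^* & \sqrt{P_2(1-|\rho|^2)} \end{array} \right]$$

so that $\underline{x}(t) = \mathbf{L}\underline{s}(t)$ reproduces Equation (13) exactly.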

Here is a full MATLAB code example n_sources.m:

close all; clear all; clc;

% Number of samples to generate
Nsamps = 100000;

% Number of signals
n = 4;

% Set the required second order statistics
P1 = 2.3;
P2 = 0.75;
P3 = 3.4;
P4 = 1.23;
rho12 = 0.2 - 1j*0.3;
rho13 = -0.6 + 1j*0.1;
rho14 = -1j*0.4;
rho23 = 0.1 + 1j*0.1;
rho24 = 0.5;
rho34 = -0.3 - 1j*0.1;

% 4x4 (theoretical) covariance matrix
Rxx = [P1,                      sqrt(P1*P2)*rho12,       sqrt(P1*P3)*rho13,       sqrt(P1*P4)*rho14;
       sqrt(P2*P1)*conj(rho12), P2,                      sqrt(P2*P3)*rho23,       sqrt(P2*P4)*rho24;
       sqrt(P3*P1)*conj(rho13), sqrt(P3*P2)*conj(rho23), P3,                      sqrt(P3*P4)*rho34;
       sqrt(P4*P1)*conj(rho14), sqrt(P4*P2)*conj(rho24), sqrt(P4*P3)*conj(rho34), P4]

% Generate zero-mean, unit-power, uncorrelated Gaussian random signals
s = randn(n, Nsamps);

% Create partially correlated signals
L = chol(Rxx, 'lower');
X = L*s;

% 4x4 (practical) sample covariance matrix
Rxx_samp = X*X' / Nsamps

If you run this code, then please note the comments above for two_sources.m (i.e. sample statistics are random variables).

Limitations

A useful limitation of the Cholesky decomposition is that it will fail to decompose invalid covariance matrices. So, if you try to specify a covariance matrix that is not physically realisable, then the algorithm will fail to complete and (in many implementations - including MATLAB's) tell you that it failed.

A less useful limitation (but one which can be easily worked around in many practical applications) is that the most common implementations of the Cholesky decomposition will also fail to decompose a special subset of valid covariance matrices.

Namely, this happens when $\mathbf{R}_{xx}$ is positive semidefinite (as opposed to positive definite), i.e. when $\mathbf{R}_{xx}$ has one or more eigenvalues that are zero (to machine precision). In terms of signal correlations, a zero eigenvalue arises when one of the signals can be written as a linear combination of the other signals (e.g. when two signals are perfectly correlated).
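For example, two perfectly correlated unit-power signals ($\rho = 1$) produce a rank-1 covariance matrix with eigenvalues 2 and 0, which chol rejects outright:

>> L = chol([1, 1; 1, 1], 'lower')
Error using chol
Matrix must be positive definite.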

The reason this is usually of little concern in practice is that the difference between a positive definite and a positive semidefinite matrix can be merely a matter of machine precision. So, for example, the following valid covariance matrix fails to decompose using my version of MATLAB:

>> L = chol([1,0.99999999999999999;0.99999999999999999,1])
Error using chol
Matrix must be positive definite.

whereas this happily succeeds:

>> L = chol([1,0.9999999999999999;0.9999999999999999,1])
L =
    1.0000    1.0000
         0    0.0000

In many practical scenarios, we are not concerned about the difference between correlations of 0.9999999999999999 and 0.99999999999999999.

Further Reading

If you really do need to decompose positive semidefinite covariance matrices, then an excellent discussion is presented by Nicholas Higham here: [link] or [link], with MATLAB source code here [link]. I believe LAPACK's dpstrf() subroutine may also be of interest.

An interesting article on using the Cholesky decomposition for both correlating and "uncorrelating" variables: [link].

An introductory article on a few common matrix decompositions: [link].



Comment by lukegalea16, March 23, 2019

Dear Harry, whilst thanking you for your article I have a very basic question which I wanted to ask.

Could you please elaborate further about applications where two cross-correlated signals would be used?

Furthermore my understanding of cross-correlation is up to understanding that two signals have similar properties. In this example, in what way are they similar? Meaning are they similar in terms of amplitude, frequency or phase shift for example?

Pardon me if my questions are too basic, however I never got around to understanding the exact meaning of cross-correlation from formal definitions, i.e. I haven't had any practical examples with it.

Thank you.

Comment by weetabixharry, March 23, 2019

@lukegalea16 Thank you for your question - it is certainly not too basic.

I think it can be both helpful and unhelpful that so many words that we use in mathematics and engineering already had a widely-used informal meaning in English, before being assigned a specific formal meaning (or meanings) in a technical context.

"Correlation" is a good example of this. Informally, we get an idea that two correlated signals must be mutually connected ("co-related") in some way. However, as you mention, there might be countless different ways that signals could be considered "related" or "similar".

Over the decades and centuries, I am sure that numerous different mathematical techniques would have been proposed to measure or express "similarity" between signals. Some of those ideas were no good and got scrapped. Others were found to be useful and survived. Some of these surviving ideas were given the name "correlation" (they could have been called idea_931874 or badgerman, but "correlation" was easier to remember).

The specific type of correlation discussed in the post above is the Pearson product-moment correlation coefficient, which has a specific formal definition (Equation 5, above). (Other types of correlation have been defined, but this is the one that we often find most useful in signal processing).

Personally, I find that formal definitions can be difficult to relate to, without practical context. Equation 5 would have no meaning to me whatsoever if I hadn't spent some time with practical examples.

For me personally, my big breakthrough in understanding correlation was in studying the basic techniques of CDMA communications. The theoretical concepts seemed so abstract and confusing, but when I thought about what the communication system needed to do (i.e. transfer information between multiple wireless devices, transmitting on the same frequency, at the same time, with unknown propagation delays), I realised I could understand correlation with just a few lines of MATLAB code. Perhaps I should do a short blog on this.

Another thing that really helped for me was to think about correlations geometrically. Using this perspective, each signal is a line (or "vector"). The length of each line is the magnitude of each signal. The angle $\theta$ between the lines is related to their correlation $\rho$. If two signals are perfectly correlated ($\rho=1$), they're like parallel lines ($\theta=0$°). If they're uncorrelated, then they're at right angles ($\theta=90$°). If they're only partially correlated, then the angle between the lines is somewhere in between 0° and 90°. (Specifically, $\rho$ is analogous to $\cos\theta$, the normalised inner product between the vectors).
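(A tiny MATLAB illustration of this geometric analogy, using two vectors picked arbitrarily for the example:)

a = [3; 4];
b = [4; 3];
rho_ab = (a'*b) / (norm(a)*norm(b))   % = cos(theta) = 24/25 = 0.96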

To answer your questions directly:

Two correlated signals may be used either on purpose (in order to meet some design goal) or by accident (due to the signal environment). For example, we deliberately use correlated signals in beamforming. By controlling the cross-correlations between different signals being transmitted simultaneously from different points in space, we can "beam" their combined energy into specific directions (and avoid other specific directions). Accidental correlation might exist between your mobile phone signal (arriving directly from the base station) and a reflection of the same signal off your neighbour's house. This is called multipath and can cause severe degradation in signal quality if not processed correctly.

Amplitude has no effect on (this type of) correlation because this is normalised by dividing by $\sqrt{P_1P_2}$ in Equation 5. A phase shift in the signal causes exactly that phase shift in $\rho$. A frequency shift is much more important. You can make two perfectly correlated signals completely uncorrelated, just by shifting one in frequency. This has many consequences in practice.

Comment by lukegalea16, March 24, 2019

@weetabixharry, thank you so much for your informative reply! I appreciate the effort you put into it and the topic is much clearer now given your condensed and practical summary.


The way you described correlation geometrically makes sense and from what I found it is somewhat similar to the Pearson product-moment correlation coefficient. But basically there are different statistical measures which define how and by how much two signals are similar/correlated - right?


Beamforming and multipath are two interesting examples which I should look into to further understand this topic and its uses. I don't think that I'll come across correlated signals in my work/studies - but it's always better to have at least a basic understanding of certain topics and their applications :)

Comment by weetabixharry, March 24, 2019
"basically there are different statistical measures which define how and by how much two signals are similar/correlated - right?"

Yes, many different measures can be (and are) used in different contexts. For example, you can see some modified methods of computing correlations [here] and some different methods [here]. And those are just the methods called "correlation". However, by default in signal processing, we're usually interested in correlations, and in particular the type I mention in this blog post.

"I don't think that I'll come across correlated signals in my work/studies"

You could be right, but just to clarify, beamforming and multipath were just two examples off the top of my head. Correlation appears in a very large number of signal processing topics. So, if you ever do encounter it, don't be discouraged by the dry theory - just work through what it means in the context of your application.

Comment by lukegalea16, March 24, 2019

Understood :) and also, in time, I wish to get into Audio DSP, so it might be the case that I encounter correlation then. I agree as regards the theory; more often than not, the theory is easier to understand after experimenting with the practical side of things.

Comment by omersayli, April 10, 2019
Very nice topic, thanks for this nice blog
Comment by yates, May 7, 2019

Harry, thank you for a very stimulating article which I am just beginning to digest. 

I got stuck on the second sentence, where you talk about two (or more) signals and then bring up the (auto)correlation matrix $R_{xx}$, whereas I would have thought we'd be talking about $R_{xy}$ for two different signals.

Can you please explain?

--Randy

Comment by weetabixharry, May 7, 2019

Yes, if you have two signals, you might choose to denote them $x(t)$ and $y(t)$. Then you might denote their cross-correlation as $\rho_{xy}$ and their covariance matrix as $R_{xy}$.

However, if you have a third signal, you then need a new letter to denote it - say, $z(t)$. A fourth signal, you might then denote... $a(t)$? Eventually, we are going to run out of letters (or it's at least going to become a bother).

So, instead of denoting more and more scalar signals $x(t)$, $y(t)$, $z(t)$, $a(t)$ etc., we just denote one vector signal $\underline{x}(t)$. This vector has arbitrary length, $N$. Therefore, using my notation, $R_{xx}$ denotes the $(N \times N)$ covariance matrix describing the second order statistics of $N$ signals.

Comment by yates, May 10, 2019

Harry,

After reading your response I must admit I am still not clear. But I have come up with a question:


Do you consider a random VARIABLE to be the equivalent of a random SIGNAL? I do not. In most of the literature, a random signal is equivalent to a random PROCESS, and a random process is more than a random variable.

Representing a group of random VARIABLES by a vector X = (X1, X2, ..., Xn) is perfectly legitimate and I can see where this article may explain how to come up with samples from each of those random VARIABLES with a given cross-correlation. 

Is this what you mean? Or am I confused? :)

-Randy

Comment by weetabixharry, May 11, 2019

Randy,

Perhaps the title of my blog was misleading. The method is not really anything to do with randomness - it's to do with generating discrete signals with specified powers and cross-correlations. I used MATLAB's randn() function as an example for my $\underline{s}(t)$. I could also, for example, have used sine waves with frequencies $f, 2f, 3f, ..., Nf$, since these would also have been uncorrelated.

However, I must admit, I don't understand your question about the nuances of random variables, signals and processes. Perhaps you could explain a little more about what the problem could be. Is it an issue with my wording, or is it that you don't consider this to be a legitimate model for a real system? (For example, we might imagine that an ideal antenna attached to an ideal ADC in the presence of signal sources emitting these correlated signals would see linear combinations of these signals. Those linear combinations would be defined by some simple channel model.)

Perhaps a further clarification is to say that the X in my MATLAB example is not a vector. It is a matrix whose rows are the discrete signals.

--Harry
