## The Inner Product

The inner product (or "dot product", or "scalar product") is an operation on two vectors which produces a scalar. Defining an inner product for a Banach space specializes it to a Hilbert space (or "inner product space"). There are many examples of Hilbert spaces, but we will only need $\mathbb{C}^N$ for this book (complex length-$N$ vectors, and complex scalars).

The inner product between (complex) $N$-vectors $x$ and $y$ is defined by

$$\langle x, y \rangle \triangleq \sum_{n=0}^{N-1} x(n)\,\overline{y(n)}.$$

The complex conjugation of the second vector is done in order that a norm will be induced by the inner product:

$$\langle x, x \rangle = \sum_{n=0}^{N-1} x(n)\,\overline{x(n)} = \sum_{n=0}^{N-1} \left|x(n)\right|^2 \triangleq \|x\|_2^2.$$

As a result, the inner product is conjugate symmetric:

$$\langle y, x \rangle = \overline{\langle x, y \rangle}.$$

Note that the inner product takes $\mathbb{C}^N \times \mathbb{C}^N$ to $\mathbb{C}$. That is, two length-$N$ complex vectors are mapped to a complex scalar.
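As a quick numerical illustration (a minimal Python sketch; the helper `inner` and the specific vectors are our own, not from the text), the conjugation of the second argument and the resulting conjugate symmetry can be checked directly:

```python
# Inner product on C^N: <x, y> = sum_n x[n] * conj(y[n])
def inner(x, y):
    return sum(a * b.conjugate() for a, b in zip(x, y))

x = [1 + 2j, 3 - 1j]
y = [2 - 1j, 1j]

print(inner(x, y))  # → (-1+2j), a complex scalar

# Conjugate symmetry: <y, x> = conj(<x, y>)
assert inner(y, x) == inner(x, y).conjugate()
```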

### Linearity of the Inner Product

Any function $f(x)$ of a vector $x \in \mathbb{C}^N$ (which we may call an operator on $\mathbb{C}^N$) is said to be linear if for all $x \in \mathbb{C}^N$ and $y \in \mathbb{C}^N$, and for all scalars $\alpha$ and $\beta$ in $\mathbb{C}$,

$$f(\alpha x + \beta y) = \alpha f(x) + \beta f(y).$$

A linear operator thus "commutes with mixing." Linearity consists of two component properties:

• additivity: $f(x + y) = f(x) + f(y)$
• homogeneity: $f(\alpha x) = \alpha f(x)$

A function of multiple vectors, e.g., $f(x, y, z)$, can be linear or not with respect to each of its arguments.

The inner product is linear in its first argument, i.e., for all $x, y, w \in \mathbb{C}^N$, and for all $\alpha, \beta \in \mathbb{C}$,

$$\langle \alpha x + \beta y, w \rangle = \alpha \langle x, w \rangle + \beta \langle y, w \rangle.$$

This is easy to show from the definition:

$$\langle \alpha x + \beta y, w \rangle = \sum_{n=0}^{N-1} \left[\alpha x(n) + \beta y(n)\right] \overline{w(n)} = \alpha \sum_{n=0}^{N-1} x(n)\,\overline{w(n)} + \beta \sum_{n=0}^{N-1} y(n)\,\overline{w(n)} = \alpha \langle x, w \rangle + \beta \langle y, w \rangle.$$

The inner product is also additive in its second argument, i.e.,

$$\langle x, y + w \rangle = \langle x, y \rangle + \langle x, w \rangle,$$

but it is only conjugate homogeneous (or antilinear) in its second argument, since

$$\langle x, \alpha y \rangle = \overline{\alpha}\,\langle x, y \rangle \neq \alpha \langle x, y \rangle.$$

The inner product is strictly linear in its second argument with respect to real scalars $a$ and $b$:

$$\langle x, a y + b w \rangle = a \langle x, y \rangle + b \langle x, w \rangle,$$

where $a, b \in \mathbb{R}$.

Since the inner product is linear in both of its arguments for real scalars, it may be called a bilinear operator in that context.
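These properties can be verified numerically. The following Python sketch (with `inner` defined as the sum-with-conjugation above; the particular vectors and scalars are arbitrary illustrative choices) checks linearity in the first argument and conjugate homogeneity in the second:

```python
def inner(x, y):
    return sum(a * b.conjugate() for a, b in zip(x, y))

x, y, w = [1 + 1j, 2j], [3, -1j], [1j, 1 - 1j]
alpha, beta = 2 + 1j, -1j

# Linear in the first argument: <ax + by, w> = a<x,w> + b<y,w>
lhs = inner([alpha * a + beta * b for a, b in zip(x, y)], w)
rhs = alpha * inner(x, w) + beta * inner(y, w)
assert lhs == rhs

# Only conjugate homogeneous (antilinear) in the second:
# <x, a*y> = conj(a) * <x, y>
assert inner(x, [alpha * b for b in y]) == alpha.conjugate() * inner(x, y)
```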

### Norm Induced by the Inner Product

We may define a norm on $\mathbb{C}^N$ using the inner product:

$$\|x\| \triangleq \sqrt{\langle x, x \rangle}.$$

It is straightforward to show that properties 1 and 3 of a norm hold (see §5.8.2). Property 2 follows easily from the Schwarz Inequality, which is derived in the following subsection. Alternatively, we can simply observe that the inner product induces the well-known $L2$ norm on $\mathbb{C}^N$.
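A short Python sketch (helper names and test vector are our own) showing that the inner-product-induced norm agrees with the usual $L2$ norm:

```python
import math

def inner(x, y):
    return sum(a * b.conjugate() for a, b in zip(x, y))

def norm(x):
    # <x, x> is real and nonnegative, so the square root is well defined
    return math.sqrt(inner(x, x).real)

x = [3 + 4j, 0, 1j]
# Usual L2 norm: sqrt(|3+4j|^2 + |0|^2 + |1j|^2) = sqrt(25 + 0 + 1) = sqrt(26)
print(norm(x))  # → 5.0990... (= sqrt(26))
```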

### Cauchy-Schwarz Inequality

The Cauchy-Schwarz Inequality (or "Schwarz Inequality") states that for all $x \in \mathbb{C}^N$ and $y \in \mathbb{C}^N$, we have

$$\left|\langle x, y \rangle\right| \leq \|x\| \cdot \|y\|,$$

with equality if and only if $x = c y$ for some scalar $c$.

We can quickly show this for real vectors $x$, $y$, as follows: If either $x$ or $y$ is zero, the inequality holds (as equality). Assuming both are nonzero, let's scale them to unit length by defining the normalized vectors $\tilde{x} \triangleq x/\|x\|$, $\tilde{y} \triangleq y/\|y\|$, which are unit-length vectors lying on the "unit ball" in $\mathbb{R}^N$ (a hypersphere of radius $1$). We have

$$0 \leq \|\tilde{x} - \tilde{y}\|^2 = \langle \tilde{x} - \tilde{y}, \tilde{x} - \tilde{y} \rangle = \|\tilde{x}\|^2 - 2\langle \tilde{x}, \tilde{y} \rangle + \|\tilde{y}\|^2 = 2 - 2\langle \tilde{x}, \tilde{y} \rangle,$$

which implies

$$\langle \tilde{x}, \tilde{y} \rangle \leq 1,$$

or, removing the normalization,

$$\langle x, y \rangle \leq \|x\| \cdot \|y\|.$$

The same derivation holds if $y$ is replaced by $-y$, yielding

$$-\langle x, y \rangle \leq \|x\| \cdot \|y\|.$$

The last two equations imply

$$\left|\langle x, y \rangle\right| \leq \|x\| \cdot \|y\|.$$

In the complex case, let $\langle x, y \rangle = R e^{j\theta}$ (polar form), and define $\tilde{y} \triangleq e^{j\theta} y$. Then $\langle x, \tilde{y} \rangle = e^{-j\theta} \langle x, y \rangle = R$ is real and equal to $\left|\langle x, y \rangle\right|$. By the same derivation as above,

$$\langle x, \tilde{y} \rangle \leq \|x\| \cdot \|\tilde{y}\|.$$

Since $\|\tilde{y}\| = \|y\|$ and $\langle x, \tilde{y} \rangle = \left|\langle x, y \rangle\right|$, the result is established also in the complex case.
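The inequality, and the equality condition $x = c y$, can be spot-checked numerically (a Python sketch over randomly chosen complex vectors; the dimensions, seed, and tolerance are arbitrary choices):

```python
import math
import random

def inner(x, y):
    return sum(a * b.conjugate() for a, b in zip(x, y))

def norm(x):
    return math.sqrt(inner(x, x).real)

random.seed(0)
for _ in range(1000):
    x = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(5)]
    y = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(5)]
    # |<x, y>| <= ||x|| * ||y||   (small slack for floating-point rounding)
    assert abs(inner(x, y)) <= norm(x) * norm(y) + 1e-12

# Equality holds when x is a scalar multiple of y:
y = [1 + 1j, 2, -1j]
c = 3 - 2j
x = [c * b for b in y]
assert math.isclose(abs(inner(x, y)), norm(x) * norm(y))
```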

### Triangle Inequality

The triangle inequality states that the length of any side of a triangle is less than or equal to the sum of the lengths of the other two sides, with equality occurring only when the triangle degenerates to a line. In $\mathbb{C}^N$, this becomes

$$\|x + y\| \leq \|x\| + \|y\|.$$

We can show this quickly using the Schwarz Inequality:

$$\|x + y\|^2 = \langle x + y, x + y \rangle = \|x\|^2 + 2\,\mathrm{re}\{\langle x, y \rangle\} + \|y\|^2 \leq \|x\|^2 + 2\left|\langle x, y \rangle\right| + \|y\|^2 \leq \|x\|^2 + 2\|x\| \cdot \|y\| + \|y\|^2 = \left(\|x\| + \|y\|\right)^2.$$
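A one-line numerical check of the triangle inequality (Python sketch; the vectors are an arbitrary illustrative pair):

```python
import math

def inner(x, y):
    return sum(a * b.conjugate() for a, b in zip(x, y))

def norm(x):
    return math.sqrt(inner(x, x).real)

x = [1 + 2j, -3j]
y = [2 - 1j, 1 + 1j]
# ||x + y|| <= ||x|| + ||y||
assert norm([a + b for a, b in zip(x, y)]) <= norm(x) + norm(y)
```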

### Triangle Difference Inequality

A useful variation on the triangle inequality is that the length of any side of a triangle is at least the absolute difference of the lengths of the other two sides:

$$\|x + y\| \geq \left|\, \|x\| - \|y\| \,\right|.$$

Proof: By the triangle inequality,

$$\|x\| = \|(x + y) - y\| \leq \|x + y\| + \|y\| \;\Longrightarrow\; \|x + y\| \geq \|x\| - \|y\|.$$

Interchanging $x$ and $y$ establishes the absolute value on the right-hand side.

### Vector Cosine

The Cauchy-Schwarz Inequality can be written

$$\frac{\left|\langle x, y \rangle\right|}{\|x\| \cdot \|y\|} \leq 1.$$

In the case of real vectors $x, y$, we can always find a real number $\theta \in [0, \pi]$ which satisfies

$$\cos\theta \triangleq \frac{\langle x, y \rangle}{\|x\| \cdot \|y\|}.$$

We thus interpret $\theta$ as the angle between two vectors in $\mathbb{R}^N$.
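A Python sketch of the vector cosine for real vectors (no conjugation needed; the example pair is an illustrative choice whose angle is known geometrically):

```python
import math

def inner(x, y):
    # Real vectors: conjugation has no effect, so a plain sum of products suffices
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    return math.sqrt(inner(x, x))

x, y = [1.0, 0.0], [1.0, 1.0]
cos_theta = inner(x, y) / (norm(x) * norm(y))
theta = math.acos(cos_theta)
print(round(math.degrees(theta), 6))  # → 45.0, the angle between the vectors
```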

### Orthogonality

The vectors (signals) $x$ and $y$ are said to be orthogonal if $\langle x, y \rangle = 0$, denoted $x \perp y$. That is to say

$$x \perp y \;\Leftrightarrow\; \langle x, y \rangle = 0.$$

Note that if $x$ and $y$ are real and orthogonal, the cosine of the angle between them is zero. In plane geometry ($N = 2$), the angle between two perpendicular lines is $\pi/2$, and $\cos(\pi/2) = 0$, as expected. More generally, orthogonality corresponds to the fact that two vectors in $N$-space intersect at a right angle and are thus perpendicular geometrically.

Example ($N = 2$):

Let $x = [1, 1]$ and $y = [1, -1]$, as shown in Fig. 5.8.

The inner product is $\langle x, y \rangle = 1 \cdot \overline{1} + 1 \cdot \overline{(-1)} = 1 - 1 = 0$. This shows that the vectors are orthogonal. As marked in the figure, the lines intersect at a right angle and are therefore perpendicular.
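A one-line check of such a perpendicular pair in Python (the vectors $x = [1, 1]$, $y = [1, -1]$ are an illustrative choice):

```python
x, y = [1, 1], [1, -1]
# Real vectors, so the inner product is a plain sum of products
print(sum(a * b for a, b in zip(x, y)))  # → 0, hence x ⟂ y
```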

### The Pythagorean Theorem in N-Space

In 2D, the Pythagorean Theorem says that when $x$ and $y$ are orthogonal, as in Fig. 5.8 (i.e., when the vectors $x$ and $y$ intersect at a right angle), then we have

$$\|x + y\|^2 = \|x\|^2 + \|y\|^2. \qquad (5.1)$$

This relationship generalizes to $N$ dimensions, as we can easily show:

$$\|x + y\|^2 = \langle x + y, x + y \rangle = \|x\|^2 + 2\,\mathrm{re}\{\langle x, y \rangle\} + \|y\|^2. \qquad (5.2)$$

If $x \perp y$, then $\langle x, y \rangle = 0$ and Eq. (5.1) holds in $N$ dimensions.

Note that the converse is not true in $\mathbb{C}^N$. That is, $\|x + y\|^2 = \|x\|^2 + \|y\|^2$ does not imply $x \perp y$ in $\mathbb{C}^N$. For a counterexample, consider $x = (1, 1)$, $y = (j, j)$, in which case

$$\|x + y\|^2 = 2\left|1 + j\right|^2 = 4 = \|x\|^2 + \|y\|^2,$$

while $\langle x, y \rangle = -2j \neq 0$.

For real vectors $x, y$, the Pythagorean theorem Eq. (5.1) holds if and only if the vectors are orthogonal. To see this, note from Eq. (5.2) that, when the Pythagorean theorem holds, either $x$ or $y$ is zero, or $\langle x, y \rangle$ is zero or purely imaginary, by property 1 of norms (see §5.8.2). Since the inner product of real vectors cannot be imaginary, it must be zero.

Note that we also have an alternate version of the Pythagorean theorem:

$$x \perp y \;\Longrightarrow\; \|x - y\|^2 = \|x\|^2 + \|y\|^2.$$
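Both versions of the theorem, and the failure of the converse in the complex case, can be verified numerically (a Python sketch; the counterexample vectors here are one illustrative choice with a purely imaginary, nonzero inner product):

```python
def inner(x, y):
    return sum(a * b.conjugate() for a, b in zip(x, y))

def norm2(x):
    # Squared norm ||x||^2 = <x, x>, which is real
    return inner(x, x).real

# Orthogonal real vectors: both Pythagorean forms hold
x, y = [1, 1], [1, -1]
assert inner(x, y) == 0
assert norm2([a + b for a, b in zip(x, y)]) == norm2(x) + norm2(y)
assert norm2([a - b for a, b in zip(x, y)]) == norm2(x) + norm2(y)

# Complex case: the Pythagorean identity holds WITHOUT orthogonality
x, y = [1, 1], [1j, 1j]
assert norm2([a + b for a, b in zip(x, y)]) == norm2(x) + norm2(y)
assert inner(x, y) == -2j  # purely imaginary and nonzero, so x is not ⟂ y
```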

### Projection

The orthogonal projection (or simply "projection") of $y$ onto $x$ is defined by

$$\mathbb{P}_x(y) \triangleq \frac{\langle y, x \rangle}{\|x\|^2}\, x.$$

The complex scalar $\frac{\langle y, x \rangle}{\|x\|^2}$ is called the coefficient of projection. When projecting $y$ onto a unit-length vector $x$, the coefficient of projection is simply the inner product of $y$ with $x$.

Motivation: The basic idea of orthogonal projection of $y$ onto $x$ is to "drop a perpendicular" from $y$ onto $x$ to define a new vector along $x$ which we call the "projection" of $y$ onto $x$. This is illustrated for $N = 2$ in Fig. 5.9 for $x = [4, 1]$ and $y = [2, 3]$, in which case

$$\mathbb{P}_x(y) = \frac{\langle y, x \rangle}{\|x\|^2}\, x = \frac{11}{17}\,[4, 1].$$

Derivation: (1) Since any projection onto $x$ must lie along the line collinear with $x$, write the projection as $\mathbb{P}_x(y) = \alpha x$, where $\alpha$ is some scalar. (2) Since by definition the projection error $y - \alpha x$ is orthogonal to $x$, we must have

$$\langle y - \alpha x, x \rangle = 0 \;\Leftrightarrow\; \langle y, x \rangle = \alpha\,\|x\|^2 \;\Leftrightarrow\; \alpha = \frac{\langle y, x \rangle}{\|x\|^2}.$$

Thus,

$$\mathbb{P}_x(y) = \frac{\langle y, x \rangle}{\|x\|^2}\, x.$$

See §I.3.3 for illustration of orthogonal projection in matlab.
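The derivation translates directly into code. Here is a Python sketch (the helper names and the example vectors are our own illustrative choices) confirming that the projection error is orthogonal to $x$:

```python
def inner(x, y):
    return sum(a * b.conjugate() for a, b in zip(x, y))

def project(y, x):
    # P_x(y) = (<y, x> / ||x||^2) * x
    coeff = inner(y, x) / inner(x, x).real
    return [coeff * a for a in x]

x, y = [4.0, 1.0], [2.0, 3.0]
p = project(y, x)                          # (11/17) * [4, 1]
err = [a - b for a, b in zip(y, p)]        # projection error y - P_x(y)
print(abs(inner(err, x)))                  # ~0 (up to rounding): err ⟂ x
```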
