DSPRelated.com
Forums

Non-orthogonal basis and redundancy

Started by brettcooper July 25, 2012
Hello Forum,

I am clear on what an orthogonal or orthonormal basis is and how it is
useful for representing a certain function. Some orthogonal bases are
better than others: it takes fewer basis functions to approximate the
function well....

A non-orthogonal basis is said to cause redundancy... in what
sense? I know that in a non-orthogonal basis the basis functions are
correlated instead of uncorrelated....

For instance, using an orthogonal basis, we get coefficients by performing
inner products. Same with the non-orthogonal basis.....

where is the redundancy?

thanks
Brett
brettcooper <61104@dsprelated> wrote:

> I am clear on what an orthogonal or orthonormal basis is and how it is
> useful for representing a certain function. Some orthogonal bases are
> better than others: it takes fewer basis functions to approximate the
> function well....
Easier to think of in terms of orthogonal or non-orthogonal vectors. In
describing map positions, we usually use orthogonal coordinates such as
latitude and longitude, or x and y. In that case, each point on a map has a
unique coordinate. If you use instead (x+y) and (x-y) as basis vectors (I
don't know how to put hats over those, but the idea is there), you have a
different orthogonal basis. If you use (x) and (x+y) as basis vectors, you
can still uniquely identify any point, but it takes a little more work.
> A non-orthogonal basis is said to cause redundancy... in what
> sense? I know that in a non-orthogonal basis the basis functions are
> correlated instead of uncorrelated....
I think redundancy isn't the right word. In an (x,y) coordinate system,
moving along x doesn't change y, and moving along y doesn't change x. In an
(x,x+y) system, moving along x does change (x+y), and moving along (x+y)
does change x. Now, you could have a redundant coordinate system, such as
three coordinates in a two-dimensional space, say (x, x+y, x-y). But that
means that the coordinates are not independent.

Now, instead of points, consider describing a line. In an orthogonal
coordinate system, one can describe a line as ax+by+c=0. Note that there is
redundancy here, as (2a)x+(2b)y+(2c)=0 represents the same line. We can
easily remove that redundancy by adding one restriction on (a,b,c). In the
case of non-orthogonal coordinates (basis vectors), you can still remove
the redundancy, but it isn't quite as easy.
> For instance, using an orthogonal basis, we get coefficients by
> performing inner products. Same with the non-orthogonal basis.....
With an orthogonal basis, the inner (dot) product with each basis vector
gives you the component along that vector, and you sum up the components.
For a non-orthogonal basis, the inner (dot) product between basis functions
(vectors) isn't zero, so you can't just sum them up.
> where is the redundancy?
-- glen
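[Editor's note: Glen's point can be checked numerically. A minimal sketch,
assuming NumPy; the vector f and the basis choices (x and a normalized x+y)
are made up for illustration. Summing the dot-product projections
reconstructs f for an orthonormal basis, but not for a non-orthogonal one.]

```python
import numpy as np

f = np.array([3.0, 1.0])          # an arbitrary vector to represent

# Orthonormal basis (x and y): summing the dot-product projections
# reconstructs f exactly.
e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])
recon = (f @ e1) * e1 + (f @ e2) * e2
print(np.allclose(recon, f))      # True

# Non-orthogonal basis (x and x+y, each of unit length): the same
# recipe no longer reconstructs f.
a = np.array([1.0, 0.0])
b = np.array([1.0, 1.0]) / np.sqrt(2.0)
recon = (f @ a) * a + (f @ b) * b
print(np.allclose(recon, f))      # False
```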
On 25.07.2012 at 21:53, brettcooper wrote:
> Hello Forum,
>
> I am clear on what an orthogonal or orthonormal basis is and how it is
> useful for representing a certain function. Some orthogonal bases are
> better than others: it takes fewer basis functions to approximate the
> function well....
That depends on the signal to approximate and not on the basis.
> A non-orthogonal basis is said to cause redundancy... in what
> sense?
At least not in the sense of linear algebra: for a basis of an
N-dimensional space, you need N coefficients regardless of whether the
basis is orthogonal or not.

The question is rather the following: given a high-dimensional space,
given the mean square error as the error measure, and given that we want
to approximate a signal as well as possible with as few basis functions as
possible, then it is not hard to see that for this particular goal the
basis should be orthogonal. That is, a necessary condition for making the
error when approximating with n < N basis functions as small as possible
is that the error signal (from the missing basis functions) is orthogonal
to the reconstructed signal. This, plus a little abstract nonsense, will
give you the orthogonality of the basis.

> I know that in a non-orthogonal basis the basis functions are
> correlated instead of uncorrelated....
>
> For instance, using an orthogonal basis, we get coefficients by performing
> inner products. Same with the non-orthogonal basis.....
This is certainly not true. You get coefficients by the scalar product for
an orthonormal basis only. Otherwise, you need to invert the filter matrix
- or multiply by the dual basis (which is exactly the same thing said in
other words).

So long,
  Thomas
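[Editor's note: Thomas's prescription can be sketched in a few lines.
NumPy assumed; the 2-D basis (x, x+y) is made up for illustration. Plain
scalar products give the wrong coefficients for a non-orthogonal basis,
while inverting the basis matrix - equivalently, taking scalar products
with the dual basis - gives the right ones.]

```python
import numpy as np

# Columns of M are the non-orthogonal basis vectors x and x+y.
M = np.array([[1.0, 1.0],
              [0.0, 1.0]])
f = np.array([3.0, 1.0])

# Naive scalar products with the basis vectors do NOT give
# coefficients that reconstruct f.
naive = M.T @ f
print(np.allclose(M @ naive, f))          # False

# Inverting the basis matrix does: f = M @ coeffs.
coeffs = np.linalg.solve(M, f)
print(np.allclose(M @ coeffs, f))         # True

# Equivalently, the rows of M^{-1} are the dual basis vectors;
# scalar products with them give the same coefficients.
dual = np.linalg.inv(M)
print(np.allclose(dual @ f, coeffs))      # True
```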
On Thu, 26 Jul 2012 08:37:43 +0200, Thomas Richter
<thor@math.tu-berlin.de> wrote:

>On 25.07.2012 at 21:53, brettcooper wrote:
>> [...]
>
>That depends on the signal to approximate and not on the basis.
>
>> A non-orthogonal basis is said to cause redundancy... in what
>> sense?
>
>At least not in the sense of linear algebra: for a basis of an
>N-dimensional space, you need N coefficients regardless of whether the
>basis is orthogonal or not.
>
>The question is rather the following: given a high-dimensional space,
>given the mean square error as the error measure, and given that we want
>to approximate a signal as well as possible with as few basis functions
>as possible, then it is not hard to see that for this particular goal
>the basis should be orthogonal. That is, a necessary condition for
>making the error when approximating with n < N basis functions as small
>as possible is that the error signal (from the missing basis functions)
>is orthogonal to the reconstructed signal. This, plus a little abstract
>nonsense, will give you the orthogonality of the basis.
I suspect that this is the root of the use of the word "redundancy" for
non-orthogonal basis functions. Since, as you mention, orthogonality is
required to minimize the number of basis functions, non-orthogonal basis
functions will generally require more information, e.g., in the form of
more basis functions, which implies redundancy in comparison to an
orthogonal set. When the bases aren't orthogonal, there is information
spread across the coefficients of multiple basis functions that is not
unique to the individual coefficients, so it could be said to be
represented redundantly across the multiple coefficients.

An easy example would be to take the axes of a Cartesian coordinate system
and rotate one so that they're not orthogonal any more. You still have two
coefficients to uniquely represent a point in the plane, but the
information in the two coefficients is no longer independent (or
orthogonal, if you want to say it that way). I can easily see the word
"redundant" being applied to that case.
>> I know that in a non-orthogonal basis the basis functions are
>> correlated instead of uncorrelated....
>>
>> For instance, using an orthogonal basis, we get coefficients by performing
>> inner products. Same with the non-orthogonal basis.....
>
>This is certainly not true. You get coefficients by the scalar product
>for an orthonormal basis only. Otherwise, you need to invert the filter
>matrix - or multiply by the dual basis (which is exactly the same thing
>said in other words).
>
>So long,
>  Thomas
Eric Jacobsen
Anchor Hill Communications
www.anchorhill.com
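[Editor's note: Eric's rotated-axis picture can be made quantitative with a
small experiment. NumPy assumed; the 30-degree tilt and the Gaussian point
cloud are arbitrary choices. Expand isotropic points in an oblique basis
and the two expansion coefficients become statistically correlated, i.e.
they share information.]

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.standard_normal((10000, 2))    # isotropic cloud of points

# Oblique basis: keep the x axis, tilt the second axis 30 degrees
# toward x (so the basis vectors are 60 degrees apart).
a = np.array([1.0, 0.0])
b = np.array([np.sin(np.pi / 6), np.cos(np.pi / 6)])
M = np.column_stack([a, b])

# Expansion coefficients of every point in the oblique basis.
coeffs = np.linalg.solve(M, pts.T)

# The two coefficients are correlated (theory says -0.5 for this
# tilt); in the original orthogonal basis the correlation is ~0.
print(np.corrcoef(coeffs)[0, 1])
```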
If we use a non-orthogonal basis, the scalar products between the signal f
and the basis functions give us coefficients that do not allow us to
reconstruct the original signal f.

For example, in 2D, we have two non-orthogonal basis vectors, A and B. The
dot product between A and f and B and f gives two coefficients, a and b.

But f is not equal to a*A+b*B

Graphically, the only way to reconstruct f using A and B is to make the
projection on A and B not perpendicular but parallel to the other basis
vector. Only in that case f=a*A+b*B

But why would that involve both basis vectors at once instead of one at a
time?

thanks
Brett


(Also, this "redundancy" could be good: if every coefficient contains a
little bit of information about the others, then if we lose some
information - some coefficients - we can resort to the remaining ones and
maybe retrieve the lost information.)


On Sunday, July 29, 2012 5:39:29 PM UTC-5, brettcooper wrote:

> For example, in 2D, we have two non-orthogonal basis vectors, A and B. The
> dot product between A and f and B and f gives two coefficients, a and b.
>
> But f is not equal to a*A+b*B
>
> Graphically, the only way to reconstruct f using A and B is to make the
> projection on A and B not perpendicular but parallel to the other basis
> vector. Only in that case f=a*A+b*B
>
> But why would that involve both basis vectors at once instead of one at a
> time?
You cannot project onto A unless you know the direction of B, and so both
vectors are involved. If A and B were orthogonal, then you could project
onto A without knowing B.

For example, in ordinary three-dimensional space, if A is the x axis and B
is orthogonal to A, then B could be **any** vector in the plane
perpendicular to A, and you don't need to know which of these possible
vectors is B; the projection on A is the same. Not so if B is not
orthogonal to A; then you need to know B in order to project onto A in the
direction parallel to B.

Dilip Sarwate
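[Editor's note: a numerical illustration of this point, NumPy assumed and
the vectors made up. The coefficient of f along A, obtained by projecting
parallel to B, changes when only B changes; with B orthogonal to A it
collapses to the plain dot product.]

```python
import numpy as np

def coeff_along_a(f, a, b):
    # Solve f = c_a * a + c_b * b and return c_a; this is the
    # oblique projection onto a, taken parallel to b.
    M = np.column_stack([a, b])
    return np.linalg.solve(M, f)[0]

f = np.array([3.0, 1.0])
a = np.array([1.0, 0.0])

# Changing only the second basis vector changes f's coefficient
# along a, even though a and f are unchanged.
print(coeff_along_a(f, a, np.array([1.0, 1.0])))    # 2.0
print(coeff_along_a(f, a, np.array([-1.0, 2.0])))   # 3.5

# If b is orthogonal to a, the coefficient is simply the dot
# product f . a, whichever orthogonal b we pick.
print(coeff_along_a(f, a, np.array([0.0, 1.0])), f @ a)   # 3.0 3.0
```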
Hello,

Maybe "redundancy" can be explained in the sense that you can reduce the
basis functions to an orthogonal set, for example via Gram-Schmidt. You'll
end up with fewer orthogonal basis functions => the redundancy in your
basis function set has been removed.

This might be of interest, not sure if it tells you anything new, though:
http://www.mathworks.se/help/techdoc/math/f4-2224.html#f4-2282

The relevant part is 
"However, if A does not have full rank, the solution to the least-squares
problem is not unique. There are many vectors x that minimize norm(A*x -b)"
...
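[Editor's note: the rank-deficient case in the MathWorks quote is easy to
reproduce. NumPy assumed; the redundant set (x, x+y, x-y) in 2-D is
borrowed from Glen's earlier post. The matrix rank reveals the true
dimension, QR performs the Gram-Schmidt reduction, and any null-space
direction added to a least-squares solution fits equally well.]

```python
import numpy as np

# Three vectors in a 2-D space (x, x+y, x-y) as columns: redundant.
A = np.array([[1.0, 1.0,  1.0],
              [0.0, 1.0, -1.0]])
print(np.linalg.matrix_rank(A))           # 2: only two independent directions

# QR factorization is Gram-Schmidt in disguise: Q holds an
# orthonormal basis for the span, with the redundancy removed.
Q, _ = np.linalg.qr(A)
print(Q.shape)                            # (2, 2)
print(np.allclose(Q.T @ Q, np.eye(2)))    # True

# Because A does not have full column rank, the least-squares
# coefficients are not unique: any null-space direction can be
# added without changing the fit at all.
b = np.array([3.0, 1.0])
x, *_ = np.linalg.lstsq(A, b, rcond=None)
null = np.array([2.0, -1.0, -1.0])        # A @ null == 0
print(np.allclose(A @ (x + null), A @ x)) # True
```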


mnentwig <24789@dsprelated> wrote:
> Hello,
>
> Maybe "redundancy" can be explained in the sense that you can reduce the
> basis functions to an orthogonal set, for example via Gram-Schmidt. You'll
> end up with fewer orthogonal basis functions => the redundancy in your
> basis function set has been removed.
>
> This might be of interest, not sure if it tells you anything new, though:
> http://www.mathworks.se/help/techdoc/math/f4-2224.html#f4-2282
>
> The relevant part is
> "However, if A does not have full rank, the solution to the least-squares
> problem is not unique. There are many vectors x that minimize norm(A*x -b)"
> ...
... then the columns of A[n,n] don't span the n-(sub)space.
A basis is simply a basis - a minimum-cardinality set of vectors spanning
the given subspace ;)
brettcooper <61104@dsprelated> wrote:
> Some orthogonal bases are
> better than others: it takes fewer basis functions to approximate the
> function well....
It's sometimes called a minimum entropy representation/coding.
> A non-orthogonal basis is said to cause redundancy... in what
> sense? I know that in a non-orthogonal basis the basis functions are
> correlated instead of uncorrelated....
>
> For instance, using an orthogonal basis, we get coefficients by performing
> inner products. Same with the non-orthogonal basis.....
>
> where is the redundancy?
A non-orthogonal set is redundant only if its cardinality is greater than
the dimension of the subspace spanned by the given set.
>... then the columns of A[n,n] don't span the n-(sub)space.
>A basis is simply a basis - a minimum-cardinality set of vectors spanning
>the given subspace ;)

You're right - I read the question wrong. Redundancy indeed, but not the
one that was asked for :)