in article 1112443135.716309.30550@f14g2000cwb.googlegroups.com, Andor at
an2or@mailcircuit.com wrote on 04/02/2005 06:58:
> robert bristow-johnson wrote:
...
>> i am modifying some of your notation to keep it compatible to what you
>> normally see in common DSP texts. one major difference is the numbering of
>> indices, and i really think that the MATLAB/Fortran convention of numbering
>> arrays (or matrices for that matter) is not of equivalent elegance to what is
>> normally done in DSP texts and i do not want to change that notation, so i'm
>> changing yours so we can have compatible semantics. sorry, Andor, if you
>> don't like it, but i really think we need more common elements in our
>> notation and i refuse to yield to the MATLAB convention.
>
> Robert, allow me to say that sometimes you are pretty hard headed :-).
no! what would ever give you that impression? it can't be my political POV.
i'm such a softie!
:^/ (that's my nose getting longer.)
> If you wish, we can use zero-based C-style array indexing.
otherwise we get this ugly exp(j 2 pi (n-1)(k-1)/N) kinda crap. you can
call it "C-style", i might call it "Dijkstra-style" indexing (check out
http://www.cs.utexas.edu/users/EWD/ewd08xx/EWD831.PDF ). i really think
that this indexing issue transcends "C" or whatever programming language.
> I just wanted to make the point that two different bases (plural of
> basis) are important.
i got that.
> Let's use {t[0], ..., t[N-1]} (t for time) as the names for the
> standard basis set.
>
>
>> are not the t[n] in this basis set simply equal to (... 0, 1, 0, 0, ...)?
>
> Well, if x is a vector, then (x[0], ... , x[N-1]) is just shorthand for
>
> x = sum_{n=0}^{N-1} x[n] t[n].
but, for the "time domain" basis set, those t[n] basis vectors are column
vectors (... 0, 1, 0, ...) where the "1" is in the nth row. right? (it has
to be, or then i really dunno what the hell you mean.)
> If we use another basis set, for example {f[0], ... , f[N-1]}, then we
> can also write X = (X[0] , ... , X[N-1]) for
>
> X = sum_{k=0}^{N-1} X[k] f[k],
>
> in which case (.., 0, 1, 0, ..) means something different than in the
> first.
no, no,... (.., 0, 1, 0, ..) are NOT the basis vectors for the "frequency
domain" basis. those are
f[k] = (... , exp(j 2 pi n k/N), exp(j 2 pi (n+1) k/N), ...)
where n is the row number (using Dijkstra indexing) of the column vector.
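btw, it's easy to verify numerically that each of those f[k] columns is an
eigenvector of the cyclic left shift (a quick numpy sketch; numpy and the
variable names are my own, of course):

```python
import numpy as np

N = 4
n = np.arange(N)
w = np.exp(-2j * np.pi / N)

for k in range(N):
    # kth "frequency-domain" basis vector (unnormalized)
    f_k = np.exp(2j * np.pi * n * k / N)
    # cyclic left shift of f[k] is just f[k] scaled by w^(-k)
    assert np.allclose(np.roll(f_k, -1), w ** (-k) * f_k)
```

(that's exactly why F S F* comes out diagonal: the f[k] are the eigenvectors
of S.)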
> Let's just call {t[0], ... , t[N-1]} the standard basis.
i would call it the "time-domain basis".
>>> x = (x[0], ... , x[N-1]) a vector from _X_, F the DFT operator,
>>
>> which is a square matrix with elements of exp(-j 2 pi n k/N) (where n & k
>> are the column & row indices), right? i might reserve the symbol "X" for
>>
>> X = F x
>
> Well, for F to be unitary we need to define it with a 1/sqrt(N)
> factor, but otherwise you are correct, the entries in the matrix are
> 1/sqrt(N) exp(-j 2 pi n k/N).
yeah, you're right, but i was trying to fudge what you were saying to make
it most consistent with the DSP lit. it's just a scaling issue (and the
loss of the semantic "unitary"). i really like that unitary transform
property for the continuous Fourier Transform, not so much for the DFT
(except maybe for implementation on a fixed-point machine). i know it means
that repeatedly applying F, F, F, and F again makes your data set grow in
magnitude by a factor of sqrt(N) each time, if you don't put the 1/sqrt(N)
in there.
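to put a number on that growth (numpy sketch, with the unnormalized F; the
variable names are mine):

```python
import numpy as np

N = 8
n = np.arange(N)
# unnormalized DFT matrix: no 1/sqrt(N) in front
F = np.exp(-2j * np.pi * np.outer(n, n) / N)

x = np.random.default_rng(0).standard_normal(N)

# each application of F scales the 2-norm by exactly sqrt(N)
assert np.isclose(np.linalg.norm(F @ x), np.sqrt(N) * np.linalg.norm(x))

# and since the unitary DFT is a fourth root of the identity,
# four applications of the unnormalized F give back N^2 times x
assert np.allclose(F @ F @ F @ F @ x, N**2 * x)
```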
>>
>>> and S the cyclic left-shift operator, i.e.
>>>
>>> S x = (x[1], x[2], .., x[N-1], x[0]).
>>
>>
>> fine, this is an invertible (is the linear algebra term "complete"?)
>> operator unlike the non-cyclic left-shift linear operator:
>>
>> T x = (x[1], x[2], .., x[N-1], 0)
>>
>> which destroys information.
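(quick numerical confirmation that T really does destroy information -- a
numpy sketch, my own notation:)

```python
import numpy as np

N = 4
# non-cyclic left shift: ones on the first superdiagonal
T = np.diag(np.ones(N - 1), 1)

x = np.array([1.0, 2.0, 3.0, 4.0])
print(T @ x)                        # [2. 3. 4. 0.] -- x[0] is gone for good

# rank N-1 < N: T is singular, so there is no inverse shift
assert np.linalg.matrix_rank(T) == N - 1
```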
>>
>>
>>> Then the operator F S F* (where * denotes the adjoint of F)
>>
>> F* is a square matrix with elements of 1/N * exp(+j 2 pi n k/N)
>> (where n & k are the row & column indices), right?
>
> Again, you need 1/sqrt(N) for unitarity.
yeah, well.
>>
>>> is in diagonal form
>>
>> but not on the main diagonal, right?
>
> YES, in the main diagonal! That's the point! See below:
no, it ain't! (perhaps i misunderstand what is meant by "in diagonal
form".) doesn't that require being symmetric about the main diagonal? or
is it just that each element in every diagonal is equal?
/ 0 1 0 0 \
| 0 0 1 0 |
S = | 0 0 0 1 |
\ 1 0 0 0 /
is not symmetric about the main diagonal. maybe i misunderestimate
(borrowed term from our illustrious Idiot-in-Chief) the meaning of "diagonal
form".
> The point really was the equation above that, which is the shift
> theorem. Let's look at an example with N=4.
> I'll use w := e^(-j 2 pi/N) to facilitate notation. Then
>
> / 1 1 1 1 \
> | 1 w w^2 w^3 |
> F = 1/2| 1 w^2 w^4 w^6 |,
> \ 1 w^3 w^6 w^9 /
>
> (note that eg w^4 = 1 etc.),
>
> / 1 1 1 1 \
> | 1 w^-1 w^-2 w^-3 |
> F* = 1/2| 1 w^-2 w^-4 w^-6 |,
> \ 1 w^-3 w^-6 w^-9 /
>
fine.
> and
>
> / 0 1 0 0 \
> | 0 0 1 0 |
> S = 1/2| 0 0 0 1 |.
> \ 1 0 0 0 /
are you sure there is a 1/sqrt(N) on this? (you're the expert, but i have
my doubts about this statement.)
> Then a simple computation shows that
>
> / 1 0 0 0 \
> | 0 w^-1 0 0 |
> F S F* = | 0 0 w^-2 0 | =: d_S,
> \ 0 0 0 w^-3 /
>
> that is the diagonal matrix with the eigenvalues of S on its diagonal.
> Multiplying each side k times by itself yields
>
> (F S F*)(F S F*)...(F S F*) = F S^k F* = d_S^k
>
> (where S^k is the linear operator that effects k cyclic rotations to
> the left), and again by multiplying from the right with F we get,
>
> F S^k = d_S^k F (Eq. 1b).
>
> If you set X = F x, for any x in _X_, then the left hand side of (Eq.
> 1b) is
>
> F S^k x = F S^k (x[0],...,x[N-1]) = F (x[k], x[k+1], ..., x[k-1]), (Eq. 2)
>
> that is we take the DFT of the cyclically rotated vector
> (x[k], x[k+1], ..., x[k-1]). The right hand side of (Eq. 1b) gives
>
> d_S^k F x = d_S^k X (Eq. 3).
>
> Comparing components of (Eq.2) and (Eq.3) gives
>
> (F (x[k], x[k+1], ... , x[k-1]))[m] = w^(-k m) X[m], for 0 <= m <= N-1,
>
> which is the shift theorem as was stated originally. However, I prefer
> the much more compact notation in (Eq. 1b).
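for anyone who wants to check the whole chain numerically, here's a little
numpy sketch of the diagonalization and the shift theorem (numpy and the
variable names are mine, the math is Andor's):

```python
import numpy as np

N = 4
n = np.arange(N)
w = np.exp(-2j * np.pi / N)

F = w ** np.outer(n, n) / np.sqrt(N)      # unitary DFT matrix
S = np.roll(np.eye(N), -1, axis=0)        # cyclic left shift: S @ x == np.roll(x, -1)

# F S F* is diagonal, with the eigenvalues 1, w^-1, w^-2, w^-3 of S
d_S = F @ S @ F.conj().T
assert np.allclose(d_S, np.diag(w ** (-n.astype(float))))

# shift theorem: the DFT of the k-rotated vector is w^(-k m) X[m]
x = np.random.default_rng(0).standard_normal(N)
X = F @ x
k = 2
assert np.allclose(F @ np.roll(x, -k), w ** (-k * n.astype(float)) * X)
```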
>
>
>>> Nowhere have I used modular addressing or periodic sequences or
>>> anything like it.
>>
>> that, i disagree with. all of the linear algebra math is fine, but to stand
>> back and look at the whole thing and say "Nowhere ... anything like"
>> circular indexing or periodic extension, i really disagree. (but
>> this is a disagreement of observation, i.e. what we, two different
>> observers, say when we both observe the same phenomenon).
>>
>> on one hand, this operator S, clearly does a circular shift. it is as if
>> your vectors were not a linear string of numbers (with two ends), but that
>> string where the back end of the vector was bent around and attached to the
>> front end (like a strip of paper). the boundary between x[N-1] and x[0] is
>> qualitatively no different than the boundary between x[0] and x[1] or
>> between any other x[n] and x[n+1]. that is, as i observe it, a periodic
>> extension of x and all of the vectors in the _X_ space have that property.
>
> But we did not need to introduce periodic sequences, we could state
> everything using just the linear operator S.
sure, you wouldn't even have to go through all that rigmarole since it is
obvious that
/ x[1] \ / 0 1 0 ... 0 \ / x[0] \
| x[2] | | 0 0 1 ... 0 | | x[1] |
| x[3] | = | 0 0 0 ... 0 | | x[2] |
| ... | | .. .. .. .. | | ... |
| x[N-1]| | 0 0 0 ... 1 | | x[N-2]|
\ x[0] / \ 1 0 0 ... 0 / \ x[N-1]/
and alls i'm saying is that this (along with the frequency-domain
counterpart) wouldn't be happening if it weren't true that:
>> ... this would not be the case if the elements of F and F*
>> and d_S were not special (i wouldn't claim "unique", just "special"). they
>> have to be these particular periodic functions as i have emphasized above.
>
> Yes, of course.
...
> You can view it that way if you like, it is consistent. However, there
> is no need to, as all the properties of F and S are just special cases,
> and the theory of linear algebra completely explains what is going on
> and why F and S have these properties.
but the point i'm making, about why in this special case of the DFT there
*is* this inherent circularity or periodic extension, is that there are all
sorts of other linear operators (on the same hilbert space) that have no
such property.
suppose we have the same hilbert space _X_ and the same "time-domain" basis,
{t[0], ... , t[N-1]} where
t[n] = ( ..., 0, 1, 0, ...) (with the "1" in the nth element)
x = x[0]*t[0] + x[1]*t[1] + ... + x[N-1]*t[N-1]
and some other "polynomial-domain" basis,
{p[0], ... , p[N-1]} where
p[k] = (..., n^k, (n+1)^k, ...) (where n^k is in the nth element)
x = X[0]*p[0] + X[1]*p[1] + ... + X[N-1]*p[N-1]
there is a linear mapping, P, that will map
x = (x[0], x[1], ... x[N-1])
to
X = (X[0], X[1], ... X[N-1])
where
       N-1
x[n] = SUM{ n^k X[k] }
       k=0
or
x = P* X
we know that the P matrix is the inverse of P* and the elements of P* are
obvious from above.
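concretely, P* is just a Vandermonde matrix, and the round trip is easy to
check (a numpy sketch; the names are mine):

```python
import numpy as np

N = 4
n = np.arange(N, dtype=float)
k = np.arange(N, dtype=float)

Pstar = n[:, None] ** k[None, :]    # Pstar[n, k] = n^k  (taking 0^0 = 1)
P = np.linalg.inv(Pstar)            # P recovers the polynomial coefficients

X = np.array([1.0, -2.0, 0.5, 3.0])     # some "polynomial-domain" coefficients
x = Pstar @ X                            # x[n] = sum_k n^k X[k]
assert np.allclose(P @ x, X)
```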
isn't that sorta similar to your linear algebra expression of the DFT,
except for at least two big differences? one is that the
"polynomial-domain" basis, p[k], and the "frequency-domain" basis, f[k], are
clearly different, and the other is that although P is linear (so all these
nice Hilbert space properties apply), there is no equivalent *circular*
"shift theorem" to this linear operator. (there might be some
non-invertible linear shift theorem that corresponds to multiplying by powers
of n, but i wouldn't know what to do with any non-zero x[n] that falls off
the edge in the linear shift.)
not all linear operators periodically extend the data passed to it, but the
DFT does (because of "special" properties it has due to the periodic nature
of the elements of F or the periodic nature of the basis f).
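and just to drive that home: conjugating the cyclic shift S by this P gives
nothing like a diagonal matrix, unlike the DFT case (numpy sketch, my
notation):

```python
import numpy as np

N = 4
n = np.arange(N, dtype=float)
Pstar = n[:, None] ** np.arange(N, dtype=float)[None, :]   # Vandermonde P*
P = np.linalg.inv(Pstar)
S = np.roll(np.eye(N), -1, axis=0)                         # cyclic left shift

M = P @ S @ Pstar
off = M - np.diag(np.diag(M))
# clearly nonzero off-diagonal entries: no circular "shift theorem" here
assert np.max(np.abs(off)) > 1e-6
```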
...
> I wouldn't say stricken, but perhaps biased. I prefer the view that
> gives me a better outlook on my surroundings. I think you do too, it is
> just that your surroundings are different than mine :-).
i really agree. the "surroundings" i behold are taking a real-time stream
of data (audio, in my case, but that should not matter) or a really long
(sound)file and processing it by sections. i think that this is very, very
common in the actual application of the DFT.
now, when we yank a section of data out of that stream, it is just a finite
block (or "vector" if you prefer) of data. it might be appropriate
(depending on what next we might do to the data) to consider that "yanking"
operation to be windowing (even if it's just a rectangular window) and there
are some effects due to that windowing. fine. no periodic extension yet.
this block of data has two ends to it, a front end and a back end.
then, suppose you pass this finite chunk (or "vector") of data to a DFT
operation. BINGO! at that point, you better understand that your string of
numbers (or "vector") has been rolled or bent around where the back end was
just glued to the front end and there is NO qualitative difference between
what connects x[N-1] to x[0] and what connects x[0] to x[1] or any other
x[n] to x[n+1]. it's still describable as a finite set of numbers but it is
equivalent in every respect to periodically extending that finite set in
both directions forever.
this is "mind-numbingly stupid brain-dead dogma". but if some practitioners
(say someone doing fast convolution) forget that (or its equivalents, like
they have to use modulo addressing), they will run into trouble. i guess
dogma has its uses. we keep it in mind, we never question it, and sometimes
it saves our asses. it can also be entertaining when it chases the karma.
perhaps the karma runs over the dogma.
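here's the classic way that bites people, for the record (numpy; compare
fft-based circular convolution against plain linear convolution):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, 1.0, 0.0, 0.0])

# multiplying DFTs gives *circular* convolution: x[3] wraps around onto x[0]
circ = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))
print(circ)     # [5. 3. 5. 7.]

# linear convolution has no wraparound (length 2N-1)
lin = np.convolve(x, h)
print(lin)      # [1. 3. 5. 7. 4. 0. 0.]
```

(zero-pad both to length >= 2N-1 before the FFTs and the wraparound lands
harmlessly in the padding -- that's the whole trick of fast convolution.)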
...
> Forgive the long delay in answering.
no need to. i don't think there is any obligation.
> This morning I went to the flea
> market with my family. We bought a new chair for our baby daughter.
she your first? (getting any sleep? some people on comp.dsp remember when
i was there in 1997.)
> and a new bike for me!
i'm still riding my old Gitane from the 1970's. good, lightweight, fast
bike. looks like shit. "commuter" size tires (a little bigger than touring
bikes and a lot thinner than the mountain bikes everyone here gets - it's a
lot like driving SUVs in the city, i don't get it.)
--
r b-j rbj@audioimagination.com
"Imagination is more important than knowledge."