
shift theorem

Started by porterboy March 26, 2005
Jerry Avins wrote:
> jim wrote:
>
>> Ok Jerry, your debating skills are better than mine
>
> Not really. A story is in order. Back when stump speeches were often
> made by a candidate standing on a stump, one debater, finding no stump
> available, stood instead on a barrel head. In the midst of his speech,
> the head broke and he fell into the barrel. Unhurt, he climbed out and,
> standing on the rim, said, "You see, folks? The weight of my argument can
> always be depended upon to carry me through." But maybe he was all wet.
>
>> But let's cut to the chase.
>>
>> Here's the shift theorem as presented by Robert.
>> DFT{ y[n-k] } = exp(-j*2*pi*k*m/N) * Y[m]
>>
>> Will you permit k to be a non-integer?
>> And if yes, will this DFT have a period of N when k is non-integer?
>
> k can be a non-integer. It is only necessary that n-k be an integer so a
> meaning could be assigned to it. y[n-k] specifies the (n-k)th y in a
> sequence of y's; which one?
>
> ...
I hit "Send" too soon. That should be "y[n-k] specifies the (n-k)th y in a
sequence of y's; which one would a non-integer point to?"

And I should have added: I think the periodicity of the DFT is beside the
point. It is or it isn't, as you choose. The IDFT always reconstructs a
periodic time sequence. If that's not what you want, you have to chop it.

Jerry
--
Engineering is the art of making what you want from things you can get.
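
A minimal numpy sketch (mine, not taken from any post in the thread) that
checks the theorem exactly as quoted above, for an integer shift k with the
index n-k taken modulo N:

    import numpy as np

    N, k = 16, 3
    n = np.arange(N)
    y = np.random.randn(N)

    lhs = np.fft.fft(y[(n - k) % N])                        # DFT of the circularly shifted sequence
    rhs = np.exp(-2j * np.pi * k * n / N) * np.fft.fft(y)   # phase ramp times Y[m]
    print(np.allclose(lhs, rhs))                            # True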
in article GN2dnTHb370d7NHfRVn-2Q@rcn.net, Jerry Avins at jya@ieee.org wrote
on 03/31/2005 16:54:

> The IDFT always reconstructs a periodic time
> sequence. If that's not what you want, you have to chop it.

but if you chop it, you might have to periodically re-extend it,
conceptually, so that if a future DFT and shift or convolution is done, you
won't have to use the mod(n,N) operator (which is another conceptual way to
periodically extend the sequence).

--
r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."
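
A tiny numpy illustration of that point (my own, not r b-j's code): indexing
with mod(n, N) and periodically re-extending the chopped frame produce the
same sequence.

    import numpy as np

    N = 8
    x = np.arange(N)                  # one "chopped" frame
    n = np.arange(-N, 2 * N)          # indices running outside the frame

    via_mod = x[n % N]                # x[mod(n, N)]
    via_extension = np.tile(x, 3)     # the frame periodically re-extended
    print(np.array_equal(via_mod, via_extension))   # True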
robert bristow-johnson wrote:
> in article GN2dnTHb370d7NHfRVn-2Q@rcn.net, Jerry Avins at jya@ieee.org wrote
> on 03/31/2005 16:54:
>
>> The IDFT always reconstructs a periodic time
>> sequence. If that's not what you want, you have to chop it.
>
> but if you chop it, you might have to periodically re-extend it,
> conceptually, so that if a future DFT and shift or convolution is done, you
> won't have to use the mod(n,N) operator (which is another conceptual way to
> periodically extend the sequence).
Right. So? I see neither what the fuss is about nor why it seems obscure to
some. I blame it on too much math, but maybe, like the orator whose weighty
argument carried him through, I'm all wet.

Jerry
--
Engineering is the art of making what you want from things you can get.
"robert bristow-johnson" <rbj@audioimagination.com> wrote in message 
news:BE71C701.5C91%rbj@audioimagination.com...
>> It depends which path you chose to follow. If you let k be nonintegral, you
>> through out the completeness of the DFT's basis set.
>
> trying to extrapolate ... do you mean "throw out"?  (i think i agree with
> that.)
>

Yes, spelling has never been one of my strong suits.

Clay
in article wu13e.2168$UW6.1919@bignews5.bellsouth.net, Clay S. Turner at
Physics@Bellsouth.net wrote on 03/31/2005 20:35:

> "robert bristow-johnson" <rbj@audioimagination.com> wrote in message
> news:BE71C701.5C91%rbj@audioimagination.com...
>>>
>>> It depends which path you chose to follow. If you let k be nonintegral,
>>> you through out the completeness of the DFT's basis set.
>>
>> trying to extrapolate ... do you mean "throw out"?  (i think i agree with
>> that.)
>
> Yes, spelling has never been one of my strong suits.

no biggie.  the worst spelling mistakes are wrong word substitutions because
the spell checkers don't catch'em.

i haven't heard back from Jim or any other comment (from anyone) on my last
response to him.  could it be that my "mind-numbingly stupid position" is
numbing his mind again?

fractional delay - still inherently periodic.

--
r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."
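
A small sketch (mine, not from the thread) of what "fractional delay - still
inherently periodic" looks like numerically: apply the shift-theorem phase
ramp with a half-sample delay d = 0.5 to an impulse near the end of the
frame, and the band-limited interpolation wraps around to the beginning.

    import numpy as np

    N = 32
    x = np.zeros(N)
    x[N - 2] = 1.0                        # an impulse near the end of the frame

    d = 0.5                               # half-sample "delay"
    k = np.fft.fftfreq(N) * N             # signed bin numbers in numpy's fft ordering
    y = np.fft.ifft(np.fft.fft(x) * np.exp(-2j * np.pi * k * d / N))

    # (for even N the Nyquist bin keeps y from being exactly real; the real
    # part is all we need for this illustration)
    print(np.round(y.real[-4:], 3))       # the interpolated impulse near n = N-2, N-1 ...
    print(np.round(y.real[:4], 3))        # ... and its ringing wrapped around to n = 0, 1, ...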
Robert, after reading through this thread again I think you missed the
point why I believe the linear algebra view is more elegant. You write:

> when you say:
>
> DFT{ x[mod(n-d, N)] }  =  exp(-j*2*pi*k*d/N) * X[k]
>
> where mod(n, N) = n - N*floor(n/N)
> and floor(v) is the largest integer no greater than v.
>
> that is an expression of your cyclical rotation.  it is also one and the
> same to periodically extending x[n].
You have to interpret the mod-indexing as a linear operator. To be specific,
let V be an n-dim complex Hilbert space with standard ("time domain") basis
{e_1, ..., e_n}, v = (v_1, ..., v_n) a vector from V, F the DFT operator, and
S the cyclic left-shift operator, i.e. S v = (v_2, v_3, ..., v_n, v_1).

Then the operator F S F* (where * denotes the adjoint of F) is in diagonal
form (this is an easy consequence of the fact that the n-th unit roots not
equal to one add up to -1). This says that the shift operator is diagonal
with respect to the eigenbasis of F (which exists because every unitary
operator is diagonalizable). This basis, let's denote it {f_1, ..., f_n}, can
be interpreted as the "frequency domain" basis. So we have

    F S F* = d_S,        (1)

where d_S = diag( 1, e^(i 2 pi /n), e^(i 2 pi 2/n), ..., e^(i 2 pi (n-1)/n) )
is the operator with the eigenvalues of S on its diagonal, zero everywhere
else. Therefore, by multiplying (1) on the right with F we get the shift
theorem stated in the language of linear algebra:

    F S = d_S F.

Nowhere have I used modular addressing or periodic sequences or anything like
it. It's all just vectors, basis changes and unitary transforms.

Now in that link I posted, it is shown that from the relation (1), the
existence of the FFT algorithm (which is a special factorization of the F
matrix) can be deduced. That's pretty cool! Furthermore, similar fast
algorithms for other transforms are shown to exist, using just a little
algebra.

Perhaps this has been interesting,
regards, Andor
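
A quick numerical check of the relation F S F* = d_S (my own sketch, using
the unitary 1/sqrt(n) normalization that comes up later in the thread):

    import numpy as np

    n = 8
    idx = np.arange(n)
    F = np.exp(-2j * np.pi * np.outer(idx, idx) / n) / np.sqrt(n)   # unitary DFT matrix
    S = np.roll(np.eye(n), -1, axis=0)                              # cyclic left shift: S v = (v_2, ..., v_n, v_1)

    D = F @ S @ F.conj().T                                          # F S F*
    d_S = np.diag(np.exp(2j * np.pi * idx / n))                     # diag(1, e^(i 2 pi/n), ...)
    print(np.allclose(D, d_S))                                      # True: F S F* is diagonal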
robert bristow-johnson wrote:
> i haven't heard back from Jim or any other comment (from anyone) on my last
> response to him.  could it be that my "mind-numbingly stupid position" is
> numbing his mind again?
>
> fractional delay - still inherently periodic.

Hello Robert,

Another thing is that the two basis functions comprising the complex
exponential fail to be orthogonal when k is not an integer. So if one finds
the magnitude of a DFT at a single frequency, the result is phase dependent.
A lot of nice properties get tossed when k is nonintegral.

Clay
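
A sketch of Clay's point (my own example, not his): for a non-integer k the
cosine and sine at that frequency are no longer orthogonal over N samples,
and the magnitude of a single-frequency projection then depends on the phase
of the input tone.

    import numpy as np

    N, k = 16, 1.3                       # a non-integer "bin"
    n = np.arange(N)
    c = np.cos(2 * np.pi * k * n / N)
    s = np.sin(2 * np.pi * k * n / N)
    print(np.dot(c, s))                  # clearly nonzero (it is 0 when k is an integer)

    for phase in (0.0, 0.25 * np.pi, 0.5 * np.pi):
        x = np.cos(2 * np.pi * k * n / N + phase)                  # same tone, different phase
        print(abs(np.sum(x * np.exp(-2j * np.pi * k * n / N))))    # single-bin magnitude varies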

"Clay S. Turner" wrote:

> It depends which path you chose to follow. If you let k be nonintegral, you
> through out the completeness of the DFT's basis set. Or you are going to
> have to define an interesting basis set which is nonstandard.
If by that you mean that a non-integer k when applied to the DFT would result
in only part of a waveform - well, yes, that *is* the point I was making. The
fact is that not allowing t0 to be shifted off of a sample point is a
requirement for the DFT to have a period of N.

I don't think you would need to modify the DFT basis set if you were
interested in creating a robust set of definitions that handled a shift so
that t0 doesn't land on a sample point. For instance, in the example I gave
previously where t0 lands halfway between 2 samples, you could enumerate the
frame as [1,3,5,7,...,2*N-1]. This would produce a DFT that is whole,
complete and has a period of 2N. Other definitions for enumerating the
sequence could be defined for other fractional shifts, although it might
become numerically unwieldy for anything but the simplest fractions.

-jim
> I always recall a similar thing that arises in the Goertzel algo where
> someone wishes to use a non integral k. For some applications this may work,
> but Parseval's theorem which is often used when the Goertzel is used (i.e.,
> DTMF decoding) is no longer strictly true. You end up with a fractional wave
> whose integral is not zero. The k being an integer or not in the Goertzel
> algo depends on the viewing of the algo as being derived from a filter or
> from the DFT. But if k is not an integer, some of the DFT's properties need
> to be modified.
>
> To see these issues look at the normalization when you have a non integral k
> in the DFT.
>
> Clay
in article 1112360801.400934.10580@g14g2000cwa.googlegroups.com, Andor at
an2or@mailcircuit.com wrote (with some modification by r b-j) on 04/01/2005
08:06:

> Robert, after reading through this thread again I think you missed the
> point why I believe the linear algebra view is more elegant.

i guess i did, because i see this inherent periodic extension of the DFT as
orthogonal to using constructs commonly used in Linear Algebra (the linear
algebra view neither supports nor detracts from the periodic extension
property of the DFT).  by Linear Algebra i think we agree that it is a more
general field than just vectors, matrices, and determinants.  i am thinking
that it includes other metric spaces (in our case, metric spaces that are
also Hilbert spaces but perhaps they don't necessarily have to have a finite
number of dimensions).
> You write:
>
>> when you say:
>>
>> DFT{ x[mod(n-d, N)] }  =  exp(-j*2*pi*k*d/N) * X[k]
>>
>> where mod(n, N) = n - N*floor(n/N)
>> and floor(w) is the largest integer no greater than w.
>>
>> that is an expression of your cyclical rotation.  it is also one and
>> the same to periodically extending x[n].

i am modifying some of your notation to keep it compatible to what you
normally see in common DSP texts.  one major difference is the numbering of
indices, and i really think that the MATLAB/Fortran convention of numbering
arrays (or matrices for that matter) is not equivalent in elegance to what is
normally done in DSP texts, and i do not want to change that notation, so i'm
changing yours so we can have compatible semantics.  sorry, Andor, if you
don't like it, but i really think we need more common elements in our
notation and i refuse to yield to the MATLAB convention.
> You have to interpret the mod - indexing as a linear operator. To be
> specific, let _X_ be a N-dim complex Hilbert space with standard ("time
> domain") basis {e[0], ... , e[N-1]},
are not the e[n] in this basis set simply equal to (... 0, 1, 0, 0, ...)? (also, using this symbol "e" means we might not want to use it for the base of the natural exponential.)
> x = (x[0], ... , x[N-1]) a vector from _X_, F the DFT operator,
which is a square matrix with elements of exp(-j 2 pi n k/N) (where n & k are the column & row indices), right? i might reserve the symbol "X" for X = F x
> and S the cyclic left-shift operator, i.e.
>
> S x = (x[1], x[2], .., x[N-1], x[0]).
fine, this is an invertible (is the linear algebra term "complete"?) operator
unlike the non-cyclic left-shift linear operator:

T x = (x[1], x[2], .., x[N-1], 0)

which destroys information.
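
A small check of that contrast (hypothetical code, not from the posts): the
cyclic shift is a permutation and hence invertible, while the truncating
shift T is singular.

    import numpy as np

    n = 6
    S = np.roll(np.eye(n), -1, axis=0)        # S x = (x[1], ..., x[N-1], x[0])
    T = np.eye(n, k=1)                        # T x = (x[1], ..., x[N-1], 0)
    print(abs(np.linalg.det(S)))              # 1.0 -- invertible
    print(abs(np.linalg.det(T)))              # 0.0 -- information is destroyed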
> Then the operator F S F* (where * denotes the adjoint of F)
F* is a square matrix with elements of 1/N * exp(+j 2 pi n k/N) (where n & k are the row & column indices), right?
> is in diagonal form
but not on the main diagonal, right?
> (this is an easy consequence of the fact that the
> N-th unit roots not equal to one add up to -1).  This says that the shift
> operator is diagonal with respect to the eigenbasis of F (which exists
> because every unitary operator is diagonalizable).  This basis, let's
> denote it {f[0], ..., f[N-1]}, can be interpreted as the "frequency
> domain" basis.
i think that this basis f[k] = 1/N exp(+j 2 pi n k/N), right?
> So we have
>
> F S F* = d_S,        (Eq 1)
>
> where
>
> d_S = diag( 1, exp(j 2 pi /N), exp(j 2 pi 2/N), ..., exp(j 2 pi (N-1)/N) ),
> the operator with the eigenvalues of S on its diagonal, zero everywhere
> else.

and those eigenvalues have to be ordered a particular way along the diagonal.
it can't be, for instance,

d_S = diag( exp(j 2 pi /N), exp(j 2 pi 2/N), ..., exp(j 2 pi (N-1)/N), 1 )

also, here's a good reason why the MATLAB convention of indexing is not
equivalent to zero-based.  this d_S matrix would still be exactly the same in
both representations, but the MATLAB convention needs more keystrokes or ink
to express its elements (as well as for F and F*).
> Therefore, by multiplying (Eq 1) on the right with F we
> get the shift theorem stated in the language of linear algebra:
>
> F S = d_S F.

and to go one step further:

S = F* d_S F

so that the circular shift operator can be represented as the result of three
linear operators (or matrix multiplications) in this Hilbert space.  that's
the point you meant to get to, am i correct Andor?
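
Read as an algorithm, S = F* d_S F says a circular shift can be done entirely
in the frequency domain. A sketch of that reading (mine, using numpy's
unnormalized fft/ifft pair, which works just as well because ifft is the
inverse of fft):

    import numpy as np

    N = 16
    x = np.random.randn(N)
    k = np.arange(N)

    X = np.fft.fft(x)                        # F x
    Y = np.exp(2j * np.pi * k / N) * X       # d_S X  (eigenvalues of the left shift)
    y = np.fft.ifft(Y)                       # F* d_S F x

    print(np.allclose(y, np.roll(x, -1)))    # True: one-sample circular left shift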
> Nowhere have I used modular addressing or periodic sequences or
> anything like it.

that, i disagree with.  all of the linear algebra math is fine, but to stand
back and look at the whole thing and say "Nowhere ... anything like" circular
indexing or periodic extension, i really disagree.  (but this is a
disagreement of observation, i.e. what we, two different observers, say when
we both observe the same phenomenon).

on one hand, this operator S, clearly does a circular shift.  it is as if
your vectors were not a linear string of numbers (with two ends), but that
string where the back end of the vector was bent around and attached to the
front end (like a strip of paper).  the boundary between x[N-1] and x[0] is
qualitatively no different than the boundary between x[0] and x[1] or between
any other x[n] and x[n+1].  that is, as i observe it, a periodic extension of
x and all of the vectors in the _X_ space have that property.

on the other hand, this would not be the case if the elements of F and F* and
d_S were not special (i wouldn't claim "unique", just "special").  they have
to be these particular periodic functions as i have emphasized above.
> It's all just vectors, basis changes and unitary transforms.
*particular* bases (what's the plural of "basis"?) and *particular* unitary
transforms.  ones that have an effect of representing each element of the
metric space as this circular N-dim vector (or as an infinite sequence that
is periodic with period N).

sure, you can say "no, these are just normal linear vectors and this is just
normal linear algebra", but i would say that, because of the particulars of
the operators, F, F*, and d_S, you are not observing everything that is in
front of us both.  i am not saying that all finite sequences of length N are
periodically extended (some might be zero extended or have no definition
outside the original limits), only that the DFT (and iDFT) does it.  if the
only linear operators we have are F, F*, S, d_S, and addition and
multiplication by a scalar (i think convolution can be created from these
ingredients), then the best visualization of the members of space _X_ are
circular vectors or infinitely long but periodic vectors (equivalent to N-dim
vectors since N numbers suffice to completely describe the members of that
space).
> Now in that link I posted, it is shown that from the
> relation (Eq 1), the existence of the FFT algorithm (which is a special
> factorization of the F matrix) can be deduced. That's pretty cool!

i agree, and am not surprised at all (and can imagine just how that FFT
factorization of F might be done but might pee my pants if forced to deal
with the detail of it).
> Furthermore, similar fast algorithms for other transforms are shown to
> exist, using just a little algebra.
agreed. and not all other linear transforms have this circular or periodic property.
> Perhaps this has been interesting,
it is.  we're beholding and appreciating the same sculpture, but we are
stricken with different nuance and detail regarding it.

or maybe it's just "mind-numbingly stupid".

--
r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."
robert bristow-johnson wrote:
> > Robert, after reading through this thread again I think you missed the
> > point why I believe the linear algebra view is more elegant.
>
> i guess i did, because i see this inherent periodic extension of the DFT as
> orthogonal to using constructs commonly used in Linear Algebra (the linear
> algebra view neither supports nor detracts from the periodic extension
> property of the DFT).  by Linear Algebra i think we agree that it is a more
> general field than just vectors, matrices, and determinants.  i am thinking
> that it includes other metric spaces (in our case, metric spaces that are
> also Hilbert spaces but perhaps they don't necessarily have to have a finite
> number of dimensions).
There is no need to worry about infinite dimensional spaces with all their intricacies --- the DFT always acts on finite dimensional Hilbert spaces.
> i am modifying some of your notation to keep it compatible to what you
> normally see in common DSP texts.  one major difference is the numbering of
> indices, and i really think that the MATLAB/Fortran convention of numbering
> arrays (or matrices for that matter) is not equivalent in elegance to what
> is normally done in DSP texts, and i do not want to change that notation,
> so i'm changing yours so we can have compatible semantics.  sorry, Andor,
> if you don't like it, but i really think we need more common elements in
> our notation and i refuse to yield to the MATLAB convention.
Robert, allow me to say that sometimes you are pretty hard headed :-). If you wish, we can use zero-based C-style array indexing.
> > You have to interpret the mod - indexing as a linear operator. To be
> > specific, let _X_ be a N-dim complex Hilbert space with standard ("time
> > domain") basis {e[0], ... , e[N-1]},
Choosing "e" as a symbol for the basis vectors was indeed a bit unthoughtful,
regarding the issue we are discussing. I just wanted to make the point that
two different bases (plural of basis) are important. Let's use
{t_1, ..., t_n} (t for time) as the names for the standard basis set.
> are not the e[n] in this basis set simply equal to (... 0, 1, 0, 0, ...)?

Well, if x is a vector, then (x[0], ... , x[N-1]) is just shorthand for

    x = sum_{k=0}^{N-1} x_k t_k .

If we use another basis set, for example {f[0], ... , f[N-1]}, then we can
also write x' = (x[0]', ... , x[N-1]') for

    x' = sum_{k=0}^{N-1} x_k' f_k ,

in which case (.., 0, 1, 0, ..) means something different than in the first.
Let's just call {t[0], ... , t[N-1]} the standard basis.

> > x = (x[0], ... , x[N-1]) a vector from _X_, F the DFT operator,
>
> which is a square matrix with elements of exp(-j 2 pi n k/N) (where n & k
> are the column & row indices), right?  i might reserve the symbol "X" for
>
> X = F x
Well, for F to be unitary we need to define it with a 1/sqrt(N) factor, but otherwise you are correct, the entries in the matrix are 1/sqrt(N) exp(-j 2 pi n k/N).
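
A short check of that normalization remark (illustration only, not from the
thread): with the 1/sqrt(N) factor, F F* = F* F = I, so F* really is the
inverse of F.

    import numpy as np

    N = 8
    n = np.arange(N)
    F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)

    print(np.allclose(F @ F.conj().T, np.eye(N)))   # True
    print(np.allclose(F.conj().T @ F, np.eye(N)))   # True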
> > and S the cyclic left-shift operator, i.e.
> >
> > S x = (x[1], x[2], .., x[N-1], x[0]).
>
> fine, this is an invertible (is the linear algebra term "complete"?)
> operator unlike the non-cyclic left-shift linear operator:
>
> T x = (x[1], x[2], .., x[N-1], 0)
>
> which destroys information.
>
> > Then the operator F S F* (where * denotes the adjoint of F)
>
> F* is a square matrix with elements of 1/N * exp(+j 2 pi n k/N) (where n & k
> are the row & column indices), right?
Again, you need 1/sqrt(N) for unitarity.
> > is in diagonal form
>
> but not on the main diagonal, right?
YES, in the main diagonal! That's the point! See below:
> > So we have
> >
> > F S F* = d_S,        (Eq 1)
> >
> > where
> >
> > d_S = diag( 1, exp(j 2 pi /N), exp(j 2 pi 2/N), ..., exp(j 2 pi (N-1)/N) ),
> > the operator with the eigenvalues of S on its diagonal, zero everywhere
> > else.
>
> and those eigenvalues have to be ordered a particular way along the
> diagonal.  it can't be, for instance,
>
> d_S = diag( exp(j 2 pi /N), exp(j 2 pi 2/N), ..., exp(j 2 pi (N-1)/N), 1 )
>
> also, here's a good reason why the MATLAB convention of indexing is not
> equivalent to zero-based.  this d_S matrix would still be exactly the same
> in both representations, but the MATLAB convention needs more keystrokes or
> ink to express its elements (as well as for F and F*).
>
> > Therefore, by multiplying (Eq 1) on the right with F we
> > get the shift theorem stated in the language of linear algebra:
> >
> > F S = d_S F.
>
> and to go one step further:
>
> S = F* d_S F
>
> so that the circular shift operator can be represented as the result of
> three linear operators (or matrix multiplications) in this Hilbert space.
> that's the point you meant to get to, am i correct Andor?

The point really was the equation above that, which is the shift theorem.
Let's look at an example with N = 4. I'll use w := e^(-j 2 pi /N) to
facilitate notation. Then

            / 1   1    1    1   \
            | 1   w    w^2  w^3 |
    F = 1/2 | 1   w^2  w^4  w^6 |
            \ 1   w^3  w^6  w^9 /

(note that e.g. w^4 = 1 etc.),

             / 1   1     1     1    \
             | 1   w^-1  w^-2  w^-3 |
    F* = 1/2 | 1   w^-2  w^-4  w^-6 |
             \ 1   w^-3  w^-6  w^-9 /

and

        / 0  1  0  0 \
        | 0  0  1  0 |
    S = | 0  0  0  1 |.
        \ 1  0  0  0 /

Then a simple computation shows that

             / 1  0     0     0    \
             | 0  w^-1  0     0    |
    F S F* = | 0  0     w^-2  0    |  =:  d_S,
             \ 0  0     0     w^-3 /

that is, the diagonal matrix with the eigenvalues of S on its diagonal.
Multiplying each side k times by itself yields

    (F S F*)(F S F*)...(F S F*) = F S^k F* = d_S^k

(where S^k is the linear operator that effects k cyclic rotations to the
left), and again by multiplying from the right with F we get

    F S^k = d_S^k F.        (Eq. 1b)

If you set X = F x, for any x in _X_, then the left hand side of (Eq. 1b) is

    F S^k x = F S^k (x[0], ..., x[N-1]) = F (x[k], x[k+1], ..., x[k-1]),   (Eq. 2)

that is, we take the DFT of the cyclically rotated vector
(x[k], x[k+1], ..., x[k-1]). The right hand side of (Eq. 1b) gives

    d_S^k F x = d_S^k X.        (Eq. 3)

Comparing components of (Eq. 2) and (Eq. 3) gives

    ( F (x[k], x[k+1], ..., x[k-1]) )[m] = w^(-k m) X[m],   for 0 <= m <= N-1,

which is the shift theorem as it was stated originally. However, I prefer the
much more compact notation in (Eq. 1b).
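
The N = 4 example, and the more general relation (Eq. 1b), are easy to
confirm numerically; a sketch (mine, with w, F, S and d_S built as in the
post above):

    import numpy as np

    N = 4
    w = np.exp(-2j * np.pi / N)
    n = np.arange(N)

    F = w ** np.outer(n, n) / np.sqrt(N)           # entries w^(n m) / 2
    S = np.roll(np.eye(N), -1, axis=0)             # cyclic left shift
    d_S = np.diag(w ** (-n))                       # diag(1, w^-1, w^-2, w^-3)

    print(np.allclose(F @ S @ F.conj().T, d_S))    # True:  F S F* = d_S

    k = 3                                          # any number of left rotations
    Sk = np.linalg.matrix_power(S, k)
    print(np.allclose(F @ Sk, np.linalg.matrix_power(d_S, k) @ F))   # True:  F S^k = d_S^k F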
> > Nowhere have I used modular addressing or periodic sequences or
> > anything like it.
>
> that, i disagree with.  all of the linear algebra math is fine, but to stand
> back and look at the whole thing and say "Nowhere ... anything like"
> circular indexing or periodic extension, i really disagree.  (but this is a
> disagreement of observation, i.e. what we, two different observers, say when
> we both observe the same phenomenon).
>
> on one hand, this operator S, clearly does a circular shift.  it is as if
> your vectors were not a linear string of numbers (with two ends), but that
> string where the back end of the vector was bent around and attached to the
> front end (like a strip of paper).  the boundary between x[N-1] and x[0] is
> qualitatively no different than the boundary between x[0] and x[1] or
> between any other x[n] and x[n+1].  that is, as i observe it, a periodic
> extension of x and all of the vectors in the _X_ space have that property.

But we did not need to introduce periodic sequences, we could state
everything using just the linear operator S.
> on the other hand, this would not be the case if the elements of F and F*
> and d_S were not special (i wouldn't claim "unique", just "special").  they
> have to be these particular periodic functions as i have emphasized above.

Yes, of course.
> > It's all just vectors, basis changes and unitary transforms.
>
> *particular* bases (what's the plural of "basis"?) and *particular* unitary
> transforms.  ones that have an effect of representing each element of the
> metric space as this circular N-dim vector (or as an infinite sequence that
> is periodic with period N).
>
> sure, you can say "no, these are just normal linear vectors and this is just
> normal linear algebra", but i would say that, because of the particulars of
> the operators, F, F*, and d_S, you are not observing everything that is in
> front of us both.  i am not saying that all finite sequences of length N are
> periodically extended (some might be zero extended or have no definition
> outside the original limits), only that the DFT (and iDFT) does it.  if the
> only linear operators we have are F, F*, S, d_S, and addition and
> multiplication by a scalar (i think convolution can be created from these
> ingredients), then the best visualization of the members of space _X_ are
> circular vectors or infinitely long but periodic vectors (equivalent to
> N-dim vectors since N numbers suffice to completely describe the members of
> that space).
You can view it that way if you like, it is consistent. However, there is no need to, as all the properties of F and S are just special cases, and the theory of linear algebra completely explains what is going on and why F and S have these properties.
> > Now in that link I posted, it is shown that from the
> > relation (Eq 1), the existence of the FFT algorithm (which is a special
> > factorization of the F matrix) can be deduced. That's pretty cool!
>
> i agree, and am not surprised at all (and can imagine just how that FFT
> factorization of F might be done but might pee my pants if forced to deal
> with the detail of it).
>
> > Furthermore, similar fast algorithms for other transforms are shown to
> > exist, using just a little algebra.
>
> agreed.  and not all other linear transforms have this circular or periodic
> property.
>
> > Perhaps this has been interesting,
>
> it is.  we're beholding and appreciating the same sculpture, but we are
> stricken with different nuance and detail regarding it.

I wouldn't say stricken, but perhaps biased. I prefer the view that gives me
a better outlook on my surroundings. I think you do too, it is just that your
surroundings are different than mine :-).

Forgive the long delay in answering. This morning I went to the flea market
with my family. We bought a new chair for our baby daughter and a new bike
for me!

Best regards,
Andor