Reply by glen herrmannsfeldt May 15, 2007
Steven G. Johnson wrote:
(I wrote)

>> I would think that O(NlogN) would be a better description, as there
>> are O(N) and O(logN) terms that are being ignored, under the
>> assumption that N isn't too small. If you don't include those,
>> then an asymptotic bound seems right to me.
> You have a basic misunderstanding.
> All of these notations (O, \Theta, \Omega, etc) are asymptotic, and
> all of them ignore all but the leading asymptotic term. However,
> saying that something is \Theta(N log N) is a much stronger statement
> than saying it is O(N log N), because the latter is only an *upper*
> bound.
Yes, the ones I have seen used O(), and I thought you meant that it was not asymptotic. There has been a lot of discussion about this in connection with radix sort, which is O(N) over the range of N where it is usually applied, but which asymptotically (for really large N) should be considered O(N log N).
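The radix-sort point can be made concrete with a toy operation count: an LSD radix sort does one unit of work per key per pass, and with N distinct keys the key width (and hence the pass count) must grow like log2 N. A minimal Python sketch (illustrative only; the pass/work accounting is a simplified cost model, not a claim about any particular implementation):

```python
import math
import random

def radix_sort(keys, digit_bits=8):
    """LSD radix sort on non-negative ints; returns (sorted list, work units)."""
    if not keys:
        return [], 0
    max_key = max(keys)
    # Number of digit_bits-wide passes needed to cover the widest key.
    passes = max(1, math.ceil(max(max_key.bit_length(), 1) / digit_bits))
    mask = (1 << digit_bits) - 1
    work = 0
    for p in range(passes):
        shift = p * digit_bits
        buckets = [[] for _ in range(1 << digit_bits)]
        for k in keys:
            buckets[(k >> shift) & mask].append(k)
            work += 1  # one unit of work per key per pass
        keys = [k for b in buckets for k in b]
    return keys, work

# With N distinct keys the key width must be at least log2(N) bits, so the
# pass count -- and hence the total work N * passes -- grows like N log N.
N = 1 << 16
data = random.sample(range(N), N)  # N distinct 16-bit keys
out, work = radix_sort(data)
print(work)  # N * ceil(16 / 8) = 2N units of work here
```

Over a fixed key width the pass count is constant and the sort looks O(N); it is only when N forces the key width up that the hidden log N appears.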
> For example, a function that is O(1) is also technically O(N log N),
> because a constant function is bounded above by N log N. As another
> example, a function f(N) = 3 N log N + 4 N - 2 sqrt(N) + 1 is both
> O(N log N) and \Theta(N log N), but the latter tells you (much) more
> about f(N).
> In colloquial usage, a lot of people use O when they really mean
> \Theta, so perhaps this is what you are used to.
Well, that, and most people don't want to analyze a system all that closely. Also, more keyboards have Oh than theta.

-- glen
Reply by Steven G. Johnson May 14, 2007
On May 14, 12:34 pm, glen herrmannsfeldt <g...@ugcs.caltech.edu>
wrote:
> Steven G. Johnson wrote:
>
>> Yes, all known FFT algorithms, including the ones you mention, require
>> O(N log N) operations ... or, more precisely, \Theta(N log N)
>> operations (O is only an asymptotic upper bound, while \Theta is a
>> tight bound). (The exact arithmetic counts vary, of course, as well
>> as the algorithmic structures and the N for which they work.)
>
> I would think that O(NlogN) would be a better description, as there
> are O(N) and O(logN) terms that are being ignored, under the
> assumption that N isn't too small. If you don't include those,
> then an asymptotic bound seems right to me.
You have a basic misunderstanding. All of these notations (O, \Theta, \Omega, etc) are asymptotic, and all of them ignore all but the leading asymptotic term. However, saying that something is \Theta(N log N) is a much stronger statement than saying it is O(N log N), because the latter is only an *upper* bound.

For example, a function that is O(1) is also technically O(N log N), because a constant function is bounded above by N log N. As another example, a function f(N) = 3 N log N + 4 N - 2 sqrt(N) + 1 is both O(N log N) and \Theta(N log N), but the latter tells you (much) more about f(N). In colloquial usage, a lot of people use O when they really mean \Theta, so perhaps this is what you are used to.

Anyway, this is a basic computer-science/math question that is somewhat off-topic for comp.dsp. If you Google for "big O notation theta", or crack open Cormen/Leiserson/Rivest for that matter, you'll find plenty more explanation and examples.

Regards,
Steven G. Johnson
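As a numerical illustration of the f(N) example, the ratio f(N) / (N log N) can be tabulated for growing N; it drifts toward the leading coefficient 3, which is exactly what makes N log N a tight (\Theta) bound rather than just an upper bound. A small Python sketch:

```python
import math

def f(N):
    # The example function from the post above.
    return 3 * N * math.log(N) + 4 * N - 2 * math.sqrt(N) + 1

# f is Theta(N log N): the ratio f(N) / (N log N) settles toward the
# leading coefficient 3, so c1 * N log N <= f(N) <= c2 * N log N holds
# for suitable constants once N is large enough.
for N in (10, 10**3, 10**6, 10**9):
    print(N, f(N) / (N * math.log(N)))
```

The same ratio computed for a constant function would tend to 0, which is why O(1) functions are (vacuously) O(N log N) but not \Theta(N log N).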
Reply by glen herrmannsfeldt May 14, 2007
Steven G. Johnson wrote:

(snip)

> Yes, all known FFT algorithms, including the ones you mention, require
> O(N log N) operations ... or, more precisely, \Theta(N log N)
> operations (O is only an asymptotic upper bound, while \Theta is a
> tight bound). (The exact arithmetic counts vary, of course, as well
> as the algorithmic structures and the N for which they work.)
I would think that O(NlogN) would be a better description, as there are O(N) and O(logN) terms that are being ignored, under the assumption that N isn't too small. If you don't include those, then an asymptotic bound seems right to me.

-- glen
Reply by Steven G. Johnson May 12, 2007
On May 11, 10:25 am, VijaKhara <VijaKh...@gmail.com> wrote:
> I always hear that the computational complexity of the FFT is O(NlogN).
> However, I have learned that there are many FFT algorithms, such as:
> the Cooley-Tukey FFT algorithm, the split-radix FFT algorithm, the
> prime-factor FFT algorithm, Bruun's FFT algorithm, Rader's FFT
> algorithm, and Bluestein's FFT algorithm.
>
> Do they all have the same computational complexity, O(NlogN)?
Yes, all known FFT algorithms, including the ones you mention, require O(N log N) operations ... or, more precisely, \Theta(N log N) operations (O is only an asymptotic upper bound, while \Theta is a tight bound). (The exact arithmetic counts vary, of course, as well as the algorithmic structures and the N for which they work.) Winograd's algorithm, mentioned by another poster, achieves O(N) multiplications, but only at the price of many more additions.

However, I should add that there is no rigorous proof that you cannot do better than O(N log N). The limiting factor seems to be the number of additions, but so far there are only proofs that \Omega(N log N) additions are required under limiting assumptions on the algorithm (e.g. if you assume all the multiplicative constants are bounded). So it is conceivably still possible that someone could discover an asymptotically faster algorithm someday, although it doesn't seem very likely.

Regards,
Steven G. Johnson
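The \Theta(N log N) count is easy to see in the plain radix-2 Cooley-Tukey case: the recurrence T(N) = 2 T(N/2) + \Theta(N) solves to \Theta(N log N). A minimal instrumented Python sketch (a textbook radix-2 decimation-in-time FFT with an operation counter, not any particular production algorithm):

```python
import cmath

ops = {"add": 0, "mul": 0}

def fft(x):
    """Minimal recursive radix-2 Cooley-Tukey FFT (len(x) must be a power
    of 2), instrumented to count complex additions and multiplications."""
    N = len(x)
    if N == 1:
        return x[:]
    even = fft(x[0::2])  # DFT of even-indexed samples
    odd = fft(x[1::2])   # DFT of odd-indexed samples
    out = [0j] * N
    for k in range(N // 2):
        w = cmath.exp(-2j * cmath.pi * k / N)  # twiddle factor
        t = w * odd[k]
        ops["mul"] += 1
        out[k] = even[k] + t
        out[k + N // 2] = even[k] - t
        ops["add"] += 2
    return out

N = 1024
_ = fft([complex(i) for i in range(N)])
# M(N) = 2 M(N/2) + N/2 gives (N/2) log2 N multiplications;
# A(N) = 2 A(N/2) + N gives N log2 N additions.
print(ops)  # mul = 512 * 10 = 5120, add = 1024 * 10 = 10240
```

The split-radix, prime-factor, etc. variants change the constants (and which N they handle), but not the N log N growth.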
Reply by Vladimir Vassilevsky May 11, 2007

robert bristow-johnson wrote:


>> I always hear that the computational complexity of the FFT is O(NlogN).
>> However, I have learned that there are many FFT algorithms, such as:
>> the Cooley-Tukey FFT algorithm, the split-radix FFT algorithm, the
>> prime-factor FFT algorithm, Bruun's FFT algorithm, Rader's FFT
>> algorithm, and Bluestein's FFT algorithm.
>>
>> Do they all have the same computational complexity, O(NlogN)?
>
> i think the Winograd FT is supposed to be something less than
> O(N*log2(N)) but i dunno the specifics. i believe it's a messy and
> complicated alg.
The Winograd FFT is about trading off multiplications for additions. If I remember it correctly, no other FFT algorithm requires fewer multiplications than Winograd. That makes sense for an FFT realization in hardware; however, it is unimportant for a DSP realization.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
Reply by robert bristow-johnson May 11, 2007
On May 11, 10:25 am, VijaKhara <VijaKh...@gmail.com> wrote:
> Hi all,
>
> I always hear that the computational complexity of the FFT is O(NlogN).
> However, I have learned that there are many FFT algorithms, such as:
> the Cooley-Tukey FFT algorithm, the split-radix FFT algorithm, the
> prime-factor FFT algorithm, Bruun's FFT algorithm, Rader's FFT
> algorithm, and Bluestein's FFT algorithm.
>
> Do they all have the same computational complexity, O(NlogN)?
i think the Winograd FT is supposed to be something less than O(N*log2(N)) but i dunno the specifics. i believe it's a messy and complicated alg.

r b-j
Reply by Vladimir Vassilevsky May 11, 2007

VijaKhara wrote:

> I always hear that the computational complexity of the FFT is O(NlogN).
> However, I have learned that there are many FFT algorithms, such as:
> the Cooley-Tukey FFT algorithm, the split-radix FFT algorithm, the
> prime-factor FFT algorithm, Bruun's FFT algorithm, Rader's FFT
> algorithm, and Bluestein's FFT algorithm.
>
> Do they all have the same computational complexity, O(NlogN)?
Yes. Those algorithms are optimized for different implementation variants and different goals.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
Reply by Jerry Avins May 11, 2007
VijaKhara wrote:
> Hi all,
>
> I always hear that the computational complexity of the FFT is O(NlogN).
> However, I have learned that there are many FFT algorithms, such as:
> the Cooley-Tukey FFT algorithm, the split-radix FFT algorithm, the
> prime-factor FFT algorithm, Bruun's FFT algorithm, Rader's FFT
> algorithm, and Bluestein's FFT algorithm.
>
> Do they all have the same computational complexity, O(NlogN)?
Yes. O(x) means approximately k * x; k varies from implementation to implementation.

Jerry
--
Engineering is the art of making what you want from things you can get.
Reply by VijaKhara May 11, 2007
Hi all,

I always hear that the computational complexity of the FFT is O(NlogN).
However, I have learned that there are many FFT algorithms, such as:
the Cooley-Tukey FFT algorithm, the split-radix FFT algorithm, the
prime-factor FFT algorithm, Bruun's FFT algorithm, Rader's FFT
algorithm, and Bluestein's FFT algorithm.

Do they all have the same computational complexity, O(NlogN)?

Thanks