DSPRelated.com
Forums

2D discrete hartley transform

Started by Mark E. Berres January 31, 2004
Gordon Sande wrote:
> "Steven G. Johnson" <stevenj@alum.mit.edu> wrote: >>I think you've fallen victim to the persistent myth that an FHT requires >>half as as much computation and/or memory compared to the FFT, but this >>has never been true. Even before the FHT was introduced, specialized >>variants of the FFT for real inputs were known that require ~half as >>much computation/memory as for complex data. > > This myth is probably based on comparing the 30 line FFT from the back > of the book with whatever is being proposed by someone who is rather > uncomfortable with the use of complex analysis.
Except that I've seen the same myth appearing in print journals multiple times, by people who should have known better.
> The DHT also avoids
> the mysteries of negative frequencies which seem to cause as much
> discomfort as complex analysis.
Recall that negative frequencies do not appear explicitly in the outputs of a real-input FFT (they are eliminated because they are redundant), so I'm not sure that's relevant here. And the single complex number that you get out for each frequency component is, arguably, at least as easy to interpret as the pair of real numbers you get from the DHT (for the cos+sin and cos-sin amplitudes, which require some trigonometry to convert into amplitude and phase information).

Moreover, for computing e.g. convolutions, whereas an FFT requires ~n/2 complex multiplications (6 flops per multiply, for ~3n flops), an FHT requires 8 flops per ~n/2 pairwise operations (~4n flops), where these pairwise operations are arguably at least as difficult to explain/remember as a complex multiply.

> No need for "specialized variants" that are of interest to the
> specialists.

In terms of which is of more widespread interest, as far as I can tell a real-input FFT is far more widely used than an FHT...certainly, far more optimized and general implementations are available for the former than for the latter. (As one indication, it's hard to find an FHT code that is not restricted to powers of two.) So, it seems hard to argue that a real-input FFT is of interest only to "specialists" while an FHT is not.

Cordially,
Steven G. Johnson
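[For concreteness, here is a minimal C sketch of the pairwise Hartley-domain operation behind those flop counts. The function name is illustrative, and the factor of 1/2 is written out explicitly rather than folded into the transform scaling, so each (k, n-k) pair costs 4 multiplies + 4 adds = 8 flops once that factor is absorbed, versus 6 flops for the one complex multiply per frequency that an FFT needs.]

/* Pointwise step of circular convolution in the Hartley domain.
 * For length-n DHTs X and Y, the DHT Z of the circular convolution
 * satisfies (indices mod n):
 *   Z[k] = (X[k]*(Y[k]+Y[n-k]) + X[n-k]*(Y[k]-Y[n-k])) / 2
 */
void dht_convolve_pointwise(const double *X, const double *Y,
                            double *Z, int n)
{
    Z[0] = X[0] * Y[0];               /* DC bin is its own partner */
    if (n % 2 == 0)
        Z[n/2] = X[n/2] * Y[n/2];     /* Nyquist bin, even n only */
    for (int k = 1; k < (n + 1) / 2; ++k) {
        double yp = Y[k] + Y[n-k];
        double ym = Y[k] - Y[n-k];
        Z[k]     = 0.5 * (X[k]   * yp + X[n-k] * ym);
        Z[n-k]   = 0.5 * (X[n-k] * yp - X[k]   * ym);
    }
}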
In article <4027ef25$0$559$b45e6eb0@senator-bedfellow.mit.edu>,
"Steven G. Johnson" <stevenj@alum.mit.edu> wrote:

>Subject: Re: 2D discrete hartley transform
>From: "Steven G. Johnson" <stevenj@alum.mit.edu>
>Reply-To: <BC4D045F9668311BC@192.168.0.100>
>Date: Mon, 09 Feb 2004 15:35:48 -0500
>Newsgroups: sci.math.num-analysis,comp.dsp
>
>Gordon Sande wrote:
>> "Steven G. Johnson" <stevenj@alum.mit.edu> wrote:
>>>I think you've fallen victim to the persistent myth that an FHT requires
>>>half as much computation and/or memory compared to the FFT, but this
>>>has never been true. Even before the FHT was introduced, specialized
>>>variants of the FFT for real inputs were known that require ~half as
>>>much computation/memory as for complex data.
>>
>> This myth is probably based on comparing the 30 line FFT from the back
>> of the book with whatever is being proposed by someone who is rather
>> uncomfortable with the use of complex analysis.
>
>Except that I've seen the same myth appearing in print journals multiple
>times, by people who should have known better.
>
>> The DHT also avoids
>> the mysteries of negative frequencies which seem to cause as much
>> discomfort as complex analysis.
>
>Recall that negative frequencies do not appear explicitly in the outputs
>of a real-input FFT (they are eliminated because they are redundant), so
>I'm not sure that's relevant here. And the single complex number that
>you get out for each frequency component is, arguably, at least as easy
>to interpret as the pair of real numbers you get from the DHT (for the
>cos+sin and cos-sin amplitudes, which require some trigonometry to
>convert into amplitude and phase information).
>
>Moreover, for computing e.g. convolutions, whereas an FFT requires ~n/2
>complex multiplications (6 flops per multiply, for ~3n flops), an FHT
>requires 8 flops per ~n/2 pairwise operations (~4n flops), where these
>pairwise operations are arguably at least as difficult to
>explain/remember as a complex multiply.
>
>> No need for "specialized variants" that are of interest to the
>> specialists.
>
>In terms of which is of more widespread interest, as far as I can tell a
>real-input FFT is far more widely used than an FHT...certainly, far more
>optimized and general implementations are available for the former than
>for the latter. (As one indication, it's hard to find an FHT code that
>is not restricted to powers of two.) So, it seems hard to argue that a
>real-input FFT is of interest only to "specialists" while an FHT is not.
You have just made the point that the nonspecialists will be uncomfortable with the need to find a real input variant, with other than powers of two, etc. You seem to be seeking a rational explanation for when folks are comfortable enough to get past the 30 line code in the appendix of the book and find all the good things that the specialists know. You could equally well ask why so many folks seem to want to invert matrices when all they are doing is solving equations. Neither your question nor this one is likely to have a rational and comfortable response.
>Cordially,
>Steven G. Johnson
Gordon Sande wrote:
> You have just made the point that the nonspecialists will be uncomfortable
> with the need to find a real input variant, with other than powers of two, etc.
You're putting words in my mouth; the argument I was making was that they should be no more uncomfortable with a real-data FFT than with an FHT, since the interpretation/use of the latter is at least as complicated as with the former. (*Especially* in the multidimensional case.) And real-valued FFTs are far more widely available.
> You seem to be seeking a rational explanation for when folks are
> comfortable enough to get past the 30 line code in the appendix of the
> book and find all the good things that the specialists know.
No, I'm seeking a rational explanation for why any book or paper, written by someone who knows what they are doing, should ever recommend the use of the FHT, whether to a specialist or nonspecialist.

(Of course, people choose algorithms for all sorts of irrational reasons, and will continue to use an algorithm long after it has been superseded, but that's not what I'm wondering about.)

If there were some application in which the DHT outputs were a natural fit, easier to use/understand than the DFT outputs, that would be a good reason. However, I haven't seen any examples where this is true.

(Another reason would be if the FHT were significantly easier to program, which I don't think it is; anyway, nowadays this has mostly turned into the question of the ease of finding code online.)

Cordially,
Steven G. Johnson
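[As background for this exchange: the DHT and the real-input DFT carry identical information, one O(n) pass apart. Below is a minimal C sketch of the conversion, assuming a half-complex real-input FFT output F[0..n/2]; the function name is illustrative. It follows from the DHT definition H[k] = sum of x[m]*(cos + sin)(2*pi*m*k/n), which gives H[k] = Re F[k] - Im F[k] and H[n-k] = Re F[k] + Im F[k].]

#include <complex.h>

/* Convert half-complex real-FFT output F[0..n/2] into the DHT
 * H[0..n-1] of the same real data. */
void rfft_to_dht(const double complex *F, double *H, int n)
{
    H[0] = creal(F[0]);             /* F[0] is purely real */
    if (n % 2 == 0)
        H[n/2] = creal(F[n/2]);     /* so is the Nyquist bin */
    for (int k = 1; k < (n + 1) / 2; ++k) {
        H[k]     = creal(F[k]) - cimag(F[k]);
        H[n - k] = creal(F[k]) + cimag(F[k]);
    }
}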
In article <40280392$0$568$b45e6eb0@senator-bedfellow.mit.edu>,
"Steven G. Johnson" <stevenj@alum.mit.edu> wrote:

>Subject: Re: 2D discrete hartley transform
>From: "Steven G. Johnson" <stevenj@alum.mit.edu>
>Reply-To: <BC4D6C1496681B7171@192.168.0.100>
>Date: Mon, 09 Feb 2004 17:02:58 -0500
>Newsgroups: sci.math.num-analysis,comp.dsp
>
>Gordon Sande wrote:
>> You have just made the point that the nonspecialists will be uncomfortable
>> with the need to find a real input variant, with other than powers of two, etc.
>
>You're putting words in my mouth; the argument I was making was that
>they should be no more uncomfortable with a real-data FFT than with an
They _should_ not be uncomfortable but they _are_, so you end up making the substance of my argument with your words when you point out that they do not seem to access the usual tools used by the specialists. My point is that the special case, whether real or Hermitian symmetric, is something that one has to work at to find, and one ends up with even more apparent technical complication than the FT itself.
>FHT, since the interpretation/use of the latter is at least as
>complicated as with the former. (*Especially* in the multidimensional
>case.) And real-valued FFTs are far more widely available.
>
>> You seem to be seeking a rational explanation for when folks are
>> comfortable enough to get past the 30 line code in the appendix of the
>> book and find all the good things that the specialists know.
>
>No, I'm seeking a rational explanation for why any book or paper,
>written by someone who knows what they are doing, should ever recommend
>the use of the FHT, whether to a specialist or nonspecialist.
>
>(Of course, people choose algorithms for all sorts of irrational
>reasons, and will continue to use an algorithm long after it has been
>superseded, but that's not what I'm wondering about.)
>
>If there were some application in which the DHT outputs were a natural
>fit, easier to use/understand than the DFT outputs, that would be a good
>reason. However, I haven't seen any examples where this is true.
The Fourier transform arises naturally from invariance arguments, but I have never seen an equivalent development for the Hartley transform. The only "developments" I have seen appear to have the style of "just like" the FT but "avoiding" the unwarranted complications of complex analysis. So like you I end up being curious whether there really is a substantive development of the HT.

I am just a bit more willing to admit that the strong appearance is of being uncomfortable with the perceived technical complexity of the FT and its use of complex analysis to appear simple. I am also willing to observe that there are a variety of parallel situations where theory comes in introductory, intermediate and advanced versions, and what is judged to be simple is not the same at the various levels. My comment is that the HT appears simple for some purposes at an intermediate level, but those appearances are false at a more advanced level, as you have been saying.
>(Another reason would be if the FHT were significantly easier to
>program, which I don't think it is; anyway, nowadays this has mostly
>turned into the question of the ease of finding code online.)
>
>Cordially,
>Steven G. Johnson
"Steven G. Johnson" <stevenj@alum.mit.edu> wrote in message
news:40280392$0$568>
> If there were some application in which the DHT outputs were a natural
> fit, easier to use/understand than the DFT outputs, that would be a good
> reason. However, I haven't seen any examples where this is true.
Steven,

One attractive aspect of the DHT is that it is self-inverting, although inverting a DFT is trivial. I never used the DHT in a project, but I've certainly experimented with it in MathCad, and I've always remembered this property in case something comes up. But so far the DFT has been sufficient.

Maybe in a case of simple filtering, one can take the entire block of data and DHT it, then symmetrically zero out the frequencies (the plus and minus ones) to be removed, and then DHT the result. The self-inversion property adds a little symmetry to the problem, but as you say the DFT works just fine here.

--
Clay S. Turner, V.P.
Wireless Systems Engineering, Inc.
Satellite Beach, Florida 32937
(321) 777-7889
www.wse.biz   csturner@wse.biz
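[A minimal sketch of the filtering Clay describes, using FFTW's r2r interface with the FFTW_DHT kind. The cutoff bin kc is an illustrative parameter; the 1/n rescaling is needed because FFTW's DHT is unnormalized, so applying it twice reproduces the input times n.]

#include <fftw3.h>

/* Brick-wall low-pass by DHT: zero bins kc..n-kc (i.e. each bin k
 * together with its mirror n-k), then apply the same DHT again. */
void dht_lowpass(double *x, int n, int kc)
{
    fftw_plan p = fftw_plan_r2r_1d(n, x, x, FFTW_DHT, FFTW_ESTIMATE);
    fftw_execute(p);                    /* forward DHT, in place */
    for (int k = kc; k <= n - kc; ++k)
        x[k] = 0.0;                     /* symmetric zeroing */
    fftw_execute(p);                    /* same transform = inverse * n */
    for (int i = 0; i < n; ++i)
        x[i] /= n;                      /* undo the scaling */
    fftw_destroy_plan(p);
}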
Gordon Sande wrote:
>> You're putting words in my mouth; the argument I was making was that
>> they should be no more uncomfortable with a real-data FFT than with an
>
> They _should_ not be uncomfortable but they _are_, so you end up making
> the substance of my argument with your words when you point out that
> they do not seem to access the usual tools used by the specialists. My
Actually, as I pointed out, real-input FFTs *are* far more widespread than FHTs, evidence that most people actually *do* prefer the real-data FFT to the FHT and are comfortable with it.

Nor did I ever say that people who use the FHT do so because they are more "uncomfortable" with the FFT--that was *your* argument, which is why you are putting words in my mouth. My own impression for why a few people still use the FHT is simply that they believed the old wives' tale that it is "more efficient"--an argument that I have heard several times in print, in emailed questions about FFTW, as well as in posts in this very group (e.g. from yesterday). I have *never* heard someone say that they prefer the FHT to the real-valued FFT because they are more "comfortable" with the former.

Cordially,
Steven G. Johnson

PS. Actually, I do know of one case where the DHT seems to be more efficient than the DFT: for large prime-size transforms of real data, where it is difficult to specialize FFT algorithms to exploit the symmetry, but efficient DHT algorithms exist. But this case is not terribly compelling, because people doing prime-size transforms don't care much about a factor of 2 or 1.5 in efficiency anyway, as far as I can tell. FFTW uses a DHT-based algorithm in this case, though.
Clay S. Turner wrote:
> One attractive aspect of the DHT is that it is self-inverting.
Yep, that is the other classic argument in favor of the DHT, and the only one that survived Sorensen's refutation of the efficiency argument for composite sizes. It does make coding slightly easier, because you don't have to write two cases (although the real-input and real-output cases are just the duals/transposes of one another), but this is usually moot because most people use a canned routine rather than coding it themselves.

Unfortunately, as you point out, no one has found any other useful practical implications of the self-inverse property, as far as I know.

Cordially,
Steven G. Johnson
software: http://www.jjj.de/fxt/
text: http://www.jjj.de/fxt/#fxtbook

Mark E. Berres wrote:
> Dear Readers,
>
> I'm trying (unsuccessfully) to find some C code that can perform a
> 2D-DHT on real-valued data obtained from a 2D image. Does anyone have
> workable code they are willing to share? I've been plugging away with
> the implementation of fftw_dht in fftw, but I'm finding it more than
> challenging to sort through their code.
>
> Sincerely,
>
> Mark
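[For what it's worth, a minimal sketch of a 2D transform with FFTW's r2r interface, which is what Mark was attempting. One caveat worth hedging: FFTW's multidimensional FFTW_DHT computes the separable product of 1D DHTs along each dimension, which is not identical to the single cas-kernel 2D DHT, though the two are related by inexpensive post-processing. The function name is illustrative.]

#include <fftw3.h>

/* In-place separable 2D DHT of a rows-by-cols real image stored in
 * row-major order. Applying the same plan a second time recovers the
 * input scaled by rows*cols (the self-inverse property discussed
 * above). */
void dht2d(double *img, int rows, int cols)
{
    fftw_plan p = fftw_plan_r2r_2d(rows, cols, img, img,
                                   FFTW_DHT, FFTW_DHT,
                                   FFTW_ESTIMATE);
    fftw_execute(p);
    fftw_destroy_plan(p);
}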