DSPRelated.com
Forums

2D discrete hartley transform

Started by Mark E. Berres January 31, 2004
Dear Readers,

I'm trying (unsuccessfully) to find some C code that can perform a 2D-DHT on
real-valued data obtained from a 2D image. Does anyone have workable code
they are willing to share? I've been plugging away with the implementation
of fftw_dht
in fftw, but I'm finding it more than challenging to sort through their
code.

Sincerely,

Mark

--
Mark E. Berres
University of Wisconsin - Madison
Genetics and Biotechnology Center
425 Henry Mall
Madison, WI. 53706

http://ravel.zoology.wisc.edu/sgaap
16 T 0303764
     4771858




Mark E. Berres wrote:
> I'm trying (unsuccessfully) to find some C code that can perform a 2D-DHT on
> real-valued data obtained from a 2D image. Does anyone have workable code
> they are willing to share? I've been plugging away with the implementation
> of fftw_dht
> in fftw, but I'm finding it more than challenging to sort through their
> code.
Note, first of all, that the true 2D DHT is not the separable product of 1D DHTs along the rows and columns (unlike the 2D DFT). FFTW's DHT would compute the latter if you ask it for a 2d transform, as described in the manual; you need an additional post-processing pass for the non-separable 2D DHT (see the FFTW manual for references). I haven't seen any other free code that computes a true 2d DHT lying around, either; it's all 1d DHTs.

Second, why do you need a 2D DHT? I'm very curious to know what it is good for that a 2D real-input DFT is not as good or better at.

Third, if by "sort through their code" you mean compile it, I'm guessing you are a Windows user, in which case see fftw.org/install/windows.html for precompiled packages; otherwise, compilation is easy. If you want to read source code for learning the algorithms, though, then FFTW is not ideal for beginners.

Cordially,
Steven G. Johnson
"Mark E. Berres"  wrote...
> Dear Readers,
>
> I'm trying (unsuccessfully) to find some C code that can perform a 2D-DHT on
> real-valued data obtained from a 2D image. Does anyone have workable code
> they are willing to share? I've been plugging away with the implementation
> of fftw_dht
> in fftw, but I'm finding it more than challenging to sort through their
> code.
This page has C code from two different authors. The first set appears to have
been written for use with BMP image files. I haven't actually run this code, so
I can't comment on its quality. Good luck.

http://www.jjj.de/fft/fftpage.html

- Brian
Brian Webb wrote:
> This page has C code from two different authors. The
> first set appears to have been written for use with BMP
> image files. I haven't actually run this code, so I can't
> comment on its quality. Good luck.
>
> http://www.jjj.de/fft/fftpage.html
Ah, good, it looks like I mis-spoke; both of these codes implement the non-separable transform (by a post-processing pass after computing the 1D DHTs of the rows and columns). (I'm still curious to learn why people find this transform useful.)
On Tue, 03 Feb 2004 22:03:47 -0500, "Steven G. Johnson"
<stevenj@alum.mit.edu> wrote:

>Brian Webb wrote:
>> This page has C code from two different authors. The
>> first set appears to have been written for use with BMP
>> image files. I haven't actually run this code, so I can't
>> comment on its quality. Good luck.
>>
>> http://www.jjj.de/fft/fftpage.html
>
>Ah, good, it looks like I mis-spoke; both of these codes implement the
>non-separable transform (by a post-processing pass after computing the
>1D DHTs of the rows and columns).
>
>(I'm still curious to learn why people find this transform useful.)
I don't know if this applies much any more, but FHTs used to be really good when computational and memory resources were limited. Many moons ago I did a lot of stuff with FHTs, primarily because if your input data is real (audio in my case at the time), you can save a lot of computation using FHTs compared to many alternatives. Easy on multiplies, etc., etc.

But nowadays all this newfangled technology makes computing power and memory pretty cheap, so I think the utility of the DHT/FHT is not quite what it used to be. Still, though, if resources are tight you can get a small advantage with them, especially if your input signal is real-valued.

Eric Jacobsen
Minister of Algorithms, Intel Corp.
My opinions may not be Intel's opinions.
http://www.ericjacobsen.org
The following article might be of interest to the readers of this thread:

O. Alshibami & S. Boussakta,
Signal Processing 82, pp. 121-126 (2002)


It describes a method of directly calculating multidimensional DHTs, which
is 40% more efficient than using multiple iterations of the 1D DHT.

~Glynne



"Eric Jacobsen" <eric.jacobsen@delete.ieee.org> wrote in message
news:4025d25f.298363953@news.west.earthlink.net...
> On Tue, 03 Feb 2004 22:03:47 -0500, "Steven G. Johnson"
> <stevenj@alum.mit.edu> wrote:
>
> >Brian Webb wrote:
> >> This page has C code from two different authors. The
> >> first set appears to have been written for use with BMP
> >> image files. I haven't actually run this code, so I can't
> >> comment on its quality. Good luck.
> >>
> >> http://www.jjj.de/fft/fftpage.html
> >
> >Ah, good, it looks like I mis-spoke; both of these codes implement the
> >non-separable transform (by a post-processing pass after computing the
> >1D DHTs of the rows and columns).
> >
> >(I'm still curious to learn why people find this transform useful.)
>
> I don't know if this applies much any more, but FHTs used to be really
> good when computational and memory resources were limited. Many moons
> ago I did a lot of stuff with FHTs, primarily because if your input
> data is real (audio in my case at the time), you can save a lot of
> computation using FHTs compared to many alternatives. Easy on
> multiplies, etc., etc.
>
> But nowadays all this newfangled technology makes computing power and
> memory pretty cheap, so I think the utility of the DHT/FHT is not
> quite what it used to be. Still, though, if resources are tight you
> can get a small advantage with them, especially if your input signal
> is real-valued.
>
> Eric Jacobsen
> Minister of Algorithms, Intel Corp.
> My opinions may not be Intel's opinions.
> http://www.ericjacobsen.org
Eric Jacobsen wrote:
> I don't know if this applies much any more, but FHTs used to be really
> good when computational and memory resources were limited. Many moons
> ago I did a lot of stuff with FHTs, primarily because if your input
> data is real (audio in my case at the time), you can save a lot of
> computation using FHTs compared to many alternatives. Easy on
> multiplies, etc., etc.
I think you've fallen victim to the persistent myth that an FHT requires half as much computation and/or memory compared to the FFT, but this has never been true. Even before the FHT was introduced, specialized variants of the FFT for real inputs were known that require ~half as much computation/memory as for complex data. In fact, it was argued by Sorensen in 1987 that the best (fewest-operation) known FFT algorithm for real data actually required fewer operations than the best known FHT algorithm. See also:

http://en.wikipedia.org/wiki/Discrete_Hartley_transform

Of course, these days the speed of such a transform depends more on quality of implementation than on flop counts per se, but since the FHT and real-data FFT have similar structures I'm not aware of any intrinsic advantage of the former from an implementation standpoint, either.

What I'm curious to know is: given that the Discrete Hartley Transform is not intrinsically any more efficient to compute than the Discrete Fourier Transform, are there applications in which the former is still somehow more appropriate?

Cordially,
Steven G. Johnson
Glynne wrote:
> The following article might be of interest to the readers of this thread:
>
> O Alshibami & S Boussakta
> Signal Processing 82, pp 121-126 (2002)
>
> It describes a method of directly calculating multidimensional DHTs, which
> is 40% more efficient than using multiple iterations of the 1D DHT.
Actually, the total asymptotic arithmetic savings (adds + mults) claimed in that paper are about 25%; the 40% figure counts only multiplies. For comparison, you can save 20% just by switching to split-radix 1D FHTs instead of the radix-2 1D FHT algorithm I think they consider. They get additional savings because they use vector-radix (radix 2x2x2); vector-radix is known to improve arithmetic complexity for multi-dimensional DFTs as well. So, none of this really impacts the DFT vs. DHT question.

Cordially,
Steven G. Johnson

PS. I'm not sure what the "best" (in terms of arithmetic counts) known multi-dimensional DHT algorithm is. There are some other papers claiming the best approach is actually to compute it with the help of DFTs (FFTs).
Steven G. Johnson wrote:
> Of course, these days the speed of such a transform depends more on
> quality of implementation than on flop counts per se, but since the FHT
> and real-data FFT have similar structures I'm not aware of any intrinsic
> advantage of the former from an implementation standpoint, either.
I should amend this to say that, in the multi-dimensional case, the DFT *does* have an intrinsic advantage over the DHT, in that the former is separable and the latter is not. Meaning: you can use separable algorithms in the DFT case (e.g. to leverage highly-optimized 1d code), but not in the DHT case (which requires post-processing passes if you use 1d transforms).
In article <4026f5cb$0$560$b45e6eb0@senator-bedfellow.mit.edu>,
"Steven G. Johnson" <stevenj@alum.mit.edu> wrote:

>Eric Jacobsen wrote:
>> I don't know if this applies much any more, but FHTs used to be really
>> good when computational and memory resources were limited. Many moons
>> ago I did a lot of stuff with FHTs, primarily because if your input
>> data is real (audio in my case at the time), you can save a lot of
>> computation using FHTs compared to many alternatives. Easy on
>> multiplies, etc., etc.
>
>I think you've fallen victim to the persistent myth that an FHT requires
>half as much computation and/or memory compared to the FFT, but this
>has never been true. Even before the FHT was introduced, specialized
>variants of the FFT for real inputs were known that require ~half as
>much computation/memory as for complex data.
This myth is probably based on comparing the 30-line FFT from the back of the book with whatever is being proposed by someone who is rather uncomfortable with the use of complex analysis. The DHT also avoids the mysteries of negative frequencies, which seem to cause as much discomfort as complex analysis. No need for "specialized variants" that are of interest only to the specialists.

This is not unlike the question of whether frequency is measured in cycles or radians. As a freshman I had a summer job where I measured frequency in cycles per second (actually it was interesting, as the range was below the usual instruments and I had to learn the first principles of the task at hand fairly quickly). Then in Advanced Calculus I learned the joys of radian measure. But in graduate school I discovered Harmonic Analysis with kernels on groups and was back to cycles.

In Advanced Calculus the important property was that sin(x) had a derivative of 1 at 0 (and had some period), but in Harmonic Analysis the important property is that it has period 1 (and some derivative at 0). Of course the magic constant in both cases is 2 pi, and it is an issue of convenience where it appears. In numerical practice, the ability to avoid the question of what value to use for 2 pi is often more than just a convenience.
>In fact, it was argued by Sorensen in 1987 that the best
>(fewest-operation) known FFT algorithm for real data actually required
>fewer operations than the best known FHT algorithm. See also:
>
> http://en.wikipedia.org/wiki/Discrete_Hartley_transform
>
>Of course, these days the speed of such a transform depends more on
>quality of implementation than on flop counts per se, but since the FHT
>and real-data FFT have similar structures I'm not aware of any intrinsic
>advantage of the former from an implementation standpoint, either.
>
>What I'm curious to know is, given the fact that the Discrete Hartley
>Transform is not intrinsically any more efficient to compute than the
>Discrete Fourier Transform, are there applications in which the former
>is still somehow more appropriate?
>
>Cordially,
>Steven G. Johnson