
Random GPGPU Musings

Shehrzad January 20, 2010

As I procrastinate actually sitting down and writing some CUDA kernels to walk you through some of the nuances of CUDA-based signal processing, I hope I can atone for my sins by discussing a few GPGPU items of note.  First, today I had a fruitful discussion with a well-known DSP consulting house, and a major topic of discussion was CUDA.  This conversation reinforced my view that GPGPU is the wave of the future, and is going to be of paramount importance in the coming years.  Suffice it to say there is significant interest amongst many industry sectors in leveraging this technology.

Also, I ran across an article "Scientific and Engineering Computing Using ATI Stream Technology" (subscription required, hence no link) in the Nov/Dec 2009 issue of the IEEE Computing in Science and Engineering (CiSE).  Stream is to ATI as CUDA is to Nvidia - that is, Stream is ATI's "closed" API for enabling GPGPU on their discrete GPUs.  A few salient points and my personal takeaways from this article:

  • The underlying architectures of the ATI product line and the Nvidia product line are very similar.  You essentially have groups of streaming multiprocessors that aggregate together to yield a highly (massively) parallel coprocessor.
  • Given that the architectures are so reminiscent of each other, it is of little surprise that the programming models are also very similar.  This is even true down to the actual nomenclature employed by the respective APIs.  For instance, in ATI's Stream the programmer launches "kernels" from the host C/C++ program, exactly as in CUDA.
  • I've heard through the grapevine or personally know of a fair number of companies actually using, or trying to use, CUDA in their particular application.  I know of none using or considering using ATI's Stream.  My suspicion is Nvidia has such a head-start in GPGPU that it will be difficult for Stream to gain any traction whatsoever.
  • It's no surprise that ATI is pushing OpenCL, where OpenCL is to GPGPU as OpenGL is to 3D graphics.  In other words, just like OpenGL is an open, cross-platform wrapper around hardware-accelerated 3D graphics engines, OpenCL is supposedly agnostic to whatever GPGPU "coprocessor" it happens to be targeting.
  • Because GPGPU is something of a paradigm shift and truly requires the software developer to think differently about their algorithms, abundant sample code and an active developer community are going to be important.  Nvidia has another huge advantage here: their CUDA Zone website contains numerous non-trivial sample applications (most with source code) that leverage CUDA.  For example, I know of two optical flow libraries (optical flow is a commonly employed computer vision "primitive").  While I have not spent much time researching the ATI community, my strong suspicion is that it is nowhere near as active as the CUDA community.
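To make the host/kernel split concrete, here is a minimal sketch of what launching a kernel from a host C/C++ program looks like in CUDA.  This is an illustrative toy (a vector scale), not one of the signal-processing kernels promised above; the kernel name and launch configuration are my own choices for the example.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// A trivial kernel: each thread scales one element of the array.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const int N = 1024;
    float host[N];
    for (int i = 0; i < N; ++i) host[i] = 1.0f;

    // Allocate device memory and copy the input over.
    float *dev;
    cudaMalloc(&dev, N * sizeof(float));
    cudaMemcpy(dev, host, N * sizeof(float), cudaMemcpyHostToDevice);

    // Execution configuration: a grid of blocks, 256 threads per block.
    int threads = 256;
    int blocks  = (N + threads - 1) / threads;
    scale<<<blocks, threads>>>(dev, 2.0f, N);

    // Copy the result back and clean up.
    cudaMemcpy(host, dev, N * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);

    printf("host[0] = %f\n", host[0]);
    return 0;
}
```

Stream's programming model exposes essentially the same pattern: allocate device memory, move data across the PCIe bus, launch a kernel over a domain of threads, and copy the results back.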

This particular article follows close on the heels of another GPGPU CiSE article, which appeared in the Sept/Oct 2009 issue: "Solving Computational Problems with GPU Computing".  There has been a definite uptick in the frequency of articles involving GPU techniques, at least in this journal, over the past couple of years, again reaffirming my view that as Moore's Law reaches its asymptotic denouement on the desktop, the next leap forward will be the GPU.
