BLAS Comparison on FPGA, CPU and GPU

Srinidhi Kestur, John D. Davis
Still Relevant · Advanced

High Performance Computing (HPC) and scientific codes are executed across a wide variety of computing platforms, from embedded processors to massively parallel GPUs. We present a comparison of the Basic Linear Algebra Subroutines (BLAS) using double-precision floating point on an FPGA, CPU and GPU. On the CPU and GPU, we utilize standard libraries on state-of-the-art devices. On the FPGA, we have developed parameterized, modular implementations of the dot product and Gaxpy (matrix-vector multiplication) kernels. To obtain optimal performance for any aspect ratio of the matrices, we have designed a high-throughput accumulator that performs an efficient reduction of floating-point values. To support scalability to large datasets, we target the BEE3 FPGA platform. We use performance and energy efficiency as metrics to compare the different platforms. Results show that FPGAs offer comparable performance and 2.7 to 293 times better energy efficiency for the test cases that we implemented on all three platforms.
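For readers unfamiliar with the two kernels being compared, the sketch below shows what each one computes in plain C. It is a functional reference only (the function names ddot_ref and dgaxpy_ref are illustrative), not the parameterized FPGA implementation described in the paper.

    #include <stddef.h>

    /* Reference double-precision dot product: returns x . y
     * (the BLAS Level 1 DDOT operation). */
    double ddot_ref(size_t n, const double *x, const double *y)
    {
        double acc = 0.0;
        for (size_t i = 0; i < n; ++i)
            acc += x[i] * y[i];
        return acc;
    }

    /* Reference Gaxpy, y = A*x + y, with A an m-by-n matrix stored
     * row-major (the matrix-vector multiply behind BLAS Level 2 DGEMV). */
    void dgaxpy_ref(size_t m, size_t n, const double *A,
                    const double *x, double *y)
    {
        for (size_t i = 0; i < m; ++i) {
            double acc = 0.0;
            for (size_t j = 0; j < n; ++j)
                acc += A[i * n + j] * x[j];
            y[i] += acc;
        }
    }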


Summary

This paper evaluates double-precision BLAS performance across FPGA, CPU and GPU platforms and presents parameterized FPGA implementations for dot product and Gaxpy (matrix-vector) kernels. Readers will learn the FPGA design techniques used to achieve high-throughput floating-point reduction and how those implementations compare in throughput, efficiency and scalability to standard CPU and GPU BLAS libraries.

Key Takeaways

  • Compare double-precision BLAS throughput and efficiency across FPGA, CPU (host BLAS) and GPU (cuBLAS) platforms.
  • Explain how to implement parameterized, modular dot-product and Gaxpy kernels on an FPGA targeting large matrices.
  • Describe a high-throughput floating-point accumulator design for efficient reduction across arbitrary matrix aspect ratios (a software sketch of the underlying reduction idea follows this list).
  • Apply scalability strategies for large datasets on reconfigurable platforms (BEE3) and evaluate performance vs. resource trade-offs.
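A brief note on the accumulator takeaway: summing a stream of floating-point partial products is awkward in hardware because a pipelined adder typically cannot feed its own result back every cycle, so an efficient reduction circuit is needed. The paper's FPGA accumulator design is not reproduced here; the sketch below only illustrates the general idea of a pairwise (tree-style) reduction in plain C, which shortens the serial dependency chain compared with a naive running sum.

    #include <stddef.h>

    /* Pairwise (tree-style) summation of n >= 1 doubles.  A hardware
     * reduction circuit exploits the same structure: values are
     * combined in a tree rather than one long serial chain, so a
     * deeply pipelined floating-point adder can keep accepting new
     * inputs every cycle.  Illustrative software only. */
    static double pairwise_sum(const double *v, size_t n)
    {
        if (n == 1)
            return v[0];
        size_t half = n / 2;
        return pairwise_sum(v, half) + pairwise_sum(v + half, n - half);
    }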

Who Should Read This

Hardware and software engineers or researchers experienced with HPC, FPGA/GPU acceleration, or linear-algebra kernels who must select or optimize platforms for high-performance BLAS workloads.


Topics

Real-Time DSP, Machine Learning
