DSP Algorithm Implementation: A Comprehensive Approach
As DSP engineers, we are ultimately required to design and implement specific DSP algorithms. The first step is to choose which algorithm to use, e.g. for filtering, should we use an FIR or an IIR filter? Then we can go a little deeper into the high-level implementation details, e.g. exploiting the coefficient symmetry of a linear-phase FIR filter to reduce complexity. Once the algorithm is clear, the next step is to test and simulate it in a high-level language such as MATLAB.
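To make the symmetry trick concrete, here is a minimal C++ sketch. It assumes an even-length, linear-phase filter; the function and variable names are placeholders, not part of any standard API:

```cpp
#include <vector>
#include <cstddef>

// Direct-form FIR: y[n] = sum_{k=0}^{N-1} h[k] * x[n-k].
// For a linear-phase (symmetric) filter h[k] == h[N-1-k], so the taps can be
// folded in pairs and the number of multiplications is roughly halved.
// h_half holds the first N/2 coefficients; xbuf holds the last N input
// samples with xbuf[0] the oldest and xbuf[N-1] the newest.
double fir_symmetric(const std::vector<double>& h_half,
                     const std::vector<double>& xbuf)
{
    const std::size_t N = 2 * h_half.size();   // even filter length assumed
    double y = 0.0;
    for (std::size_t k = 0; k < N / 2; ++k) {
        // One multiply covers both h[k] and its mirror image h[N-1-k].
        y += h_half[k] * (xbuf[N - 1 - k] + xbuf[k]);
    }
    return y;
}
```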
Once we are confident in the algorithm, we move to the harder phase: implementation. The difficulty lies in the choice of implementation platform. Widely used platforms, in ascending order of complexity in my opinion, are:
1) General purpose processor (GPP)/ Microcontrollers.
2) Application specific processor (ASP), such as the common DSPs.
3) Field programmable gate array (FPGA).
4) Application specific integrated circuit (ASIC).
Every platform has a different design methodology. Nevertheless, all of the above share a common initial design step: expressing the algorithm as sequential pseudocode in what is known as a nested loop program (NLP) [1], which is basically a set of for loops. This pseudocode is then mapped to an implementation language such as assembly or, more popularly, C/C++. For a GPP, this is all that is needed for implementation. For ASPs with enhanced instruction sets, further performance can be gained by using the NLP in conjunction with a dependency graph (DG) [1] to allocate resources.
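As an example of an NLP, the same FIR filter can be written as two nested for loops, which is the sequential form a GPP toolchain consumes directly. This is a minimal C++ sketch with illustrative names:

```cpp
#include <vector>
#include <cstddef>

// Nested loop program (NLP) form of a direct-form FIR filter:
// the outer loop walks over output samples, the inner loop over taps.
std::vector<double> fir_nlp(const std::vector<double>& h,
                            const std::vector<double>& x)
{
    const std::size_t N = h.size();
    std::vector<double> y(x.size(), 0.0);

    for (std::size_t n = 0; n < x.size(); ++n) {        // outer loop: samples
        for (std::size_t k = 0; k < N && k <= n; ++k) { // inner loop: the kernel
            y[n] += h[k] * x[n - k];
        }
    }
    return y;
}
```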
If further performance is required, engineers resort to an ASIC or FPGA. The design methodology there is more complicated than for the other platforms because it relies on low-level hardware description languages (HDLs) such as Verilog and VHDL. The main advantage, though, is the ability to exploit parallelism in the design, which significantly boosts performance. To expose the parallelism in the algorithm, a data flow graph (DFG) [1] is used. The DFG represents the algorithm as a network of functional units (FUs) that form the inner kernel of the NLP. Obviously, there is a gap between the NLP description and the HDL description, and this gap is usually the main challenge in the design.
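To illustrate the idea, here is a hypothetical C++ sketch of how the DFG of a 3-tap FIR inner kernel might be captured in memory: nodes are functional units, edges are data dependencies. The struct names and node labels are made up for illustration only; note that the three multipliers have no edges between them, which is exactly the parallelism an FPGA/ASIC design can exploit by instantiating them side by side.

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical in-memory form of a data flow graph for y = h0*x0 + h1*x1 + h2*x2.
struct FunctionalUnit { std::string name; std::string op; };
struct Edge { int from; int to; };   // indices into the node list

int main()
{
    std::vector<FunctionalUnit> fu = {
        {"m0", "mult"}, {"m1", "mult"}, {"m2", "mult"},  // h[k] * x[n-k]
        {"a0", "add"},  {"a1", "add"}                    // accumulation tree
    };
    std::vector<Edge> edges = { {0, 3}, {1, 3}, {3, 4}, {2, 4} };

    for (const Edge& e : edges)
        std::printf("%s -> %s\n", fu[e.from].name.c_str(), fu[e.to].name.c_str());
    return 0;
}
```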
A new design trend is now surfacing to bridge this gap, called transaction level modelling (TLM). In a nutshell, it models the high-level "what to do" instead of "how to do it". A pioneering open-source language is SystemC, a C++ class library with hardware description constructs such as concurrency. With SystemC, you can easily transform the C++ code of the NLP into C++ code that describes hardware effectively. With this new description, detailed hardware simulation can be carried out with an order-of-magnitude speed improvement compared to HDL simulation. Furthermore, it gives insight into the high-level features of the hardware implementation, and as such it constitutes a good starting point for developing the HDL code for the actual hardware implementation.
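Below is a minimal sketch of what such a SystemC description might look like for a 4-tap FIR, assuming SystemC 2.3+ and a C++11 compiler; the module name, coefficients, and bit widths are placeholders, not a prescribed implementation:

```cpp
#include <systemc.h>

// Loosely-timed SystemC model of a 4-tap FIR. The same C++ inner kernel now
// lives inside a module with ports, a clock and a concurrent process.
SC_MODULE(fir4) {
    sc_in<bool>        clk;
    sc_in<sc_int<16>>  x;     // input sample
    sc_out<sc_int<32>> y;     // filtered output

    sc_int<16> taps[4];                 // delay line
    const int  h[4] = {1, 3, 3, 1};     // placeholder coefficients

    void step() {
        // Shift the delay line and run the multiply-accumulate kernel.
        for (int k = 3; k > 0; --k) taps[k] = taps[k - 1];
        taps[0] = x.read();

        sc_int<32> acc = 0;
        for (int k = 0; k < 4; ++k) acc += h[k] * taps[k];
        y.write(acc);
    }

    SC_CTOR(fir4) {
        for (int k = 0; k < 4; ++k) taps[k] = 0;
        SC_METHOD(step);
        sensitive << clk.pos();
    }
};

int sc_main(int, char*[]) {
    sc_clock clk("clk", 10, SC_NS);
    sc_signal<sc_int<16>> x;
    sc_signal<sc_int<32>> y;

    fir4 dut("dut");
    dut.clk(clk);
    dut.x(x);
    dut.y(y);

    x.write(100);            // trivial constant stimulus
    sc_start(100, SC_NS);    // simulate 10 clock cycles
    return 0;
}
```

Note how the module's structure, with ports, a clocked process and fixed-width types, already mirrors the eventual Verilog/VHDL entity, which is what makes it a useful stepping stone toward the HDL code.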
To conclude, a suggested comprehensive design methodology for most platforms is:
NLP(C/C++) --> DFG --> TLM(SystemC) --> HDL(Verilog/VHDL)
[1] M. D. Ciletti, Advanced Digital Design with the Verilog HDL. Prentice Hall, 2011.