# Unit Testing for Embedded Algorithms

December 21, 2009

Happy Holidays! For my first article, I am writing about my favorite technique to use when designing and developing software: unit testing. Unit testing is a best practice in software design. It allows the designer to verify the behavior of individual software units before the entire system is complete, and it facilitates the change and growth of the software system because the developer can verify that changes do not affect the behavior of other parts of the system. I have used this technique over the years as a Software Engineer to rapidly write high-quality software for the medical devices industry.

After I transitioned to the role of an Algorithms Engineer, I wanted to continue the practice of unit testing for algorithm development. Embedded algorithms present a unique challenge because the design and development platforms are usually vastly different from the target platform. Ideally, I would like to develop the algorithm off-line, using data sampled from either the intended host platform or a platform nearly identical to it, and then port the algorithm to the embedded environment. I would then like a method to verify that the ported algorithm performs identically in the development environment and the embedded environment. Unit testing, when done properly, provides the tools to complete this type of verification.

One of the challenges of unit testing an embedded algorithm is handling target-specific functions. For example, the Analog Devices Blackfin processor supports 40-bit accumulators with saturation. I want to use these 40-bit accumulators in my algorithm, and I want to develop and verify the performance of the algorithm off-line. What do I do? I implement those target-specific functions in the development environment and unit test them to verify that they behave as expected there. Then I can write unit tests against the algorithm and verify that it behaves as expected in both the development environment and the target environment. This type of unit testing becomes a three-step process:

1. Design the algorithm in the workstation environment, with a full suite of unit tests.
2. Decide how to optimize the algorithm for the target, using the target-specific functions.
3. Write the unit tests for the target-specific functions, implement the target-specific functions in the workstation environment, and use them to optimize your algorithm.

# Example Problem

As a simple example, we will create an algorithm that calculates the average of eight consecutive 16-bit fractional values. The algorithm must calculate these averages for multiple channels at the same time. We will use the following three Blackfin-specific functions to calculate our averages:

• fract2x16 compose_fr2x16(fract16 f1, fract16 f2)
• fract2x16 add_fr2x16(fract2x16 f1, fract2x16 f2)
• fract2x16 shr_fr2x16(fract2x16 f1, short shft)

The first function composes a fract2x16 type from two fract16 types. The second function adds two fract2x16 values in parallel, which is perfect for summing multiple channels at the same time. The third function arithmetically shifts both packed fract16 values to the right; i.e., it divides each 16-bit value by a power of two. The basic algorithm is fairly simple, but we want to verify it in the development environment.
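Before reaching for the packed intrinsics, the basic algorithm can be sketched in portable C. This is a minimal sketch, not the article's actual Example 1 listing: the real calcAverage() signature is not shown here, so a single channel per call is assumed.

```c
#include <assert.h>

typedef short fract16;          /* 16-bit Q15 fractional value */

#define NUM_SAMPLES 8

/* Hypothetical portable sketch of the averaging algorithm: sum eight
   samples in a 32-bit accumulator (8 * 16-bit values cannot overflow
   32 bits), then divide by 8 with an arithmetic shift. */
fract16 calcAverage(const fract16 *samples)
{
    int sum = 0;
    int i;
    for (i = 0; i < NUM_SAMPLES; i++) {
        sum += samples[i];
    }
    return (fract16)(sum >> 3); /* divide by 8 = 2^3 */
}
```

For incrementing samples 0 through 7, the sum is 28 and the shifted result is 3.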

The remainder of this article discusses the development of this example algorithm with a set of functioning unit tests. I have uploaded the source code for this example problem to a project on SourceForge.net. Please take a minute to download the source code. It is divided into three examples (unit testing, porting, and optimizing), each referenced in the remainder of this article.

# Unit Testing and Test Driven Development

## Unit Testing

First, we will create a unit test for the algorithm. Example 1 contains the listings for the calcAverage() function and its unit tests. In this case, I wrote three basic unit tests:

• The first test verifies the functionality of the algorithm when all of the channels have an input of 0.
• The second test verifies the functionality of the algorithm when all of the channels have an input of 1.
• The third test verifies the functionality of the algorithm when all of the channels have an incrementing number for each sample.

Obviously, there are additional tests that could be written, but this set of tests is sufficient to verify the normal behavior of the algorithm.
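The example project uses the CuTest framework, where each test is a function taking a CuTest pointer and calling CuAssert macros. As a framework-free sketch of the three tests above, using plain assert() and a compact copy of the averaging logic so the tests compile standalone (the real implementation lives in Example 1):

```c
#include <assert.h>

typedef short fract16;
#define NUM_SAMPLES 8

/* Repeated inline for self-containment; assumes one channel per call. */
static fract16 calcAverage(const fract16 *samples)
{
    int sum = 0, i;
    for (i = 0; i < NUM_SAMPLES; i++)
        sum += samples[i];
    return (fract16)(sum >> 3);
}

/* Test 1: all channels read 0 -> average is 0 */
static void test_allZeros(void)
{
    const fract16 in[NUM_SAMPLES] = { 0 };
    assert(calcAverage(in) == 0);
}

/* Test 2: all channels read 1 -> average is 1 (sum 8 >> 3) */
static void test_allOnes(void)
{
    const fract16 in[NUM_SAMPLES] = { 1, 1, 1, 1, 1, 1, 1, 1 };
    assert(calcAverage(in) == 1);
}

/* Test 3: incrementing samples 0..7 -> (0+1+...+7)/8 = 28 >> 3 = 3 */
static void test_incrementing(void)
{
    const fract16 in[NUM_SAMPLES] = { 0, 1, 2, 3, 4, 5, 6, 7 };
    assert(calcAverage(in) == 3);
}
```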

## Test Driven Development

When developing our algorithm, it is a good idea to first write unit tests and then write the algorithm. In fact, many experts believe that the best method is the following:

• First, write a basic test. Compile and run it. It should fail.
• Next, provide the minimum amount of design to get the test to pass. Compile and run the test. It should pass.
• Next, write a second test that exposes a different part of the design. Compile and run the tests. The new test should fail.
• Next, provide the minimum amount of design to get the test to pass. Compile and run the test. It should pass.
• Continue until you are satisfied that the tests and the design are complete.
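To make the cycle concrete, here is a tiny illustration using a saturating 16-bit add, the kind of helper the porting section later stubs out. The name and behavior here are assumptions for illustration; the comments narrate which step of the cycle produced each piece.

```c
#include <assert.h>

typedef short fract16;

/* Steps 2 and 4: a minimal implementation grown test-by-test.  A first,
   unsaturated version (return a + b) passed test_add_small but failed
   test_add_saturates; adding the clamps made both tests pass. */
static fract16 add_fr1x16_stub(fract16 a, fract16 b)
{
    int sum = (int)a + (int)b;
    if (sum > 32767)  sum = 32767;   /* clamp to Q15 maximum */
    if (sum < -32768) sum = -32768;  /* clamp to Q15 minimum */
    return (fract16)sum;
}

/* Step 1: the first test, written before any implementation existed. */
static void test_add_small(void)
{
    assert(add_fr1x16_stub(1, 2) == 3);
}

/* Step 3: a second test exposing a new part of the design: saturation. */
static void test_add_saturates(void)
{
    assert(add_fr1x16_stub(32767, 1) == 32767);
    assert(add_fr1x16_stub(-32768, -1) == -32768);
}
```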

Implementing the software in this manner has a few advantages. First, the software is designed to be testable from the start; it is often difficult or impossible to add unit tests to legacy systems because the code was not written in a testable manner. Second, the software is documented by its tests: it is easy to understand how your units work when the unit tests expose their behavior.

# Porting

After implementing the basic algorithm, we are ready to start porting it. Flipping through the Blackfin C Compiler manual, we discover the three functions that we will use for the algorithm:

• fract2x16 compose_fr2x16(fract16 f1, fract16 f2)
• fract2x16 add_fr2x16(fract2x16 f1, fract2x16 f2)
• fract2x16 shr_fr2x16(fract2x16 f1, short shft)

To port the algorithm, we first need to create a set of stand-in functions that work on the workstation so we can run our tests there. Then we can run the same tests in the target environment to verify that the port is successful. Use the following procedure to create the set of target-specific functions:

1. Create a header and source file to hold the target-specific functions for the workstation environment.
2. Create a declaration for each function in the header file, and an empty definition of each function in the source file.
3. Pick a target function and create a unit test for that function. Create multiple tests if necessary.
4. Implement the function to fulfill the unit test or tests.
5. Repeat steps 3 and 4 until all of the target-specific functions are implemented.

Example 2 builds on the first example by adding the target-specific functions in the “fract_stub.h” header file and the “fract_stub.c” source file. For the add_fr2x16 and shr_fr2x16 functions, I decided to first implement single-value 16-bit versions to facilitate the creation of the packed two-value versions. Both versions include a fairly complete set of unit tests. Finally, I included the new “fract_stub.h” header in the “calcavg.h” header with preprocessing directives that select our simulated header file when compiling in the workstation environment, and the Blackfin header file when compiling with the target compiler.
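A workstation implementation of the three stub functions might look like the following sketch. The half-ordering of compose_fr2x16 and the exact saturation behavior are assumptions here; verify them against the Blackfin compiler manual before trusting the stubs.

```c
#include <assert.h>

typedef short fract16;   /* 16-bit Q15 value                      */
typedef int   fract2x16; /* two fract16 values packed in 32 bits  */

/* Assumption: f1 lands in the high half, f2 in the low half. */
fract2x16 compose_fr2x16(fract16 f1, fract16 f2)
{
    return ((fract2x16)(unsigned short)f1 << 16) | (unsigned short)f2;
}

static fract16 high_of(fract2x16 x) { return (fract16)(x >> 16); }
static fract16 low_of(fract2x16 x)  { return (fract16)(x & 0xFFFF); }

/* Adds each 16-bit half independently, saturating to the Q15 range. */
fract2x16 add_fr2x16(fract2x16 f1, fract2x16 f2)
{
    int hi = (int)high_of(f1) + (int)high_of(f2);
    int lo = (int)low_of(f1) + (int)low_of(f2);
    if (hi > 32767) hi = 32767;
    if (hi < -32768) hi = -32768;
    if (lo > 32767) lo = 32767;
    if (lo < -32768) lo = -32768;
    return compose_fr2x16((fract16)hi, (fract16)lo);
}

/* Arithmetically shifts each half right by shft bits.  Note that >> on
   negative signed values is implementation-defined in C; common
   compilers (including GCC) perform the arithmetic shift we want. */
fract2x16 shr_fr2x16(fract2x16 f1, short shft)
{
    return compose_fr2x16((fract16)(high_of(f1) >> shft),
                          (fract16)(low_of(f1) >> shft));
}
```

A unit test for add_fr2x16, for example, should cover both the plain case and the saturating case: adding the packed pair (32767, 1) to (1, 1) should yield (32767, 2).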

# Optimizing

With a set of unit tests for the algorithm, and a set of unit tests for the target-specific functions, the algorithm is ready for optimization. Example 3 demonstrates this optimization. In Example 3, the declaration of the calcAverage() function was modified to accept a pointer to an array of fract2x16 values instead of an array of fract16 values for the average values, and the definition was changed to utilize our three target-specific functions. With these two simple changes, we can build and execute the unit tests. All of the unit tests still pass without modifying the tests!
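The optimized routine itself lives in Example 3 on SourceForge; the following is only a standalone sketch of the idea, with compact copies of the stub functions inlined so it compiles on its own. Two channels are packed into each fract2x16 (high half and low half), so each add and shift processes both channels at once. The function name and signature are hypothetical.

```c
#include <assert.h>

typedef short fract16;
typedef int   fract2x16;

#define NUM_SAMPLES 8

/* Stubs repeated inline for self-containment; half-ordering and
   saturation behavior are assumptions. */
static fract2x16 compose_fr2x16(fract16 f1, fract16 f2)
{
    return ((fract2x16)(unsigned short)f1 << 16) | (unsigned short)f2;
}
static fract2x16 add_fr2x16(fract2x16 a, fract2x16 b)
{
    int hi = (int)(fract16)(a >> 16) + (int)(fract16)(b >> 16);
    int lo = (int)(fract16)a + (int)(fract16)b;
    if (hi > 32767) hi = 32767;
    if (hi < -32768) hi = -32768;
    if (lo > 32767) lo = 32767;
    if (lo < -32768) lo = -32768;
    return compose_fr2x16((fract16)hi, (fract16)lo);
}
static fract2x16 shr_fr2x16(fract2x16 f, short shft)
{
    return compose_fr2x16((fract16)((fract16)(f >> 16) >> shft),
                          (fract16)((fract16)f >> shft));
}

/* Averages NUM_SAMPLES packed sample pairs: each element of samples[]
   carries one sample for channel A (high half) and one for channel B
   (low half), and both channels are averaged in a single pass. */
static fract2x16 calcAverage2(const fract2x16 *samples)
{
    fract2x16 sum = compose_fr2x16(0, 0);
    int i;
    for (i = 0; i < NUM_SAMPLES; i++)
        sum = add_fr2x16(sum, samples[i]);   /* both channels at once */
    return shr_fr2x16(sum, 3);               /* divide both by 8      */
}
```

With channel A ramping 0 through 7 and channel B fixed at 1, the packed result is (3, 1), matching the single-channel results from the original tests.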

# Conclusion

Unit testing and test-driven development are powerful tools in the Software Engineer's toolbox. These best practices extend easily to the embedded environment, where they can have a huge impact on the quality, maintainability, and reliability of the source code. A bit of extra effort in the design phase of the software pays larger dividends in the future.

All of the examples for this article are provided in the ADR Example Code project at SourceForge. I built the projects using GNU Make, GCC, and Cygwin, and I used the Eclipse IDE for all software development. I selected the CuTest unit-testing framework because of its past performance in the embedded environment.

There is a wealth of information on unit testing on the web. Here are a few links to start down the path of unit testing.

• Unit testing defined – The Extreme Programming definition of unit testing.
• Writing Great Unit Tests – A good explanation of unit testing versus integration testing.
• The Craftsman – A series of great articles on TDD.
