Thinking Beyond Unit Testing


Unit testing is a highly effective verification and validation technique in software engineering, and a practical way to improve code quality. In this article, we discuss unit testing with the UnitTest++ framework, explore how to measure code coverage with lcov, and then move on to valgrind to check for memory leaks.

Prerequisites

You need to install UnitTest++, lcov and valgrind. Compiling and installing these tools mainly requires GCC, g++ and Perl on your system. I have successfully installed them under Fedora 10 and RHEL 5; in Ubuntu 9.10, I had to install g++ manually. If you get into dependency trouble during installation, I recommend using your distribution's package manager to install these packages and their dependencies. The commands for those would be: yum install <packagename> (Red Hat/Fedora); apt-get install <packagename> (Debian/Ubuntu); zypper install <packagename> (openSUSE); and urpmi <packagename> (Mandriva).

Let’s assume that you are going to install the tools in your $HOME/tools directory, and that your source and test code is in the $HOME/src and $HOME/test directories, respectively. If this is not the case, use the paths that are specific to your system, while setting up the environment variables below.

Note: Your login account should have sudo privileges to execute make install, as in the command snippets below. Alternatively, you can run those commands as root.

1. Export the paths to your folders as environment variables:

bash> export TOOLSROOT=$HOME/tools

bash> export SRCROOT=$HOME/src

bash> export TESTROOT=$HOME/test

2. Download (see the Links section at the end for source URLs) and extract the tools:

bash> cp lcov-1.8.tar.gz unittest-cpp-1.4.zip valgrind-3.5.0.tar.bz2 $TOOLSROOT

bash> cd $TOOLSROOT

bash> tar -xvzf lcov-1.8.tar.gz

bash> unzip unittest-cpp-1.4.zip

bash> tar -xvjf valgrind-3.5.0.tar.bz2

3. Configure and build UnitTest++:

bash> cd $TOOLSROOT/UnitTest++/

bash> make

4. Configure, build and install lcov:

bash> cd $TOOLSROOT/lcov-1.8

bash> make

bash> sudo make install

5. Configure, build and install valgrind:

bash> cd $TOOLSROOT/valgrind-3.5.0

bash> ./configure

bash> make

bash> sudo make install

Getting started with unit testing

Here’s a snippet of source code that compares two integers. Leave the commented lines as they are; we will uncomment them later in the article.

bash> cat $SRCROOT/test.c

 1  #include <stdio.h>
 2  #include <stdlib.h>
 3  int compare_function(int a, int b)
 4  {
 5      int result = 0;
 6      int *p;
 7      if ( a > b ) {
 8          result = 1;
 9      } else if ( a < b ) {
10          result = -1;
11      }
12
13      // p = malloc(sizeof(int) * 10);
14      // free(p);
15      return result;
16  }

bash> cat $SRCROOT/test.h

int compare_function(int a, int b);

Unit testing generally includes a data generation part that feeds test data to the code under test, and a set of logically related test cases grouped into one or more test suites. The following test program is explained after the listing.

bash> cat $TESTROOT/testUT.cpp

#include <UnitTest++.h>
#include <TestReporterStdout.h>

#ifdef __cplusplus
extern "C" {
#endif
#include <stdio.h>
#include "test.h"
extern int compare_function(int, int);
#ifdef __cplusplus
}
#endif

class dataFixture {
public:
    dataFixture() {}
    ~dataFixture() {}
    int getGreaterElt(int a) { return (a + 1); }
    int getLesserElt(int a)  { return (a - 1); }
};

SUITE(TestUtSuite)
{
    TEST(TestUTCompareGreater)
    {
        int result;
        result = compare_function(2, 3);
        CHECK(result == -1);
    }

    TEST_FIXTURE(dataFixture, TestUTCompareGreaterFixture)
    {
        int result;
        result = compare_function(2, getGreaterElt(2));
        CHECK_EQUAL(result, -1);
    }

    TEST(TestUTCompareEqual)
    {
        int result;
        result = compare_function(2, 2);
        CHECK_EQUAL(result, 0);
    }

    TEST_FIXTURE(dataFixture, TestUTCompareLesser)
    {
        int result;
        result = compare_function(2, getLesserElt(2));
        CHECK_EQUAL(result, 1);
    }
}

int main()
{
    return UnitTest::RunAllTests();
}

In the code above, the dataFixture class generates a number that is one higher or lower than the passed number (this is the data generation part). The SUITE(<suitename>) macro embeds the set of test cases into a single suite. The TEST_FIXTURE(<datafixture>, <testname>) macro uses the data fixture class to obtain the data to be used in the test. The TEST(<testname>) macro is used for simple tests. CHECK or CHECK_EQUAL macros are used for comparing the results.

UnitTest++ also provides macros for tolerance-based comparisons, array checks, timed constraint tests and exception checking: UNITTEST_TIME_CONSTRAINT, UNITTEST_TIME_CONSTRAINT_EXEMPT, CHECK_CLOSE, CHECK_THROW, CHECK_ARRAY_EQUAL, CHECK_ARRAY_CLOSE, and so on. You can explore the UnitTest++/docs directory for more information.
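Here is a minimal sketch of how a few of them can be used (the suite name and values are invented for illustration; the macro signatures follow the UnitTest++ documentation):

#include <UnitTest++.h>
#include <stdexcept>

SUITE(TestMacroSamples)
{
    TEST(FloatingPointWithinTolerance)
    {
        // CHECK_CLOSE(expected, actual, tolerance) passes if the two
        // values differ by no more than the tolerance.
        CHECK_CLOSE(3.14, 22.0 / 7.0, 0.01);
    }

    TEST(ArraysMatchElementwise)
    {
        int expected[] = {1, 2, 3};
        int actual[]   = {1, 2, 3};
        // Compares the first 'count' elements of the two arrays.
        CHECK_ARRAY_EQUAL(expected, actual, 3);
    }

    TEST(ExceptionIsThrown)
    {
        // Passes only if the expression throws the given exception type.
        CHECK_THROW(throw std::runtime_error("boom"), std::runtime_error);
    }

    TEST(FinishesWithinTimeLimit)
    {
        // Fails the test if the enclosing scope takes longer than 50 ms.
        UNITTEST_TIME_CONSTRAINT(50);
    }
}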

Here is the Makefile that we will use to build the test_ut.bin binary, which is linked with UnitTest++ and the gcov library. The various make targets provide for testing builds as well as release builds, prior to distributing the application to users. The compilation options -fprofile-arcs and -ftest-coverage are needed for code coverage checking, which we will discuss in the next section.

bash> cat $TESTROOT/Makefile

DEFAULT : all

CC = gcc
CXX = g++
CROSS_COMPILE = arm-linux-
TCC = ${CROSS_COMPILE}${CC}
RM = rm

release.o :
	${TCC} -c ${SRCROOT}/test.c ${SRCROOT}/main.c -I ${SRCROOT}

release_cov_valgrind.o :
	${CC} -fprofile-arcs -ftest-coverage -c ${SRCROOT}/test.c ${SRCROOT}/main.c -I ${SRCROOT}

test_cov.o :
	${CC} -g -fprofile-arcs -ftest-coverage -c ${SRCROOT}/test.c -I ${SRCROOT}

test.o :
	${CC} -g -c ${SRCROOT}/test.c -I ${SRCROOT}

unittest : test.o
	${CXX} test.o ${TESTROOT}/testUT.cpp -o test_ut.bin -I ${SRCROOT} \
		-I ${TOOLSROOT}/UnitTest++/src/ -L${TOOLSROOT}/UnitTest++ -lUnitTest++

unittest_cov : test_cov.o
	${CXX} test.o ${TESTROOT}/testUT.cpp -o test_ut.bin -I ${SRCROOT} \
		-I ${TOOLSROOT}/UnitTest++/src/ -L${TOOLSROOT}/UnitTest++ -lUnitTest++ -lgcov

release : release.o
	${TCC} test.o main.o -o release.bin -I ${SRCROOT}

release_cov_valgrind : release_cov_valgrind.o
	${CC} test.o main.o -o release_cov_valgrind.bin -I ${SRCROOT} -lgcov

all : unittest_cov

clean :
	-@${RM} *.o *.bin *.html *.gcda* *.gcno* *.info* *.png *.css 2>/dev/null

Let’s compile the code and run the test:

bash> cd $TESTROOT

bash> make unittest

bash> ./test_ut.bin

Success: 4 tests passed.

Test time: 0.00 seconds.

You can play around with the check conditions to see how failures are reported.

Viewing code coverage

lcov is a front end to gcov, the GNU test coverage tool. Code coverage is used to examine which parts of the source code are executed, which branches are taken, and so on. It also gives us an execution count for each line of the source code.

To get code coverage, we added the -fprofile-arcs option in the Makefile to instrument the program flow, and thus record how many times each function call, branch or line is executed. When the test binary test_ut.bin is run, this information is saved in .gcda data files. The -ftest-coverage option we added in the Makefile generates the coverage note (.gcno) files needed for coverage analysis.

The geninfo command converts the coverage data files into trace files, which are encoded ASCII text files containing information about the file location, functions, branch coverage, frequency of execution and so on. The genhtml command can then convert these to a readable HTML output (it creates a file named index.html):

bash> make unittest_cov

bash> geninfo .

bash> genhtml test.gcda.info
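Equivalently, the lcov front end (which wraps geninfo) can capture all the coverage data into a single trace file first; this is a sketch assuming lcov 1.8's --capture, --directory and --output-file options:

bash> lcov --capture --directory . --output-file coverage.info

bash> genhtml coverage.info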

A quick look at index.html shows the frequency of the source code statements’ execution. This can suggest test or source code enhancements:

 1           : #include <stdio.h>
 2           : #include <stdlib.h>
 3           : int compare_function(int a, int b)
 4         4 : {
 5         4 :     int result = 0;
 6           :     int *p;
 7         4 :     if ( a > b ) {
 8         1 :         result = 1;
 9         3 :     } else if ( a < b ) {
10         2 :         result = -1;
11           :     }
12           :
13           :     // p = malloc(sizeof(int) * 10);
14           :     // free(p);
15         4 :     return result;
16           : }

A closer look at coverage can unearth redundant code, unexpected branches taken, functions that are not executed, potential bugs, and so on.

Let’s take a separate code coverage example to illustrate a simple case:

 1         1 : int var = 4;
 2         1 : if (var = 5) {
 3         1 :     printf(" 5 ");
 4         1 : }

In the example above, the if statement on line 2 was intended to compare the value of the variable var with the numeric value 5. Due to a typo, it instead assigns the value 5 to var. The coverage shows that the branch at line 2 is taken, which is an unexpected path; on investigating the reason, the typo becomes obvious and can be corrected.

Tip: In your code, if a function a() calls b(), and b() in turn calls c(), try to drive a() from your unit tests, providing it with the necessary data; a single test then exercises the whole call chain, which adds practical value to unit testing and coverage analysis (see the sketch below).
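As a hypothetical illustration (the functions a(), b() and c() below are invented for this sketch and are not part of test.c), one test on the outermost function covers the callees as well:

#include <UnitTest++.h>

// Hypothetical call chain: a() calls b(), which calls c().
static int c(int x) { return x * 2; }
static int b(int x) { return c(x) + 1; }
static int a(int x) { return b(x) - 3; }

TEST(TestTopLevelFunctionCoversCallees)
{
    // Driving a() from the test also executes b() and c(), so all
    // three functions appear with hit counts in the coverage report.
    CHECK_EQUAL(a(5), 8);   // a(5) = (5*2 + 1) - 3 = 8
}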

Memory leak checking

Valgrind is a powerful binary-level debugging and profiling framework for executables; its tools are used for memory leak checking, cache profiling, detection of threading errors such as potential deadlocks, and so on. Compile your code with the -g option so that your executables carry the debugging information valgrind needs for detailed reports.

Memory leaks and similar errors occur when free() or delete is used inappropriately on allocated data, when a block is freed twice (a double free), or when allocated data is never freed.
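As a rough sketch (not part of the article's test code), these are the typical patterns that valgrind's memcheck tool flags:

#include <cstdlib>

// Illustrative only: each pattern below is something memcheck reports
// when the program is run under valgrind.
void leak_patterns(void)
{
    int *leaked = (int *)malloc(sizeof(int) * 10);
    (void)leaked;           // never freed: reported as "definitely lost"

    int *block = (int *)malloc(sizeof(int));
    free(block);
    // free(block);         // freeing it again would be an "invalid free"

    int *ok = (int *)malloc(sizeof(int));
    free(ok);               // allocated and freed correctly: no report
}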

The generic valgrind invocation syntax is as follows:

bash> valgrind --tool=<toolname> <program> [program arguments]

To see valgrind in action, uncomment line 13 in test.c, which allocates memory that is never freed (leave line 14, the matching free(), commented out for now). Rebuild the test binary (run make again). Proceed to run test_ut.bin under valgrind, as shown below, to view a sample of memory leak detection; if you later uncomment line 14 as well and rebuild, the leak disappears from the report:

bash> make unittest_cov

bash> valgrind ./test_ut.bin

==14817== Memcheck, a memory error detector

==14817== Copyright (C) 2002-2009, and GNU GPL'd, by Julian Seward et al.

==14817== Using Valgrind-3.5.0 and LibVEX; rerun with -h for copyright info

==14817== Command: ./test_ut.bin

==14817==

Success: 4 tests passed.

Test time: 0.07 seconds.

==14817==

==14817== HEAP SUMMARY:

==14817== in use at exit: 160 bytes in 4 blocks

==14817== total heap usage: 5 allocs, 1 frees, 512 bytes allocated

==14817==

==14817== LEAK SUMMARY:

==14817== definitely lost: 160 bytes in 4 blocks

==14817== indirectly lost: 0 bytes in 0 blocks

==14817== possibly lost: 0 bytes in 0 blocks

==14817== still reachable: 0 bytes in 0 blocks

==14817== suppressed: 0 bytes in 0 blocks

==14817== Rerun with --leak-check=full to see details of leaked memory

==14817==

==14817== For counts of detected and suppressed errors, rerun with: -v

==14817== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 15 from 8)

The leak summary shows that there was definitely a memory leak. You can use the options --tool=memcheck --leak-check=full to obtain a more detailed output. Refer to the documentation in the valgrind-3.5.0/docs directory to get more information.

You might wonder what the use is of running valgrind on the test binary rather than on the release binary. It's handy when the source is cross-compiled to run on a different target architecture, so it isn't practical to run and test the release executable frequently on the build machine. In our test.c example, the release executable is intended for the ARM architecture:

bash> make release

bash> file release.bin

release.bin: ELF 32-bit LSB executable, ARM, version 1 (ARM), dynamically linked (uses shared libs), for GNU/Linux 2.6.4, not stripped

Note: A discussion of cross-compilation is beyond the scope of this article, but I’ve mentioned a reference at the end of the article that will provide you with more information, if you’re interested.

You can also try the code coverage and valgrind checks on the release binary, without building it into the unit testing target, by using the release_cov_valgrind make target:

bash> make release_cov_valgrind

Useful definitions

Native compilation: Building executables for the same platform as the one on which the compiler runs.

Cross-compilation: Building executables for a target platform other than the one on which the compiler runs (the build platform).

Here are a few exciting ideas before signing off:

1. You can apply unit testing in coding contests, to validate and compare the submitted solutions. Evaluation is made easier by automating the testing of the submitted code (see the sketch after this list).

2. It could also be helpful in examinations, for teachers assessing students' program submissions.

3. A sensible mix of unit testing, code coverage, and memory leak and error checking is a valuable validation, verification and code quality measure, especially in corporate projects.
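For example, because main() returns UnitTest::RunAllTests(), the process exit status is the number of failed tests, so a grading script can rely on it. A minimal sketch (the binary name submission_test.bin is made up for illustration):

bash> ./submission_test.bin && echo "submission accepted" || echo "submission rejected"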

Finally, I would like to acknowledge all the people who contributed to the UnitTest++, gcov, lcov, and valgrind open source projects, and thank them for their efforts.
