SDCC Regression Testing

written by Borut Ražem, based on and partially copied from the Proposed Test Suite Design document, written by Michael Hope

What is it good for?

The SDCC regression test suite is the main quality assurance (QA) mechanism used by the SDCC project. The regression test results indicate the quality of an SDCC compiler build. The regression tests are designed so that they pass when the SDCC compiler works correctly.

When to execute them?

Auto-magically executed regression tests

SDCC regression tests are auto-magically executed on a daily basis on SDCC snapshot builds. They are executed on the SDCC distributed compile farm (DCF) servers on different platforms. The results are shown in the RT column of the SDCC - Snapshot Builds web page and can be analyzed on the SDCC - Regression Tests web page.

Manual execution of regression tests

Regression tests should also be run locally, on the developer's machine, by the developer implementing new functionality or fixing a bug, before committing changes to the SDCC subversion repository.

How to execute them manually?

  • Execute the complete test suite:
cd support/regression
make
  • Execute the regression tests for one target:
cd support/regression
make test-<target>
  • Re-execute only one test case (a concrete example follows this list):
cd support/regression
touch tests/<test-case>.c
make
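
A concrete example of the last two items combined, re-running a single test case for just one target. The target name mcs51-small is used purely for illustration; the valid names are the <target> values accepted by make test-<target>:

cd support/regression
touch tests/<test-case>.c
make test-mcs51-small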

Limitations

  • Since the tests must pass compilation and linkage, it is impossible to test the proper generation of error messages.
  • And since the output to stderr is not parsed either, it is also impossible to check for warnings.

When to implement a new regression test?

  • when implementing new functionality
  • when fixing a bug

How does it work?

SDCC regression testing consists of the following steps, performed by support/regression/Makefile (a worked file-flow example follows the list):

  • m4 macro preprocessor:
 * regression test source files with the .m4 file extension from support/regression/tests/ are converted to .c files in support/regression/gen/.
  • python generate-cases.py preprocessor:
 * .c files from support/regression/tests/ and support/regression/gen/ (those generated from .m4 files) are preprocessed into .c files in support/regression/gen/<target>/<test>/.
  • sdcc compiler / assembler / linker:
 * compiles the .c files from support/regression/gen/<target>/<test>/ and produces an .ihx binary file. The exception is the host target, which uses the native gcc compiler, or a gcc cross compiler for cross compiled platforms (mingw*); in this case a native binary is generated instead of an .ihx file.
  • simulator (ucsim, gpsim, ...):
 * executes the .ihx binary from support/regression/gen/<target>/<test>/ by simulating the target MCU and generates the .out and .sim files in support/regression/gen/<target>/<test>/. The exception is the host target, where native binaries are run directly and cross compiled binaries are run with a simulator (wine for mingw*).
  • python get_ticks.py script:
 * extracts the size and ticks from the .sim files in support/regression/gen/<target>/<test>/ and appends this information to the .out files in support/regression/gen/<target>/<test>/.
  • all the .out files from support/regression/gen/<target>/<test>/ are combined into support/regression/results/<target>/<test>.out by the make process.
  • python compact-results.py script:
 * collects all the .out files from support/regression/gen/<target>/<test>/, extracts the relevant data and prints it to stdout.
  • python collate-results.py script:
 * collects all support/regression/results/<target>/<test>.out files and generates the regression test summary for the given target.
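
As a worked example, for a hypothetical test case tests/add.c built for a hypothetical target named mcs51-small, the steps above produce the following files (the path patterns are taken from the step descriptions; the concrete names are invented for illustration):

support/regression/tests/add.c                     the test case source
support/regression/gen/mcs51-small/add/*.c         permutations generated by generate-cases.py
support/regression/gen/mcs51-small/add/*.ihx       binaries produced by sdcc
support/regression/gen/mcs51-small/add/*.sim       simulator output
support/regression/gen/mcs51-small/add/*.out       per-permutation results, with size and ticks appended
support/regression/results/mcs51-small/add.out     combined results for the target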

M4 macro preprocessor

The regression test source files with the .m4 extension are first preprocessed by the m4 macro preprocessor. The m4 macros implemented in the file support/regression/m4include/rtmacros.m4 are included in the preprocessing.

The following macros are currently implemented:

  • forloop(var, from, to, stmt)
  • foreachq(x, `item_1, item_2, ..., item_n', stmt)
  • foreach(x, (item_1, item_2, ..., item_n), stmt)

An example

foreach(`int_right', (0x0080, 0x0100, 0x0200, 0x0400, 0x0800, 0x1000, 0x2000, 0x4000, 0x8000), `
int
lwr_if_`'int_right (unsigned left)
{ 
  if (left < int_right)
    return 1;
  else
    return 0;
}

int
lwr_`'int_right (unsigned left)
{
  return left < int_right;
}

void
test_lwr_`'int_right (void)
{
  ASSERT (lwr_if_`'int_right (int_right - 1));
  ASSERT (!lwr_if_`'int_right (int_right));
  ASSERT (!lwr_if_`'int_right (int_right + 1));

  ASSERT (lwr_`'int_right (int_right - 1));
  ASSERT (!lwr_`'int_right (int_right));
  ASSERT (!lwr_`'int_right (int_right + 1));
}
')dnl
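
For illustration, the first iteration of the loop above (int_right = 0x0080) expands to roughly the following C code; the remaining eight values produce analogous functions in the same file:

int
lwr_if_0x0080 (unsigned left)
{
  if (left < 0x0080)
    return 1;
  else
    return 0;
}

int
lwr_0x0080 (unsigned left)
{
  return left < 0x0080;
}

void
test_lwr_0x0080 (void)
{
  ASSERT (lwr_if_0x0080 (0x0080 - 1));
  ASSERT (!lwr_if_0x0080 (0x0080));
  ASSERT (!lwr_if_0x0080 (0x0080 + 1));

  ASSERT (lwr_0x0080 (0x0080 - 1));
  ASSERT (!lwr_0x0080 (0x0080));
  ASSERT (!lwr_0x0080 (0x0080 + 1));
}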


Python generate-cases.py preprocessor

Regression test source files are not complete C programs: they don't include the `main()` function. The `main()` function is generated by the generate-cases.py preprocessor. It includes calls to all functions in the regression test source file whose names begin with `test`, for example `testMyFunctionality()` or `test_bugXXXX(void)`. The `test` functions must return `void` and take no parameters.
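
A minimal sketch of such a source file (the function name and the tested expression are invented; the ASSERT macro is assumed to be provided by the suite's test framework header, testfwk.h):

/* Hypothetical minimal test case: generate-cases.py will emit a main()
   that calls this function, because its name starts with "test", it
   returns void and it takes no parameters. */
#include <testfwk.h>

static void
testUnsignedCompare (void)
{
  /* volatile keeps the optimizer from evaluating the check at compile time */
  volatile unsigned char x = 200;

  ASSERT (x > 100);
}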

generate-cases.py also generates permutations using the meta data in the source file. The meta data includes permutation information and permutation exceptions.

Meta data shall be global to the file. Meta data names consist of lower case alphanumerics. Test case specific meta data (fields) shall be stored in a comment block at the start of the file; this is only a matter of style.

A field definition shall consist of:

  • The field name.
  • A colon.
  • A comma-separated list of values.

The values shall be stripped of leading and trailing white space.

Permutation exceptions are by port only. Exceptions to a field are specified by a modified field definition (see the example after this list). An exception definition consists of:

  • The field name.
  • An opening square bracket.
  • A comma-separated list of the ports the exception applies to.
  • A closing square bracket.
  • A colon.
  • The values to use for this field for these ports.
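
For example, the larger test case shown further below declares a plain field and a z80-specific exception to it:

  type: char, int, long
  type[z80]: char, int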

An instance of the test case shall be generated for each permutation of the test case specific meta data fields.

The runtime meta fields are

  • port - The port this test is running on.
  • testcase - The name of this test case.
  • function - The name of the current function.

Most of the runtime fields are not very useful; they are there for completeness.

Meta fields may be accessed inside the test case by enclosing them in curly brackets. The curly brackets will be interpreted anywhere inside the test case, including inside quoted strings. Field names that are not recognized will be passed through including the brackets. Note that it is therefore impossible to use some strings within the test case.

Test case function names should include the permuted fields in the name to reduce name collisions.

An example

The following code generates a simple increment test for all combinations of the storage classes and all combinations of the data sizes. This is a bad example, as the optimizer will often remove most of this code. Note the trailing comma after `static` in the class field, which permutes into an empty string; the corresponding file name will use `none` for that permutation.

/** Test for increment.
  type: char, int, long
  Z80 port does not fully support longs (4 byte)
  type[z80]: char, int
  class: register, static, */

static void
testInc{class}{type}(void)
{
  {class} {type} i = 0;
  i = i + 1;
  ASSERT((i == 1));
}
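
For one of the permutations (class = register, type = char) the preprocessor produces roughly the following instance; the permutation with the empty class simply substitutes an empty string, and `none` appears in the generated file name instead:

static void
testIncregisterchar(void)
{
  register char i = 0;
  i = i + 1;
  ASSERT((i == 1));
}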

How to design a regression test?

When designing a regression test, the designer has to make a compromise between:

  • test case coverage
  • testing time

The regression test case should test the generated code as much as possible while keeping the testing time as short as possible.

When fixing a bug, usually the minimal code that reproduces the bug is included in the regression test suite.

When implementing new functionality, the new functionality and its critical corner cases are tested.

M4 macro preprocessor vs. python generate-cases.py preprocessor

The compiler often generates different code for handling different types (Byte, Word, DWord, and the signed forms), so the same checks frequently have to be repeated for each type. Meta information can be used to permute the test cases across the different types and other items.

The python generate-cases.py preprocessor generates one file per permutation. This means that if the regression test source file includes some tests which don't depend on the permuted items, those same tests are executed once per permutation. Each permutation source file is compiled and executed / simulated independently, which adds compilation and execution / simulation overhead.

Using the m4 `foreach` or `foreachq` macros it is possible to generate permutations in a much more granular way: for a single function or even for a group of lines. Only the code included in the `foreach[q]` loop is permuted (repeated once per permutation); all the other code is included only once. This means shorter compilation and execution / simulation times. The weak side of using m4 macros is code size: since all permutations are included in the same binary executable file, the maximum available amount of data and program memory can easily be exceeded.

Taking into account the limitations of each approach:

  • Use the python generate-cases.py preprocessor for larger pieces of code and/or a large number of permutations. If possible, don't include code which doesn't use the permuted items in the same source file, unless that code is small and doesn't consume much execution / simulation time; otherwise move such code into a separate test case.
  • Use the m4 macros when permuting smaller pieces of code which don't use a lot of data and program memory. Several permuted pieces of code using different permuted items, as well as pieces of code which don't depend on the permuted items at all, can live in the same test case source file.

It is also possible to use a mixed approach, as sketched below.
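
A hypothetical sketch of the mixed approach (all names and values are invented): the type field is permuted into separate files by generate-cases.py, while the shift amounts are expanded in place by m4, so each generated file exercises all three amounts in a single binary:

/** Hypothetical mixed test.
    type: char, int
*/
#include <testfwk.h>

foreach(`amount', (1, 2, 4), `
static void
test_shift_`'amount (void)
{
  volatile {type} x = 16;

  /* (x >> amount) must agree with the constant-folded (16 >> amount) */
  ASSERT (x >> amount == 16 >> amount);
}
')dnl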

Borut
