Chapter 2. Design

Table of Contents

2.1. TestCases
2.1.1. Defining a Test Case
2.1.2. Implementing the Test Functions
2.1.3. setUp and tearDown Functions
2.1.4. Logging and Status Detection
2.1.5. Handling Changes to the Environment
2.1.6. Test Case Suites
2.2. Master-Slave Approach
2.3. TestSupport Python Package
2.4. Handling Input/Output Data
2.4.1. Handling Ground Truth Data
2.4.2. Handling Result Data
2.4.3. Result Handling

In the following chapter, the design of the TestCenter is discussed.

2.1. TestCases

The test cases are the key elements of the testing framework. To add a new one, two steps are necessary:

  • Defining the basic parameters of the test case, like test type, name and script file. With this information, the TestCenter knows about the test case.

  • Defining the actual test, for example adding the instructions that should be executed to the script file defined above.

2.1.1. Defining a Test Case

Test cases use the MeVisLab infrastructure. Similar to the database for modules, there is a database for test cases. This database is filled with entries by searching for *.def files in the TestCases subdirectories of all packages. Each *.def file may contain the definition of multiple test cases.

The two test types are defined by blocks of source like the following:

FunctionalTestCase FunctionalExample {
  scriptFile = $(LOCAL)/
}

GenericTestCase GenericExample {
  scriptFile = $(LOCAL)/
}

The given example creates a functional and a generic test case. The information provided is collected in an entry that can be accessed using the testCaseInfo method of the TestCaseDatabase.

The scriptFile key specifies which file contains the actual test functions (see Section 2.1.2, “Implementing the Test Functions”). The script files are used to load the actual test cases, which behave like macro modules in the MeVisLab context. This has the benefit that a network named after the test case and located in the same folder (for example $(LOCAL)/GenericExample.mlab) is loaded as part of the context, too.

Table 2.1. Keys for Test Case Definition

author
    Comma-separated list of author names. For functional tests, these authors are used to send email notifications for failed automatic tests.

comment
    A short description of what the test case is going to test.

timeout
    The maximum number of seconds the test case should take. If it takes longer, it is terminated. The default value is given in the TestCenter configuration file, see Chapter 5, TestCenter Configuration File.

scriptFile
    The file containing the actual test specification.

testGroups
    Comma-separated list of group names the test belongs to. This information can be used to run the test cases of only one group (for example, all test cases of the examples group). There are two special test groups: automatic and manual. All test cases that do not belong to the manual group are part of the automatic group by default. Test cases that for some reason cannot be used in automatic testing should be part of the manual group.

dataDirectory
    Path to a directory containing the ground truth data for the test case. The default value is "$(LOCAL)/data". This directory should be used as if it were read-only. See Section 2.4.1, “Handling Ground Truth Data” for further information.

showTestFunctionSortingLiterals
    If set to True/Yes/1, the sorting literals (the string between TEST and the first underscore before the actual test name) are shown in the function list in the TestCaseManager and in the resulting HTML report for easy referencing.

preferredRenderer
    Optional key that indicates which type of renderer should be used for testing. Possible values are software and hardware. When the key is set to software, the test case is run using software rendering; when it is set to hardware, hardware rendering is used. If software rendering has been forced via the environment variable MLAB_FORCE_MESA, a warning about conflicting settings is logged. When the key is not set, rendering uses the system default.

All test cases must reside in a package's TestCases directory. It is recommended to use a substructure similar to the modules directory (for example TestCases/FunctionalTests/[module_type]/[module_name]).

The location of generic tests is important as the set of modules being tested by them depends on the package they are in.

  • In general, a generic test is run on all modules in that package.

  • If the generic test is located in a group's Foundation package, all modules of that group will be tested.

  • Generic tests located in the MeVis/Foundation package will be executed for every module available.

2.1.2. Implementing the Test Functions

Test cases are scripted in Python. Each test case consists of an arbitrary number of test functions. These functions all have their own status. The worst status of a test case's functions will determine the overall status, see Section 2.1.4, “Logging and Status Detection”.

Similar to the Google Test framework, test functions are identified by a special prefix such as TEST.

There are three major types of test functions: simple, iterative, and field-value test functions. In addition, multiple simple test functions can be combined into a group. The following sections describe the available test function prefixes and types.

Simple Functions

TEST: The simple test function consists of a single function that is run once.

Advanced Functions

For the more advanced test function types, the concept of virtual test functions was introduced. A simple test function maps to a single virtual test function, just as each iteration of an iterative test function maps to one. The virtual functions are named after their non-virtual parent function. For simple test functions, both names match. For field-value tests, the name of the field-value test case executed in the test function is appended. The iterative tests are more complicated: if a list is returned, the index of each parameter is appended to the function name; if a dictionary is returned, the keys are appended to the name and the corresponding values are used as parameters.

The following advanced test types are available:

  • ITERATIVETEST: Iterative test functions run a function for every specified input. They return a tuple consisting of the inputs to iterate over and the function object to call. The following example would call the function f three times, with the parameter being 1 in the first call, 2 in the second, and 3 in the third.

    def ITERATIVETEST_ExampleTestFunction ():
      return [1, 2, 3], f

    def f (param):
      print(param)

    The iterative test functions are useful if the same function should be applied to different input data. These could be input values, names of input images, etc.

  • FIELDVALUETEST: The field-value test functions integrate the field-value tests concept into the TestCenter. They return the path to the XML file containing the field-value test specification and, optionally, a list of contained field-value test cases to execute. The following example would execute all field-value test cases defined in the fieldvaluetestcase.xml file located in $(LOCAL)/data.

    def FIELDVALUETEST_ExampleTestFunction ():
      return "$(LOCAL)/data/fieldvaluetestcase.xml"

    The field-value test case concept was introduced to be able to easily set a parameterization in the network and verify the values of different fields.

  • GROUP: This test function combines multiple simple test functions into a named group. These functions just return the function objects of the simple test functions that should be members of the group. There is no technical reason for having these test groups, but they improve the understanding of complex test cases with many test functions.

  • UNITTEST: This test function wraps existing Python tests implemented with the unittest module. It returns a TestSuite that can contain test functions and other TestSuites. For each unit test function, a test function is added to a group with the name of the wrapper function. This test function is named after the test suite and unit test function.

    from backend import getSuite
    def UNITTEST_ExampleUnitTestWrapper ():
      return getSuite()
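The backend module referenced above could be as simple as the following sketch, which builds a standard unittest suite. The test class and method names here are hypothetical; only the getSuite entry point matches the wrapper example.

```python
# backend.py (hypothetical): plain unittest code to be wrapped by a
# UNITTEST_ test function in the TestCenter.
import unittest

class ExampleTest (unittest.TestCase):
    def test_addition (self):
        self.assertEqual(1 + 1, 2)

def getSuite ():
    # Collect all test_* methods of ExampleTest into a TestSuite.
    return unittest.defaultTestLoader.loadTestsFromTestCase(ExampleTest)
```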
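As described above, an iterative test function may also return a dictionary instead of a list; the keys then become part of the virtual test function names and the values are passed as parameters. A minimal sketch with hypothetical function and input names:

```python
def ITERATIVETEST_DictExample ():
    # Keys name the virtual test functions, values are the parameters.
    return {"small": 1, "large": 100}, checkValue

def checkValue (value):
    # Hypothetical check: the parameter must be positive.
    assert value > 0
```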
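A GROUP function might look as follows. This is a sketch based on the description above; the member functions are hypothetical, and the exact return convention (list vs. tuple) is an assumption.

```python
def TEST_CheckA ():
    pass

def TEST_CheckB ():
    pass

def GROUP_BasicChecks ():
    # Return the function objects of the simple tests in this group.
    return [TEST_CheckA, TEST_CheckB]
```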

All function names follow the PREFIX[order]_[name] syntax. The [order] substring may be an arbitrary sequence of digits that defines the order of the test functions. The [name] part is later used to identify the function and should therefore be selected with care.
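For example, the optional digits after the prefix can be used to enforce an execution order (the function names here are hypothetical):

```python
# Runs first due to the sorting literal "001".
def TEST001_Initialize ():
    pass

# Runs second due to the sorting literal "002".
def TEST002_Compute ():
    pass
```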

2.1.3. setUp and tearDown Functions

Two special functions are available that can be used to maintain global state for a test case. They are called setUpTestCase and tearDownTestCase and are invoked before the first test function and after the last one, respectively. The output of setUpTestCase is appended to the first test function, and the output of tearDownTestCase to the last.


Please note that this is different from what is common in other unit-testing frameworks; there, the setup function would usually be called prior to every test function and teardown after every function.

2.1.4. Logging and Status Detection

A testing framework must somehow determine whether a test has failed or succeeded.

The MeVisLab debug console shows messages with a type. All sources of output are mapped to this console: standard output, standard error and other special methods provided inside MeVisLab for logging. Based on this information, the TestCenter determines the current test function status.

Aside from these status types, there are the TestCenter-specific types “Timeout” and “Crash”. They are used for situations where the testing itself fails. For more information on failure handling, see Section 2.2, “Master-Slave Approach”.

The test case status is determined by the worst status of one of its test functions. In addition to this, the TestCenter collects all messages printed in the debug console to display them in the reports for detailed introspection on what happened.
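The "worst status wins" rule can be sketched as follows. The concrete status names and their severity ordering are assumptions for illustration, not the TestCenter's internal representation:

```python
# Assumed severity ranking: a higher value means a worse status.
SEVERITY = {"Ok": 0, "Info": 1, "Warning": 2, "Error": 3,
            "Timeout": 4, "Crash": 5}

def overallStatus (functionStatuses):
    # The test case status is the worst status among its test functions.
    return max(functionStatuses, key=lambda status: SEVERITY[status])
```

For example, overallStatus(["Ok", "Warning", "Error"]) would yield "Error".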

The following example shows a simple test function that verifies a condition and prints an error message if the condition was not met. This results in the test function having the status “Error”.

import mevis

def TEST_Simple_Test ():
  if not condition:
    mevis.MLAB.logError("The condition wasn't met!")
  else:
    mevis.MLAB.log("Everything is fine.")

As the mechanism collects messages from sources other than the test functions themselves, it allows detecting all possible problems occurring while testing. For example, if an ML module fails and prints messages of type “ML Error”, the test function is marked as failed.

Messages coming from the TestCenter itself may look different from messages coming from modules. This way, irrelevant parts of the messages can be filtered out more easily. The distinction is made using special logging methods inside the TestCenter, which are part of the TestSupport Python package (see Section 2.3, “TestSupport Python Package”). Example:

from TestSupport import Logging

def TEST_Simple_Test ():
  if not condition:
    Logging.error("The condition wasn't met!")
  else:
    Logging.info("Everything is fine.")


Other examples of info output methods would be print("The variable has changed.") and MLAB.log("The variable has changed."), but using the Logging Python module makes it easy to identify the test itself as the message source.

2.1.5. Handling Changes to the Environment

During a test, the environment is usually changed. For example, a test function would set field values and touch triggers to start a computation based on the set values. This is a problem when the next test function is influenced by these changes.

  • Unit testing frameworks prevent this problem by reloading the environment for every single test function. In case of the TestCenter, this is not an option as MeVisLab and the test network might take tremendous effort to load.

  • However, test functions also cannot rely on the state left behind by previous test functions, as there is a mechanism that detects crashes and retries the affected test function. This second attempt would encounter a different state than expected, because the previous functions are not run again.

  • Manually resetting all values a computation relies on would be possible but also a daunting task.

In the TestCenter, the problem is solved by the TestSupport.Fields Python module. It provides methods to set a field value in a way that makes resetting the original values possible.

The foundation is the ChangeSet concept (see the TestCenter Reference). It relies on the idea that a subset of input field values is set to parameterize the following computation. Resetting those input values to their original values should therefore be sufficient to (re)set the environment to an expected state. Note that this is only a heuristic!


The ChangeSets only work for field value changes made using the corresponding methods. They have no effect on indirect changes, for example changes caused by a computation started by touching trigger buttons.

The TestCenter itself maintains a stack of such ChangeSets to split the TestCase execution into logical blocks. A first ChangeSet is pushed while loading a TestCase and popped while destructing it. When running a test function, a second ChangeSet is pushed before and popped afterwards.


You can also use the stack to push your own ChangeSets (see the pushChangeSet and popChangeSet methods of the TestSupport.Base Python module, described in Section 2.3, “TestSupport Python Package”). Make sure to clean up the stack afterwards; otherwise, error messages will be printed.

The output values of one computation could also serve as input values of the next. However, this requires special handling, such as a manual reset, to work around the ChangeSet concept.
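The idea can be illustrated with a minimal pure-Python stand-in. This is not the TestSupport implementation, just the concept of recording original values on first change and restoring them when the set is popped:

```python
class ChangeSet:
    """Conceptual stand-in: records the original value of each field on
    first change and restores all recorded values on pop()."""

    def __init__ (self, fields):
        self._fields = fields     # dict acting as the "network" state
        self._originals = {}      # field name -> original value

    def setValue (self, name, value):
        # Remember the original value only on the first change.
        self._originals.setdefault(name, self._fields[name])
        self._fields[name] = value

    def pop (self):
        # Restore every recorded original value.
        self._fields.update(self._originals)
        self._originals.clear()
```

For example, setting a field to a new value and then popping the ChangeSet restores the original value, which mirrors how the TestCenter resets the environment between test functions.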

2.1.6. Test Case Suites

Test cases can be grouped into test case suites. This allows running several related tests via the TestCaseManager. Test case suites can be declared in any *.def file below the TestCases subdirectory of any package. Each *.def file may contain the definition of multiple test case suites and test cases.

The following code shows how to define a TestCaseSuite:

TestCaseSuite ExampleTestCaseSuite {
  testCases = "ExampleTest1, ExampleTest2, ..."
}

The available test case suites are listed in the TestCaseManager and can be selected and executed. The result report will show an overview of all executed tests and links to the individual tests.