TestCenter Reference

These are the reference pages for the TestCenter. The TestCenter is an easy-to-use system for writing tests in MeVisLab. Tests are similar to macro modules, i.e., they have a definition file, a script file, and optionally a network. The script file contains a set of test functions that are executed automatically. The Python scripting language is used for specifying the test functions.

For a more detailed introduction to the TestCenter have a look at the "Getting Started" document. Example TestCases can be found in the "MeVisLab/Examples" package. You can use the TestCaseManager module to load, view, and run those example tests.

Generic vs Functional Testing

The TestCenter supports two general test types: functional and generic tests.

Functional tests are used to verify specific functionality of the scripting API, a single module, or a whole network. This type of test will be described in more detail in this document.

A generic test case is called for each module of a given set of modules. Generic testing is not aimed at specific functionality; instead, it checks general properties, such as the completeness of a module's definition, the existence of an example network, or, in the case of a macro module, correct dependencies on the packages in which the internal modules are defined. Generic tests generate a single report for each module of a large set of modules, typically all modules of a package.

Test Functions

The building blocks of a functional test case are the test functions. They give the test case structure and lead to more meaningful reports, because problems can be related to specific operations much more easily.

The following subsections describe how to create the different types of test functions and how to apply an ordering to them.

Single Test Functions

A single test function is the most basic form. The name of these test functions must start with the TEST_ prefix. The string following this prefix is used as the function's name in reports, with underscores ("_") replaced by spaces.

The following example will define a test function with the name Single Test Function. The function's body is left empty here for simplicity.

def TEST_Single_Test_Function():
    pass

Please note that test functions must not have any parameters and any return value is ignored!

Grouping Test Functions

It is possible to group a set of single test functions. This can help to organize test cases with many functions. On the TestCenter's GUI, a test group can be disabled or enabled with a single click.

A group is built from a set of existing test functions by returning the list of function objects from a method with the GROUP_ prefix. The following example will add the three single test functions TEST_TestFunction1, TEST_TestFunction2, and TEST_TestFunction3 to the group GROUP_Test_Group.

def GROUP_Test_Group():
    return (TEST_TestFunction1, TEST_TestFunction2, TEST_TestFunction3)

Iterative Test Functions

Sometimes it is required to run the same function on different entities.

For example, to verify that an algorithm works for multiple different input images, a list of image file names can be used in an iterative test.

This can be achieved using iterative test functions. Internally, a list of virtual test functions is built that maps function names to a real (non-test) function with appropriate parameters.

The definition of such an iterative test function is split into two parts. A function is needed that informs the TestCenter that a certain function should be called for different input data. This is done with a special function whose name has the ITERATIVETEST_ prefix. The base name of the virtual test functions is determined by the string following the underscore.

The ITERATIVETEST_ function must return two values: the first is either a list or a dictionary, the second is a function. The given function will be called with the parameters specified in the first object. If the first object is a list, its items are the parameters passed to the actual test function, and the virtual name is extended by the index of the list item. If it is a dictionary, the keys are appended to the virtual function name and the values are passed to the test function.

This first example generates three virtual functions named Simple_Iterative_Test_0, Simple_Iterative_Test_1, and Simple_Iterative_Test_2 which would call the function actualTestFunction with the parameters first, second, and third.

def ITERATIVETEST_Simple_Iterative_Test():
    return ["first", "second", "third"], actualTestFunction

def actualTestFunction(parameter):
    MLAB.log(parameter)

The second example generates the three virtual functions Simple_Iterative_Test_One, Simple_Iterative_Test_Two, and Simple_Iterative_Test_Three.

def ITERATIVETEST_Simple_Iterative_Test():
    return {"One": "first", "Two": "second", "Three": "third"}, actualTestFunction

def actualTestFunction(parameter):
    MLAB.log(parameter)

If you need to pass more than one parameter, the list returned by the ITERATIVETEST_ function contains lists of parameters; in the dictionary case, the values are lists.
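
For example, the following sketch generates the two virtual functions Statistics_Test_Small and Statistics_Test_Large, assuming (as described above) that each parameter list is unpacked into the arguments of the test function; the file names and expected values are purely illustrative.

def ITERATIVETEST_Statistics_Test():
    # Each dictionary value is a list of parameters for one virtual test function.
    return {"Small": ["small_image.dcm", 10],
            "Large": ["large_image.dcm", 100]}, checkImageStatistics

def checkImageStatistics(fileName, expectedMean):
    MLAB.log("Checking %s against an expected mean of %s" % (fileName, expectedMean))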

Field-Value Test Functions

The concept of field-value test cases is often needed for testing (see the section The TestSupport Package or the Field-Value Test Cases page). Therefore, a simple mechanism is available to run them:

import os
from TestSupport import Base

def FIELDVALUETEST_A_Field_Value_Test():
    return os.path.join(Base.getDataDirectory(), "test.xml"), ['test1', 'test4']

This method would call the field-value test cases test1 and test4, which must be specified in the test.xml file. If the list of test cases is not given or is empty, all available test cases are executed.

Unit Test Wrapper Functions

Unit tests implemented with Python's unittest module can be integrated into the TestCenter. The unit tests can still be executed on their own in pure Python, or along with high-level tests.

The integration follows the pattern used by iterative tests. You need to implement a wrapper function with the prefix UNITTEST_ that returns a unittest.TestSuite. All test functions inside that TestSuite, and possibly nested TestSuites, are added as functions to a group with the name of the wrapper function.

from backend import getSuite

def UNITTEST_backendUnitTests():
    return getSuite()

Creation of Virtual Test Functions

The GROUP_, ITERATIVETEST_, UNITTEST_, and FIELDVALUETEST_ methods are evaluated first to generate a list of virtual test functions. It is possible to change field values to generate the required parameters, but the changes are reverted afterwards, i.e., test functions should not rely on values set in a previous test function!

Disabling Test Functions

If a test shall be disabled, for example because it takes too long or has strong side effects, but should still be available to all developers, one may prefix the function name with DISABLED_. The TestCenter will still list such test functions, but they will not be executed by default.
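
For example, the following function (an illustrative sketch) is still listed by the TestCenter but skipped in a default run:

def DISABLED_TEST_Long_Running_Computation():
    # Listed, but not executed by default.
    pass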

Ordering of Test Functions

If the test functions must be called in a specific order, the TEST_, ITERATIVETEST_, UNITTEST_, and FIELDVALUETEST_ prefixes can be extended with a substring that defines the ordering, e.g., TEST005_Fifth_Test_Function and TEST006_Sixth_Test_Function. The ordering string will be removed and will not be part of the test function's name shown in the report.

Please note that the actual test function names (everything following the first underscore) must be unique, e.g., two functions with the names TEST001_test_function and TEST002_test_function are not allowed as the function names in reports would be equal.
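
A minimal sketch of two ordered test functions; they appear in the report as First Check and Second Check and are executed in exactly this order:

def TEST001_First_Check():
    pass

def TEST002_Second_Check():
    pass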

Testing and Status

Testing requires generating a status for each test function, e.g., one would like to assert that results have certain values and, in case of unexpected results, mark the test function as failed. The TestCenter uses MeVisLab's debug console to achieve this. All messages going to MeVisLab's debug console are collected for each test function. The type of a message determines the status, e.g., if messages of type error are logged to the console, the test will be marked as failed.
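
As a sketch, a test function could flag an unexpected result by logging an error message. The example assumes the standard MLAB logging calls (MLAB.log and MLAB.logError); the TestSupport package described below provides further helper functions.

def TEST_Computed_Value():
    expected = 42
    computed = 21 * 2  # stands in for the real computation under test
    if computed != expected:
        # An error message marks this test function as failed.
        MLAB.logError("Expected %s but got %s" % (expected, computed))
    else:
        MLAB.log("The computed value matches the expected value.")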

But the TestCenter allows for a more detailed status classification than just passed and failed. Tests can have one of the following statuses:

  • ok : there have only been messages of type info.
  • warning : at least one message was of type warning.
  • error : there have been messages of type error.
  • timeout : the TestCenter will abort the testing if a certain amount of time has passed.
  • crash : the TestCenter detects when MeVisLab crashes while executing a test.

The last two statuses are necessary to prevent the whole test run from failing in case of infinite loops or crashes.

The status of a test case is determined by the worst status the test functions report.

Setup and Teardown of a Test Case

Sometimes it is necessary to have special setup and teardown methods that initialize and clean up some sort of fixture. This can be done by defining the methods setUpTestCase and tearDownTestCase, which are called before and after the actual test functions, respectively. Messages generated in these methods are added to the succeeding or preceding test function.
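
A minimal sketch of such a fixture (the log messages are only illustrative):

def setUpTestCase():
    # Prepare the fixture, e.g., load input data or set fields to defined values.
    MLAB.log("Setting up the fixture")

def tearDownTestCase():
    # Clean up everything that setUpTestCase created.
    MLAB.log("Tearing down the fixture")

def TEST_Uses_The_Fixture():
    pass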

Please note that after a crash the TestCenter will restart MeVisLab, retry the failed function, and afterwards run the remaining test functions. Before the first of these functions is called, setUpTestCase is called again so that the environment is set to a defined state.

The Definition of a New Test

The first thing to create for a new test is a definition file like in the following example:

FunctionalTestCase SimpleTestCase {
  scriptFile = "$(LOCAL)/SimpleTestCase.py"
}

The file referenced by the scriptFile tag contains the test functions of the test case. The following additional tags are supported (a combined example is sketched after the list):

  • author : The author of the test case.
  • comment : Describes the intention of the test case.
  • timeout : Defines the maximum time the test case may take, in seconds. After this time has passed, the test case will be cancelled and marked as timed out.
  • testGroups : Defines the groups that the test belongs to. This can be used to filter out certain tests. In automatic testing, all tests in the "manual" group will be excluded.
  • dataDirectory : Defines where required data is located. This allows test data such as input images or ground-truth data to be stored at a location outside the test case directory. But keep in mind that using this feature can prevent the test from running automatically if the directory is only available locally on one machine!
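
The following sketch combines these tags in one definition file; the values are illustrative and assume the same tag = value syntax as in the example above:

FunctionalTestCase SimpleTestCase {
  scriptFile    = "$(LOCAL)/SimpleTestCase.py"
  author        = "Jane Doe"
  comment       = "Checks the basic behavior of the example network."
  timeout       = 300
  testGroups    = manual
  dataDirectory = "$(LOCAL)/data"
}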

Test cases must be located in a package's TestCases directory to be found. There is no prescribed structure inside this directory, but a structure similar to that of the package's Modules directory is useful.

The tag associatedTests can be added to a module's definition to specify the names of the test cases that test features of this module. This makes it easy to run all tests that use the module directly from the module's context menu.
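
For example, a module definition might declare an associated test like this (a sketch; the module name is illustrative, and the exact syntax for listing several test cases should be checked against the MDL reference):

MacroModule SimpleMacroModule {
  // ... other module tags ...
  associatedTests = SimpleTestCase
}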

The TestSupport Package

There is a Python package that supports developers in writing tests. It contains many functions that return important data (like the context of the current test case or the path to the report directory) or help with common tasks (like creating screenshots). Have a look at the TestSupport reference documentation to get a better idea of what it offers.

There are two additional Python modules that are not exactly part of the TestCenter but should be mentioned here, as they help developing test cases:

  • FieldValueTests: Often there is the need to set a lot of fields to certain values, apply some triggers, and afterwards verify that some fields have certain values. This is what the field-value test cases are about. There is a special module "FieldValueTestCaseEditor" to create such parameterizations for a network and save them to an XML file. For more information have a look at the FieldValueTests Python module (also see Field-Value Test Cases page).
  • ChangeSet: As changing fields also changes the environment, it is important to safely revert those field changes. This cannot be done for every field, as triggered actions cannot easily be reverted, but the initial state of the network can mostly be regenerated by setting all changed fields back to their initial values. The ChangeSet class achieves this by storing the original values of fields changed through its interface. When an object of this class is destroyed, the fields are reset.