Chapter 17. Profiling

Table of Contents

17.1. Introduction to Profiling
17.2. Using Profiling
17.2.1. Modules
17.2.2. Fields
17.2.3. Functions

The Profiling view measures the time and memory consumption of the modules in a network.

17.1. Introduction to Profiling

Profiling is dynamic program analysis (as opposed to static code analysis) and is used to identify slow functions, frequently called functions, and memory usage at runtime. Outside of MeVisLab, a number of profilers exist, for example gprof, GlowCode, Valgrind, and DevPartner/BoundsChecker.
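To illustrate what such a dynamic analysis records, here is a minimal sketch using Python's standard cProfile module (a general-purpose profiler, not part of MeVisLab): it counts how often each function is called and how much time is spent in it while the program actually runs. The function name slow_sum is purely illustrative.

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # Deliberately naive loop so the function shows up in the statistics.
    total = 0
    for i in range(n):
        total += i
    return total

profiler = cProfile.Profile()
profiler.enable()
for _ in range(5):
    slow_sum(10_000)
profiler.disable()

# Summarize the recorded data: functions sorted by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

The printed table lists each profiled function with its call count and timings, which is the same kind of information the Profiling view presents per module and per function inside MeVisLab.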

The advantages of profiling inside of MeVisLab are:

  • Network performance can be analyzed by profiling on the C++ and Python level, with an inherent awareness of MeVisLab entities like modules, PagedImages, etc.

  • No code recompilation is required.

  • No additional programs are necessary, which may make profiling faster.

What can be profiled?

Figure 17.1. Functions to be Profiled


  • All ML modules support profiling, as it is implemented in their base class ml::Module.

  • WEM and CSO modules also support the profiling of time consumption. However, profiling of memory consumption is generally not supported, because it requires the memory to be managed by the ML's internal memory manager. The memory managed by these modules is therefore either not profiled at all, or only for those parts of a module that use ML methods.

  • Python functions, script definitions, and Python Qt wrappers.

  • Inventor Bindings

  • MDL commands and field notifications


The Profiling view marks processes that use multi-threading and shows only their accumulated time. For detailed profiling information about each thread, use the view described in Chapter 9, ML Parallel Processing Profiler View.