The Profiling view measures the time and memory consumption of modules in a network.
Profiling is a form of dynamic program analysis (as opposed to static code analysis) and is used to identify slow functions, frequently called functions, and memory usage at runtime. Outside of MeVisLab, a number of profilers exist, for example gprof, GlowCode, Valgrind, and DevPartner/BoundsChecker.
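As a generic illustration of dynamic profiling (not specific to MeVisLab), Python's standard cProfile module records how often each function is called and how much time it consumes at runtime; the function name `slow_sum` below is an invented example:

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # Deliberately unoptimized loop so the profiler has something to measure.
    total = 0
    for i in range(n):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_sum(100_000)
profiler.disable()

# Collect the statistics into a string, sorted by cumulative time.
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(5)  # show the top 5 entries
report = stream.getvalue()
print(report)
```

The report lists call counts and per-function timings, which is the same kind of information the Profiling view presents per module, without requiring recompilation.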
The advantages of profiling inside of MeVisLab are:
Network performance can be analyzed by profiling on the C++ and Python level, with an inherent awareness of MeVisLab entities like modules, PagedImages, etc.
No code recompilation is required.
No additional programs are necessary, which can make profiling faster.
What can be profiled?
All ML modules offer profiling, as the profiling is implemented in the base class ml::Module.
WEM and CSO modules also support the profiling of time consumption. Profiling of memory consumption, however, is generally not supported for these modules, because it requires that the memory be managed by the internal memory manager of the ML. Their memory is therefore either not profiled at all, or only for the portions of the module that use ML methods.
Python functions, script definitions, and Python Qt wrappers.
Inventor Bindings
MDL commands and field notifications
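The memory-consumption side of profiling mentioned above can be illustrated outside MeVisLab with Python's standard tracemalloc module. This is a generic sketch of runtime memory tracing, not the ML memory manager:

```python
import tracemalloc

# Start tracing memory allocations made by the Python interpreter.
tracemalloc.start()

# Allocate roughly 1 MB across many small objects.
data = [bytes(1024) for _ in range(1000)]

# Query traced memory before stopping: (currently allocated, peak) in bytes.
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"current: {current} bytes, peak: {peak} bytes")
```

A module-aware profiler works the same way in principle: it attributes allocations made during a computation to the entity (here, the list comprehension; in MeVisLab, a module) that triggered them.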
Note: The Profiling view marks processes that use multi-threading and shows only their accumulated time in this view. If you need detailed profiling information about each thread, use the view described in Chapter 9, ML Parallel Processing Profiler View.
© 2024 MeVis Medical Solutions AG