Spend enough memory for the ML cache!
The ML image processing benefits strongly from sufficient cache memory. Usually, 30-50% of the main memory is a good value.
Reduce field notifications!
The more notifications are sent through the network, the more changes and recalculations take place. Identify the field connections and notifications that are really necessary and limit them to that minimum.
Avoid global image processing modules or take them outside critical network branches!
Global image processing modules (unfortunately, there are some in most networks) are often extremely expensive because they pull the entire image through the module pipeline and thus negate many advantages of page-based image processing. Solutions can be:
- "Outsource" large images and expensive calculations: calculate them once and store the results on disk, then replace them by a Load module in the network. This, however, is often not possible, e.g., if module results change often.
- Try to replace those modules by other page-based solutions; maybe other modules provide similar functionality.
- Move expensive calculations to less frequently used and less frequently changing parts of the data flow. Often (not always) the image data flow and the number of changes are higher near the output or viewer modules than directly after e.g., a Load module.
- Reimplement the module and make it page-based, e.g., by using the VirtualVolume concept (see Section 2.3.7, "VirtualVolume"). Although this is sometimes difficult and a page-based approach may be slower with respect to the local processing inside the module, the page-based image flow is not interrupted. This can result in a significant performance boost since the overall data flow is reduced.
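The benefit of keeping the flow page-based can be sketched generically. The following is plain C++, not the ML `getTile()` interface; all names are illustrative. A global operation pulls the whole image into memory, whereas a page-based formulation keeps only one page resident at a time:

```cpp
#include <vector>
#include <algorithm>

// Illustrative stand-in for an upstream module: produce one page on demand.
using Page = std::vector<float>;

Page computePage(int pageIndex, int pageSize)
{
    Page p(pageSize);
    for (int i = 0; i < pageSize; ++i)
        p[i] = static_cast<float>(pageIndex * pageSize + i);
    return p;
}

// Page-based reduction: at any moment only one page is in memory,
// so the operation never forces the entire image through the pipeline.
float pageBasedMax(int numPages, int pageSize)
{
    float maxVal = -1e30f;
    for (int pg = 0; pg < numPages; ++pg) {
        Page p = computePage(pg, pageSize);                       // one page resident
        maxVal = std::max(maxVal, *std::max_element(p.begin(), p.end()));
    }                                                             // page freed here
    return maxVal;
}
```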
Avoid or reduce unnecessary changes of image properties (especially page extents, data types, image extents, etc.) in the image data flow!
Changing image properties from one module to the next usually requires expensive casting and/or copying of the image data, or even a recomposition of pages.
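Why a data type change between modules is expensive can be illustrated with a generic (non-ML) sketch: every voxel must be converted and a second buffer must be allocated for the result.

```cpp
#include <vector>
#include <cstdint>
#include <cstddef>

// Illustrative sketch: a data type change forces a full pass over the
// voxels plus the allocation of a second buffer. Keeping data types
// stable across the pipeline avoids this cost entirely.
std::vector<float> castToFloat(const std::vector<std::uint8_t>& in)
{
    std::vector<float> out(in.size());          // second buffer allocated
    for (std::size_t i = 0; i < in.size(); ++i)
        out[i] = static_cast<float>(in[i]);     // per-voxel conversion
    return out;
}
```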
Set number of permitted threads to the number of CPUs in your system!
Multithreading (parallelization) currently works optimally if the number of permitted threads in the ML matches the number of CPUs in your system.
Increase performance by reducing the memory optimization mode!
If there is enough memory, you can usually increase performance by reducing the memory optimization mode to lower numbers or even to zero. Hence, more intermediate results are kept in the cache and the number of recalculations is reduced.
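The trade-off can be sketched with a toy page cache (illustrative only, not the ML memory manager): with caching enabled, each page is computed once and reused; with caching disabled (i.e., aggressive memory optimization), repeated requests trigger recomputation and save memory instead.

```cpp
#include <map>
#include <vector>

// Toy intermediate-result cache, not the ML memory manager.
struct PageCache
{
    bool enabled = true;      // false ~ aggressive memory optimization
    int  computations = 0;    // counts expensive recalculations
    std::map<int, std::vector<float>> pages;

    std::vector<float> computePage(int index)
    {
        ++computations;       // stand-in for an expensive calculation
        return std::vector<float>(64, static_cast<float>(index));
    }

    std::vector<float> get(int index)
    {
        if (enabled) {
            auto it = pages.find(index);
            if (it != pages.end())
                return it->second;     // cache hit: no recalculation
        }
        std::vector<float> p = computePage(index);
        if (enabled)
            pages[index] = p;          // keep the intermediate result
        return p;
    }
};
```

With `enabled = true`, two requests for the same page cost one computation; with `enabled = false`, they cost two but nothing stays buffered.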
Consider the image format, compression and source when loading data from files!
Loading data can become slow when the file needs to be transferred via network connections or when the file format is compressed. Try to load files from local disks and/or store them uncompressed if you have enough disk space. Compressing files does not save memory when the image is compressed with ML modules. If the file format supports paging, store the file with a page extent adequate for image processing.
Increment the memory optimization mode to optimize memory usage!
If your network suffers from a lack of memory, increment the memory optimization mode: more pages are recalculated and fewer pages are buffered in the cache. This, however, usually reduces image processing speed.
Use release versions of the ML and MeVisLab!
When you develop your own software with the ML or with MeVisLab, you will probably work in debug mode with non-optimized code. Compiling release-mode code with optimizations enabled may drastically speed up your applications.
Disable (symbol controlled) debugging!
Working in debug mode with symbol-controlled debugging may degrade runtime performance, because diagnostic information is printed to the output. Disable symbol-controlled debugging, or use release builds, which do not contain such code.
© 2024 MeVis Medical Solutions AG