ApplyTileProcessor

MLModule

genre

ML_Inference_Global

author

Jan-Martin Kuhnigk

package

FMEstable/ReleaseMeVis

dll

MLApplyTileProcessor

definition

MLApplyTileProcessor.def

see also

ApplyTileProcessorPageWise, ProcessTiles, ExampleCppTileProcessor

keywords

deep, learning, machine, cnn, dnn, model, apply, deeplearning, inference, predict, classification, classify, regression, global, tensor, regres, mult, ApplyModel

Purpose

The module ApplyTileProcessor performs inference for models with an arbitrary number of inputs and outputs, but with less convenience than the single-input/single-output inference module ApplyTileProcessorPageWise.

Usage

  1. Connect a TileProcessor object (provided e.g. by ONNXTileProcessor) that declares its input and output tensors. The module then automatically sets up its input/output connectors accordingly, showing the name of the corresponding tensor at each connector.

  2. Connect correctly formatted input images. Note that the module relays your input images unmodified to the model and assumes that they already have a correct/supported dimension order, extent, and data format.

  3. Optionally restrict the output tensors to compute via Requested Outputs (leave the field empty to request all outputs).

  4. Press Update, check for errors, and adapt the input image format until they are gone (for a scripted version of these steps, see the sketch after this list).

  5. Set up output postprocessing: Unless you are doing image classification where the output tensor is a single voxel, you may need to post-process the results so that they have the expected dimension order and world matrix.
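
The workflow above can also be driven from MeVisLab Python scripting. The following is a minimal sketch, assuming a network with an ApplyTileProcessor instance named “ApplyTileProcessor” and a hypothetical output tensor named “output0” (check Available Outputs for the real names):

  # Step 3: optionally restrict the requested outputs ("output0" is a
  # hypothetical tensor name taken from the Available Outputs field).
  ctx.field("ApplyTileProcessor.inRequestedOutputs").value = "output0"

  # Step 4: trigger the update and inspect the status fields.
  ctx.field("ApplyTileProcessor.update").touch()
  if ctx.field("ApplyTileProcessor.hasValidOutput").boolValue():
      print("Inference succeeded")
  else:
      print("Inference failed:",
            ctx.field("ApplyTileProcessor.statusCode").value,
            ctx.field("ApplyTileProcessor.statusMessage").value)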

Details

In order to be maximally flexible, the module ApplyTileProcessor is also maximally “stupid”: it does not support page-based processing, it passes the module inputs 1:1 to the model as input tensors, and it provides the model’s output tensors 1:1 as module outputs. So unlike with ApplyTileProcessorPageWise, you not only have to manage any tiling/patching yourself (in case your model/hardware cannot take the entire input in one batch), you also have to pre-process your inputs and post-process your outputs “manually” to fit your particular model:

For each input image

  • Ensure the correct dimension order. Note that for almost all inference frameworks, an ML dimension order notation of ‘x, y, z, c’ corresponds to the reversed notation ‘c, z, y, x’; this is a pure difference in notation and nothing you need to correct for. Often, however, the channel and batch dimensions do need to be moved; the modules ReorderDimensions or SwapFlipDimensions can be used for that purpose (see the sketch below). Leading or trailing trivial dimensions (i.e., of extent 1) can be neglected.

  • Ensure the correct/a compatible extent. Useful modules for that purpose include SubImage or ModifyRegion.

  • Ensure the correct/a compatible data type, e.g. via ImagePropertyConvert or Scale, as the input image will be cast to the TileProcessor’s expected input data type as defined in its properties. If the properties do not specify an expected input type, the input image data type is used.

See the connected TileProcessor’s input properties for hints on what each input tensor expects.
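
To illustrate the notation point from the first bullet above with a hedged, MeVisLab-independent NumPy sketch: reversing the dimension list is only a change of notation, whereas a genuinely misplaced channel dimension requires an actual reordering (in MeVisLab e.g. via ReorderDimensions):

  import numpy as np

  # An ML image with extents x=64, y=64, z=32, c=3 corresponds, without any
  # data reordering, to a framework tensor of shape (c, z, y, x):
  ml_extent = (64, 64, 32, 3)                # ML notation: x, y, z, c
  tensor_shape = tuple(reversed(ml_extent))  # framework notation: c, z, y, x
  print(tensor_shape)                        # (3, 32, 64, 64)

  # If the model instead expects channels-last input (z, y, x, c), the channel
  # dimension really has to be moved, e.g. via a transpose on a raw array:
  data = np.zeros((3, 32, 64, 64))           # c, z, y, x
  channels_last = np.transpose(data, (1, 2, 3, 0))
  print(channels_last.shape)                 # (32, 64, 64, 3)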

For each output image

  • Verify the expected dimension order. If you needed to change the order for the inputs, you may need to change it back for the outputs to get the expected results.

  • Verify the expected world matrix: The module applies the world matrix of the input marked as referenceInput in the tile processor’s properties to each output, or an identity matrix if no referenceInput was specified. If the location of the result images in the 3D(+t) world space matters (e.g. for image segmentation tasks), you may need to adapt each output manually. This is especially necessary if your model uses non-zero padding or if up-/downsampling is involved (outputs having a stride different from 1). Modules such as SetWorldMatrix, SetWorldOrigin, MergeRegions, and ModifyRegion can help you there, and over time more modules specifically designed to simplify this task will emerge. The sketch below illustrates the stride case.
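
As an illustration of the stride case, the following sketch derives an output voxel-to-world matrix from the reference input’s matrix by scaling the voxel axis columns with a per-axis stride. This is only a sketch under the assumption that the first output voxel is centered on the first input voxel; models with padding or other centering conventions additionally require an origin shift.

  import numpy as np

  def strided_output_world_matrix(input_world_matrix, stride=(2, 2, 1)):
      # Scale the voxel axis columns of a 4x4 voxel-to-world matrix by the
      # per-axis stride; the translation column (origin) is left untouched.
      m = np.array(input_world_matrix, dtype=float)
      for axis, s in enumerate(stride):
          m[:3, axis] *= s
      return m

  # Example: identity input matrix (1 mm spacing), output downsampled by 2
  # in x and y; the result has 2 mm spacing along x and y.
  print(strided_output_world_matrix(np.eye(4), stride=(2, 2, 1)))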

Debugging/Profiling of Individual Batch Processing Requests

Set the environment variable TILEPROCESSING_ENABLE_REQUEST_DEBUGGING to a value other than “0” to enable sending a unique request ID string with each individual tile request. This simplifies tracking for debugging and profiling in the inference providers (i.e. tile processors).
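
A minimal sketch for setting the variable from Python; note that it must be set before the tile-processing code first reads it, so preferably it is set in the environment from which MeVisLab is started:

  import os

  # Any value other than "0" enables per-request IDs for debugging/profiling.
  os.environ["TILEPROCESSING_ENABLE_REQUEST_DEBUGGING"] = "1"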

Tips

If you have just one input and require just one of the outputs, use ApplyTileProcessorPageWise instead; it has integrated support for the pre- and post-processing steps described in the “Details” section.

Windows

Default Panel

../../../Projects/TileProcessing/ApplyTileProcessor/MLApplyTileProcessor/Modules/mhelp/Images/Screenshots/ApplyTileProcessor._default.png

Input Fields

TileProcessor inputs, dynamically set visible/invisible and documented with the corresponding tensor names when a model is connected.

Note that there is a hardcoded upper limit for available inputs (documented via outMaxNumModuleInputs). If it turns out there are models that need more, the limit can easily be increased in the C++ sources.

inTileProcessor

name: inTileProcessor, type: TileProcessor/TileProcessorContainer(MLBase), deprecated name: inModelConnector

Connector to the model that is used for inference. Different providers for different kinds of inference frameworks may be available, e.g. ONNXTileProcessor for inference via ONNX Runtime.

The connected object must be derived from the TileProcessor class. Search for TileProcessor in MeVisLab to find provider modules.
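
In scripting, such a provider is connected like any other Base connection. A minimal sketch, assuming module instances named ONNXTileProcessor and ApplyTileProcessor and a provider output field named outTileProcessor (check the actual field name of your provider module):

  # Connect a provider's TileProcessor output to this module's input.
  provider_out = ctx.field("ONNXTileProcessor.outTileProcessor")
  ctx.field("ApplyTileProcessor.inTileProcessor").connectFrom(provider_out)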

Output Fields

TileProcessor outputs, dynamically set visible/invisible and documented with the corresponding tensor names when a model is connected.

Note that there is a hardcoded upper limit for available outputs (documented via outMaxNumModuleOutputs). If it turns out there are models that need more, the limit can easily be increased in the C++ sources.

outParameterInfoCpp

name: outParameterInfoCpp, type: ParameterInfo(MLBase)

Some information about the module’s parameterization, including the input model’s parameter info. Intended for documentation only; it should not be relied upon for processing.

For accessing this object via scripting, see the Scripting Reference: MLParameterInfoWrapper.
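
A minimal scripting access sketch; the methods available on the returned wrapper are documented in the Scripting Reference mentioned above and are not repeated here:

  # Retrieve the ParameterInfo Base object from the output field.
  info = ctx.field("ApplyTileProcessor.outParameterInfoCpp").object()
  print(info)  # for interactive inspection/documentation only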

Parameter Fields

Field Index

[]: Trigger

outMaxNumModuleOutputs: Integer

Available Outputs: String

Requested Outputs: String

Clear: Trigger

Status Code: Enum

doNotClearOnFailedUpdate: Bool

Status Message: String

Expected Inputs: String

Update: Trigger

Has Valid Output: Bool

On Input Change Behavior: Enum

outMaxNumModuleInputs: Integer

Visible Fields

Update

name: update, type: Trigger

Initiates update of all output field values.

Clear

name: clear, type: Trigger

Clears all output field values to a clean initial state.

On Input Change Behavior

name: onInputChangeBehavior, type: Enum, default: Clear, deprecated names: shouldAutoUpdate, shouldUpdateAutomatically

Declares how the module should react if a value of an input field changes.

Values:

Title    Name     Deprecated Name
Update   Update   TRUE
Clear    Clear    FALSE

Status Code

name: statusCode, type: Enum, persistent: no

Reflects the module’s status (successful or failed computations) as one of several predefined enumeration values.

Values:

Title                      Name
Ok                         Ok
Invalid input object       Invalid input object
Invalid input parameter    Invalid input parameter
Internal error             Internal error

Status Message

name: statusMessage, type: String, persistent: no

Gives additional, detailed information about the status code as a human-readable message.

Has Valid Output

name: hasValidOutput, type: Bool, persistent: no

Indicates validity of output field values (success of computation).

[]

name: updateDone, type: Trigger, persistent: no

Notifies that an update was performed (check the status interface fields to identify success or failure).

Requested Outputs

name: inRequestedOutputs, type: String

Restricts the set of outputs requested from the model. Must contain a comma-separated string of available output names (see Available Outputs), or remain empty to request all outputs.
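
For example, with hypothetical output tensor names output0 and output1 (as they would appear in Available Outputs), the field could be set as follows:

  # Request only two of the model's outputs; names must match Available Outputs.
  ctx.field("ApplyTileProcessor.inRequestedOutputs").value = "output0,output1"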

Expected Inputs

name: availableInputs, type: String, persistent: no, deprecated name: outAvailableInputs

Lists all inputs for the connected model.

Available Outputs

name: availableOutputs, type: String, persistent: no, deprecated name: outAvailableOutputs

Lists all outputs for the connected model.

Hidden Fields

doNotClearOnFailedUpdate

name: doNotClearOnFailedUpdate, type: Bool, persistent: no

Prevents an automated clear after a failed update. This does not affect the status fields. It enables the developer to analyze the module’s state after a failure.

outMaxNumModuleInputs

name: outMaxNumModuleInputs, type: Integer, persistent: no

Documents the upper limit for available inputs. If it turns out there are models that need more, the limit can easily be increased in the module sources.

outMaxNumModuleOutputs

name: outMaxNumModuleOutputs, type: Integer, persistent: no

Documents the upper limit for available outputs. If it turns out there are models that need more, the limit can easily be increased in the module sources.