TileProcessorProperties

Type: object

Comprises all generic properties a TileProcessor can have, which are mostly relevant for describing its inputs/outputs.

We differentiate between two application modes for a TileProcessor: page-wise (patch/tile-based) processing and global processing. In particular, information about the inputs and outputs needs to be provided so that the processor application module (e.g. ProcessTiles) knows how to create the individual
input tiles/patches/batches from the input image and put the resulting output tiles back together into a
comprehensive output image.

For details on the properties, see member inputs (InputProperties) and outputs (OutputProperties) documentation.

No Additional Properties
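
To give an impression of the overall structure, a minimal sketch of such a properties object could look as follows (the names "input0" and "output0" are placeholder keys chosen for this example; the individual properties are explained below):

```json
{
  "inputs": {
    "input0": {
      "dataType": "float32",
      "padding": [0, 0, 0, 0, 0, 0]
    }
  },
  "outputs": {
    "output0": {
      "referenceInput": "input0",
      "dataType": "float32",
      "tileSize": [128, 128, 1, 1, 1, 1]
    }
  }
}
```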

Type: integer Default: 2

Version number for the properties format.

Type: object

Dictionary of InputProperties, where the keys are the input names.

Each additional property must conform to the following schema

Type: object

InputProperties describe a tile processor's input. These properties define how input tiles are to be prepared.

All properties are optional, i.e. they can also have a null/None value, in which case default handling applies.

No Additional Properties

Type: enum (of string)

Data type for this input's image values.

  • Required for both page-wise and global processing.

If null/None or omitted, "float32" will be assumed.

Must be one of:

  • "uint8"
  • "uint16"
  • "uint32"
  • "uint64"
  • "int8"
  • "int16"
  • "int32"
  • "int64"
  • "float32"
  • "float64"

Type: array of integer

For page-wise processing, it defines how much to add to a corresponding output's tileSize to derive the input tile size (when applied symmetrically).

  • If this results in requesting regions outside the input image, the field fillMode becomes relevant.
  • If fewer than 6 components are specified, zeroes are appended up to full size.

For global processing, the parameter is currently not used.
If null/None or omitted, [0, 0, 0, 0, 0, 0] (no padding) will be assumed.

Must contain a minimum of 6 items

Must contain a maximum of 6 items

Each item of this array must be:

Type: integer
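
As an illustrative sketch (numbers chosen freely): with the padding below, an associated output tileSize of [128, 128, 1, 1, 1, 1] and a stride of 1, the requested input tile would be 160 x 160 (128 + 2*16 in X and Y):

```json
{
  "padding": [16, 16, 0, 0, 0, 0]
}
```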

Type: enum (of string)

Defines how to fill any undefined input areas that may be required due to padding or because the input image size is not an integer multiple of the input tile size.

For global processing, the parameter is currently not used, as the input tile always has the exact size of the input image and there are no undefined regions.
If null/None or omitted, "Reflect" is assumed.

Must be one of:

  • "FillValue"
  • "Reflect"

Type: number

Value with which to fill undefined (but required) input areas if fillMode is "FillValue".

For global processing, the parameter is currently not used, as the input tile always has the exact size of the input image and there are no undefined regions.
If null/None or omitted, 0 is assumed.
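
A sketch of an input that requests constant-value filling instead of reflection; note that the key name "fillValue" for this fill value is an assumption derived from the fillMode wording above, not spelled out in this extract:

```json
{
  "fillMode": "FillValue",
  "fillValue": 0.0
}
```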


Input dimension order as expected by the processor for page-wise processing (currently not used for global processing).

  • May contain actual dimensions such as X or Y but also semantic placeholders such as CHANNEL1, CHANNEL2 or BATCH to indicate which dimension order the processor (model) expects its input, and, together with externalDimensionForChannel1/2/Batch, how to rearrange ML/MeVisLab input image dimensions to get the expected result.
  • This also defines the inverse order in which corresponding output images should be reformatted before providing them to MeVisLab.
  • Currently, the dimensions are only relevant for page/patch-wise processing, e.g. with ProcessTiles or ApplyTileProcessorPageWise.
  • For convenience, you may specify the value as a comma-separated string such as "X, CHANNEL1, BATCH", or as a list of strings/enum items [ "X", "CHANNEL1", "BATCH" ], which is a bit more cumbersome but has the advantage that your JSON linter can verify the items against the schema while you type (see the example below the enumeration).
  • If fewer than 6 components are specified, UNUSED is internally appended up to full size.

If null/None or omitted, "X, Y, CHANNEL1, BATCH" is assumed.

Type: array

Each item of this array must be:

Type: enum (of string)

Enumeration for external to classifier dimension mappings. The enum values for all items starting with "IN" must not
be changed, as they correspond to ML image dimension indices.

Must be one of:

  • "CHANNEL2"
  • "CHANNEL1"
  • "BATCH"
  • "UNUSED"
  • "X"
  • "Y"
  • "Z"
  • "C"
  • "T"
  • "U"

Type: enum (of string)

External (input image) dimension to map to the CHANNEL1 entry in dimensions (page-wise processing only).

If null/None or omitted, C is assumed.

Must be one of:

  • "NONE"
  • "X"
  • "Y"
  • "Z"
  • "C"
  • "T"
  • "U"

Type: enum (of string)

External (input image) dimension to map to the CHANNEL2 entry in dimensions (page-wise processing only).

If null/None or omitted, U is assumed.

Same definition as externalDimensionForChannel1

Type: enum (of string)

External (input image) dimension to map to the BATCH entry in dimensions (page-wise processing only).
In page-wise/patch-based processing, the BATCH dimension is often used to combine multiple individually and independently processable items (patches) into a larger "batch" for performance reasons. It is therefore typically the last entry in dimensions.
If null/None or omitted, a suitable dimension is guessed by the application module, i.e. the largest otherwise "unused" dimension is chosen.

Same definition as externalDimensionForChannel1
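
Putting the input-side properties together, one InputProperties entry might look like this sketch (all values are purely illustrative):

```json
{
  "dataType": "float32",
  "padding": [16, 16, 0, 0, 0, 0],
  "fillMode": "Reflect",
  "dimensions": ["X", "Y", "CHANNEL1", "BATCH"],
  "externalDimensionForChannel1": "C"
}
```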

Type: object

Dictionary of OutputProperties, where the keys are the output names.

Each additional property must conform to the following schema

Type: object

OutputProperties describe a tile processor's output. These properties define how output tiles are to be requested and interpreted.

No Additional Properties

Type: string

Name of the reference input associated with this output. For an associated input/output pair, the following is assumed:

  • For page-wise processing: input.tileSize = output.tileSize * output.stride + 2*input.padding
  • Always: The output's world matrix (position, orientation and scale in the world/patient coordinate system) is derived from this input's world matrix (possibly taking into account translation and/or scaling differences due to padding and/or stride).

If null/None or omitted: An arbitrary input is assumed as reference input (the first one, if sorting is stable).

Must be at least 1 character long
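
As a worked example of the page-wise relation above (all numbers illustrative): with output.tileSize = [128, 128, 1, 1, 1, 1], output.stride = [2, 2, 1, 1, 1, 1] and input.padding = [16, 16, 0, 0, 0, 0], the requested input tile size would be [288, 288, 1, 1, 1, 1], since 128*2 + 2*16 = 288 in X and Y.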

Type: enum (of string)

Data type for this output's image values.

  • For global processing, the value is currently not used, because it need not be known in advance.

If null/None or omitted, page-wise processing will assume "float32".

Same definition as dataType

Type: array of integer

Proposed size for output tiles to be requested.

  • For page-wise processing, see referenceInput documentation on how input tile size is derived from this and other parameters.
  • For global processing, the value is not used, because it need not be known in advance and the output tile will always have full extent.

Any 0 entries are mapped to the full extent of the reference input image (in the corresponding dimension).
If null/None or omitted, [128, 128, 1, 1, 1, 1] will be assumed (possibly corrected according to tileSizeMinimum and tileSizeOffset).

Must contain a minimum of 6 items

Must contain a maximum of 6 items

Each item of this array must be:

Type: integer

Value must be greater or equal to 0
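
A sketch of a tileSize proposal with a fixed X/Y extent and singleton remaining dimensions (a 0 entry, e.g. in X, would instead request the full reference-input extent in that dimension):

```json
{
  "tileSize": [512, 512, 1, 1, 1, 1]
}
```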

Type: array of integer

Minimum output tile size: tileSize must not be smaller (component-wise).

  • For page-wise processing, this parameter may be used (in combination with tileSizeOffset) to snap unsuitable tileSize proposals to valid values by assuming that all sizes that can be expressed as tileSizeMinimum + n*tileSizeOffset (for n=0, 1, 2, 3, ...) are valid.
  • For global processing, the value is not used.

If null/None or omitted, [1, 1, 1, 1, 1, 1] will be assumed.

Must contain a minimum of 6 items

Must contain a maximum of 6 items

Each item of this array must be:

Type: integer

Type: array of integer

(Minimum) offset between two valid `tileSize` values.

  • For page-wise processing, this parameter may be used (in combination with tileSizeMinimum) to snap unsuitable tileSize proposals to valid values by assuming that all sizes that can be expressed as tileSizeMinimum + n*tileSizeOffset (for n=0, 1, 2, 3, ...) are valid.
  • For global processing, the value is not used.

If null/None or omitted, [1, 1, 1, 1, 1, 1] will be assumed.

Must contain a minimum of 6 items

Must contain a maximum of 6 items

Each item of this array must be:

Type: integer
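
For example (illustrative numbers): with tileSizeMinimum = [64, 64, 1, 1, 1, 1] and tileSizeOffset = [32, 32, 1, 1, 1, 1], valid X/Y tile sizes are 64, 96, 128, 160, ..., so a proposed tileSize of [100, 100, 1, 1, 1, 1] would be snapped to a neighbouring valid value such as [96, 96, 1, 1, 1, 1].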

Type: array of number

"Stride" for the output tile, in relation to its referenceInput tile at in page-wise processing.

  • Component values >1 correspond to a "downsampling" operation in that dimension. E.g. a stride of 2 indicates an input tile that is twice as large as the output tile (neglecting padding).
  • Component values in ]0, 1[ correspond to an "upsampling" operation in that dimension. E.g. a stride of 0.5 indicates an input tile that is half as large as the output tile (neglecting padding).
  • See referenceInput documentation on how exactly input tile size is derived from stride and other parameters.

Component values are positive floating-point numbers.

If null/None or omitted, [1.0, 1.0, 1.0, 1.0, 1.0, 1.0] will be assumed.

Must contain a minimum of 6 items

Must contain a maximum of 6 items

Each item of this array must be:

Type: number

Value must be strictly greater than 1e-06
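
A sketch of a stride for an output that is downsampled by a factor of 2 in X and Y relative to its reference input (an input tile of 256 x 256 would then correspond to an output tile of 128 x 128, neglecting padding):

```json
{
  "stride": [2.0, 2.0, 1.0, 1.0, 1.0, 1.0]
}
```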

Type: number

Smallest possible image value (assumed). Will not be used to clamp values, but to adapt ML image properties.

  • In page-wise/demand-driven mode, this value is especially important, as it cannot be easily computed.
  • In global mode, the exact minimum value will be auto-computed if valueMinimum is not specified, so you only have to use it if for some reason you want a value that is different from the actual minimum (e.g. for classification tasks, you may want to fix the minimum to 0 (if that is your background)).
  • If unsure, always be conservative. Subsequent algorithms may depend on there not being any values outside of [valueMinimum, valueMaximum].

If null/None or omitted:

  • Page-wise processing will assume 0.
  • Global processing will compute the actual image minimum from the output tile.

Type: number

Largest possible image value (assumed). Will not be used to clamp values, but to adapt ML image properties.

  • In page-wise/demand-driven mode, this value is especially important, as it cannot be easily computed.
  • In global mode, the exact maximum value will be auto-computed if valueMaximum is not specified, so you only have to use it if for some reason you want a value that is different from the actual maximum (e.g. for classification tasks, you may want to use the number of possible classes).
  • If unsure, always be conservative. Subsequent algorithms may depend on there not being any values outside of [valueMinimum, valueMaximum].

If null/None or omitted:

  • Page-wise processing will assume 1.
  • Global processing will compute the actual image maximum from the output tile.
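
Putting the output-side properties together, one OutputProperties entry might look like this sketch for a classification-style output with labels 0..3 ("input0" is a placeholder input name; all values are illustrative):

```json
{
  "referenceInput": "input0",
  "dataType": "uint8",
  "tileSize": [128, 128, 1, 1, 1, 1],
  "stride": [1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
  "valueMinimum": 0,
  "valueMaximum": 3
}
```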