Version number for the properties format.

Dictionary of InputProperties, where the keys are the input names. Each additional property must conform to the following schema.

Type: object. InputProperties describe a tile processor's input. These properties define how input tiles are to be prepared. All properties are optional, i.e. they can also have a null/None value, in which case a default handling kicks in.
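As an illustration, a minimal properties document of this kind could be sketched as a Python dict mirroring the JSON (the top-level key names `formatVersion`, `inputs`, and `outputs` are assumptions for illustration; the per-input/per-output property names are the ones documented below):

```python
# Hypothetical sketch of a tile processor properties document.
# Top-level key names are assumed; nested property names
# (padding, fillMode, tileSize, stride, ...) follow this reference.
properties = {
    "formatVersion": 1,   # assumed name for the format version number
    "inputs": {           # dictionary of InputProperties, keyed by input name
        "input0": {
            "padding": [16, 16, 0, 0, 0, 0],
            "fillMode": "Reflect",
            "dimensions": "X, Y, CHANNEL1, BATCH",
        },
    },
    "outputs": {          # dictionary of OutputProperties, keyed by output name
        "output0": {
            "referenceInput": "input0",
            "dataType": "float32",
            "tileSize": [128, 128, 1, 1, 1, 1],
            "stride": [1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
        },
    },
}
```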
For page-wise processing, padding defines how much to add to a corresponding output's tileSize to derive the input tile size (when applied symmetrically). When padding is non-zero, fillMode becomes relevant. For global processing, the parameter is currently not used.

If null/None or omitted, [0, 0, 0, 0, 0, 0] (no padding) will be assumed.

Must contain a minimum of 6 items. Must contain a maximum of 6 items.
Defines how to fill any undefined input areas that may be required due to padding or because the input image size is not an integer multiple of the input tile size. For global processing, the parameter is currently not used, as the input tile always has the exact size of the input image and there are no undefined regions.

If null/None or omitted, Reflect is assumed.
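For intuition, a Reflect fill behaves like NumPy's `reflect` padding mode (a sketch of the border handling; the processor's exact implementation may differ):

```python
import numpy as np

# A 1D "image" row of 4 pixels, padded by 2 on each side with reflection.
row = np.array([1, 2, 3, 4])
padded = np.pad(row, pad_width=2, mode="reflect")
print(padded)  # [3 2 1 2 3 4 3 2]
```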
Input dimension order as expected by the processor for page-wise processing (currently not used for global processing). Entries can be plain image dimensions such as X or Y, but also semantic placeholders such as CHANNEL1, CHANNEL2 or BATCH, to indicate which dimension order the processor (model) expects for its input, and, together with externalDimensionForChannel1/2/Batch, how to rearrange ML/MeVisLab input image dimensions to get the expected result, e.g. in an applying module such as ProcessTiles or ApplyTileProcessorPageWise.

The value can be given as a single comma-separated string such as "X, CHANNEL1, BATCH", or as a list of strings/enum items [ "X", "CHANNEL1", "BATCH" ], which is a bit more cumbersome, but has the advantage that items can be verified by your JSON linter while typing, using the schema. If fewer than the full number of entries are given, UNUSED is internally appended up to full size.

If null/None or omitted, "X, Y, CHANNEL1, BATCH" is assumed.
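The two accepted spellings and the UNUSED completion can be sketched as follows (the helper name `normalize_dimensions` is hypothetical, not part of the format):

```python
def normalize_dimensions(value, size=6):
    """Accept either a comma-separated string or a list of items, and
    append UNUSED entries up to full size (a sketch of the behavior
    described above; not the actual implementation)."""
    if isinstance(value, str):
        items = [item.strip() for item in value.split(",")]
    else:
        items = list(value)
    return items + ["UNUSED"] * (size - len(items))

print(normalize_dimensions("X, CHANNEL1, BATCH"))
# ['X', 'CHANNEL1', 'BATCH', 'UNUSED', 'UNUSED', 'UNUSED']
print(normalize_dimensions(["X", "Y", "CHANNEL1", "BATCH"]))
# ['X', 'Y', 'CHANNEL1', 'BATCH', 'UNUSED', 'UNUSED']
```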
Enumeration for external-to-classifier dimension mappings. The enum values for all items starting with "IN" must not be changed, as they correspond to ML image dimension indices.
External (input image) dimension to map to the CHANNEL1 entry in dimensions (page-wise processing only).

If null/None or omitted, C is assumed.
External (input image) dimension to map to the CHANNEL2 entry in dimensions (page-wise processing only).

If null/None or omitted, U is assumed.
External (input image) dimension to map to the BATCH entry in dimensions (page-wise processing only). In page-wise/patch-based processing, the BATCH dimension is often used to combine multiple individually and independently processable items (patches) into a larger "batch" for performance reasons. It is therefore typically the last entry in dimensions.

If null/None or omitted, a suitable dimension is guessed by the application module, i.e. the largest otherwise "unused" dimension is chosen.
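The described guessing, i.e. picking the largest otherwise "unused" external dimension, could be sketched like this (the helper name and the extent layout are assumptions for illustration):

```python
def guess_batch_dimension(image_extent, used_dimensions):
    """Pick the external dimension with the largest extent that is not
    already mapped to another `dimensions` entry (sketch only)."""
    unused = {name: ext for name, ext in image_extent.items()
              if name not in used_dimensions}
    return max(unused, key=unused.get)

# ML image extents per external dimension (example values):
extent = {"X": 512, "Y": 512, "Z": 1, "C": 3, "T": 1, "U": 40}
# X, Y and C are already mapped, so the largest remaining dimension wins:
print(guess_batch_dimension(extent, used_dimensions={"X", "Y", "C"}))  # U
```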
Dictionary of OutputProperties, where the keys are the output names. Each additional property must conform to the following schema.

Type: object. Properties a tile processor output can have. These properties define how output tiles are to be handled. No Additional Properties.

Name of the reference input associated with this output. For an associated input/output pair, the following is assumed:

input.tileSize = output.tileSize * output.stride + 2*input.padding

i.e. the input tile size is derived from the output's tileSize together with the input's padding and/or the output's stride.

If null/None or omitted: An arbitrary input is assumed as reference input (the first one, if sorting is stable).

Must be at least 1 character long.
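The relation above can be checked with a small component-wise sketch (the rounding behavior for fractional strides is an assumption; the helper name is hypothetical):

```python
def derive_input_tile_size(output_tile_size, stride, padding):
    """input.tileSize = output.tileSize * output.stride + 2*input.padding,
    applied component-wise over the six dimensions (sketch; the actual
    rounding rules for fractional strides may differ)."""
    return [int(round(t * s)) + 2 * p
            for t, s, p in zip(output_tile_size, stride, padding)]

print(derive_input_tile_size(
    output_tile_size=[128, 128, 1, 1, 1, 1],
    stride=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
    padding=[16, 16, 0, 0, 0, 0]))
# [160, 160, 1, 1, 1, 1]
```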
Data type for this output's image values.

If null/None or omitted, page-wise processing will assume "float32".
Proposed size for output tiles to be requested. See the referenceInput documentation on how the input tile size is derived from this and other parameters. Any 0 entries are mapped to the full extent of the reference input image (in the corresponding dimension).

If null/None or omitted, [128, 128, 1, 1, 1, 1] will be assumed (possibly corrected according to tileSizeMinimum and tileSizeOffset).

Must contain a minimum of 6 items. Must contain a maximum of 6 items. Value must be greater than or equal to 0.
Minimum output tile size: tileSize must not be smaller (component-wise). Used (together with tileSizeOffset) to snap unsuitable tileSize proposals to valid values by assuming that all sizes that can be expressed as tileSizeMinimum + n*tileSizeOffset (for n = 0, 1, 2, 3, ...) are valid.

If null/None or omitted, [1, 1, 1, 1, 1, 1] will be assumed.

Must contain a minimum of 6 items. Must contain a maximum of 6 items.
(Minimum) offset between two valid `tileSize`s. Used (together with tileSizeMinimum) to snap unsuitable tileSize proposals to valid values by assuming that all sizes that can be expressed as tileSizeMinimum + n*tileSizeOffset (for n = 0, 1, 2, 3, ...) are valid.

If null/None or omitted, [1, 1, 1, 1, 1, 1] will be assumed.

Must contain a minimum of 6 items. Must contain a maximum of 6 items.
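The snapping of a proposed tileSize to the valid sizes tileSizeMinimum + n*tileSizeOffset can be sketched component-wise (snapping upward to the next valid size is an assumption; the helper name is hypothetical):

```python
import math

def snap_tile_size(proposal, minimum, offset):
    """Snap each component up to the nearest valid size of the form
    minimum + n*offset with n = 0, 1, 2, ... (sketch only)."""
    snapped = []
    for p, m, o in zip(proposal, minimum, offset):
        n = max(0, math.ceil((p - m) / o))
        snapped.append(m + n * o)
    return snapped

# A model that only accepts sizes 32 + n*16 in X and Y:
print(snap_tile_size([100, 100, 1, 1, 1, 1],
                     [32, 32, 1, 1, 1, 1],
                     [16, 16, 1, 1, 1, 1]))
# [112, 112, 1, 1, 1, 1]
```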
"Stride" for the output tile, in relation to its referenceInput tile, in page-wise processing. Component values >1 correspond to a "downsampling" operation in that dimension, e.g. a stride of 2 indicates an input tile that is twice as large as the output tile (neglecting padding). Component values in ]0, 1[ correspond to an "upsampling" operation in that dimension, e.g. a stride of 0.5 indicates an input tile that is half as large as the output tile (neglecting padding). See the referenceInput documentation on how exactly the input tile size is derived from stride and other parameters. Component values are positive floating point numbers.

If null/None or omitted, [1.0, 1.0, 1.0, 1.0, 1.0, 1.0] will be assumed.

Must contain a minimum of 6 items. Must contain a maximum of 6 items. Value must be strictly greater than 1e-06.
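Concretely, applying the relation from the referenceInput documentation with zero padding in one dimension (a sketch):

```python
# input.tileSize = output.tileSize * stride + 2*padding, per component.

# Downsampling: stride 2 -> input tile is twice as large as the output tile.
down = int(128 * 2.0) + 2 * 0   # 256

# Upsampling: stride 0.5 -> input tile is half as large as the output tile.
up = int(128 * 0.5) + 2 * 0     # 64

print(down, up)
```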
Smallest possible image value (assumed). Will not be used to clamp values, but to adapt ML image properties. A sensible default applies if valueMinimum is not specified, so you only have to use it if for some reason you want a value that is different from the actual minimum (e.g. for classification tasks, you may want to fix the minimum to 0, if that is your background). Together with valueMaximum, it defines the assumed image value range [valueMinimum, valueMaximum].

If null/None or omitted: 0.

Largest possible image value (assumed). Will not be used to clamp values, but to adapt ML image properties. A sensible default applies if valueMaximum is not specified, so you only have to use it if for some reason you want a value that is different from the actual maximum (e.g. for classification tasks, you may want to use the number of actual classes possible). Together with valueMinimum, it defines the assumed image value range [valueMinimum, valueMaximum].

If null/None or omitted: 1.