Labels Matching MultiClass

This is an experimental module from the Xtra Library: https://xtras.amira-avizo.com.

Extension of the Labels Matching module for quality metrics of segmentation and classification.

In addition to the (binary) Labels Matching information, this module also computes the confusion matrix of the different instances.

Known Limitations

- The module assumes that the Label Images provided as inputs are such that:
  - individual objects correspond to connected components, i.e. instances are obtained by binarizing the label field and labeling its connected components;
  - the pixel values (labels) represent the class.
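The assumption above can be sketched in a few lines. This is an illustration only, not the module's internal code; it assumes SciPy's `ndimage.label` (default 4-connectivity in 2D) and a small hypothetical label image where pixel values encode the class:

```python
import numpy as np
from scipy import ndimage

# Hypothetical label image: pixel values encode the class (0 = background).
label_img = np.array([
    [1, 1, 0, 2],
    [1, 0, 0, 2],
    [0, 0, 0, 0],
    [2, 2, 0, 1],
])

# Instances are obtained by binarizing and labeling the connected components.
binary = label_img > 0
instances, n_instances = ndimage.label(binary)  # here: 4 instances

# The class of each instance is read back from the pixel values it covers.
instance_classes = {
    i: int(label_img[instances == i][0]) for i in range(1, n_instances + 1)
}
```

If two touching objects of different classes form a single connected component, this scheme cannot separate them, which is why the module requires one object per connected component.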

Connections

Input

Description

Image1 (Ground Truth): [required]

Label image of ground truth instances.

Image2 (Prediction): [required]

Label image of predicted instances. The classes should correspond to those defined in the ground truth.

Parameters (Ports)

Parameter

Description

Console:

Opens the Python Script Object console of the module as the active console window.

Mode:

Specify whether the inputs are interpreted as a 3D volume or a stack of 2D images for processing.

Matching Criterion:

For each pair of ground truth and predicted objects, the criterion used to decide whether the pair is matching. All proposed metrics are normalized in [0, 1].

Matching Threshold:

A pair of objects is considered matching if the criterion value is above the threshold.
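To make the criterion/threshold interaction concrete, here is a minimal sketch using Intersection over Union (IoU), a common overlap metric normalized in [0, 1]. The documentation does not name the module's specific criteria, so treat IoU and the masks below as illustrative assumptions:

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection over Union of two boolean masks, in [0, 1]."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union > 0 else 0.0

# Hypothetical ground-truth and predicted instance masks.
gt = np.zeros((4, 4), dtype=bool)
gt[0:2, 0:2] = True        # 4 pixels
pred = np.zeros((4, 4), dtype=bool)
pred[0:2, 0:3] = True      # 6 pixels, 4 of them overlapping gt

threshold = 0.5
score = iou(gt, pred)      # 4 / 6, about 0.67
is_match = score > threshold
```

With a threshold of 0.5 the pair above matches; raising the threshold to 0.7 would reject it, so the threshold directly trades off strictness against tolerance of boundary errors.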

Optional outputs:

Options to output analysis spreadsheets relative to the Prediction and/or Ground Truth instances (see Labels Matching).

Outputs

The main output ‘matchingInstances’ adds a new tab to the ‘matching’ table generated by Labels Matching: the additional ‘Classification Summary’ tab provides a full class confusion matrix describing the quality of the instance classification.

- table: the main quality metrics (see description), in an output dataset named with suffix ‘.matchingGT’
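The class confusion matrix in the ‘Classification Summary’ tab can be understood with a small sketch. The pairs below are hypothetical; in the module they would come from the instance matching step, with rows indexing the ground-truth class and columns the predicted class:

```python
import numpy as np

n_classes = 3
# Hypothetical (ground-truth class, predicted class) pairs of matched instances.
matched_pairs = [(1, 1), (1, 2), (2, 2), (3, 3), (2, 2)]

# Confusion matrix: rows = ground-truth class, columns = predicted class.
confusion = np.zeros((n_classes, n_classes), dtype=int)
for gt_cls, pred_cls in matched_pairs:
    confusion[gt_cls - 1, pred_cls - 1] += 1
```

Diagonal entries count correctly classified instances; off-diagonal entries show which classes are confused with which, which is the extra information this module adds on top of the binary Labels Matching result.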

See also

LabelsMatching

MLObjectClassification_Tuto.pdf