Module: Segmentation Metrics

Description:

This is an experimental module from the Xtra Library: https://xtras.amira-avizo.com.

This Xtra implements common image segmentation metrics, recording accuracy, F1, precision, and recall. The outputs are sent to the console; view it by pressing the ‘Show’ button on the Console port. Press ‘Apply’ to compute the metrics. Note that the metrics are computed in a binary fashion for every material provided in the label field. Pairwise comparisons are made between materials that share the same ID in the Ground Truth and Comparison label fields, so be sure that the material IDs are equivalent in both data sets before interpreting the metrics. This script supports 2D and 3D arrays as long as the two inputs share the same lattice size. The following metrics are computed in this script:
1: Accuracy (binary case, defined as the Jaccard index): J(y_i, ŷ_i) = |y_i ∩ ŷ_i| / |y_i ∪ ŷ_i|
2: F1 score: F1 = 2 * (precision * recall) / (precision + recall)
3: Precision score: TP/(TP+FP) where ‘TP’ is the number of pixels or voxels that are true positive and ‘FP’ is the number of pixels or voxels that are false positive
4: Recall score: TP/(TP+FN) where ‘TP’ is the number of pixels or voxels that are true positive and ‘FN’ is the number of pixels or voxels that are false negative
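
The per-material computation described above can be sketched in plain NumPy. This is a minimal illustration of the formulas, not the module's actual implementation: each material ID is binarized in both label fields and compared voxel-wise. The function name `binary_metrics` and its dictionary output are assumptions for this example.

```python
import numpy as np

def binary_metrics(gt, pred, material_id):
    """Binary segmentation metrics for one material ID.

    gt, pred: integer-valued label arrays (2D or 3D) of identical shape.
    """
    y = np.asarray(gt) == material_id       # ground-truth mask for this material
    y_hat = np.asarray(pred) == material_id  # predicted mask for this material
    tp = np.count_nonzero(y & y_hat)         # true positives
    fp = np.count_nonzero(~y & y_hat)        # false positives
    fn = np.count_nonzero(y & ~y_hat)        # false negatives
    union = tp + fp + fn                     # |y ∪ ŷ| (intersection counted once)
    jaccard = tp / union if union else 1.0   # accuracy (Jaccard index)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"jaccard": jaccard, "f1": f1,
            "precision": precision, "recall": recall}
```

For example, on two small 2D label fields where material 1 overlaps in two of four marked pixels, `binary_metrics(gt, pred, 1)` yields a Jaccard index of 0.5 and precision, recall, and F1 of 2/3 each.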

Connections:

Groundtruth Image/Truth Image/Gold Standard: [required]
Groundtruth/Gold Standard/Truth image in 2D or 3D. Must be a material/label image (integer valued).
Predicted Labels/Prediction Image: [required]
Segmentation or prediction image in 2D or 3D. Must be a material/label image (integer valued) with the same lattice as the ground-truth image in the port above.

Citations:

[1] Scikit-learn: Machine Learning in Python, Pedregosa et al., JMLR 12, pp. 2825-2830, 2011.