Confidence and IoU thresholds

Filter data in model runs by confidence score and IoU threshold.

Model runs allow users to specify a confidence threshold and an IoU threshold. These controls help ML teams perform error analysis, surface model errors, discover labeling mistakes, and drive active learning.

By default, the confidence and IoU thresholds are both set to 0.5.

Confidence threshold

Predictions with a confidence score lower than the confidence threshold are ignored. These predictions do not show up in the model run, and they do not contribute to model run metrics.

ML teams typically exclude predictions below a given confidence threshold from their analysis. They also inspect how model metrics (e.g., precision and recall) evolve as the confidence threshold changes.
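To make this concrete, here is a minimal Python sketch, independent of the Labelbox SDK, that filters a small set of hypothetical predictions at different confidence thresholds and recomputes precision and recall at each one. The `Prediction` class and the sample data are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float
    correct: bool  # True if this prediction matches a ground-truth annotation

# Hypothetical predictions from a model run
predictions = [
    Prediction("car", 0.92, True),
    Prediction("car", 0.55, False),
    Prediction("person", 0.40, True),   # dropped once the threshold reaches 0.5
    Prediction("person", 0.85, True),
]
num_annotations = 4  # total ground-truth objects

for threshold in (0.25, 0.5, 0.75):
    # Predictions below the confidence threshold are ignored entirely
    kept = [p for p in predictions if p.confidence >= threshold]
    true_positives = sum(p.correct for p in kept)
    precision = true_positives / len(kept) if kept else 0.0
    recall = true_positives / num_annotations
    print(f"threshold={threshold:.2f}  precision={precision:.2f}  recall={recall:.2f}")
```

Raising the threshold drops low-confidence predictions, which tends to increase precision while lowering recall; sweeping the threshold as above is the usual way to visualize that trade-off.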

IoU threshold

For Segmentation (masks), Object Detection (bounding boxes), and Named Entity Recognition (text entities), a prediction and an annotation are matched as a True Positive if two conditions are met:

  • the prediction and the annotation overlap enough: their Intersection over Union (IoU) is high enough
  • the prediction and the annotation have the same class

The IoU threshold controls the former: how much does a prediction need to overlap an annotation to be matched with it as a True Positive?

Depending on your use case, you might be fine matching predictions and annotations that have a low IoU, or you might want to match only those with a high IoU.
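As an illustration of this matching rule (a sketch, not Labelbox's internal implementation), the snippet below computes the IoU of two axis-aligned bounding boxes and treats a prediction as a True Positive only when the IoU clears the threshold and the classes agree. The box format (x_min, y_min, x_max, y_max) and helper names are assumptions.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x_min, y_min, x_max, y_max)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    intersection = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - intersection
    return intersection / union if union > 0 else 0.0

def is_true_positive(pred_box, pred_class, ann_box, ann_class, iou_threshold=0.5):
    # Both conditions must hold: sufficient overlap AND matching class
    return pred_class == ann_class and iou(pred_box, ann_box) >= iou_threshold

pred, ann = (10, 10, 50, 50), (20, 20, 60, 60)  # IoU is roughly 0.39
print(is_true_positive(pred, "car", ann, "car", iou_threshold=0.3))  # True
print(is_true_positive(pred, "car", ann, "car", iou_threshold=0.7))  # False
```

Note how the same prediction/annotation pair counts as a True Positive at a threshold of 0.3 but not at 0.7; this is exactly the knob the IoU threshold exposes.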

Labelbox lets you control this IoU threshold and see its impact on model metrics.