Consensus

The Consensus tool automatically compares the annotations on a given asset to all other annotations on that asset. Consensus works in real time, so you can take immediate corrective action to improve your training data and model performance.

Once an asset is labeled more than once, a Consensus score is automatically calculated. Whenever an annotation is created, updated, or deleted, the consensus score is recalculated as long as at least 2 Labels exist on that data row. Recalculations may take up to 5 minutes, depending on the complexity of the labeled asset.

🚧

Caution

Switching from Benchmarks to Consensus (or vice versa) mid-project may result in duplicate labels.

Also, modifying the Consensus configuration after labeling has started is not recommended and may lead to duplicate labels or unexpected behavior.

Consensus agreement calculations are only supported for certain combinations of asset type and annotation type:

  • Asset types: Images, Tiled imagery, Text, Video

  • Annotation types: Bounding box, Polygon, Polyline, Point, Segmentation mask, Entity, Relationship, Radio, Checklist, Dropdown, Free-form text

Combinations that do not apply to an asset type, such as Entity annotations on Images, Tiled imagery, or Video, or the geometric annotation types (Bounding box, Polygon, Polyline, Point, Segmentation mask) on Text, are N/A and never receive a consensus score.

📘

Queuing Consensus submissions is supported for all data types in Labelbox

Consensus task management (collecting votes) works for all data types. However, if the annotation type or data type is not supported by our calculation, no consensus score will be shown in our application but the submissions will be grouped together.

How are object-type annotations factored into the Consensus calculation?

Consensus agreement for Bounding box, Polygon, and Segmentation mask annotations is calculated using Intersection over Union (IoU). Agreement for Point and Polyline annotations is calculated based on proximity.

  1. First, Labelbox compares each annotation in one label to its corresponding annotation in the other label to generate an IoU score for each pair. The algorithm finds the pairing of annotations that maximizes the total IoU score, then assigns an IoU of 0 to any unmatched annotations.

  2. Labelbox then averages the IoU scores for each annotation belonging to the same annotation class to create an overall score for that annotation class.

"Tree" annotation class agreement = 0.99 + 0.99 + 0.97 + 0 + 0 / 5 = 0.59

How are text (NER) annotations factored into the Consensus calculation?

The Consensus score for two Entity annotations is calculated at the character level. If two Entity annotations do not overlap, the Consensus score will be 0. Overlapping Entity annotations will have a non-zero score. When there is overlap, Labelbox computes the weighted sum of the overlap length ratios, discounting for already counted overlaps. Whitespace is included in the calculation.

  1. Since the Consensus agreement for NER is calculated at the character level, partially overlapping spans receive partial credit. For example, if labeler #1 labels an entire word and labeler #2 labels only three-quarters of that word's characters, the agreement score between these two annotations would be 0.75.

  2. Labelbox then averages the agreements of all annotations created with that annotation class to create an overall score for that annotation class.
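As a rough sketch, the character-level comparison can be approximated by treating each Entity annotation as a set of character offsets. The exact weighting Labelbox applies is only loosely described above, so the share-of-common-characters formula below is an assumption used for illustration.

```python
def entity_agreement(span_1: tuple[int, int], span_2: tuple[int, int]) -> float:
    """Approximate character-level agreement between two Entity annotations.

    Each span is (start, end) in character offsets, end exclusive.
    Non-overlapping spans always score 0; overlapping spans score by the
    share of characters the two selections have in common (an assumption,
    standing in for the weighted overlap-ratio sum described above).
    """
    chars_1 = set(range(*span_1))
    chars_2 = set(range(*span_2))
    overlap = chars_1 & chars_2
    if not overlap:
        return 0.0
    return len(overlap) / len(chars_1 | chars_2)


# Labeler #1 selects an 8-character word; labeler #2 selects only the
# first 6 of those characters: agreement = 6 / 8 = 0.75.
print(entity_agreement((0, 8), (0, 6)))  # 0.75
```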

How are classifications factored into the Consensus calculation?

The calculation method differs for each classification type. In every case, however, if two classifications of the same type are compared and there are no corresponding selections between them at all, the agreement will be 0%.

  • A Radio classification can only have one selected answer. Therefore, the agreement between two radio classifications will either be 0% or 100%. 0% means no agreement and 100% means agreement.

  • A Checklist classification can have more than one selected answer, which makes the agreement calculation a little more complex. The agreement between two checklist classifications is generated by dividing the number of overlapping answers by the total number of distinct answers selected across both classifications.

  • A Dropdown classification can have only one selected answer; however, the answer choices can be nested. The calculation for dropdown is similar to that of checklist classification, except that the agreement calculation divides the number of overlapping answers by the total depth of the selection (how many levels are nested). Answers nested under different top-level classifications can still have overlap if the classifications at the next level match. Conversely, answers that do not match exactly can still have overlap if they are under the same top-level classification.

For child classifications, if two annotations containing child classifications have an agreement of 0 (for example, a false positive), the child classifications are automatically assigned a score of 0 as well.

Labelbox then creates a score for each annotation class by averaging all of the annotation scores.

Radio: "Is it daytime?" = "Yes" & "Yes = 1.00

How is the Consensus score calculated for the Data Row?

Labelbox averages the scores for each annotation class (object-type & classification-type) to create an overall score for the asset. Each annotation class is weighted equally. Below is a simplified example.

Consensus score = (Tree annotation class agreement + Radio class agreement) / Total annotation classes

0.795 = (0.59 + 1.00) / 2
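Putting the pieces together, a minimal sketch of this final averaging step might look like the following, with the per-class agreement values taken from the worked examples above.

```python
def data_row_consensus(class_agreements: dict[str, float]) -> float:
    """Average the per-class agreement scores, weighting every class equally."""
    return sum(class_agreements.values()) / len(class_agreements)


print(data_row_consensus({"Tree": 0.59, "Is it daytime?": 1.00}))  # 0.795
```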

You can use the metric as an initial indicator of label quality, the clarity of your ontology, and/or the clarity of your labeling instructions.

How do I view Consensus results?

The chart at the bottom of the Overview tab displays the Consensus scores across all labels in the project. The x-axis indicates the agreement percentage and the y-axis indicates the label count.

The Consensus column in the Activity table contains the agreement score for each Label and how many Labels are associated with that score. When you click on the Consensus icon, the Activity table will automatically apply the correct filter to view the labels associated with that consensus score.

When you click on an individual labeler in the Performance tab, the Consensus column reflects the average Consensus score for that labeler.

How do I set up Consensus?

  1. Create a project or select an existing one.

  2. Navigate to Settings > Quality and select Consensus to turn this QA feature on for your project.

  3. Choose the Coverage percentage and the number of Votes. The number of Votes indicates how many times the assets covered by the Coverage percentage get labeled. For example, with a Coverage of 25% and 3 Votes, 25% of the assets in the project will each be labeled 3 times.
