Quality control and assurance

Tools and workflows to achieve desired data quality

Labelbox offers modern tools and proven workflows that you can configure to achieve your desired data quality while keeping human supervision costs low. In Labelbox, data and its associated labels can be edited, reviewed, and re-used anytime, enabling the machine learning team to rapidly iterate.

Available tools and workflows

Issues & comments
Labeling data is an inherently collaborative process that requires continuous feedback between labelers, reviewers, and the machine learning team to ensure high-quality outcomes. In the Labelbox Editor, you can facilitate this collaboration throughout the labeling and reviewing process by creating an issue on the asset and opening it up to discussion in the comments section.

Review step
Have a review team approve, reject, or correct labels after the first labels are created.

Benchmarks
The Benchmark tool allows you to designate a labeled asset as a “gold standard” and automatically compare all other annotations on that asset to the Benchmark.
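Labelbox computes Benchmark agreement for you; purely as an illustration of the underlying idea, the sketch below (hypothetical helper names, assuming bounding-box annotations as `[x1, y1, x2, y2]` lists) scores a labeler's boxes against a gold-standard asset using intersection-over-union.

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def benchmark_score(gold_boxes, labeler_boxes, threshold=0.5):
    """Fraction of gold-standard boxes matched by some labeler box
    with IoU at or above the threshold."""
    if not gold_boxes:
        return 1.0 if not labeler_boxes else 0.0
    matched = sum(
        1 for g in gold_boxes
        if any(iou(g, box) >= threshold for box in labeler_boxes)
    )
    return matched / len(gold_boxes)
```

A perfect match scores 1.0; missed or badly misplaced boxes pull the score down, which is the signal a reviewer would act on.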

Consensus
The Consensus tool automatically compares the annotations on a given asset to all other annotations on that asset. Consensus works in real time, so you can take immediate corrective action to improve your training data and model performance.
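Again, Labelbox handles Consensus scoring internally; to make the concept concrete, here is a minimal, hypothetical sketch of pairwise agreement for a single classification question, where each labeler submits one label string for the asset.

```python
from itertools import combinations

def pairwise_agreement(labels_by_labeler):
    """Average pairwise agreement among labelers on one asset's
    classification label. Returns a value between 0.0 and 1.0."""
    pairs = list(combinations(labels_by_labeler, 2))
    if not pairs:  # fewer than two labelers: nothing to disagree about
        return 1.0
    return sum(a == b for a, b in pairs) / len(pairs)
```

A low score flags assets where labelers disagree, which are exactly the ones worth routing to review or discussing in Issues & comments.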

How to choose the right configuration?

It is recommended that you start simple: configure the review step and use Issues & comments. Familiarize yourself with the data, discover edge cases, and educate the labeling team by iterating on the labeling instructions.

Benchmarks and Consensus are particularly useful for highly specialized or subjective labeling tasks.
