Labelbox offers modern tools and proven workflows that you can configure to achieve your desired data quality while keeping human supervision costs low. In Labelbox, data and its associated labels can be edited, reviewed, and reused at any time, enabling the machine learning team to iterate rapidly.
Issues & comments
Labeling data is an inherently collaborative process that requires continuous feedback between labelers, reviewers, and the machine learning team to ensure high-quality outcomes. In the Labelbox Editor, you can facilitate this collaboration throughout the labeling and reviewing process by creating an issue on the asset and opening it up for discussion in the comments section.
Review labels (approve, reject, or make corrections) with a review team after the first labels are created.
The Benchmarks tool allows you to designate a labeled asset as a “gold standard” and automatically compare all other annotations on that asset to the Benchmark.
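Labelbox computes its benchmark agreement scores internally, but the underlying idea can be sketched in a few lines. The example below is a simplified, hypothetical illustration (all labeler names, boxes, and the IoU-based metric are assumptions, not Labelbox's actual scoring): each labeler's bounding box on an asset is compared against the designated gold-standard box using intersection-over-union.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Hypothetical data: a gold-standard box and two labelers' boxes on one asset.
benchmark = (10, 10, 50, 50)
labeler_boxes = {"alice": (12, 11, 49, 52), "bob": (30, 30, 90, 90)}

# Score every labeler against the benchmark; low scorers may need review.
scores = {name: iou(box, benchmark) for name, box in labeler_boxes.items()}
```

A high score means the labeler closely matches the gold standard; a low score flags an asset (or labeler) for corrective review.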
The Consensus tool allows you to automatically compare each labeler's annotations on a given asset with every other labeler's annotations on that same asset. Consensus works in real time, so you can take immediate corrective action to improve your training data and model performance.
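Labelbox's actual consensus scoring depends on the annotation type, but the core idea, agreement averaged over every pair of labelers on the same asset, can be illustrated with a minimal sketch for classification labels (the labeler answers below are hypothetical):

```python
from itertools import combinations

def consensus_score(labels):
    """Average pairwise agreement among classification answers on one asset."""
    pairs = list(combinations(labels, 2))
    if not pairs:
        return 1.0  # a single label trivially agrees with itself
    return sum(a == b for a, b in pairs) / len(pairs)

# Three labelers answered the same classification question on one asset;
# only one of the three possible pairs agrees, so the score is 1/3.
score = consensus_score(["cat", "cat", "dog"])
```

Assets whose score falls below a chosen threshold can be routed back for relabeling or discussion.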
Labelbox recommends starting simple when configuring the review process and using Issues & comments. Familiarize yourself with the data, discover edge cases in it, and educate the labeling team by iterating on the labeling instructions.