Analyze predictions and metrics
After you upload model predictions to the Model product, Labelbox auto-generates model metrics, and you can start analyzing your model's behavior.
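As a rough illustration of the upload step, here is a minimal sketch using a recent version of the labelbox Python SDK. The API key, ontology ID, global key, and class names are all placeholders; consult the SDK reference for the exact prediction types your task requires.

```python
import labelbox as lb
import labelbox.types as lb_types

client = lb.Client(api_key="YOUR_API_KEY")  # placeholder key

# Create a model and a model run to hold this round of predictions.
model = client.create_model(name="my-model", ontology_id="YOUR_ONTOLOGY_ID")
model_run = model.create_model_run(name="v1")

# Attach the data rows the model was evaluated on.
model_run.upsert_data_rows(global_keys=["my-data-row-key"])

# Build a classification prediction with a confidence score.
prediction = lb_types.Label(
    data={"global_key": "my-data-row-key"},
    annotations=[
        lb_types.ClassificationAnnotation(
            name="class_name",
            value=lb_types.Radio(
                answer=lb_types.ClassificationAnswer(name="cat", confidence=0.92)
            ),
        )
    ],
)

# Upload the predictions; metrics are generated once the job completes.
upload_job = model_run.add_predictions(
    name="prediction-upload", predictions=[prediction]
)
upload_job.wait_until_done()
```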
The Model product helps you analyze the performance of your machine learning models, find low-performing slices of data, surface labeling mistakes, and identify the high-impact data to label next for the largest gains in model performance.
Turn your model predictions into insights to prioritize what to label next, and repeat.
The Model product supports both workflows: model development, where you have predictions and annotations, and model inference, where you have predictions only.
Model development
During model development, ML teams typically work with labeled data and model predictions. You can compare predictions against annotations to inform your analysis.
Error analysis is the process of diagnosing differences between model predictions and ground truth annotations. You can use that information to do the following (a brief sketch follows the list):
- Find model errors: Gain visual insights into where your model is performing poorly and why
- Fix model errors: Surface unlabeled data similar to the mispredicted data and prioritize it for labeling to address model failures
- Find and fix label errors: Use your trained model to identify mislabeled data and send it back for relabeling
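To make the idea concrete, the sketch below compares a classifier's predictions with ground truth and splits the disagreements into likely model errors and candidate label errors. This is a simplified, hypothetical example, not the Model product's internal logic; all names and data are invented.

```python
from dataclasses import dataclass

@dataclass
class Example:
    data_row_id: str
    ground_truth: str   # class assigned by labelers
    prediction: str     # class predicted by the model
    confidence: float   # model's confidence in its prediction

def analyze_errors(examples: list[Example], high_conf: float = 0.9):
    """Split disagreements into likely model errors vs. candidate label errors."""
    model_errors, candidate_label_errors = [], []
    for ex in examples:
        if ex.prediction == ex.ground_truth:
            continue  # agreement: nothing to review
        if ex.confidence >= high_conf:
            # A confident model contradicting the annotation often signals a mislabel.
            candidate_label_errors.append(ex)
        else:
            model_errors.append(ex)
    return model_errors, candidate_label_errors

examples = [
    Example("row-1", ground_truth="cat", prediction="cat", confidence=0.97),
    Example("row-2", ground_truth="dog", prediction="cat", confidence=0.95),
    Example("row-3", ground_truth="cat", prediction="dog", confidence=0.55),
]
model_errors, label_errors = analyze_errors(examples)
print([ex.data_row_id for ex in label_errors])  # ['row-2'] -> send for relabeling
print([ex.data_row_id for ex in model_errors])  # ['row-3'] -> label similar data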
Model inference
During model inference, ML teams typically work with unlabeled data and model predictions. The predictions and their confidence scores serve as a signal for what to label next.
By analyzing the distribution of predictions and confidence scores, ML teams perform active learning and data selection: prioritizing the high-value data whose labels will most improve model performance.
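One common data-selection heuristic this enables is least-confidence sampling: rank unlabeled data rows by the model's confidence in its top prediction and queue the lowest-confidence rows for labeling first. A minimal sketch with hypothetical data:

```python
def least_confidence_sample(predictions: dict[str, float], budget: int) -> list[str]:
    """Return the `budget` data row IDs the model is least confident about.

    `predictions` maps data_row_id -> confidence of the model's top prediction.
    """
    ranked = sorted(predictions, key=predictions.get)  # lowest confidence first
    return ranked[:budget]

predictions = {"row-1": 0.98, "row-2": 0.41, "row-3": 0.63, "row-4": 0.88}
print(least_confidence_sample(predictions, budget=2))  # ['row-2', 'row-3']
```

In practice you would combine a signal like this with slice-level analysis, since low confidence alone can over-sample ambiguous or noisy data.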
Labelbox helps ML teams do these tasks efficiently, visually, and at scale.