What are experiments and model runs?
At the heart of model evaluation in Labelbox are two key concepts: experiments and model runs.
- Experiment: An Experiment is a container for all your work on a specific model. It houses your data, your model’s predictions, and all the iterations you go through as you develop and refine your model.
- Model run: A Model Run represents a single iteration or version within an Experiment. Each time you train your model with a new set of hyperparameters, on a different slice of data, or with a new set of labels, you’ll create a new Model Run to track the results.
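The relationship between the two concepts is a simple one-to-many containment: one experiment holds many model runs. A minimal sketch of that structure is below; the `Experiment` and `ModelRun` classes here are plain-Python illustrations, not Labelbox SDK objects.

```python
from dataclasses import dataclass, field

@dataclass
class ModelRun:
    # One iteration: a named version with its own predictions and metrics.
    name: str
    predictions: list = field(default_factory=list)
    metrics: dict = field(default_factory=dict)

@dataclass
class Experiment:
    # Container for every run of a specific model.
    name: str
    runs: list = field(default_factory=list)

    def create_model_run(self, run_name: str) -> ModelRun:
        run = ModelRun(name=run_name)
        self.runs.append(run)
        return run

# Each new training iteration gets its own run under the same experiment.
exp = Experiment(name="pet-detector")
v1 = exp.create_model_run("v1-baseline")
v2 = exp.create_model_run("v2-more-data")
print([r.name for r in exp.runs])  # → ['v1-baseline', 'v2-more-data']
```

Because every run lives under one experiment, results from different hyperparameters, data slices, or label sets stay comparable side by side.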
Model evaluation workflow
The model evaluation workflow in Labelbox is designed to be a continuous cycle of improvement.
- Create an experiment: Start by creating an experiment to house your model runs.
- Create a model run: Create a model run under the experiment. As you iterate, you can add new model runs to an existing experiment.
- Upload predictions: Upload your model’s predictions to the model run.
- Analyze performance: Use Labelbox’s evaluation tools to analyze your model’s performance by comparing its predictions against ground-truth labels.
- Identify opportunities: Discover areas where your model is struggling and identify high-impact data to improve it.
- Take action: Send data for re-labeling, find similar data in your catalog, and export data for further analysis.
- Iterate: Create a new model run and repeat the process.
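One pass through the loop above can be sketched as follows. This is a toy simulation in plain Python, not the Labelbox SDK: the accuracy metric, the sample data, and the function names are all illustrative stand-ins for the analyze, identify, and iterate steps.

```python
# Ground-truth labels for a small set of data rows.
ground_truth = {"img1": "cat", "img2": "dog", "img3": "cat", "img4": "bird"}

def analyze(predictions: dict, labels: dict) -> float:
    """Analyze performance: fraction of rows predicted correctly."""
    correct = sum(1 for row, pred in predictions.items() if labels.get(row) == pred)
    return correct / len(labels)

def find_errors(predictions: dict, labels: dict) -> list:
    """Identify opportunities: misclassified rows are high-impact
    candidates for re-labeling or for finding similar data."""
    return [row for row, pred in predictions.items() if labels.get(row) != pred]

# Run v1: upload predictions, analyze, identify weak spots.
v1_predictions = {"img1": "cat", "img2": "cat", "img3": "cat", "img4": "dog"}
print(analyze(v1_predictions, ground_truth))        # → 0.5
errors = find_errors(v1_predictions, ground_truth)  # → ['img2', 'img4']

# Take action on the errors (re-label, add similar data), then iterate
# with a new run v2 trained on the improved dataset.
v2_predictions = {"img1": "cat", "img2": "dog", "img3": "cat", "img4": "bird"}
print(analyze(v2_predictions, ground_truth))        # → 1.0
```

The point of the cycle is visible in the two metric values: each run’s results tell you exactly which data to act on before the next iteration.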