Learn how to create and manage model training experiments in the app UI.

What is an experiment?

An experiment is a container in Labelbox that houses all of the information related to the iterative development of a specific model. It contains the data rows used for training, model error analysis metrics, model versioning, and versioned snapshots (model runs) of the data rows, predictions, and other artifacts associated with a model's development.

Experiments are designed to help you track and compare all of the iterations associated with your model development.

Create an experiment

There are two places in the app UI where you can create an experiment:

Option 1: from Catalog

To create a new experiment from Catalog:

  1. Go to Catalog and select a set of data rows.
  2. Select Manage selection.
  3. Then select New experiment from the dropdown menu.
  4. This will bring you to the Create a model run step.

From Catalog, select Manage selection > New experiment.

Option 2: from Model

To create a new experiment from Model:

  1. Go to Model.
  2. Click the Create button. Then, select Experiment.
  3. Select the data type, then select the data rows to send to the experiment.
  4. Click Next to proceed to the Create a model run step.

From Model, select Create > Experiment.
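Both options above create an experiment through the app UI. If you script your workflow instead, the Labelbox Python SDK exposes equivalent calls. The sketch below is illustrative, not a definitive recipe: it assumes the `labelbox` package is installed, and `create_experiment_with_run`, `api_key`, `ontology_id`, and `data_row_ids` are placeholder names for your own helper, credentials, and IDs.

```python
def create_experiment_with_run(api_key, ontology_id, data_row_ids):
    """Sketch: create a model experiment and a first model run via the SDK.

    All three parameters are placeholders for your own credentials and IDs.
    """
    import labelbox as lb  # requires `pip install labelbox`

    client = lb.Client(api_key=api_key)
    # In the SDK, an experiment is represented as a Model tied to an ontology.
    model = client.create_model(name="my-experiment", ontology_id=ontology_id)
    # Each iteration within the experiment is a model run.
    model_run = model.create_model_run(name="v1")
    # Send the selected data rows to the model run.
    model_run.upsert_data_rows(data_row_ids)
    return model_run
```

As in the UI flows, the experiment is created first and the data rows are then attached to its initial model run.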

View/manage your experiments

To view all of the experiments in your org, go to Model and select Experiment. Use the search bar to filter the experiments in the list by name.


From Model, select Experiment to view all of the experiments in your organization.

When you select an experiment from the list, you will see three subtabs within the experiment: Model runs, History, and Settings.

Model runs

A model run represents a single iteration within a model training experiment: a versioned snapshot of the data rows and predictions for that iteration.

History

In the History subtab, you can view all of the model runs associated with an experiment. For each model run, this view shows the Model run name, Time, Number of data rows, and Data splits.


View all model runs in the History subtab.
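The same listing can be retrieved programmatically. As a hedged sketch using the Labelbox Python SDK (assuming the `labelbox` package is installed; `list_model_runs` and `model_name` are illustrative names, not part of the SDK):

```python
def list_model_runs(api_key, model_name):
    """Sketch: print the model runs of one experiment, like the History subtab."""
    import labelbox as lb  # requires `pip install labelbox`

    client = lb.Client(api_key=api_key)
    # Look up the experiment (a Model in the SDK) by its name.
    model = next(iter(client.get_models(where=lb.Model.name == model_name)))
    for run in model.model_runs():
        # Mirrors two of the History columns: run name and creation time.
        print(run.name, run.created_at)
```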


Settings

From the Settings subtab, you can configure the following for your experiment:


View of the Ontology page in the Settings subtab.

  • On the Ontology page, you can view and configure the ontology connected to the model run.
  • On the Model training page, you can select a model to fine-tune from the available Foundry models.
  • In the Danger zone section, you can delete your experiment. Caution: This action cannot be undone.