Model-assisted labeling is a feature in Labelbox that uses AI to automatically generate labels on your data. This can help you label your data faster and more accurately. Here’s how it works:
  • You provide a model: You can choose from a list of pre-trained models in Labelbox or bring your own.
  • The model predicts labels: The model analyzes your data and suggests labels, each with a confidence score.
  • You review and approve: You can then review the suggested labels and make any necessary corrections.
This process can significantly speed up your labeling workflow, especially for large datasets.
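The same pre-labeling loop can also be driven programmatically. Below is a minimal sketch, assuming the Labelbox Python SDK, an existing project, and a data row with the (hypothetical) global key my-image-1; the ontology is assumed to contain a bounding-box tool named car:

```python
import uuid
import labelbox as lb

# Hypothetical placeholders -- substitute your own API key and project ID.
client = lb.Client(api_key="YOUR_API_KEY")
PROJECT_ID = "YOUR_PROJECT_ID"

# One model-generated bounding box in the NDJSON shape that MAL imports
# accept. "car" must match a tool name in the project's ontology; the
# confidence score is what the review UI surfaces for each annotation.
predictions = [
    {
        "uuid": str(uuid.uuid4()),
        "dataRow": {"globalKey": "my-image-1"},
        "name": "car",
        "bbox": {"top": 120, "left": 80, "height": 210, "width": 340},
        "confidence": 0.87,
    }
]

# Upload the predictions as editable pre-labels (not ground truth), so
# they show up in the editor for human review and correction.
upload_job = lb.MALPredictionImport.create_from_objects(
    client=client,
    project_id=PROJECT_ID,
    name=f"mal-import-{uuid.uuid4()}",
    predictions=predictions,
)
upload_job.wait_until_done()
print("Import errors:", upload_job.errors)
```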

How to enable model-assisted labeling

Here’s a step-by-step guide to using model-assisted labeling in your Labelbox project.
  1. Set up a new project: To get started, create a new project in Labelbox. Make sure your project has an ontology and a batch of data that is compatible with Foundry, the Labelbox service that provides these models. Once your project is set up, the Model assisted labeling button next to Start labeling will be enabled.
  2. Choose your model: Click on the Model assisted labeling button. You will be taken to a page where you can choose the model you want to use. Labelbox offers a variety of models to choose from. The models at the top of the list are the easiest to set up, but you can also use one of the other models if you prefer.
  3. Configure your model: After you have selected a model, you will need to configure it. This involves two main steps:
    • Align the model with your ontology: This means mapping the labels that the model can predict to the labels in your project’s ontology.
    • Set the confidence threshold: This is a value between 0 and 1 that tells the model how certain it must be before it suggests a label. A higher confidence threshold results in fewer, but more accurate, suggestions (the sketch after these steps shows the same filter in code).
  If your ontology is open-ended, you can also provide your own criteria to the model, similar to prompt engineering.
  4. Preview the selected model: Before you apply the model to your entire dataset, you can generate a preview to see how it performs on a small sample of your data. This is a great way to confirm that the model performs as you expect. During the preview, you can:
    • Adjust the confidence threshold to find the optimal setting for your use case.
    • Highlight annotations to see which tool was used and the model's confidence for each annotation.
  If the model is not performing well, you can go back and choose a different model. If you are happy with the results, click Submit.
  5. Submit the model run: After you submit the model, it starts running on your data. You can track its progress in the Notifications tab at the bottom left of the page. When the run finishes, you will see a confirmation in the Notifications tab and the results will be shown in your project.
  6. Review the pre-labels: Once the model run finishes, the suggested annotations are superimposed on your data rows. You can then review the suggestions and either submit them as they are or remove them and create your own annotations.
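To reproduce the confidence threshold from step 3 when importing pre-labels yourself, you can filter your model's raw output before uploading it. The sketch below is illustrative only: raw_predictions stands in for whatever your model produced (same NDJSON shape as the earlier example), and the 0.7 threshold is an arbitrary placeholder to tune during the preview step:

```python
import uuid

CONFIDENCE_THRESHOLD = 0.7  # hypothetical value; tune during the preview step


def filter_by_confidence(raw_predictions, threshold=CONFIDENCE_THRESHOLD):
    """Keep only predictions at or above the threshold, mimicking the UI's
    confidence slider: a higher threshold means fewer, but more accurate,
    pre-labels reach human reviewers."""
    return [p for p in raw_predictions if p.get("confidence", 0.0) >= threshold]


# Stand-in model output; in practice this comes from your own model.
raw_predictions = [
    {"uuid": str(uuid.uuid4()), "dataRow": {"globalKey": "img-1"}, "name": "car",
     "bbox": {"top": 10, "left": 20, "height": 50, "width": 80}, "confidence": 0.91},
    {"uuid": str(uuid.uuid4()), "dataRow": {"globalKey": "img-1"}, "name": "car",
     "bbox": {"top": 5, "left": 200, "height": 40, "width": 60}, "confidence": 0.42},
]

confident = filter_by_confidence(raw_predictions)
print(f"Importing {len(confident)} of {len(raw_predictions)} predictions")
# Pass `confident` to lb.MALPredictionImport.create_from_objects(...) exactly
# as in the earlier sketch.
```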