View and analyze predictions
You can view the predictions generated by your model run in two ways:

From the Models tab
- Go to the Models tab and select your completed model run.
- In the Predictions tab of the model run, you’ll see a list of all data rows with generated predictions.
- Click on any data row to open it in the viewer and see the model’s predictions overlaid on the asset.
From the Catalog
- Navigate to the Catalog.
- Use the filters to find your model run. This will show you all the data rows that were part of that run.
- Click on a data row to view its predictions.
Step-by-step instructions
One of the most powerful features of Foundry is its ability to create a tight feedback loop between your model and your human labeling teams. This process, often called model-assisted labeling or active learning, allows you to use your model’s predictions as a starting point for your labelers, dramatically accelerating the creation of high-quality training data. Instead of labeling from scratch, your team can simply review, correct, or approve the labels generated by the model. By focusing human effort on the most uncertain or incorrect predictions, you can improve your model’s performance more efficiently.

Workflow and strategy
Before sending predictions to a project, consider your goal. You don’t need to send every prediction for review; a targeted approach is more effective. Common strategies include:
- Focusing on Low-Confidence Predictions: This is the core of most active learning workflows. By filtering for predictions where the model had a low confidence score, you are prioritizing the data that the model found most difficult to understand. Correcting these examples provides the most valuable signal for fine-tuning your next model.
- Targeting Specific Classes: If you know your model struggles with a particular object or class, you can filter for all predictions of that class and send them for review to improve the model’s performance on that specific area.
- Random Sampling for Quality Control: To get a general sense of your model’s performance across all classes, you can send a random sample of predictions for review. This helps you spot systemic issues and calculate overall accuracy metrics.
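All three strategies amount to simple selection rules over a set of prediction records. A minimal sketch in Python, assuming each prediction is a dict with a class label and a confidence score (the field names and example records here are illustrative, not an actual export format):

```python
import random

# Illustrative prediction records; in practice these would come from
# your model run's export, not a hand-written list.
predictions = [
    {"data_row": "row-1", "class": "car", "confidence": 0.42},
    {"data_row": "row-2", "class": "person", "confidence": 0.91},
    {"data_row": "row-3", "class": "car", "confidence": 0.67},
    {"data_row": "row-4", "class": "person", "confidence": 0.88},
]

def low_confidence(preds, threshold=0.75):
    """Active-learning selection: keep predictions the model was unsure about."""
    return [p for p in preds if p["confidence"] < threshold]

def by_class(preds, class_name):
    """Target a class the model is known to struggle with."""
    return [p for p in preds if p["class"] == class_name]

def random_sample(preds, k, seed=0):
    """Random sample for quality control and overall accuracy estimates."""
    return random.Random(seed).sample(preds, k)

print(len(low_confidence(predictions)))  # -> 2 (rows below 0.75 confidence)
```

In the UI these rules correspond to the filters you apply in the Predictions tab before selecting rows to send for review.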
Step 1: Navigate to your model run
- From the main menu, select Models.
- Find the completed model run you want to work with and click on it to open its details page.
Step 2: Filter and select predictions
- Click on the Predictions tab within your model run. You will see a list of all data rows that have predictions.
- Use the filter panel on the left to narrow down the predictions you want to send for review. The most common filter is Confidence, which you can use to isolate predictions below a certain threshold (e.g., confidence < 0.75).
- Once you have filtered the list, select the specific predictions you want to send. You can use the checkbox at the top of the list to select all filtered predictions.
Step 3: Initiate “Send to Project”
With your desired predictions selected, click the Send to project button located at the bottom of the screen. A configuration dialog will appear.

Step 4: Configure the project and ontology mapping
This is a critical step to ensure the predictions are correctly imported into your labeling project.
- Select Project: Choose the destination labeling project from the dropdown menu. This project’s queue will receive the predictions as new labeling tasks.
- Map Ontology: You must map the model’s output features to your project’s ontology.
  - On the left, you’ll see the feature classes from the model (e.g., car, person).
  - On the right, use the dropdowns to select the corresponding feature class from your project’s ontology.
  - This “translation” step ensures that a prediction for car from the model is correctly understood as a car annotation in your project.
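Conceptually, the mapping dialog builds a lookup table from model feature names to project feature names. A minimal sketch, assuming plain string class names (the class names and the `translate` helper are illustrative, not part of any SDK):

```python
# Illustrative mapping from model feature classes to the destination
# project's ontology; in the UI, this is what the dropdowns configure.
ontology_map = {
    "car": "Vehicle",
    "person": "Pedestrian",
}

def translate(prediction_class, mapping):
    """Return the project-ontology name for a model feature class.

    Raises KeyError for an unmapped class, mirroring the dialog's
    requirement that every model feature be mapped before submitting.
    """
    return mapping[prediction_class]

print(translate("car", ontology_map))  # -> Vehicle
```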
Step 5: Submit for review
- After confirming the project and ontology mapping, click Submit.
- The selected predictions will be added to the chosen project’s labeling queue.
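Put together, the workflow above (filter by confidence, translate classes through the ontology mapping, queue the results for review) can be sketched as a plain data transformation. All names and fields here are assumptions for illustration; the real work happens in the UI or through your platform’s SDK:

```python
# Illustrative end-to-end sketch: select low-confidence predictions and
# map their feature classes into the destination project's ontology,
# yielding the review tasks the project queue would receive.
predictions = [
    {"data_row": "row-1", "class": "car", "confidence": 0.42},
    {"data_row": "row-2", "class": "person", "confidence": 0.91},
    {"data_row": "row-3", "class": "car", "confidence": 0.67},
]
ontology_map = {"car": "Vehicle", "person": "Pedestrian"}

def to_labeling_tasks(preds, mapping, threshold=0.75):
    """Keep predictions below the confidence threshold and translate
    their classes into the destination project's ontology."""
    return [
        {"data_row": p["data_row"], "annotation": mapping[p["class"]]}
        for p in preds
        if p["confidence"] < threshold
    ]

tasks = to_labeling_tasks(predictions, ontology_map)
print(tasks)  # the two low-confidence car predictions, mapped to "Vehicle"
```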