Once predictions have been uploaded to a Model Run, you can use Models to visually compare model predictions against ground truth labels. This qualitative inspection should help you better understand the model's behavior.
Here are the steps to easily visualize predictions in a Model Run:
Go to the Models tab.
Select the Model you want to visualize.
Select the Model Run you want to visualize.
By default, you will see the model predictions for each Data Row in the Gallery view. You can return to this view by clicking the icon in the top right, above the thumbnails.
In the gallery view, annotations are green with solid lines and predictions are red with dashed lines.
Instead of viewing all of the predictions at once, it can be helpful to visually inspect the predictions on a subset of the Model Run data.
For example, you might want to gain an understanding of your model's behavior on a particular data split. You can do so by clicking Train, Validate, or Test; the gallery view will then only show the Data Rows in the selected split of the Model Run. Clicking All shows all of the Data Rows in the Model Run.
You may also want to inspect your model's behavior on a specific label class. You can do so by adding a filter. If you add the filter "Label contains = airplane", the gallery view will show all of the Data Rows that contain an "airplane" ground truth annotation.
Labelbox supports the following filters in Models: Label contains, Label ID, Data Row ID, Datasets, Project, and Metrics. You can combine these filters using And and Or conditions.
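As a rough mental model, each filter acts like a boolean predicate over Data Rows, and the And/Or condition decides how predicates combine. The sketch below simulates this locally with hypothetical data; it is illustrative only and is not Labelbox SDK code.

```python
# Illustrative sketch of how Model Run filters combine (hypothetical data,
# NOT the Labelbox SDK). Each Data Row is a dict; each filter is a predicate.
data_rows = [
    {"id": "dr-1", "split": "train", "labels": ["airplane", "car"]},
    {"id": "dr-2", "split": "test",  "labels": ["car"]},
    {"id": "dr-3", "split": "test",  "labels": ["airplane"]},
]

def label_contains(cls):
    # Mirrors the "Label contains" filter in the UI
    return lambda row: cls in row["labels"]

def in_split(split):
    # Mirrors clicking Train / Validate / Test
    return lambda row: row["split"] == split

def filter_rows(rows, predicates, combine=all):
    # combine=all corresponds to an And condition; combine=any to an Or condition
    return [r for r in rows if combine(p(r) for p in predicates)]

# "Label contains = airplane" And split == test
matches = filter_rows(data_rows, [label_contains("airplane"), in_split("test")])
print([r["id"] for r in matches])  # ['dr-3']
```

Swapping `combine=all` for `combine=any` turns the same predicates into an Or condition, matching every row that satisfies at least one filter.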
You can save any set of filters by turning the filtered Data Rows into a Slice. To do so, once you are happy with your current set of filters, click the "Save slice" button.
You can then easily access these Data Rows by clicking "Select slice" and choosing any previously created Slice.
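Conceptually, a Slice is a named, saved set of filters that you can re-apply later. The following is a minimal local sketch of that idea with hypothetical data, not Labelbox SDK code.

```python
# Illustrative sketch: a Slice as a saved, named filter set
# (hypothetical data, NOT the Labelbox SDK).
saved_slices = {}

def save_slice(name, predicates):
    # Corresponds to clicking "Save slice" with the current filters active
    saved_slices[name] = list(predicates)

def select_slice(name, rows):
    # Corresponds to "Select slice": re-apply every saved predicate (And semantics)
    return [r for r in rows if all(p(r) for p in saved_slices[name])]

rows = [
    {"id": "dr-1", "split": "train", "labels": ["airplane"]},
    {"id": "dr-2", "split": "test",  "labels": ["airplane"]},
]

save_slice("test airplanes",
           [lambda r: "airplane" in r["labels"],
            lambda r: r["split"] == "test"])
print([r["id"] for r in select_slice("test airplanes", rows)])  # ['dr-2']
```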
If you have uploaded metrics to the Model Run, you can sort Data Rows in the grid view by these metrics. To do so:
- Click on "Sort"
- Select a metric to sort by
- Select an ordering: ascending or descending
All metrics in the Model Run are available for sorting:
- both scalar metrics and confusion matrix metrics
- confusion matrix metrics are available as global metrics (the value is averaged across classes) or as class-level metrics
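The sorting behavior amounts to ordering Data Rows by a chosen metric value, ascending or descending. The sketch below simulates this with hypothetical metric names (a global `iou` averaged across classes, plus class-level variants); it is illustrative only, not Labelbox SDK code.

```python
# Illustrative sketch of metric-based sorting (hypothetical metric names,
# NOT the Labelbox SDK). Each Data Row carries its uploaded metric values;
# the global "iou" here is the average of the class-level values.
data_rows = [
    {"id": "dr-1", "metrics": {"iou": 0.91, "iou/airplane": 0.88, "iou/car": 0.94}},
    {"id": "dr-2", "metrics": {"iou": 0.42, "iou/airplane": 0.40, "iou/car": 0.44}},
    {"id": "dr-3", "metrics": {"iou": 0.75, "iou/airplane": 0.70, "iou/car": 0.80}},
]

def sort_by_metric(rows, metric, ascending=True):
    # ascending=True surfaces the lowest-scoring rows first
    return sorted(rows, key=lambda r: r["metrics"][metric], reverse=not ascending)

# Ascending global IoU surfaces the worst predictions first
print([r["id"] for r in sort_by_metric(data_rows, "iou")])
# ['dr-2', 'dr-3', 'dr-1']

# Class-level metric, descending
print([r["id"] for r in sort_by_metric(data_rows, "iou/airplane", ascending=False)])
# ['dr-1', 'dr-3', 'dr-2']
```

Sorting ascending by a quality metric like IoU is a quick way to bring the Model Run's worst predictions to the top of the grid for inspection.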
You can combine filtering and sorting in the grid view to unlock powerful workflows. Learn more about the following workflows to improve model performance, increase labeling quality and efficiency, and reduce labeling budget:
- Finding and fixing model errors.
- Finding and fixing labeling mistakes.
- Identifying high value data to label in priority (active learning).
You can access the detailed view by clicking any thumbnail. The detailed view is designed for inspecting a particular Data Row in detail.
In the detailed view, labels are green with solid lines and predictions are red with dashed lines. You can zoom in on the data with the mouse wheel.
Pressing Shift + C lets you view annotations grouped by their ontology or by their source (Model or Label).
Using the right nav, you can toggle predictions and annotations on or off. You can toggle individual features as well as all annotations on the Data Row.
Pressing Shift + S toggles all annotations on or off, which can help you better inspect the underlying data.