Access permission
Only the workspace admin can run AI critic and view the feedback.

Private preview feature
AI critic using the SDK is a private preview feature. For the in-editor AI critic feature, see AI critic.

Set up
Before you start adding the AI critic logic, set up the Labelbox API key and client connection.
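A minimal setup sketch, assuming the labelbox Python package; the API key and project ID shown are placeholders:

```python
import labelbox as lb

# Placeholders: substitute your own API key and project ID.
# Prefer reading the key from an environment variable in real code.
LB_API_KEY = "YOUR_LABELBOX_API_KEY"
PROJECT_ID = "YOUR_PROJECT_ID"

# Connect to Labelbox and fetch the project to run AI critic on.
client = lb.Client(api_key=LB_API_KEY)
project = client.get_project(PROJECT_ID)
```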
Export the project

Export the project to fetch data rows with labels from your Labelbox dataset for running AI critic:
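A sketch using the SDK's export_v2 method; the parameter set below is one reasonable choice for fetching labels, not the only one:

```python
# Export data rows with their label details from the project.
export_task = project.export_v2(
    params={
        "data_row_details": True,
        "label_details": True,
    }
)
export_task.wait_till_done()

# export_task.result is a list of dicts, one per exported data row.
data_rows = export_task.result
```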
Define critic instructions and criteria

Define the criteria and construct the instructions in Markdown to guide the multi-modal model on how to rate labels and provide feedback. The following example defines an end-to-end instruction with guidelines and examples:
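An illustrative instruction set; the criteria, score scale, and examples below are hypothetical and should be adapted to your own ontology:

```python
# Hypothetical critic instructions in Markdown; adapt to your project.
CRITIC_INSTRUCTIONS = """\
# Label review instructions

Rate each label from 1 (poor) to 5 (excellent) against the criteria below,
then explain the score with specific, actionable feedback.

## Criteria
- **Coverage**: every object required by the ontology is annotated.
- **Accuracy**: bounding boxes are tight and classifications are correct.
- **Consistency**: similar objects receive the same class across data rows.

## Examples
- A label that misses an obvious object scores at most 2.
- A label with correct classes but loose boxes scores 3 or 4.
"""
```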
Create system prompt

Create a system prompt for the multi-modal model on how to evaluate labels and output scores, like the following example:
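A sketch of such a prompt; the JSON output schema is an assumption chosen to make the model's response easy to parse downstream:

```python
# Illustrative system prompt; tune the wording and output schema to your model.
SYSTEM_PROMPT = """\
You are an expert annotation reviewer. Evaluate the label for each data row
against the provided instructions. Respond with a JSON object containing:
- "overall": an integer score from 1 to 5
- "feedback": concise Markdown explaining the score and citing specific issues
"""
```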
Evaluate labels and submit results

After setting up the project and defining the criteria, the next step is to use the generative model to evaluate each data row and submit feedback. You can create a helper function to construct the feedback in Markdown format and submit it to Labelbox, like the following example.

Markdown only

When constructing feedback to submit back to Labelbox, use only the Markdown format, as in the code sample below. Any other format doesn't work.
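The sketch below assumes OpenAI's chat completions API as the critic model and reuses the CRITIC_INSTRUCTIONS and SYSTEM_PROMPT defined above. The actual feedback-submission call is part of the private preview, so submit_label_feedback is a hypothetical placeholder to replace with the call from your preview documentation:

```python
import json

from openai import OpenAI

# Assumption: OPENAI_API_KEY is set in the environment.
openai_client = OpenAI()


def build_feedback_markdown(score: int, feedback: str) -> str:
    """Wrap the model's feedback in the Markdown format Labelbox expects."""
    return f"## Overall score: {score}\n\n{feedback}"


def evaluate_label(label_summary: str) -> dict:
    """Ask the critic model to score one label and explain the score."""
    response = openai_client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"{CRITIC_INSTRUCTIONS}\n\nLabel:\n{label_summary}"},
        ],
    )
    return json.loads(response.choices[0].message.content)


def submit_label_feedback(data_row_id: str, score: int, feedback: str) -> None:
    """Hypothetical placeholder: replace with the feedback-submission call
    from your private preview documentation."""
    raise NotImplementedError("Use the private preview submission API here.")


for row in data_rows:
    # The export_v2 result nests labels under "projects"; serialize them
    # as a plain-text summary for the critic model.
    result = evaluate_label(json.dumps(row["projects"]))
    markdown = build_feedback_markdown(result["overall"], result["feedback"])
    submit_label_feedback(
        data_row_id=row["data_row"]["id"],
        score=result["overall"],
        feedback=markdown,
    )
```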
View and filter feedback
After submitting the scores and feedback back to Labelbox, workspace admins can view them in the label editor and filter data rows by label score range:

1. On the Annotate projects page, select the project on which you ran AI critic.
2. On the Data Rows tab, open the Search your data dropdown menu. Select Label Actions > Is Labeled > Score > overall, and set the score range to filter the labels whose feedback you want to view.
3. Select the data rows with low scores or critical feedback and move them to a custom task for your team members to review or rework.