- AI critic: Detects code and grammar issues and suggests improvements for your input.
- Code editors: Let you write and edit code using integrated editors (Monaco Editor or Visual Studio Code) for an improved coding experience.
- Code runner: Allows you to run and test code in both model and labeler responses.
AI critic
AI critic provides in-editor AI assistance that helps identify grammar issues and code inconsistencies. It can assist in the following example areas:
- Refining code style and consistency: For tasks requiring style adjustments, AI critic identifies areas to enhance consistency and clarity, ensuring the code adheres to specified style guidelines without changing its logic.
- Ensuring dependency clarity: In projects where an LLM generates code that depends on non-default packages or libraries, AI critic reviews your response to verify that the code can compile and run without missing dependencies or setup issues.
- Evaluating documentation quality: When reviewing code with docstrings (comments describing a function’s purpose), AI critic assesses whether the docstring is clear and accurately represents the expected outcomes shown in unit tests.
- Checking grammar: For natural language tasks that don’t involve code, AI critic helps identify and correct grammar issues.
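As an illustration of the documentation-quality check above, here is a hypothetical function (the name and test are invented for this sketch) whose docstring AI critic could compare against its unit test:

```python
def moving_average(values, window):
    """Return the arithmetic mean of the last `window` items in `values`.

    AI critic would flag this docstring if, for example, the unit test
    below instead exercised the *first* `window` items.
    """
    return sum(values[-window:]) / window


# The unit test the docstring should agree with: mean of the last 2 items.
assert moving_average([1, 2, 3, 4], 2) == 3.5
```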
Use AI critic
To use AI critic for your multimodal chat or prompt and response generation projects:
- In the project Overview > Workflow tasks, click Start labeling, Start reviewing, or Start reworking.
- Depending on your project type, add a prompt and response. In the Markdown editor, AI critic automatically generates suggestions for your input.
- Click each AI suggestion to review the associated comments explaining it. Then, click PREVIEW, APPLY, or DISCARD to manage the suggested edit.

Example AI critic suggestions
Code runner
Code runner lets you run and test code in model-generated and labeler-written responses for live and offline multimodal chat projects. It detects the programming language, sends the code to the appropriate runtime environment, and returns results in an easily analyzable format, including standard output (stdout), standard error (stderr), execution time, and runtime warnings or errors. Code runner also lets you pass environment variables (ENV) for credentials to connect to external databases and APIs, and it supports multi-file projects for testing complex code structures. Currently, it supports the following languages:
- Python
- JavaScript
- PHP
- Java
- TypeScript
- Swift
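For example, a snippet like the following exercises the features described above, writing to both stdout and stderr and reading a credential from the environment (`API_TOKEN` is a made-up variable name for this sketch, not a Labelbox convention):

```python
import os
import sys

# Code runner's ENV support would supply this variable at execution time;
# API_TOKEN is a hypothetical name used only for illustration.
token = os.environ.get("API_TOKEN")

if token:
    # Normal output goes to stdout and appears in the Outputs field.
    print(f"Credential loaded ({len(token)} characters)")
else:
    # Errors and warnings go to stderr, which code runner reports separately.
    print("API_TOKEN is not set; skipping the external API call", file=sys.stderr)
```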
Use code runner
To use the code runner for your multimodal chat projects that involve coding:
- In the project Overview > Workflow tasks, click Start labeling.
- Send a prompt to generate a response that includes code.
- To run code for model-generated responses, use the Model response field.
- To run code for your own responses, click Write and enter your response.
- Code runner automatically detects the runtime based on your code language. Click CODE > RUN CODE to execute the code or select a different runtime.
- View the Outputs field at the bottom of the response to see code execution results or errors. Update your response to fix any issues.

Example code runner outputs
Code editors
You can use the following code editors integrated into Labelbox when adding your own responses:
- Monaco Editor: A lightweight, built-in code editor for quick, in-line code editing with syntax highlighting, line numbers, and code folding. It's block-based and runs independently, allowing multiple instances for different code blocks.
- Visual Studio Code: A full VS Code IDE running in a web instance with access to a remote host environment. It supports working on entire code repositories, running CLI tools, executing code, using debuggers, writing tests, installing extensions, and leveraging GitHub Copilot for AI-assisted coding.