Code and grammar assistance
Learn how to use in-editor tools like AI critic and code runner to identify grammar errors and validate code in multimodal chat evaluation and prompt-response projects.
Labelbox offers in-editor assistance tools to help identify and fix grammar and code issues for AI model evaluation projects, including multimodal chat evaluation and prompt and response generation. Currently, the following tools are available:
- AI critic: Detects code and grammar issues and suggests improvements for your input.
- Code editors: Let you write and edit code using the integrated Monaco Editor or a web-based VS Code IDE for an improved coding experience.
- Code runner: Allows you to run and test code in both model and labeler responses.
AI critic
AI critic provides in-editor AI assistance that helps identify grammar issues and code inconsistencies. It can assist in the following example areas:
- Refining code style and consistency: For tasks requiring style adjustments, AI critic identifies areas to enhance consistency and clarity, ensuring the code adheres to specified style guidelines without changing its logic.
- Ensuring dependency clarity: In projects where an LLM generates code that depends on non-default packages or libraries, AI critic reviews your response to verify that the code can compile and run without missing dependencies or setup issues.
- Evaluating documentation quality: When reviewing code with docstrings (comments describing a function's purpose), AI critic assesses whether the docstring is clear and accurately represents the expected outcomes shown in unit tests.
- Checking grammar: For natural language tasks that don't involve code, AI critic helps identify and correct grammar issues.
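As a hypothetical illustration of the docstring scenario above, consider a response containing a function whose docstring claims should agree with its unit tests (the function and tests here are invented for illustration, not taken from Labelbox):

```python
def median(values):
    """Return the median of a non-empty list of numbers.

    For an even-length list, returns the mean of the two
    middle values, so the result may be a float.
    """
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2


# Unit tests whose expected outcomes the docstring should describe accurately.
assert median([3, 1, 2]) == 2
assert median([1, 2, 3, 4]) == 2.5
```

If the docstring instead claimed the function always returns an integer, AI critic could flag the mismatch with the second test.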
Both the multimodal chat editor and the prompt-response generation editor automatically generate AI critic suggestions when you add or edit prompt and response fields. AI critic is available for all workflow tasks, including labeling, reviewing, and reworking.
Use AI critic
To use AI critic for your multimodal chat or prompt and response generation projects:
- In the project Overview > Workflow tasks, click Start labeling, Start reviewing, or Start reworking.
- Depending on your project type, add a prompt and response. In the Markdown editor, AI critic automatically generates suggestions for your input.
- Click each AI suggestion to review the associated comments explaining it. Then, click PREVIEW, APPLY, or DISCARD to manage the suggested edit.
When you make changes in the Markdown editor, AI critic runs automatically and provides real-time suggestions. You can also click the refresh button to re-generate AI critic comments.

Example AI critic suggestions
Code runner
Code runner lets you run and test code in model-generated and labeler-written responses for live and offline multimodal chat projects. It detects the programming language, sends the code to the appropriate runtime environment, and provides results in an easily analyzable format, including standard output (stdout), standard error (stderr), execution time, and runtime warnings or errors.
The code runner allows you to pass environment variables (ENV) for credentials to connect with external databases and APIs. It also supports multi-file projects for testing complex code structures. Currently, it supports the following languages:
- Python
- JavaScript
- PHP
- Java
- TypeScript
- Swift
The code runner clears all execution environments and related resources to ensure security and performance.
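For example, a minimal sketch of the kind of response code the runner might execute, assuming a hypothetical `API_TOKEN` environment variable was supplied through the runner's ENV settings (the variable name and messages are illustrative, not part of Labelbox's API):

```python
import os

def describe_connection(env=os.environ):
    """Return a status line based on a credential passed via ENV."""
    token = env.get("API_TOKEN")
    if token:
        # In a real task this is where you would authenticate
        # against the external database or API.
        return f"Connecting with token of length {len(token)}"
    return "API_TOKEN is not set; running in offline mode"

# The runner captures this print as stdout in the Outputs field.
print(describe_connection())
```

Errors raised by the script would appear in the stderr portion of the output instead.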
Beta feature
Code runner is a beta feature.
Use code runner
To use the code runner for your multimodal chat projects that involve coding:
- In the project Overview > Workflow tasks, click Start labeling.
- Send a prompt to generate a response that includes code.
- To run code for model-generated responses, use the Model response field.
- To run code for your own responses, click Write and enter your response.
- Code runner automatically detects the runtime based on your code language. Click CODE > RUN CODE to execute the code or select a different runtime.
- View the Outputs field at the bottom of the response to see code execution results or errors. Update your response to fix any issues.

Example code runner outputs
Code editors
You can use the following code editors integrated into Labelbox when adding your own responses:
- Monaco Editor: A lightweight, built-in code editor for quick, in-line code editing with syntax highlighting, line numbers, and code folding. It's block-based and runs independently, allowing multiple instances for different code blocks.
- Visual Studio Code: A full VS Code IDE running in a web instance with access to a remote host environment. It supports working on entire code repositories, running CLI tools, executing code, using debuggers, writing tests, installing extensions, and leveraging GitHub Copilot for AI-assisted coding.
Enable code editors
Code editors are available when you write your own response for a prompt using the Markdown editor.
Enable Monaco Editor
To enable the Monaco Editor, add a code block with a specified language in the Markdown editor:
Markdown text...
<!-- Specify the code block for enabling editing in code editor -->
```python
...
```
More Markdown text...
<!-- Specify more code blocks for enabling editing in code editor -->
```java
...
```
If the syntax is correct, a < > button appears next to the code block. Click this button to open the code editor and modify the block.
Enable VS Code IDE
To enable the VS Code IDE, click CODE > EDIT IN VS CODE. See the Visual Studio Code documentation to get familiar with its offerings and learn how to code faster with GitHub Copilot.