AI critic
Learn how to enable the in-editor AI critic tool in multimodal chat evaluation projects to identify and fix grammar and code issues.
AI critic is a feature in the live multimodal chat editor that provides in-editor AI assistance for identifying grammar issues and code inconsistencies. You can use this tool to refine your own responses to prompts and create more accurate and consistent multimodal chat evaluation labels.
AI critic can assist in areas such as the following:
- Refining code style and consistency: For tasks requiring style adjustments, AI critic identifies areas to improve consistency and clarity, ensuring the code adheres to the specified style guidelines without changing its logic (see the first sketch after this list).
- Ensuring dependency clarity: In projects where an LLM generates code that depends on non-default packages or libraries, AI critic reviews your response to verify that the code can compile and run without missing dependencies or setup issues (see the second sketch after this list).
- Evaluating documentation quality: When reviewing code with docstrings (comments describing a function's purpose), AI critic assesses whether the docstring is clear and accurately represents the expected outcomes shown in unit tests (see the third sketch after this list).
- Checking grammar: For natural language tasks that don’t involve code, AI critic helps identify and correct grammar issues.
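For instance, a style-and-consistency comment might look like the following minimal Python sketch (the function and variable names are hypothetical, not taken from a real task). The suggested revision normalizes naming and spacing while leaving the logic untouched:

```python
# Hypothetical response snippet: inconsistent naming and spacing.
def CalcTotal(Prices, tax_rate):
    Total=sum(Prices)
    return Total+Total*tax_rate

# Style-consistent revision a critic might suggest: same logic,
# conventional snake_case names and spacing.
def calc_total(prices, tax_rate):
    total = sum(prices)
    return total + total * tax_rate
```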
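A dependency-clarity comment typically points out code that assumes a non-default package without saying so. The hypothetical sketch below shows a pandas-based response revised to declare its dependency explicitly:

```python
# Hypothetical response snippet: uses a non-default package with no
# setup note, so it fails with ModuleNotFoundError on a fresh environment:
#
#     df = pandas.DataFrame({"a": [1, 2, 3]})
#
# A dependency-aware revision states the requirement and imports explicitly.

# Requires the third-party package pandas (e.g., `pip install pandas`).
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3]})
print(df["a"].sum())  # 6
```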
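A documentation-quality comment checks that a docstring matches the behavior the unit tests expect. The sketch below uses a hypothetical `normalize` function to show the kind of docstring-versus-test pairing AI critic reviews:

```python
def normalize(scores):
    """Scale scores so they sum to 1."""  # docstring under review
    total = sum(scores)
    return [s / total for s in scores]

# Unit test stating the expected outcome; a critic checks that the
# docstring above clearly and accurately describes this behavior.
def test_normalize():
    assert normalize([1, 1, 2]) == [0.25, 0.25, 0.5]

test_normalize()
```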
Set up AI critic
- Select or set up a live multimodal chat project.
- In the project Overview > Workflow tasks, click Start labeling.
- Add and submit a prompt.
- After reviewing the model response, click Write to add your own response.
- Once you've completed your response, click ADD RESPONSE.
- After you add your response, the system automatically generates AI critic comments in the response window. You can also click GET SUGGESTIONS to generate AI critic comments manually.
- Review all AI critic comments. Click each comment and select PREVIEW, APPLY, or DISCARD for the suggested change.