- You can now classify model responses on conversational text using the new LLM human preference editor.
- To learn how to import model response predictions on conversational text, visit this Google Colab notebook.
- Custom editor users can now use Exports v2 to export their labels.
- Sampling data rows of any media type in Catalog now works as expected.
- When you upload segmentation mask annotations for model error analysis, IoU metrics for the segmentation masks now appear in the data row preview.
- Submitting a segmentation mask across multiple video frames is now faster.
- By the end of December, customers still labeling with the dropdown input type will no longer be able to reuse ontologies containing the dropdown schema for new projects. To learn more, read our Deprecations page.
The latest version of our Python SDK is v3.57.0. See our full changelog on GitHub for more details on what was added recently.
- Global key support for Project move_data_rows_to_task_queue
- Project name required for project creation
- Updates to Image and Video notebook format
- Added additional byte array examples to the Image/Video import and Image prediction import notebooks
- Added an LLM folder for the new LLM import flows (MAL/MEA/ground truth)
- Support for importing raster video masks from image bytes as a source
- Added a new ExportTask class to handle streaming of exports
- Added a check for empty fields during webhook creation
- Updated masks (video, image) to use byte arrays, and added examples of multiple annotations per frame (video)
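The new ExportTask class streams export results instead of buffering the whole export in memory. As a rough, SDK-agnostic sketch of that pattern (the function and record fields below are illustrative, not the actual Labelbox API), consuming a newline-delimited JSON export one record at a time looks like:

```python
import json
from typing import Iterable, Iterator


def stream_export(lines: Iterable[str]) -> Iterator[dict]:
    """Yield one parsed record per NDJSON line, skipping blank lines.

    Illustrative only: the real ExportTask wraps a network stream;
    here we simply consume any iterable of text lines.
    """
    for line in lines:
        line = line.strip()
        if not line:
            continue
        yield json.loads(line)


# Hypothetical export payload: two data rows in NDJSON form.
ndjson = '{"data_row": {"id": "dr-1"}}\n\n{"data_row": {"id": "dr-2"}}\n'
records = list(stream_export(ndjson.splitlines()))
print(len(records))  # 2
```

Because records are yielded lazily, a large export can be processed without holding every row in memory at once.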