Overview

To import annotations into Labelbox, you need to create an annotations payload. This section provides an example payload for every supported annotation type.

Annotation payload types

Labelbox supports two formats for the annotations payload:
  • Python annotation types (recommended)
    • Provides a seamless transition between third-party platforms, machine learning pipelines, and Labelbox.
    • Allows you to build annotations locally with local file paths, NumPy arrays, or URLs.
    • Can be easily converted to NDJSON format for quick import into Labelbox.
    • Supports one level of nested classification (radio, checklist, or free-form text) under a tool or classification annotation.
  • JSON
    • Skips formatting the annotation payload in the Labelbox Python annotation types.
    • Supports any level of nested classification (radio, checklist, or free-form text) under a tool or classification annotation.
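As an illustrative sketch, a JSON (NDJSON-style) payload can be written as a plain Python dict; the example below mirrors the radio classification used later in this section. The field names and the global key shown are assumptions for this sketch, not a definitive schema.

```python
# Hedged sketch: an NDJSON-style radio classification payload as a Python dict.
# The "dataRow" global key below is a placeholder for this example.
radio_annotation_ndjson = {
    "name": "Choose the best response",  # must match the ontology feature name
    "answer": {"name": "Response B"},    # the selected radio option
    "dataRow": {"globalKey": "offline-multimodal_chat_evaluation"},
}
```

Each line of an NDJSON file is one such object, so a list of these dicts serializes directly to the import format.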

Label import types

Labelbox supports two types of label imports:
  • Model-assisted labeling (MAL)
    • This workflow allows you to import computer-generated predictions (or simply annotations created outside of Labelbox) as pre-labels on an asset.
  • Ground truth
This workflow allows you to bulk import ground truth annotations from an external or third-party labeling system into Labelbox Annotate. Using the label import API to import external data is a useful way to consolidate and migrate all annotations into Labelbox as a single source of truth.

Supported annotations

The following annotations are supported for an LLM human preference data row:
  • Tool
    • Message ranking
    • Single message selection
    • Multiple message selection
  • Classification
    • Radio (single-choice)
    • Checklist (multi-choice)
    • Free-form text

Message and global-based annotations

Radio and free-form text annotations can be either message-based or global. To make a message-based annotation global, remove the message_id key from the annotation.
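As a minimal sketch using plain dictionaries (the same idea applies to the Python annotation types below), dropping the message_id key turns a message-based payload into a global one. The answer text here is a made-up placeholder.

```python
# Message-based free-form text payload (illustrative dict form)
message_based_text = {
    "name": "Provide a reason for your choice",
    "answer": "Response B is more concise.",   # placeholder answer text
    "message_id": "message-1",                 # ties the annotation to one message
}

# Removing message_id makes the same annotation global (asset-level)
global_text = {k: v for k, v in message_based_text.items() if k != "message_id"}
```

With the Python annotation types, the equivalent move is simply omitting the message_id argument when constructing the annotation.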

Tools

Message ranking

message_ranking_annotation = lb_types.MessageEvaluationTaskAnnotation(
    name="Message ranking",
    value=MessageRankingTask(
        parent_message_id="message-0",
        ranked_messages=[
            OrderedMessageInfo(
                message_id="message-1",
                model_config_name="model-config-1",
                order=1,
            ),
            OrderedMessageInfo(
                message_id="message-2",
                model_config_name="model-config-2",
                order=2,
            ),
        ],
    ),
)

Single message selection

single_message_selection_annotation = lb_types.MessageEvaluationTaskAnnotation(
    name="Single message selection",
    value=MessageSingleSelectionTask(
        message_id="message-1",
        parent_message_id="message-0",
        model_config_name="model-config-1",
    ),
)

Multiple message selection

multiple_message_selection_annotation = lb_types.MessageEvaluationTaskAnnotation(
    name="Multi message selection",
    value=MessageMultiSelectionTask(
        parent_message_id="message-0",
        selected_messages=[
            MessageInfo(
                message_id="message-1",
                model_config_name="model-config-1",
            ),
            MessageInfo(
                message_id="message-2",
                model_config_name="model-config-2",
            ),
        ],
    ),
)

Classifications

Radio

radio_annotation = lb_types.ClassificationAnnotation(
    name="Choose the best response",
    value=lb_types.Radio(
        answer=lb_types.ClassificationAnswer(name="Response B")
    ),
)

Checklist

checklist_annotation = lb_types.ClassificationAnnotation(
    name="checklist_convo",  # must match your ontology feature's name
    value=lb_types.Checklist(
        answer=[
            lb_types.ClassificationAnswer(name="first_checklist_answer"),
            lb_types.ClassificationAnswer(name="second_checklist_answer"),
        ]
    ),
    message_id="message-1",  # message-specific annotation
)

Free-form text

text_annotation = lb_types.ClassificationAnnotation(
    name="Provide a reason for your choice",
    value=lb_types.Text(answer="the answer to the text questions right here")
)

Example: Import pre-labels or ground truths

The steps to import annotations as pre-labels (model-assisted labeling) are similar to the steps to import annotations as ground truth labels; the slight differences for each scenario are described below.

Before you start

The following imports are needed to run the code examples in this section:
import uuid

import labelbox as lb
import labelbox.types as lb_types
from labelbox.types import (
    Label,
    MessageEvaluationTaskAnnotation,
    MessageInfo,
    MessageMultiSelectionTask,
    MessageRankingTask,
    MessageSingleSelectionTask,
    OrderedMessageInfo,
)

# Initialize the Labelbox client with your API key
client = lb.Client(api_key="<YOUR_API_KEY>")

Step 1: Import data rows

You need to import data rows into Catalog to attach annotations. This example shows how to create a data row in Catalog and attach it to a dataset.
mmc_asset = "https://storage.googleapis.com/labelbox-datasets/conversational_model_evaluation_sample/offline-model-chat-evaluation.json"
global_key = "offline-multimodal_chat_evaluation"

# Data row payload
convo_data = {
    "row_data": mmc_asset,
    "global_key": global_key,
}

# Create a dataset
dataset = client.create_dataset(name="offline-multimodal_chat_evaluation_demo")

# Create a data row
task = dataset.create_data_rows([convo_data])
task.wait_till_done()
print("Errors:", task.errors)
print("Failed data rows:", task.failed_data_rows)

Step 2: Set up ontology

Your project ontology needs to support the classifications required by your annotations. To ensure accurate schema feature mapping, the value of the name parameter must match the value of the name field in your annotation. For example, if you name your message ranking annotation annotation_name, you must also name the message ranking tool annotation_name when setting up your ontology. The same alignment must hold for the other tools and classifications you create in the ontology. This example shows how to create an ontology containing all supported annotation types.
ontology_builder = lb.OntologyBuilder(
    tools=[
        lb.Tool(
            tool=lb.Tool.Type.MESSAGE_SINGLE_SELECTION,
            name="Single message selection",
        ),
        lb.Tool(
            tool=lb.Tool.Type.MESSAGE_MULTI_SELECTION,
            name="Multi message selection",
        ),
        lb.Tool(tool=lb.Tool.Type.MESSAGE_RANKING, name="Message ranking"),
    ],
  classifications=[
    lb.Classification(
      class_type=lb.Classification.Type.RADIO,
      scope=lb.Classification.Scope.GLOBAL,
      name="Choose the best response",
      options=[lb.Option(value="Response A"), lb.Option(value="Response B"), lb.Option(value="Tie")]
    ),
    lb.Classification(
      class_type=lb.Classification.Type.TEXT,
      name="Provide a reason for your choice"
    ),
    lb.Classification(
      class_type=lb.Classification.Type.CHECKLIST,
      scope=lb.Classification.Scope.INDEX,
      name="checklist_convo",
      options=[
        lb.Option(value="first_checklist_answer"),
        lb.Option(value="second_checklist_answer")
      ]
    )
  ]
)
# Create ontology
ontology = client.create_ontology(
    "MMC ontology",
    ontology_builder.asdict(),
    media_type=lb.MediaType.Conversational,
    ontology_kind=lb.OntologyKind.ModelEvaluation,
)

Step 3: Set up a labeling project

Use the following code to create an offline multimodal evaluation project:
# Create Labelbox project
project = client.create_offline_model_evaluation_project(
    name="Offline MMC Import Demo",
    description="<project_description>",  # optional
    media_type=lb.MediaType.Conversational,
)

# Connect the ontology to your project
project.connect_ontology(ontology)

Step 4: Send data rows to project

Use the following code to send data rows to the project you just created:
# Create a batch to send to your project
batch = project.create_batch(
    "first-batch-convo-demo",  # each batch in a project must have a unique name
    global_keys=[global_key],  # list of global keys, data row IDs, or a paginated collection of data row objects
    priority=5,                # priority between 1 (highest) and 5 (lowest)
)

print("Batch: ", batch)

Step 5: Create annotation payloads

To declare payloads, you can use Python annotation types (preferred) or NDJSON objects. For background on these payload formats, see the overview above. The following example composes the annotations defined earlier into a label attached to a data row.

Replace placeholder fields with actual values

Replace the message_id and model_config_name placeholders in the annotation payloads above with actual message IDs and model configuration names before appending the annotations.
label = []
label.append(
    lb_types.Label(
        data={"global_key": global_key},
        annotations=[
            message_ranking_annotation,
            single_message_selection_annotation,
            multiple_message_selection_annotation,
            text_annotation,
            checklist_annotation,
            radio_annotation,
        ],
    )
)

Step 6: Import annotation payload

For pre-label (model-assisted labeling) imports, pass your payload as the value of the predictions parameter. For ground truth imports, pass the payload to the labels parameter.

Option A: Import as pre-labels (model-assisted labeling)

This option is helpful for speeding up the initial labeling process and reducing the manual labeling workload for high-volume datasets.
# Upload MAL labels for this data row in the project
upload_job = lb.MALPredictionImport.create_from_objects(
    client=client,
    project_id=project.uid,
    name="mal_job" + str(uuid.uuid4()),
    predictions=label,
)

print(f"Errors: {upload_job.errors}")
print(f"Status of uploads: {upload_job.statuses}")

Option B: Import as ground truth labels

This option is helpful for loading high-confidence labels from another platform or previous projects that just need review rather than manual labeling effort.
# Upload labels for this data row in the project
upload_job = lb.LabelImport.create_from_objects(
    client=client,
    project_id=project.uid,
    name="label_import_job" + str(uuid.uuid4()),
    labels=label,
)

print(f"Errors: {upload_job.errors}")
print(f"Status of uploads: {upload_job.statuses}")