LLM human preference

A guide to creating an LLM human preference project for comparing & classifying model outputs on conversational text data.

With the LLM human preference editor, you can create human preference data for model comparison or RLHF (reinforcement learning from human feedback). You can compare model outputs side by side and select the preferred output for a conversational text thread by assigning it a classification.

The LLM human preference editor helps human reviewers evaluate and compare chat responses

This editor addresses two problems that are critical to ensuring responsible AI aligned with human preferences:

  • Model comparison: Evaluate and compare model configurations for a given use case and decide which one to use.
  • RLHF: Create preference data for training a reward model for RLHF based on multiple outputs from a single model.

Supported classification types for model responses

Classification type | Import format | Export format
------------------- | ------------- | -------------
Radio               | See sample    | See sample
Free-form text      | See sample    | See sample
Checklist           | See sample    | See sample

Set up an LLM human preference project

For this version of the model comparison editor, Labelbox assumes you already have a large language model set up in your own environment that you can use to generate preliminary model responses on your conversational text data.

Step 1: Import conversation data & model responses

Import your conversation data along with your model responses.

To import secured URLs, we recommend setting up your IAM delegated access integration. For instructions on setting this up, see IAM integration.

  1. In your cloud bucket, create a JSON file for each conversational text data row. Each JSON file should contain the conversation messages and the model outputs. You may import content as markdown or text. Use the sample below as a guide for creating the JSON file for each conversational text thread and the model outputs on that thread.
{
  "type": "application/vnd.labelbox.conversational",
  "version": 1,
  "messages": [
    {
      "messageId": "message-0",
      "timestampUsec": 1530718491,
      "content": "Hi! How can I help?",
      "user": {
        "userId": "Bot 002",
        "name": "Bot"
      },
      "align": "left",
      "canLabel": false
    },
    {
      "messageId": "message-1",
      "timestampUsec": 1530718503,
      "content": "I'm looking to buy a vacuum cleaner this Black Friday. Help me with the best deal?",
      "user": {
        "userId": "User 00686",
        "name": "User"
      },
      "align": "right",
      "canLabel": true
    },
    {
      "messageId": "message-2",
      "timestampUsec": 1530718516,
      "content": "Of course!! I'll be your personal shopping assistant. So tell me, what are you looking for? Is it house or an apartment? What kind of floors? Finally, do you have a budget?",
      "user": {
        "userId": "Bot 002",
        "name": "Bot"
      },
      "align": "left",
      "canLabel": false
    },
    {
      "messageId": "message-3",
      "timestampUsec": 1530718528,
      "content": "Hmm...I have a 2 bedroom apartment with hardwood floors. The square footage is around 1000 sqft. And yes on budget, looking to land something below 600 bucks",
      "user": {
        "userId": "User 00686",
        "name": "User"
      },
      "align": "right",
      "canLabel": true
    }
  ],
  "modelOutputs": [
    {
      "title": "Response A",
      "content": "I have 2 options for you\n- The Dyson V15 [Product page](https://www.dyson.com/vacuum-cleaners/cordless/v15)\n- The Shark Stratos [Product page](https://www.sharkclean.com/products/shark-stratos-cordless-vacuum-with-free-steam-mop-zidIZ862HB)",
      "modelConfigName": "GPT-3.5 with temperature 0"
    },
    {
      "title": "Response B",
      "content": "Although I have a couple of options, I would recommend the Dyson V15 [Product page](https://www.dyson.com/vacuum-cleaners/cordless/v15) based on what you need.\n",
      "modelConfigName": "GPT-4 with temperature 0.2"
    }
  ]
}
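
If you generate these files in code rather than by hand, the payload is ordinary JSON. Below is a minimal Python sketch; the message helper, the example content, and the file name are illustrative and not part of any Labelbox API.

import json

def message(i, content, user_id, name, align, can_label, ts):
    """Build one entry for the messages array."""
    return {
        "messageId": f"message-{i}",
        "timestampUsec": ts,
        "content": content,
        "user": {"userId": user_id, "name": name},
        "align": align,
        "canLabel": can_label,
    }

payload = {
    "type": "application/vnd.labelbox.conversational",
    "version": 1,
    "messages": [
        message(0, "Hi! How can I help?", "Bot 002", "Bot", "left", False, 1530718491),
        message(1, "Help me find the best vacuum cleaner deal?", "User 00686", "User", "right", True, 1530718503),
    ],
    "modelOutputs": [
        {"title": "Response A", "content": "I have 2 options for you...", "modelConfigName": "GPT-3.5 with temperature 0"},
        {"title": "Response B", "content": "I would recommend the Dyson V15...", "modelConfigName": "GPT-4 with temperature 0.2"},
    ],
}

# One file per conversational data row; host the files in your cloud bucket.
with open("pairwise_shopping_1.json", "w") as f:
    json.dump(payload, f, indent=2)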

  2. In a separate JSON file, put the URLs to the cloud-hosted JSON files containing the conversation text data and model outputs. This is the file you will import to Labelbox. Use the sample file below as a guide for formatting your import file.
[
    {
      "row_data": "https://storage.googleapis.com/labelbox-datasets/conversational-sample-data/pairwise_shopping_1.json",
      "global_key": "global_key_1"
    },
    {
        "row_data": "https://storage.googleapis.com/labelbox-datasets/conversational-sample-data/pairwise_shopping_2.json",
        "global_key": "global_key_2"
    },
    {
        "row_data": "https://storage.googleapis.com/labelbox-datasets/conversational-sample-data/pairwise_shopping_3.json",
        "global_key": "global_key_3"
    }
]
  3. Upload your data via the Python SDK.

To send your data to Labelbox, refer to our guidelines on conversational text upload.

To learn how to upload the URLs to your cloud-hosted JSON files (and import human preference prelabels with Model-assisted labeling) via the Python SDK, see this developer guide.
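
As a rough sketch, the upload step with the Labelbox Python SDK looks like the following; the API key and dataset name are placeholders, and the data row URLs match the import file above. See the linked developer guide for the authoritative flow.

import labelbox as lb

client = lb.Client(api_key="YOUR_API_KEY")  # placeholder API key

# Create a dataset to hold the conversational text data rows.
dataset = client.create_dataset(name="llm-human-preference-demo")

# Each row_data URL points at one cloud-hosted conversational JSON file.
task = dataset.create_data_rows([
    {
        "row_data": "https://storage.googleapis.com/labelbox-datasets/conversational-sample-data/pairwise_shopping_1.json",
        "global_key": "global_key_1",
    },
    {
        "row_data": "https://storage.googleapis.com/labelbox-datasets/conversational-sample-data/pairwise_shopping_2.json",
        "global_key": "global_key_2",
    },
])
task.wait_till_done()
print(task.errors)  # None when the upload succeeds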

Alternative data types

You can also make this editor multi-modal (image, video, audio) by adding HTML to your JSON payload.

Image

{
    "type": "application/vnd.labelbox.conversational",
    "version": 1,
    "messages": [
        {
            "messageId": "message-0",
            "content": "Ideogram prompt: Generate an image of a red tulip",
            "user": {
                "userId": "User 000",
                "name": "User"
            },
            "align": "left",
            "canLabel": false
        }
    ],
    "modelOutputs": [
        {
            "title": "Image 1",
            "content": "<img style='width:50%' src=\"https://storage.googleapis.com/labelbox-datasets/image_sample_data/a-stunning-high-resolution-image-of-a-vibrant-red--FKZ_pkdaRDKZxQC1hJ45yw-9OoHdU5OQzyhpL-cwSz1aw.jpeg\" />",
            "modelConfigName": "ideogram_image_1"
        },
        {
            "title": "Image 2",
            "content": "<img style='width:50%' src=\"https://storage.googleapis.com/labelbox-datasets/image_sample_data/a-stunning-close-up-image-of-a-vibrant-red-tulip-t-mqx4mXo_S5ubGn7kU-oxJA-9OoHdU5OQzyhpL-cwSz1aw.jpeg\" />",
            "modelConfigName": "ideogram_image_2"
        }
    ]
}

Using the markdown view, your images will be rendered in the editor.

An example screenshot of how the markdown editor renders images
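
If you script the creation of these payloads, wrapping hosted media URLs in HTML is straightforward. Below is a minimal sketch; the image_output helper and the example URLs are ours, not a Labelbox API.

def image_output(title, image_url, model_config_name, width="50%"):
    """Wrap a hosted image URL in an <img> tag for the modelOutputs array."""
    return {
        "title": title,
        "content": f"<img style='width:{width}' src=\"{image_url}\" />",
        "modelConfigName": model_config_name,
    }

model_outputs = [
    image_output("Image 1", "https://example.com/tulip_1.jpeg", "ideogram_image_1"),
    image_output("Image 2", "https://example.com/tulip_2.jpeg", "ideogram_image_2"),
]

The same pattern applies to the video and audio tags shown in the next two examples.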


Video

{
    "type": "application/vnd.labelbox.conversational",
    "version": 1,
    "messages": [
        {
            "messageId": "message-0",
            "content": "<video style='width:100%' src='https://storage.googleapis.com/gtv-videos-bucket/sample/ForBiggerMeltdowns.mp4' controls></video>",
            "user": {
                "userId": "User 000",
                "name": "User"
            },
            "align": "left",
            "canLabel": false
        }
    ],
    "modelOutputs": [
        {
            "title": "Transcription 1",
            "content": "[background music] Argh... I am a monster... arghhh....arghhh",
            "modelConfigName": "Gemini_LLM_Response"
        },
        {
            "title": "Transcription 2",
            "content": "[background music] Ahhhh... I'm a monster ahhhhhhh...ahhhhh",
            "modelConfigName": "GPT_LLM_Response"
        }
    ]
}

Using the markdown view, your videos will be rendered in the editor.


Audio

{
    "type": "application/vnd.labelbox.conversational",
    "version": 1,
    "messages": [
        {
            "messageId": "message-0",
            "content": "LLM sound prompt : firework finals popping",
            "user": {
                "userId": "User 000",
                "name": "User"
            },
            "align": "left",
            "canLabel": false
        }
    ],
    "modelOutputs": [
        {
            "title": "Sound 1",
            "content": "<audio controls> <source src=\"<https://storage.googleapis.com/labelbox-datasets/audio-sample-data/Firework_1.m4a>\" type=\"audio/mpeg\"> </audio>",
            "modelConfigName": "LLM_sound_1"
        },
        {
            "title": "Sound 2",
            "content": "<audio controls> <source src=\"<https://storage.googleapis.com/labelbox-datasets/audio-sample-data/Firework_2.m4a>\" type=\"audio/mpeg\"> </audio>",
            "modelConfigName": "LLM_sound_2"
        }
    ]
}

Using the markdown view, your audio will be rendered in the editor.

An example screenshot of how the markdown editor renders audio

Step 2: Create an LLM human preference project

After you have imported your data, go to Annotate and click + New project. Select LLM human preference. Provide a name and optional description for your project and configure your quality mode settings.

Step 3: Select data rows from Catalog

During the project setup, click Add data. This will bring you to Catalog, where you can select the data rows you wish to label in this project. Use the Catalog filters to query your data rows.
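
You can also attach data rows programmatically by creating a batch with the Python SDK. Below is a minimal sketch, assuming the client object from the earlier upload sketch and the global keys from Step 1; the project ID and batch name are placeholders.

# Fetch the project created in Step 2 (placeholder project ID).
project = client.get_project("YOUR_PROJECT_ID")

# Queue the selected data rows to the project as a batch.
batch = project.create_batch(
    name="human-preference-batch-1",
    global_keys=["global_key_1", "global_key_2", "global_key_3"],
    priority=5,  # 1 (highest) through 5 (lowest)
)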


Step 4: Create the project ontology

Create an ontology for classifying the model responses on each data row. Below is an example of an ontology for an LLM human preference project.
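
Ontologies can also be defined in code. Below is a minimal sketch using the SDK's OntologyBuilder; the classification names and options are illustrative, and connect_ontology assumes a recent SDK version (older versions use setup_editor instead).

import labelbox as lb

# A radio classification for picking the preferred response, plus a
# free-form text field for the reviewer's reasoning.
ontology_builder = lb.OntologyBuilder(
    classifications=[
        lb.Classification(
            class_type=lb.Classification.Type.RADIO,
            name="Choose the best response",
            options=[lb.Option(value="Response A"), lb.Option(value="Response B")],
        ),
        lb.Classification(
            class_type=lb.Classification.Type.TEXT,
            name="Provide a reason for your choice",
        ),
    ]
)

ontology = client.create_ontology(
    "LLM human preference ontology",
    ontology_builder.asdict(),
    media_type=lb.MediaType.Conversational,
)
project.connect_ontology(ontology)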

Step 5: Classify model responses

You have two options for assigning classifications to model responses.

  1. Label from scratch: Upload model outputs and have your team assign classifications in the editor.
  2. Import pre-labels via Model-assisted labeling: Upload classification predictions for your model outputs via Model-assisted labeling to give your labeling team a warm start.
    To learn how to import pre-labels, see Import LLM human preference annotations; a minimal sketch follows this list.
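
For a radio classification, a pre-label import can be sketched like this with the SDK's model-assisted labeling endpoint. The feature and answer names must match your ontology (here they match the illustrative ontology from Step 4), and client and project come from the earlier sketches; see the import guide for the full payload format.

import labelbox as lb
import labelbox.types as lb_types

# A radio pre-label that selects "Response B" for one data row.
label = lb_types.Label(
    data={"global_key": "global_key_1"},
    annotations=[
        lb_types.ClassificationAnnotation(
            name="Choose the best response",
            value=lb_types.Radio(
                answer=lb_types.ClassificationAnswer(name="Response B")
            ),
        )
    ],
)

upload_job = lb.MALPredictionImport.create_from_objects(
    client=client,
    project_id=project.uid,
    name="human-preference-prelabels-1",  # placeholder job name
    predictions=[label],
)
upload_job.wait_until_done()
print(upload_job.errors)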

Toggle markdown view

You can render your content in markdown or plain text. Use this toggle in the editor to switch between the two views. The markdown view also renders other types of media, including images and videos.