Dataset

A developer guide for creating and modifying datasets via the Python SDK.

The most common way to import data into Labelbox is via the Python SDK after setting up a cloud storage integration. With an IAM delegated access integration, you can keep your data in your cloud bucket and grant Labelbox limited access to the data on demand.

📘

Examples for all data types

The examples in this developer guide primarily use image assets. For sample approaches with other data modalities, see the importing guide nested under each asset type in the Import/Export section of the table of contents.


Client

import labelbox as lb
client = lb.Client(api_key="<YOUR_API_KEY>")

Create a dataset

The only required argument when creating a dataset is the name.

dataset = client.create_dataset(
  name="<dataset_name>",
  description="<dataset_description>",  # optional
  iam_integration=None  # if not specified, the default integration is used; set to None to skip delegated access
)

Get a dataset

dataset = client.get_dataset("<dataset_id>")

# alternatively, you can get a dataset by name
dataset = client.get_datasets(where=lb.Dataset.name == "<dataset_name>").get_one()

Methods

Create data rows

The only required argument when creating a data row is the row_data. However, Labelbox strongly recommends supplying each data row with a global key upon creation.

# this example uses the uuid package to generate unique global keys
from uuid import uuid4

dataset.create_data_rows(
  [
    {
      "row_data": "https://storage.googleapis.com/labelbox-sample-datasets/Docs/basic.jpg",
      "global_key": str(uuid4())
    },
    {
      "row_data": "https://storage.googleapis.com/labelbox-sample-datasets/Docs/basic.jpg",
      "global_key": str(uuid4())
    }
  ]
)

🚧

Special character handling

Please note that certain characters like # are not supported in URLs and should be avoided in your file names to prevent loading issues.

A quick way to check special character handling is to paste the URL into your browser address bar: if the URL doesn't load properly in your browser, it won't load in Labelbox.
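As a rough pre-upload check, the sketch below flags asset URLs containing characters that commonly cause loading issues. The character set here is an assumption for illustration, not an official Labelbox list.

# characters that commonly break asset URLs (an assumed, non-exhaustive list)
SUSPECT_CHARS = ("#", " ")

def flag_suspect_urls(urls):
  # return the URLs containing characters that may not load
  return [url for url in urls if any(ch in url for ch in SUSPECT_CHARS)]

print(flag_suspect_urls([
  "https://storage.googleapis.com/bucket/image#1.jpg",  # flagged: contains '#'
  "https://storage.googleapis.com/bucket/image_1.jpg"   # loads fine
]))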

You can also create data rows with metadata, attachments, and image overlays in the same task. The code below contains an end-to-end example for creating a dataset with data rows that include these elements.

import labelbox as lb
from uuid import uuid4
import datetime

# insert your API key
LB_API_KEY = "<API KEY>"
client = lb.Client(api_key=LB_API_KEY)

# get the metadata ontology (useful for checking which metadata fields are available; the upload below references fields by name)
metadata_ontology = client.get_data_row_metadata_ontology()

# create the dataset
dataset = client.create_dataset(name="Bulk import example")

# build the assets
assets = [
  {"row_data": "https://storage.googleapis.com/labelbox-sample-datasets/Docs/basic.jpg", "global_key": str(uuid4())},
  {"row_data": "https://storage.googleapis.com/labelbox-sample-datasets/Docs/basic.jpg", "global_key": str(uuid4())},
  {"row_data": "https://storage.googleapis.com/labelbox-sample-datasets/Docs/basic.jpg", "global_key": str(uuid4())}
]

# build the metadata
asset_metadata_fields = [
  {"name": "captureDateTime", "value": datetime.datetime.utcnow()},
  {"name": "tag", "value": "tag_string"},
  {"name": "split", "value": "train"}
]

# build the attachments
asset_attachments = [
  {"type": "IMAGE_OVERLAY", "value": "https://storage.googleapis.com/labelbox-sample-datasets/Docs/rgb.jpg", "name": "RGB" },
  {"type": "IMAGE_OVERLAY", "value": "https://storage.googleapis.com/labelbox-sample-datasets/Docs/cir.jpg", "name": "CIR"},
  {"type": "IMAGE_OVERLAY", "value": "https://storage.googleapis.com/labelbox-sample-datasets/Docs/weeds.jpg", "name": "Weeds"},
  {"type": "TEXT", "value": "IOWA, Zone 2232, June 2022 [Text string]"},
  {"type": "TEXT", "value": "https://storage.googleapis.com/labelbox-sample-datasets/Docs/text_attachment.txt"},
  {"type": "IMAGE", "value": "https://storage.googleapis.com/labelbox-sample-datasets/Docs/disease_attachment.jpeg"},
  {"type": "VIDEO", "value":  "https://storage.googleapis.com/labelbox-sample-datasets/Docs/drone_video.mp4"},
  {"type": "HTML", "value": "https://storage.googleapis.com/labelbox-sample-datasets/Docs/windy.html"}
]

# connect the metadata and attachments to the data rows
for item in assets:
  item["metadata_fields"] = asset_metadata_fields
  item["attachments"] = asset_attachments

# create the data rows
task = dataset.create_data_rows(assets)
task.wait_till_done()
print(task.errors)
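If the task reports no errors, you can also inspect the result payload. In recent SDK versions, task.result lists the created data rows; treat the exact shape of each entry as version-dependent.

# inspect the created data rows once the task has finished
if not task.errors:
  for created_row in task.result:
    print(created_row)  # each entry describes one created data row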

📘

Limits

See this page to learn the limits for creating data rows in one bulk operation. Regardless of your tier's limit, if you have a large dataset to upload, you can split the data rows into chunks and upload them sequentially.
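As a minimal sketch of that chunking approach, the helper below splits a list of assets into fixed-size batches and uploads them one task at a time. The chunk size of 1,000 is an arbitrary assumption; pick a value within your tier's limit.

def create_data_rows_in_chunks(dataset, assets, chunk_size=1000):
  # upload assets in sequential batches to stay under the per-task limit
  for start in range(0, len(assets), chunk_size):
    task = dataset.create_data_rows(assets[start:start + chunk_size])
    task.wait_till_done()
    if task.errors:
      print(f"Errors in chunk starting at index {start}: {task.errors}")

create_data_rows_in_chunks(dataset, assets)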

Create a singular data row

# simplest example
dataset.create_data_row(
  row_data="https://storage.googleapis.com/labelbox-sample-datasets/Docs/basic.jpg",
  global_key=str(uuid4())
)
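Unlike the bulk method, create_data_row runs synchronously and returns the created DataRow object rather than a task, so you can use the new data row right away:

data_row = dataset.create_data_row(
  row_data="https://storage.googleapis.com/labelbox-sample-datasets/Docs/basic.jpg",
  global_key=str(uuid4())
)

# the returned object exposes the new data row's fields
print(data_row.uid)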

Upload a local file

# get a local file and write some text data
local_data_path = "/tmp/test_data_row.txt"
with open(local_data_path, 'w') as file:
  file.write("sample data")

# create the data row
task = dataset.create_data_rows([local_data_path])
task.wait_till_done()

# note that you cannot set global keys when creating a data row from a local file;
# to attach a global key, first upload the file to Labelbox storage, then create the data row from the returned URL
file_url = client.upload_file(local_data_path)

# create the data row with a global key
task = dataset.create_data_rows([{
  "row_data": file_url,
  "global_key": "<unique_global_key>"
}])
task.wait_till_done()

Append to an existing dataset

You can add data rows to an existing dataset using the same methods described above for creating data rows. First, get a dataset, then create the rows.

# get existing dataset
dataset = client.get_dataset("<dataset_id>")

# use a method for data row creation as shown in sections above
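For example, a minimal append using the bulk method from above (the global key is a placeholder to replace with your own):

task = dataset.create_data_rows([{
  "row_data": "https://storage.googleapis.com/labelbox-sample-datasets/Docs/basic.jpg",
  "global_key": "<unique_global_key>"
}])
task.wait_till_done()
print(task.errors)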

Export data rows from a dataset

data_rows = dataset.export_data_rows()

# optionally, you can include metadata in the export
data_rows = dataset.export_data_rows(include_metadata=True)

Export data rows with labels and predictions

For more details, see Export data rows from Catalog.

# set the export params to include/exclude certain fields
export_params = {
  "attachments": True,
  "metadata_fields": True,
  "data_row_details": True,
  "project_details": True,
  "performance_details": True,
  "project_ids": ["<project_id_1>", "<project_id_2>"],
  "model_run_ids": ["<model_run_id_1>", "<model_run_id_2>"]
}

# set filters
filters = {
  "last_activity_at": ["2000-01-01 00:00:00", "2050-01-01 00:00:00"],
  "label_created_at": ["2000-01-01 00:00:00", "2050-01-01 00:00:00"]
}

# get a dataset
dataset = client.get_dataset("<dataset_id>")

# run the export task
export_task = dataset.export_v2(params=export_params, filters=filters)
export_task.wait_till_done()

# view errors and results
if export_task.errors:
  print(export_task.errors)
  
export_json = export_task.result
print("results: ", export_json)

Update a dataset

dataset.update(name="new_dataset_name")
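update() accepts other updatable fields as well; for example, the description can be changed in the same call:

dataset.update(name="new_dataset_name", description="new_dataset_description")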

Delete a dataset

❗️

Deleting a dataset cannot be undone

This method deletes the dataset along with all data rows in the dataset and any labels made on these data rows. This action cannot be reverted without the assistance of Labelbox support.

dataset.delete()

Attributes

Get the basics

# name (str)
dataset.name

# description (str)
dataset.description

# updated at (datetime)
dataset.updated_at

# created at (datetime)
dataset.created_at

# created by (relationship to User object)
user = dataset.created_by()

# organization (relationship to Organization object)
organization = dataset.organization()

Get the data rows

The data_rows() method is a relationship to the DataRow objects in the dataset; it retrieves a paginated collection of data rows.

data_rows = dataset.data_rows()

# inspect one data row
next(data_rows)

# inspect a number of data rows
for data_row in data_rows:
  print(data_row)
  
# for ease of use, you can convert the paginated collection to a list (note: this fetches all remaining data rows)
list(data_rows)
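For example, to collect every global key in the dataset (this iterates through all pages, so it can take a while on large datasets):

global_keys = [data_row.global_key for data_row in dataset.data_rows()]
print(global_keys)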

Get the number of data rows

Because row_count is a cached attribute, you must re-fetch the dataset after creating data rows to retrieve the updated value.

dataset = client.get_dataset("<dataset_id>")
dataset.row_count