How to import annotations on DICOM data and sample import formats.
Open this Colab for an interactive tutorial on importing annotations on DICOM data.
Supported annotations
To import annotations in Labelbox, you need to create an annotations payload. In this section, we provide this payload for every supported annotation type.
Labelbox supports two formats for the annotations payload:
- Python annotation types (recommended)
- NDJSON
Both are described below.
Polyline
polyline_annotation = [
    lb_types.DICOMObjectAnnotation(
        name="line_dicom",
        group_key=lb_types.GroupKey.AXIAL,
        frame=1,
        value=lb_types.Line(points=[
            lb_types.Point(x=10, y=10),
            lb_types.Point(x=200, y=20),
            lb_types.Point(x=250, y=250)
        ]),
        segment_index=0,
        keyframe=True,
    ),
    lb_types.DICOMObjectAnnotation(
        name="line_dicom",
        group_key=lb_types.GroupKey.AXIAL,
        frame=20,
        value=lb_types.Line(points=[
            lb_types.Point(x=10, y=10),
            lb_types.Point(x=200, y=10),
            lb_types.Point(x=300, y=300)
        ]),
        segment_index=1,
        keyframe=True,
    ),
]
polyline_annotation_ndjson = {
    'name': 'line_dicom',
    'groupKey': 'axial',  # must be 'axial', 'sagittal', or 'coronal'
    'segments': [
        {
            'keyframes': [{
                'frame': 1,
                'line': [
                    {'x': 10, 'y': 10},
                    {'x': 200, 'y': 20},
                    {'x': 250, 'y': 250},
                ]
            }]
        },
        {
            'keyframes': [{
                'frame': 20,
                'line': [
                    {'x': 10, 'y': 10},
                    {'x': 200, 'y': 10},
                    {'x': 300, 'y': 300},
                ]
            }]
        }
    ],
}
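The inline comment above constrains groupKey to one of three anatomical planes. A minimal sketch of a pre-upload guard for that constraint (the helper name and error handling are assumptions, not part of the SDK):

```python
# Allowed anatomical planes for the NDJSON `groupKey` field.
VALID_GROUP_KEYS = {"axial", "sagittal", "coronal"}

def check_group_key(payload: dict) -> None:
    """Hypothetical guard: raise if the payload's groupKey is not a valid plane."""
    group_key = payload.get("groupKey")
    if group_key not in VALID_GROUP_KEYS:
        raise ValueError(f"Invalid groupKey: {group_key!r}")
```

Running this over each NDJSON payload before import catches plane typos locally instead of at upload time.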
Segmentation Masks
MaskData holds mask data as a uint8 array of shape [H, W, 3]. You can also build MaskData from a polygon annotation or a 2D array, or point to a cloud-hosted mask by URL (any cloud provider works).
Mask size limits
To be valid for import, masks must be smaller than:
- height: 9000 px
- width: 9000 px
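The limits above can be checked before upload. A minimal sketch, where the helper name and the strict-inequality reading of "smaller than" are assumptions:

```python
# Import limits stated above: masks must be smaller than 9000 px per side.
MAX_MASK_HEIGHT = 9000
MAX_MASK_WIDTH = 9000

def mask_within_limits(height: int, width: int) -> bool:
    """Hypothetical check: True if a mask of these dimensions is importable."""
    return height < MAX_MASK_HEIGHT and width < MAX_MASK_WIDTH
```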
mask_annotation = [
    lb_types.DICOMMaskAnnotation(
        group_key='axial',
        frames=[
            lb_types.MaskFrame(
                index=1,
                instance_uri="https://storage.googleapis.com/labelbox-datasets/dicom-sample-data/sample-mask-1.png"
            ),
            lb_types.MaskFrame(
                index=5,
                instance_uri="https://storage.googleapis.com/labelbox-datasets/dicom-sample-data/sample-mask-1.png"
            )
        ],
        instances=[
            lb_types.MaskInstance(
                color_rgb=(255, 255, 255),
                name="segmentation_mask_dicom"
            )
        ])
]
mask_annotation_ndjson = {
    'groupKey': 'axial',
    'masks': {
        'frames': [{
            'index': 1,
            'instanceURI': "https://storage.googleapis.com/labelbox-datasets/dicom-sample-data/sample-mask-1.png"
        }, {
            'index': 5,
            'instanceURI': "https://storage.googleapis.com/labelbox-datasets/dicom-sample-data/sample-mask-1.png"
        }],
        'instances': [
            {
                'colorRGB': [255, 255, 255],
                'name': 'segmentation_mask_dicom'
            }
        ]
    }
}
End-to-end example: Import pre-labels or ground truth
Whether you are importing annotations as pre-labels or as ground truth, the steps are very similar. The process differs slightly only at Steps 5 and 6 (creating and importing the annotation payload), as explained in detail below.
Before you start
You will need to import these libraries to use the code examples in this section.
import labelbox as lb
import labelbox.types as lb_types
import uuid
# Replace with your API key
API_KEY = ""
client = lb.Client(API_KEY)
Step 1: Import data rows into Catalog
global_key = "sample-dicom-1.dcm"
asset = {
    "row_data": "https://storage.googleapis.com/labelbox-datasets/dicom-sample-data/sample-dicom-1.dcm",
    "global_key": global_key,
}
dataset = client.create_dataset(name="dicom_demo_dataset")
task = dataset.create_data_rows([asset])
task.wait_till_done()
print("Errors:", task.errors)
print("Failed data rows:", task.failed_data_rows)
Step 2: Create/select an ontology
Your project must be set up with an ontology that includes all the tools and classifications your annotations require. The tool names and classification instructions must match the name fields in your annotations so that the correct feature schemas are matched.
For example, the line annotation above uses the name line_dicom, so when we set up our ontology, the line tool must also be named line_dicom. The same alignment must hold for every other tool and classification in the ontology.
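This name matching can be verified mechanically before upload. A minimal sketch using plain sets; the variable names are assumptions, and the tool names mirror the examples in this guide:

```python
# Tool names defined in the ontology, and the `name` fields used in the
# annotation payloads above. Any payload name without a matching ontology
# tool will fail feature-schema matching at import time.
ontology_tool_names = {"segmentation_mask_dicom", "line_dicom"}
payload_names = {"line_dicom", "segmentation_mask_dicom"}

missing = payload_names - ontology_tool_names  # empty set when aligned
```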
ontology_builder = lb.OntologyBuilder(
    tools=[
        lb.Tool(tool=lb.Tool.Type.RASTER_SEGMENTATION, name="segmentation_mask_dicom"),
        lb.Tool(tool=lb.Tool.Type.LINE, name="line_dicom"),
    ]
)
ontology = client.create_ontology("Ontology DICOM Annotations", ontology_builder.asdict(), media_type=lb.MediaType.Dicom)
Step 3: Create a labeling project
Connect the ontology to the labeling project
project = client.create_project(name="dicom_project_demo", media_type=lb.MediaType.Dicom)
# Connect the ontology to your project
project.setup_editor(ontology)
Step 4: Send a batch of data rows to the project
# Create a batch to send to your MAL project
batch = project.create_batch(
    "first-batch-dicom-demo",  # Each batch in a project must have a unique name
    global_keys=[global_key],  # A list of data row objects, data row IDs, or global keys
    priority=5  # Priority between 1 (highest) and 5 (lowest)
)
print("Batch: ", batch)
Step 5: Create the annotations payload
Create the annotations payload using the snippets of code shown above.
Labelbox supports two formats for the annotations payload: NDJSON and Python annotation types. Both approaches are described below, with instructions for composing annotations into Labels attached to the data rows. The resulting labels and label_ndjson payloads from each approach will include every annotation (created above) supported by the respective method.
annotations_list = polyline_annotation + mask_annotation
labels = [
    lb_types.Label(
        data=lb_types.DicomData(global_key=global_key),
        annotations=annotations_list
    )
]
label_ndjson = []
for annotation in [
    polyline_annotation_ndjson,
    mask_annotation_ndjson
]:
    annotation.update({
        'dataRow': {
            'globalKey': global_key
        }
    })
    label_ndjson.append(annotation)
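NDJSON is newline-delimited JSON: one object per line, with no enclosing array. A minimal sketch of serializing a payload list for inspection; the sample rows here are stubs shaped like the entries built above, not the full payloads:

```python
import json

# Stub payload rows shaped like the entries appended to label_ndjson above.
rows = [
    {"name": "line_dicom", "dataRow": {"globalKey": "sample-dicom-1.dcm"}},
    {"groupKey": "axial", "dataRow": {"globalKey": "sample-dicom-1.dcm"}},
]

# One JSON object per line.
ndjson_text = "\n".join(json.dumps(row) for row in rows)
```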
Step 6: Upload annotations to a project as pre-labels or ground truth
For both options, you can pass either the labels or label_ndjson payload as the value of the predictions or labels parameter.
Option A: Upload to a labeling project as pre-labels (Model-assisted labeling)
upload_job_mal = lb.MALPredictionImport.create_from_objects(
    client=client,
    project_id=project.uid,
    name="mal_import_job-" + str(uuid.uuid4()),
    predictions=labels)
upload_job_mal.wait_until_done()
print("Errors:", upload_job_mal.errors)
print("Status of uploads: ", upload_job_mal.statuses)
Option B: Upload to a labeling project as ground truth
upload_job_label_import = lb.LabelImport.create_from_objects(
    client=client,
    project_id=project.uid,
    name="label_import_job-" + str(uuid.uuid4()),
    labels=labels
)
upload_job_label_import.wait_until_done()
print("Errors:", upload_job_label_import.errors)
print("Status of uploads: ", upload_job_label_import.statuses)