Overview
You can integrate your model training pipeline, running on your own server (cloud or customer-managed infrastructure), with Labelbox to trigger model training jobs directly from the Labelbox Models UI. To create an integration, click the Train Model button.
This guide covers the following:
- How to set up webhooks to receive model training requests from the Labelbox UI and launch your model training service.
- How to update the model training job status and upload model evaluation results with the Python SDK.
- How to customize your model training pipeline on top of our reference implementation.
Set up webhooks to launch model training from Labelbox UI
From the Labelbox model run UI, you can query available models and launch training jobs. The training jobs report status updates back to Labelbox, including pipeline stages, metadata, predictions, and metrics.
![Model training integration architecture](https://files.readme.io/77e3cc1-OLD_Model_Training_Integration_Architecture_-_Page_1.png)
Webhook API
Setting up the webhook API requires the following:
- The endpoint at the target URL that you specify in your model training settings will receive REST requests containing JSON-formatted data from Labelbox. You need to deploy a publicly available, secure endpoint for that URL that can handle the webhook payloads specified in this documentation.
- You need to configure your model training server's URL and secret. To do this, go to Models > Settings, and click on the Model training section.

Security
To ensure a request arriving at your server is from Labelbox, Labelbox includes an x-hub-signature header in every request. The header contains an HMAC-SHA1 hash of the un-parsed request body, keyed with the secret you set under Models > Settings in the Model training section.
You can verify a request by computing the same HMAC-SHA1 over the raw request body with your secret, then comparing the result with the x-hub-signature header and making sure it matches.
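A minimal verification helper might look like this (a sketch; raw_body is the un-parsed request payload as bytes):

```python
import hashlib
import hmac

def verify_signature(secret: str, raw_body: bytes, x_hub_signature: str) -> bool:
    # Recompute the HMAC-SHA1 of the raw body, keyed with the shared secret,
    # and compare it with the header value in constant time.
    computed = hmac.new(secret.encode(), msg=raw_body,
                        digestmod=hashlib.sha1).hexdigest()
    return hmac.compare_digest("sha1=" + computed, x_hub_signature or "")
```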
Query available models
GET /models
This endpoint should return a JSON object containing your model names as keys. The name the user selects from the modelType dropdown in the UI is then posted to the /model_run endpoint.
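For example, a response listing the models from the reference implementation below would look like this (the per-model objects are empty placeholders):

```json
{
  "bounding_box": {},
  "ner": {},
  "image_single_classification": {},
  "image_multi_classification": {},
  "text_single_classification": {},
  "text_multi_classification": {}
}
```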

Example implementation:
```python
import hashlib
import hmac

from fastapi import FastAPI, Header, HTTPException

app = FastAPI()

SERVICE_SECRET = "<secret must match ui secret>"
model_names = [
    'bounding_box',
    'ner',
    'image_single_classification',
    'image_multi_classification',
    'text_single_classification',
    'text_multi_classification'
]

@app.get("/models")
async def models(X_Hub_Signature: str = Header(None)):
    # A GET request has no body, so the expected signature is the
    # HMAC-SHA1 of an empty message keyed with the shared secret.
    computed_signature = hmac.new(SERVICE_SECRET.encode(),
                                  digestmod=hashlib.sha1).hexdigest()
    if X_Hub_Signature != "sha1=" + computed_signature:
        raise HTTPException(
            status_code=401,
            detail=
            "Error: computed_signature does not match signature provided in the headers"
        )
    return {model_name: {} for model_name in model_names}
```
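To sanity-check the endpoint, you can reproduce the signature Labelbox would send (a sketch; the host URL is a placeholder):

```python
import hashlib
import hmac

import requests

SECRET = "<secret must match ui secret>"
# For a GET request the signed message is empty, matching the server-side check.
signature = "sha1=" + hmac.new(SECRET.encode(), digestmod=hashlib.sha1).hexdigest()

resp = requests.get("https://your-training-server.example.com/models",
                    headers={"X-Hub-Signature": signature})
print(resp.json())  # e.g. {"bounding_box": {}, "ner": {}, ...}
```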
Launch model training jobs
POST /model_run
Labelbox makes a request to this endpoint to initiate training. The request body contains JSON data with the keys described below.
Request body
| Field | Description |
| --- | --- |
| modelType | The name of the model to train with. You can run GET /models to get a list of available models from your training environment. |
| modelRunId | The model run id. Use it to export all the data rows and labels and to extract model configurations in your model training implementation. |
| organizationId | The id of the organization that the model belongs to. |
| modelId | The model id, which can be used to reference information about the model. |
Sample request body:
```json
{
  "modelType": "bounding_box",
  "modelRunId": "9e3f331a-551b-0e24-6ab5-f69855df64ea",
  "modelId": "9e3f51f8-5e62-0cab-360e-c0443dcc3008",
  "organizationId": "cjhfn5y6s0pk507024nz1ocys"
}
```
Example implementation:
@app.post("/model_run")
async def model_run(request: Request,
background_tasks: BackgroundTasks,
X_Hub_Signature: str = Header(None)):
req = await request.body()
computed_signature = hmac.new(SERVICE_SECRET.encode(),
msg=req,
digestmod=hashlib.sha1).hexdigest()
if X_Hub_Signature != "sha1=" + computed_signature:
raise HTTPException(
status_code=401,
detail=
"Error: computed_signature does not match signature provided in the headers"
)
data = json.loads(req.decode("utf8"))
# Use json data to launch training pipelines
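To keep the webhook responsive, a common pattern is to hand the job off to FastAPI's BackgroundTasks rather than training inside the request handler. A minimal sketch, where train is a hypothetical wrapper around your pipeline:

```python
def train(model_type: str, model_run_id: str) -> None:
    # Hypothetical pipeline entry point: export data rows and labels for the
    # model run, train the selected model type, then report status and results
    # back to Labelbox with the Python SDK (see the next section).
    ...

# Inside the /model_run handler above, after parsing the payload:
#   background_tasks.add_task(train, data["modelType"], data["modelRunId"])
#   return {"message": "training started"}
```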
Update model training job status and results via Python SDK
Once model training is done, you can surface the results in Labelbox to diagnose the model and track training progress. Labelbox provides SDK methods to show the training job status, errors, model predictions, and model metrics in the Labelbox UI.
Update model training job status, error messages, and metadata
```python
model_run.update_status(status, metadata=None, error_message=None)
```
Available statuses are: `EXPORTING_DATA`, `PREPARING_DATA`, `TRAINING_MODEL`, `COMPLETE`, `FAILED`.
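The examples below assume you already have a ModelRun object. One way to fetch it, assuming your SDK version exposes client.get_model_run, is by the modelRunId delivered in the webhook payload:

```python
import labelbox

client = labelbox.Client(api_key="<your api key>")
# The modelRunId from the webhook payload identifies the run to update.
model_run = client.get_model_run("9e3f331a-551b-0e24-6ab5-f69855df64ea")
```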
Status
When you set a status, the UI updates to reflect the new state.
Examples:
```python
model_run.update_status(
    status="PREPARING_DATA",
)
```

PREPARING_DATA state
```python
model_run.update_status(
    status="COMPLETE",
)
```

COMPLETE state
```python
model_run.update_status(
    status="FAILED",
    error_message="Deadline of 120.0s exceeded while calling target function.",
)
```
Error
The error message can be any arbitrary string. Once the status is set to FAILED, the message appears in the UI.

FAILED state with error message string being displayed
Metadata
Metadata accepts a dictionary for recording any information about the model run. Each call updates existing keys and appends new ones. This is useful for recording model configuration params or references to artifacts produced as the training pipeline progresses.
```python
model_run.update_status(
    status="COMPLETE",
    metadata={"model_weights": "gs://models/model_1.pb"},
)
```
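Because keys are merged across calls, you can record metadata incrementally as the pipeline progresses (key names here are illustrative):

```python
model_run.update_status(
    status="TRAINING_MODEL",
    metadata={"learning_rate": "0.001"},
)
# After the call below, the run's metadata contains both
# learning_rate and model_weights.
model_run.update_status(
    status="COMPLETE",
    metadata={"model_weights": "gs://models/model_1.pb"},
)
```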
Upload model training evaluations to Labelbox
You can upload model predictions and model metrics to the model run via the SDK at the end of your training and evaluation pipeline.
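A minimal sketch using the SDK's add_predictions method; the upload name and the shape of the prediction payloads are illustrative, and the exact format depends on your annotation types (see the annotation import docs):

```python
# Predictions produced by your evaluation pipeline, e.g. NDJSON-style dicts
# or labelbox annotation objects.
predictions = [...]  # placeholder for your serialized predictions

upload_job = model_run.add_predictions(
    name="training-run-predictions",  # illustrative upload name
    predictions=predictions,
)
upload_job.wait_until_done()
print(upload_job.errors)  # inspect any rows that failed to import
```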