Set up Google Vertex AutoML service

Use this guide to integrate your model training pipeline with Google Cloud Platform (GCP) Vertex AI. Vertex AI is an ML platform where you can build a model training pipeline, then train and deploy models. Support for other cloud providers is coming soon.


  1. Install docker-compose.

  2. Install the gcloud CLI:

# install gcloud cli
curl https://sdk.cloud.google.com | bash
# load env vars (use the profile for your shell: ~/.bash_profile, ~/.zshrc, or ~/.bashrc)
source ~/.bashrc
# login
gcloud auth login
# Set the correct gcloud project
gcloud config set project PROJECT_NAME
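To confirm the CLI is pointed at the right project before continuing, you can print the active configuration (a quick sanity check, not part of the original steps):

```shell
# print the project the gcloud CLI is currently configured to use
gcloud config get-value project
```

The command prints the project ID you set with `gcloud config set project`; if it prints `(unset)`, re-run the previous step.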
  3. Create a gcloud service account. Create gcloud service account keys if you don't have one already.
gcloud iam service-accounts create ACCOUNT_NAME \
    --description="account for configuring model training"
  4. Grant your user account permission to use the service account.
# assign gcloud service account permissions
gcloud iam service-accounts add-iam-policy-binding SERVICE_ACCOUNT_EMAIL \
    --member="user:YOUR_ACCOUNT_EMAIL" \
    --role="roles/iam.serviceAccountUser"
  5. Set the path for your service account credentials.
export GOOGLE_APPLICATION_CREDENTIALS=/Path/to/credentials/.config/gcloud/<name>.json
  6. Create the service account credentials.
gcloud iam service-accounts keys create $GOOGLE_APPLICATION_CREDENTIALS \
    --iam-account=SERVICE_ACCOUNT_EMAIL
  7. Connect Docker to Google Container Registry (GCR).
gcloud auth configure-docker
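`gcloud auth configure-docker` registers gcloud as a Docker credential helper for GCR. If you want to verify it took effect (an optional check, not part of the original steps), inspect the Docker config:

```shell
# gcr.io should appear under "credHelpers" after configure-docker runs
grep gcr.io ~/.docker/config.json
```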

Deploy the model training service to your cloud platform

  1. Clone the Labelbox model training repo:
git clone
cd model-training
  2. Create a .env file to keep track of the following env vars (copy .env.example to get started):

      • GCS bucket to store all of the artifacts. The bucket name must be globally unique. If the bucket doesn't exist, it will be created automatically.
      • Google Cloud project name.
      • A secret for authenticating requests to the service. You will have to use the same secret when making a request to the service from the Labelbox UI.
      • Path to the application credentials.
      • Your Labelbox API key.
      • Google service account, in the format <name>@<project>.iam.gserviceaccount.com.
      • A deployment name that all of the Google resources will use. You can deploy separate coordinator services (such as prod-training-service for production and dev-training-service for development).
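A filled-in .env might look like the sketch below. The variable names here are illustrative assumptions; use the names from your copy of .env.example:

```shell
# hypothetical .env sketch -- match the variable names in .env.example
GCS_BUCKET=my-unique-training-artifacts-bucket
GOOGLE_PROJECT=my-gcp-project
DEPLOYMENT_SECRET=some-long-random-string
GOOGLE_APPLICATION_CREDENTIALS=/path/to/credentials/.config/gcloud/key.json
LABELBOX_API_KEY=your-labelbox-api-key
GOOGLE_SERVICE_ACCOUNT=training@my-gcp-project.iam.gserviceaccount.com
DEPLOYMENT_NAME=prod-training-service
```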
  3. Once the .env file has the correct values (or whenever you update it later), load the env vars again and deploy the service to the cloud. This step might take a while.

source .env

When this script completes, copy the IP address printed to the console. This is the IP address of your coordinator service. You will need it to configure the model training integration in the Labelbox UI later.

  4. Test that the service is running:
curl http://<ip>:8000/ping

The remote IP will be printed to the console when you run the deployment script. The server will respond with pong if the deployment was successful.

  5. (Recommended) Reserve a static IP address on Google by following this [guide] so that the IP address of your coordinator service won't change after a few days.
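One way to do this is to promote the ephemeral IP that was assigned to your coordinator into a reserved static address with the gcloud CLI. The address name, region, and IP below are placeholder assumptions; substitute your own values:

```shell
# promote an existing ephemeral external IP to a reserved static address
gcloud compute addresses create coordinator-static-ip \
    --addresses=203.0.113.10 \
    --region=us-central1
```

After this, the address remains reserved to your project even if the underlying instance is restarted.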

What’s Next

Now that you have successfully set up the model training integration with your cloud platform, you can kick off model training from the Labelbox UI.