How to set up a prompt and response type project
Prompt and response creation projects are set up differently from other Labelbox projects: they use unique methods and modified versions of existing ones. This guide showcases the differences and provides an example workflow.
Before you start
The following import is needed to use the code examples in this section.
import labelbox as lb
API key and client
Provide a valid API key below to connect to the Labelbox client. For more information, see the Create API key guide.
API_KEY = None
client = lb.Client(api_key=API_KEY)
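If you prefer not to hard-code the key, the client can also pick it up from an environment variable; a minimal sketch, assuming LABELBOX_API_KEY is set in your shell:
import os
# Assumes the LABELBOX_API_KEY environment variable is set; lb.Client() also
# falls back to this variable when api_key is omitted.
client = lb.Client(api_key=os.environ["LABELBOX_API_KEY"])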
Create a prompt and response ontology
You can create ontologies for prompt and response projects in the same way as for other project types, using two methods: client.create_ontology and client.create_ontology_from_feature_schemas. For response creation projects, the only difference from other projects is that the project's media_type needs to be set to lb.MediaType.Text. For prompt and prompt response creation projects, you need to include their respective media types: lb.MediaType.LLMPromptCreation and lb.MediaType.LLMPromptResponseCreation. Additionally, response creation projects require an extra ontology_kind parameter, which needs to be set to lb.OntologyKind.ResponseCreation; this parameter does not apply to prompt and prompt response creation projects.
Option A: create_ontology
Typically, you create ontologies and generate the associated features simultaneously. Below are examples of creating ontologies for each project type using supported classifications; for information on supported annotation types, visit our prompt and response generation guide.
Depending on whether you are creating a prompt, a response, or both, certain classifications in your ontologies might not be necessary. See supported annotation types for more information.
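# Prompt response ontology: includes both the prompt and response classifications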
ontology_builder = lb.OntologyBuilder(
tools=[],
classifications=[
lb.PromptResponseClassification(
class_type=lb.PromptResponseClassification.Type.PROMPT,
name="prompt text",
character_min=1,  # Minimum character count of prompt field (optional)
character_max=20,  # Maximum character count of prompt field (optional)
),
lb.PromptResponseClassification(
class_type=lb.PromptResponseClassification.Type.RESPONSE_CHECKLIST,
name="response checklist feature",
options=[
lb.Option(value="option 1", label="option 1"),
lb.Option(value="option 2", label="option 2"),
],
),
lb.PromptResponseClassification(
class_type=lb.PromptResponseClassification.Type.RESPONSE_RADIO,
name="response radio feature",
options=[
lb.Option(value="first_radio_answer"),
lb.Option(value="second_radio_answer"),
],
),
lb.PromptResponseClassification(
class_type=lb.PromptResponseClassification.Type.RESPONSE_TEXT,
name="response text",
character_min=1,  # Minimum character count of response text field (optional)
character_max=20,  # Maximum character count of response text field (optional)
)
],
)
# Create ontology
ontology = client.create_ontology(
"Prompt and response ontology",
ontology_builder.asdict(),
media_type=lb.MediaType.LLMPromptResponseCreation,
)
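# Prompt-only ontology: includes just the prompt classification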
ontology_builder = lb.OntologyBuilder(
tools=[],
classifications=[
lb.PromptResponseClassification(
class_type=lb.PromptResponseClassification.Type.PROMPT,
name=f"prompt text",
character_min = 1, # Minimum character count of prompt field (optional)
character_max = 20, # Maximum character count of prompt field (optional)
)
],
)
# Create ontology
ontology = client.create_ontology(
"Prompt ontology",
ontology_builder.asdict(),
media_type=lb.MediaType.LLMPromptCreation,
)
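# Response-only ontology: includes just the response classifications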
ontology_builder = lb.OntologyBuilder(
tools=[],
classifications=[
lb.PromptResponseClassification(
class_type=lb.PromptResponseClassification.Type.RESPONSE_CHECKLIST,
name="response checklist feature",
options=[
lb.Option(value="option 1", label="option 1"),
lb.Option(value="option 2", label="option 2"),
],
),
lb.PromptResponseClassification(
class_type=lb.PromptResponseClassification.Type.RESPONSE_RADIO,
name="response radio feature",
options=[
lb.Option(value="first_radio_answer"),
lb.Option(value="second_radio_answer"),
],
),
lb.PromptResponseClassification(
class_type=lb.PromptResponseClassification.Type.RESPONSE_TEXT,
name=f"response text",
character_min = 1, # Minimum character count of response text field (optional)
character_max = 20, # Maximum character count of response text field (optional)
)
],
)
# Create ontology
ontology = client.create_ontology(
"Response ontology",
ontology_builder.asdict(),
media_type=lb.MediaType.Text,
ontology_kind=lb.OntologyKind.ResponseCreation
)
Option B: create_ontology_from_feature_schemas
You can also create ontologies using feature schema IDs. This builds your ontology from existing features instead of generating new ones. You can find these features in the Schema tab inside Labelbox.
ontology = client.create_ontology_from_feature_schemas(
"Prompt and response ontology",
feature_schema_ids=["<list of feature schema ids"],
media_type=lb.MediaType.LLMPromptResponseCreation
)
ontology = client.create_ontology_from_feature_schemas(
"Prompt ontology",
feature_schema_ids=["<list of feature schema ids"],
media_type=lb.MediaType.LLMPromptCreation
)
ontology = client.create_ontology_from_feature_schemas(
"Response ontology",
feature_schema_ids=["<list of feature schema ids"],
media_type=lb.MediaType.Text,
ontology_kind=lb.OntologyKind.ResponseCreation
)
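If the features you want already live in an existing ontology, one way to gather their schema IDs is from the ontology's normalized representation; a minimal sketch, assuming "<existing ontology id>" is a placeholder for an ontology you created earlier:
# Collect feature schema IDs from an existing ontology (sketch; the ontology
# ID below is a placeholder).
existing_ontology = client.get_ontology("<existing ontology id>")
feature_schema_ids = [
    classification["featureSchemaId"]
    for classification in existing_ontology.normalized["classifications"]
]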
Create response creation projects
You can create response creation projects using client.create_response_creation_project, which uses the same parameters as client.create_project but provides better validation to ensure the project is set up correctly. You also need to import text data rows to be used as prompts; a minimal sketch of this step follows the example below.
project = client.create_response_creation_project(
name="<project_name>",
description="<project_description>", # optional
)
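Response creation projects label existing text assets, so you also need to create text data rows and batch them to the project. A minimal sketch, assuming hypothetical prompt text and global keys:
# Create a dataset of text data rows to serve as prompts (values below are
# placeholders), then send them to the project as a batch.
dataset = client.create_dataset(name="Response creation prompts")
task = dataset.create_data_rows([
    {"row_data": "First prompt to respond to", "global_key": "prompt-1"},
    {"row_data": "Second prompt to respond to", "global_key": "prompt-2"},
])
task.wait_till_done()

project.create_batch(
    "response-creation-batch",  # each batch in a project must have a unique name
    global_keys=["prompt-1", "prompt-2"],
    priority=5,  # priority from 1 (highest) to 5 (lowest)
)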
Create prompt response and prompt creation projects
When creating a prompt response or prompt creation project using client.create_prompt_response_generation_project, you do not need to create data rows because they are generated automatically. This method takes the same parameters as the traditional client.create_project, but with a few specific additional parameters.
Parameters
The client.create_prompt_response_generation_project method takes the following parameters:
- name (required): The name of your new project.
- description (optional): A description of your project.
- media_type (required): The type of asset this project accepts. Can be either lb.MediaType.LLMPromptCreation or lb.MediaType.LLMPromptResponseCreation, depending on the project type you are setting up.
- dataset_name (optional): The name of the dataset where the generated data rows will be located. Include this parameter only if you want to create a new dataset.
- dataset_id (optional): The ID of an existing Labelbox dataset. Include this parameter if you want to append the generated data rows to that dataset.
- data_row_count: The number of data row assets that will be generated and used with your project.
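# Prompt response creation project: generated data rows are placed in a new dataset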
project = client.create_prompt_response_generation_project(
name="Demo prompt response project",
media_type=lb.MediaType.LLMPromptResponseCreation,
dataset_name="Demo prompt response dataset",
data_row_count=100,
)
# Setup project with ontology created above
project.connect_ontology(ontology)
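# Prompt creation project: same pattern, with the prompt-only media type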
project = client.create_prompt_response_generation_project(
name="Demo prompt project",
media_type=lb.MediaType.LLMPromptCreation,
dataset_name="Demo prompt dataset",
data_row_count=100,
)
# Setup project with ontology created above
project.connect_ontology(ontology)
Export prompt response, prompt, or response creation projects
Exporting from a prompt and response type project works the same as exporting from other projects. In this example, your export will be empty unless you create labels within the Labelbox platform. See prompt and response export for a sample export.
# The return type of this method is an `ExportTask`, which is a wrapper of a `Task`.
# Most of `Task`'s features are also present in `ExportTask`.
export_params = {
"attachments": True,
"metadata_fields": True,
"data_row_details": True,
"project_details": True,
"label_details": True,
"performance_details": True,
"interpolated_frames": True,
}
# Note: Filters follow AND logic, so typically using one filter is sufficient.
filters = {
"last_activity_at": ["2000-01-01 00:00:00", "2050-01-01 00:00:00"],
"label_created_at": ["2000-01-01 00:00:00", "2050-01-01 00:00:00"],
"workflow_status": "InReview",
"batch_ids": ["batch_id_1", "batch_id_2"],
"data_row_ids": ["data_row_id_1", "data_row_id_2"],
"global_keys": ["global_key_1", "global_key_2"],
}
export_task = project.export(params=export_params, filters=filters)
export_task.wait_till_done()
# Stream the export task results/errors one by one, printing each JSON output:
def json_stream_handler(output: lb.BufferedJsonConverterOutput):
print(output.json)
if export_task.has_errors():
export_task.get_buffered_stream(stream_type=lb.StreamType.ERRORS).start(
stream_handler=lambda error: print(error)
)
if export_task.has_result():
    export_task.get_buffered_stream(
        stream_type=lb.StreamType.RESULT
    ).start(stream_handler=json_stream_handler)
print("file size: ", export_task.get_total_file_size(stream_type=lb.StreamType.RESULT))
print("line count: ", export_task.get_total_lines(stream_type=lb.StreamType.RESULT))