Step 3: Configure model prediction

Here, you define the settings and parameters of your model run

After selecting the model to use, you need to configure the settings and parameters for the model run.

Each model has an ontology that describes what it should predict from the data. Other options depend on the selected model and your scenario; not every option is available for every model. For instance, prompts apply only to language models and to image models with text input. Each model also has its own set of hyperparameters, which you can find under Advanced model settings.

Each setting is set to a default value designed to serve the most common case. For best results, take time to investigate settings before experimenting with new values.

Here are some things you can do:

Change the model

To select a different model for your model run, select the Cancel icon displayed next to the name of the current model in the model name panel.

When you do this, you return to the Choose a model view, where you can select your preferred model. This resets all settings to their default values.

Load model config

You can load configuration settings from earlier model runs using the same model. Select Load model config to display a list of available configurations.

This copies several settings from the earlier model run, including ontology settings, confidence thresholds, hyperparameters, and prompts. You can use this to ensure consistency between runs or to test options from a base configuration.
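Conceptually, loading a configuration applies the saved values on top of the model's defaults. The sketch below illustrates this idea; the field names are hypothetical, not the product's actual schema.

```python
# Hypothetical sketch: a saved configuration's values override the
# model's defaults. Field names are illustrative only.
DEFAULTS = {
    "confidence_threshold": 0.5,
    "hyperparameters": {"temperature": 1.0},
    "prompt": "",
}

def apply_saved_config(defaults: dict, saved: dict) -> dict:
    """Return a new settings dict in which saved values override defaults."""
    merged = dict(defaults)
    for key, value in saved.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = {**merged[key], **value}  # shallow-merge nested dicts
        else:
            merged[key] = value
    return merged

# Only the saved threshold changes; untouched defaults remain in effect.
settings = apply_saved_config(DEFAULTS, {"confidence_threshold": 0.3})
```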

Use Filter to filter available configurations using a substring search (case insensitive).
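A case-insensitive substring search behaves like the small sketch below (the configuration names are made up for illustration):

```python
# Sketch of case-insensitive substring matching, as the Filter box performs.
def filter_configs(configs: list[str], query: str) -> list[str]:
    """Keep configuration names that contain `query`, ignoring case."""
    q = query.lower()
    return [name for name in configs if q in name.lower()]

configs = ["YOLO baseline", "yolo tuned v2", "BLIP2 captions"]
matches = filter_configs(configs, "yolo")  # matches both YOLO entries
```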

You can't apply a configuration from a model run that used a different model, because each model handles its settings differently.

Ontology and Prompt

An ontology describes the machine learning task you want the model to perform. An ontology consists of one or more supported features.

Ontology options for model runs vary according to the model. Some models provide their own fixed ontologies because they are trained on specific classes and tasks. Other models ask users to define the ontology because they use natural language as part of model inputs to support flexible prediction classes.

To learn more about the ontology of a particular model, use Model to display its model card, and then review the Supported features section of the Overview tab.

In general, a model's use of ontologies falls into one of the following categories:

Models with fixed ontologies

Some models are trained on specific classes and can therefore only predict those classes. Examples include the YOLO image classification model and the DistilBERT NER model. Such models do not use prompts.

To see the features such a model supports, use Model to open its model card and then look for Supported features on the Overview tab.

Use the **Overview** tab of a model card to learn the features supported for models with fixed ontologies.

Text generation models that support a single feature

Some text generation models support only a single feature: "text." These models are designed to output text, such as answers to specific questions, summaries, image captions, or visual task responses.

Example 1: You can use the BLIP2 model to answer questions posed in an input prompt. For example, you can set Prompt to "Question: what is in the image? Answer:". The model then tries to answer the question from an image.

Other questions and prompts may also be supported. In the case of BLIP2, you can leave the prompt blank to have the model provide a caption for the image.

Example 2: The Flan-T5 model can perform multiple tasks; use Prompt to describe the task you want the model to perform. Here are some examples:

| Prompt | Result |
| --- | --- |
| `Summarize:` | The model reviews the data row and summarizes it. |
| `Question: <input question>? Answer:` | The model answers the question for each data row. |
| Blank (`""`) | The model generates text, generally what it determines the next few words should be. |
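A small helper can keep question prompts in the expected "Question: ...? Answer:" form. This is an illustrative convenience, not part of the product:

```python
# Illustrative helper: build a Flan-T5-style question prompt in the
# "Question: <input question>? Answer:" form shown in the table above.
def question_prompt(question: str) -> str:
    """Wrap a question in the Question/Answer prompt format."""
    return f"Question: {question.rstrip('?')}? Answer:"

prompt = question_prompt("what is in the image")
# "Question: what is in the image? Answer:"
```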

For best results, you should be at least somewhat familiar with a model before using it in a model run.

Prompt-based models requiring user-defined ontologies

Some models take natural language as the Prompt input, which provides flexibility in prediction classes. These models require a user-defined ontology, which must exist before you define the model run.

If you're using such a model, select your ontology and then select Generate prompt to create an initial prompt for the model run from the ontology. You're free to edit the prompt to engineer specific outcomes.

Note that input prompts are passed directly to the model without further processing. This means you should take care to follow the formatting conventions defined by the underlying model, or your model run may produce unexpected results. (It also means you generally can't reuse a prompt formatted for one model with an unrelated model.)

Example 1: Large language models such as GPT and Claude let you provide ontologies that analyze text. You can define features that perform radio-button and checkbox classification, ask questions using free-form text, and recognize named entities. You can also combine these features in a single ontology.

Here, the prompt asks GPT-4 to recognize entities in data rows and uses an example to demonstrate the desired responses. The prompt was generated from the ontology by selecting Generate prompt. By default, generated prompts serve simple, common cases; customize and extend the template to guide the model's responses.

Example 2: Text-conditioned image models such as OWL-ViT and Grounding DINO can classify arbitrary objects using text descriptions. To use them, provide image ontologies containing names for your objects of interest.

When you select Generate prompt, the default prompt tells the model what to look for. Often, such prompts take the form of feature names separated by semicolons.
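The semicolon-separated pattern is simple enough to sketch. The feature names below are hypothetical examples, not a real ontology:

```python
# Sketch of the common generated-prompt pattern for text-conditioned
# image models: ontology feature names joined by semicolons.
def generate_detection_prompt(feature_names: list[str]) -> str:
    """Join feature names into a semicolon-separated detection prompt."""
    return "; ".join(feature_names)

prompt = generate_detection_prompt(["car", "pedestrian", "traffic light"])
# "car; pedestrian; traffic light"
```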

Examples

Some Large Language Models (LLMs) that require user-defined ontologies support Examples. These help the model understand your preferences for the model's response and generally take the form of a sample input paired with the desired result.

To illustrate, suppose you're classifying and summarizing movie plots. When defining a model run with a free-text question of "summary" and a checklist question of "movie_genres", you can set Examples to:

Vincent Chase, who separated from his wife after nine days of marriage, wants to do something new in his career. He calls his former agent-turned-studio head Ari Gold, who offers Vince a leading role in his first studio production ……
{"summary": "A Hollywood actor directs his first movie, which goes over budget, leading to conflicts with financiers and a struggle to complete the film.", "movie_genres": ["drama"]}

In this case, your model run would create a JSON response for each data row; the response would include two fields: summary and movie_genres.
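You can validate that each data row's response has the expected shape with a short check like the one below. The field names come from the example ontology above; the validation itself is an illustrative sketch:

```python
import json

# Sketch: parse and sanity-check the JSON response a data row would
# receive for the movie-plot ontology (a "summary" free-text field and
# a "movie_genres" checklist field).
def parse_response(raw: str) -> dict:
    """Parse a model response and verify the two expected fields."""
    response = json.loads(raw)
    if not isinstance(response.get("summary"), str):
        raise ValueError("missing or malformed 'summary' field")
    if not isinstance(response.get("movie_genres"), list):
        raise ValueError("missing or malformed 'movie_genres' field")
    return response

raw = '{"summary": "An actor directs his first movie.", "movie_genres": ["drama"]}'
parsed = parse_response(raw)
```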

Edit Ontology

When working with a model that provides its own ontology, you can edit the ontology to ignore specific features or to map its features to features in your own ontology.

To perform either task, select Edit in the Ontology section of the Predict view.

Ignoring features

When you edit an ontology, its features are displayed in the Edit ontology view. To ignore a feature, place a checkmark next to it in the editor.

When you ignore features in an ontology, the model run results don't include predictions for those features.
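The effect on results can be pictured as a simple filter over predictions. The dict structure below is a simplification for illustration, not the product's actual output format:

```python
# Sketch of the effect of ignoring features: predictions for ignored
# features are absent from the model run's results.
def drop_ignored(predictions: list[dict], ignored: set[str]) -> list[dict]:
    """Remove predictions whose feature is in the ignored set."""
    return [p for p in predictions if p["feature"] not in ignored]

preds = [{"feature": "car", "score": 0.9}, {"feature": "tree", "score": 0.7}]
kept = drop_ignored(preds, {"tree"})  # only the "car" prediction remains
```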

Use **Edit ontology** to ignore features in the selected ontology or to map features in a model ontology to features in your own ontology.

When finished, select Save or Discard changes as appropriate.

Advanced model settings

Advanced model settings modify the hyperparameters associated with the model; as such, they vary considerably from model to model.