Data curation

Data curation is an essential step in building modern production AI systems. It is the practice of prioritizing which data to label (and improving data quality along the way) so that each iteration increases model performance as much as possible.

It is neither efficient nor desirable to label your entire dataset in one pass.

Instead, it is best practice to create training data in iterations. At each iteration, you can exploit various insights to prioritize the data that will most improve model performance. Here are some best practices for creating training data iteratively.

  • You can start by labeling a small fraction of the dataset, training a model, and identifying where the model struggles or lacks confidence. Then leverage those insights to prioritize what to label next, and repeat.
  • You can identify which data is redundant and which is not, and prioritize labeling non-redundant data.
  • You can surface edge cases that are challenging for humans and prioritize labeling more of them.
  • You can discover that some labeling instructions are unclear and adjust them along the way.
  • You can discover that your ontology needs adjustment and refine it along the way.

By labeling in iterations, you will develop better ML models faster while keeping human labeling costs at a minimum.

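The loop described above can be sketched in a few lines. The example below uses scikit-learn on synthetic data purely for illustration (the model, features, and selection budget are assumptions, not part of any specific product): train on the currently labeled pool, score the unlabeled pool, and prioritize the least-confident rows for the next labeling batch.

```python
# One curation iteration: train on the labeled pool, score the unlabeled pool,
# and send the least-confident examples to labeling next.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Pretend only the first 200 rows have been labeled so far.
labeled_idx = np.arange(200)
unlabeled_idx = np.arange(200, len(X))

model = LogisticRegression(max_iter=1000)
model.fit(X[labeled_idx], y[labeled_idx])

# Confidence = probability of the predicted class; low confidence = high labeling value.
proba = model.predict_proba(X[unlabeled_idx])
confidence = proba.max(axis=1)

# Pick the 100 least-confident unlabeled rows as the next labeling batch.
order = np.argsort(confidence)[:100]
priority_rows = unlabeled_idx[order]
print(f"Next batch: {len(priority_rows)} rows, "
      f"mean confidence {confidence[order].mean():.2f}")
```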

The insights used to select and prioritize data can be gained in numerous ways. Below are a few examples of how machine learning teams gain insights that help them select and prioritize data for labeling.

Human insights

During AI development, AI teams often have insights into the nature of the problem. Below is an example of data selection driven by human insight.

A company is using AI to develop a crop weed detection system. To accomplish this, they are collecting data from sensors across the country. As they collect data, they observe that crop weeds are highly diverse and region-specific. Instead of developing a single model that works across the country, the team chooses to focus on a specific crop type (e.g., potatoes). To gather data for this crop-specific model, the company uses Catalog to search and filter for data likely to have a higher density of positive examples of potato crop weeds (e.g., by region, season, and tractor model).

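As a purely illustrative sketch of the same filtering idea, the snippet below narrows a pool of data rows by metadata before sending them for labeling. The metadata keys and values (region, season, tractor_model) and the heuristic are hypothetical; in practice, this kind of filtering would be done with Catalog's search and filter capabilities.

```python
# Hypothetical metadata filter: keep only data rows likely to contain potato
# fields, so the batch sent for labeling has a higher density of positive
# examples. All field names and values are invented for illustration.
data_rows = [
    {"id": "dr-001", "region": "idaho", "season": "summer", "tractor_model": "T7"},
    {"id": "dr-002", "region": "florida", "season": "winter", "tractor_model": "T5"},
    {"id": "dr-003", "region": "idaho", "season": "summer", "tractor_model": "T5"},
]

def likely_potato_field(row: dict) -> bool:
    """Heuristic: regions and seasons where potato crops (and their weeds) are common."""
    return row["region"] in {"idaho", "washington"} and row["season"] == "summer"

batch = [row["id"] for row in data_rows if likely_potato_field(row)]
print(batch)  # ['dr-001', 'dr-003'] -- prioritize these for labeling
```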

Model performance based insights

During the iterative cycle of AI development, AI teams may ask themselves the following questions:

  • Which unlabeled data should I prioritize for labeling in order to improve model performance?

  • Which existing ground truth labels should I prioritize for re-work in order to improve model performance?

Performing robust model error analysis helps AI teams answer both questions and decide which actions to take.

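For instance, one common action is to flag existing labels that a trained model contradicts with high confidence, since these are frequent candidates for re-work. The sketch below illustrates this with synthetic data and a scikit-learn model; the 0.9 confidence threshold and the cross-validation setup are illustrative assumptions, not a prescribed recipe.

```python
# Flag ground-truth labels that a model contradicts with high confidence.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

# flip_y injects some label noise so there is something to find.
X, y = make_classification(n_samples=1000, n_features=20, flip_y=0.05, random_state=0)

# Out-of-fold predictions so each label is scored by a model that never saw it.
proba = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                          cv=5, method="predict_proba")
predicted = proba.argmax(axis=1)
confidence = proba.max(axis=1)

# High-confidence disagreements between the model and the stored label.
suspect = np.where((predicted != y) & (confidence > 0.9))[0]
print(f"{len(suspect)} labels flagged for review out of {len(y)}")
```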

Insights derived during data labeling

During data labeling, you may uncover hard or confusing examples (e.g., a certain kind of manufacturing defect) that cause your labeling team to misclassify them. When this happens, it is best practice to find more examples like these in your unlabeled data and prioritize those Data Rows for labeling. Doing so builds familiarity and expertise among the data labelers.
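
One way to find "more examples like these" is an embedding similarity search, sketched below with randomly generated embeddings. In practice, the embeddings would come from your model or an off-the-shelf encoder, and the search itself can be run in Catalog; the pool size and top-k cutoff here are arbitrary.

```python
# Rank unlabeled data rows by embedding similarity to a known hard example,
# then prioritize the closest matches for labeling.
import numpy as np

rng = np.random.default_rng(0)
unlabeled_embeddings = rng.normal(size=(5000, 256))   # one vector per unlabeled data row
hard_example = rng.normal(size=256)                   # embedding of the confusing example

def cosine_similarity(matrix: np.ndarray, vector: np.ndarray) -> np.ndarray:
    matrix_norm = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
    vector_norm = vector / np.linalg.norm(vector)
    return matrix_norm @ vector_norm

scores = cosine_similarity(unlabeled_embeddings, hard_example)
top_k = np.argsort(scores)[::-1][:50]   # 50 most similar unlabeled rows
print("Prioritize these row indices for labeling:", top_k[:10])
```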

Another best practice is to improve class distribution by purposefully selecting and prioritizing data that is likely to contain rarer classes.
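
A minimal sketch of that idea, assuming the current model's predicted classes are a reasonable proxy for what the unlabeled data contains: weight each unlabeled row by the inverse frequency of its predicted class and sample the next labeling batch with those weights. The class counts and batch size below are invented for illustration.

```python
# Bias the next labeling batch toward rows likely to contain rarer classes.
import numpy as np

rng = np.random.default_rng(0)
# Simulated predicted classes for 10,000 unlabeled rows (class 2 is rare).
predicted_class = rng.choice([0, 1, 2], size=10_000, p=[0.70, 0.28, 0.02])

classes, counts = np.unique(predicted_class, return_counts=True)
inverse_freq = {c: 1.0 / n for c, n in zip(classes, counts)}

weights = np.array([inverse_freq[c] for c in predicted_class])
weights /= weights.sum()

# Sample 500 rows for the next labeling batch, oversampling rare classes.
batch = rng.choice(len(predicted_class), size=500, replace=False, p=weights)
print(np.unique(predicted_class[batch], return_counts=True))
```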

For both of these situations, you can use Catalog to send prioritized Data Rows to the labeling queue.

