In order to launch a Production Batch, you first need to create a set of quality tasks: Training tasks and Evaluation tasks.
You can also add concepts and a difficulty to each quality task. Concepts describe what the task is about, whereas difficulty describes how hard the task is to complete. Tagging quality tasks with concepts and difficulties allows us to serve them to Taskers in a more balanced way, giving you more holistic quality signals on Production Batches.
To ensure the quality of your labels, you'll need to decide which tasks belong in each subset: Training tasks and Evaluation tasks.
If you think a task would be a good one for all Taskers to complete before moving on to live Production Batch tasks, make it a Training task. Think about your Training tasks as a set: make sure they cover a good breadth of your dataset's variability. These tasks should generally be easier, since they will be the first time a Tasker encounters your data.
If you think a task would be a good one for tracking the quality of your Production Batch tasks, make it an Evaluation task. These tasks should generally be harder, since they will be randomly served to Taskers to gauge quality and accuracy. Note that because they tend to be harder, your overall Production Batch quality should be higher than your Evaluation task quality.
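If you prefer to script this step instead of using the UI, the official `scaleapi` Python client ships Rapid-specific helpers for both task types. The sketch below assumes those helpers (`create_training_task` and `create_evaluation_task`) are available in your client version; the project name, attachment URLs, and response payloads are placeholders, so check the client README for the exact payload your task type expects.

```python
# A minimal sketch using the official `scaleapi` Python client (pip install scaleapi).
# The Rapid helpers and payload shapes below are assumptions; verify them against
# the client README for your version.
import scaleapi
from scaleapi.tasks import TaskType

client = scaleapi.ScaleClient("YOUR_API_KEY")

# An easier task every Tasker completes before live work: a Training task.
client.create_training_task(
    TaskType.ImageAnnotation,
    project="license_plates",                        # placeholder project name
    attachment="https://example.com/day_image.jpg",  # placeholder attachment
    expected_response={"annotations": {}},           # placeholder; match your taxonomy
)

# A harder task served at random during production to gauge quality: an Evaluation task.
client.create_evaluation_task(
    TaskType.ImageAnnotation,
    project="license_plates",
    attachment="https://example.com/night_image.jpg",
    expected_response={"annotations": {}},
)
```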
You can create a quality task from any audited task. For instance, after you audit each task in your Calibration Batch, you can choose to turn it into a quality task.
It is important to create a diverse set of quality tasks. For example, for a 3-class categorization problem, you would want an equal balance between all 3 classes; see the sketch below for one way to check this.
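Before promoting tasks, it can help to tally the audited labels you're drawing from so the balance is explicit. A throwaway sketch in plain Python (the `candidates` list and its `label` field are hypothetical stand-ins for your own audit export):

```python
from collections import Counter

# Hypothetical audit export: candidate quality tasks with their audited class labels.
candidates = [
    {"task_id": "t1", "label": "car"},
    {"task_id": "t2", "label": "truck"},
    {"task_id": "t3", "label": "car"},
    {"task_id": "t4", "label": "motorcycle"},
]

counts = Counter(c["label"] for c in candidates)
total = sum(counts.values())
for label, n in sorted(counts.items()):
    print(f"{label}: {n} ({n / total:.0%})")
# For a 3-class problem, aim for roughly a third per class before promoting
# these candidates to quality tasks.
```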
Selecting Create Quality Task in the lower right corner will prompt you to choose which type of task it should be.
You can then label the quality task with concepts and a difficulty.
All the quality tasks you've created (both Training & Evaluation tasks) can be found under Quality Lab in the upper navigation of each project.
You can click on an evaluation task to show its corresponding concepts and difficulty. For example, the following image tests the ability to find license plates in a picture taken at night.
This evaluation task has been tagged with the "night time" concept and has been assigned a difficulty of "Hard."
Evaluation tasks are automatically split into initial and review phases based on the changes you made during the audit. If you rejected the attempted annotation and then made appropriate corrections, that Evaluation task becomes a Review Phase Evaluation task; otherwise, it remains an Initial Phase Evaluation task.
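If you create Evaluation tasks programmatically, the same split falls out of the payload you send: in the `scaleapi` client sketch above, supplying an `initial_response` alongside the `expected_response` is, as far as the client README describes, what marks the task for the review phase. A sketch under that assumption (all names and payloads are placeholders):

```python
import scaleapi
from scaleapi.tasks import TaskType

client = scaleapi.ScaleClient("YOUR_API_KEY")

# Review Phase Evaluation task: the Tasker is shown a flawed attempt and must fix it.
client.create_evaluation_task(
    TaskType.ImageAnnotation,
    project="license_plates",                          # placeholder project name
    attachment="https://example.com/night_image.jpg",  # placeholder attachment
    initial_response={"annotations": {}},   # the flawed attempt to be corrected
    expected_response={"annotations": {}},  # the corrected annotation from your audit
)
```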
It is recommended that you create quality tasks both at the start of a project (before launching production) and on an ongoing basis as you continue delivery.
Once you have determined that your quality task subsets represent your full dataset well, and you have checked that all of your initial and expected responses are correct, you're ready to launch your Production Batch!
After creating quality tasks, your project is ready for Regular Batches. These are the batches that make up the bulk of the data you want labeled. You can use batch names as metadata to help group your data. We usually recommend up to 5,000 tasks per batch. Once you create your first Regular Batch, Scale Rapid will automatically start onboarding labelers onto your project.
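If you script batch creation, the sketch below shows one way to do it with the `scaleapi` client's `create_batch`, `create_task`, and `finalize_batch` calls, chunking a task list at the recommended 5,000 per batch. The project name, file URLs, and task parameters are placeholders; Rapid projects take their taxonomy from project setup, but some task types may require extra parameters (e.g., `geometries`).

```python
import scaleapi
from scaleapi.tasks import TaskType

client = scaleapi.ScaleClient("YOUR_API_KEY")

attachments = [f"https://example.com/frame_{i}.jpg" for i in range(12_000)]  # placeholder data
BATCH_SIZE = 5_000  # the recommended upper bound per batch

for start in range(0, len(attachments), BATCH_SIZE):
    chunk = attachments[start:start + BATCH_SIZE]
    # Batch names double as metadata, so encode anything useful for grouping here.
    batch_name = f"production_{start // BATCH_SIZE + 1:03d}"
    client.create_batch(project="license_plates", batch_name=batch_name)
    for url in chunk:
        client.create_task(
            TaskType.ImageAnnotation,
            project="license_plates",
            batch=batch_name,
            attachment=url,
        )
    client.finalize_batch(batch_name=batch_name)  # signals no more tasks will be added
```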
After launching a Production Batch, you can continue to add data and refine your project for future Production Batches.