Training API
Introduction
This guide explains how to programmatically start and manage neural network training in Supervisely using the Training API. You'll learn how to configure training parameters, run multiple experiments in parallel, and automatically compare model performance.
TrainApi overview
TrainApi is a high-level API that starts a training application task programmatically. It allows you to conveniently run a training app by providing parameters in the same structure that a user configures in the Training App GUI (TrainApp).
If you are not yet familiar with Supervisely environment variables, you can read about them here.
Quick Example:
import os

from dotenv import load_dotenv

import supervisely as sly
from supervisely.api.nn.train_api import TrainApi

if sly.is_development():
    load_dotenv("local.env")
    load_dotenv(os.path.expanduser("~/supervisely.env"))

api = sly.Api.from_env()
project_id = sly.env.project_id()

train = TrainApi(api)
train.run(project_id=project_id, model="YOLO/YOLO26s-det")

TrainApp overview
TrainApp in Supervisely is a template for a training application that guides the user step by step through the training settings.
Steps:
Select Project - what data to train on and whether to cache this data for future use
Select Model - Pretrained model or custom checkpoint that was trained in Supervisely
Select Classes - List of class names from the project
Train/Val split - Split the data into train and validation sets
Hyperparameters - YAML editor with training hyperparameters. Hyperparameters are different for each framework.
Model Benchmark - Run model benchmark and speed test. Can be disabled if not needed.
Export - Export the model to ONNX or TensorRT formats, if supported by the framework.
Start training - Start training.
How to Start Training
To start training programmatically, call the run() method of the TrainApi class.
It will:
Prepare the same app state that you would configure in the TrainApp UI
Detect a suitable training app for the chosen framework
Start the training task on the selected agent
TrainApi.run() parameters
Type: int | None
Optional: Yes (default: auto-select)
Agent ID where the training task will be started.
If not provided, TrainApi will automatically pick an available agent in the project team.
Example:
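A minimal sketch, continuing from the Quick Example above; the agent_id keyword name and the agent ID value are assumptions for illustration:

# "agent_id" is an assumed keyword name; 123 is a placeholder agent ID
train.run(project_id=project_id, model="YOLO/YOLO26s-det", agent_id=123)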
Type: int
Required: Yes
Project ID with training data.
Example:
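For instance, using the project ID read from the environment, as in the Quick Example above:

project_id = sly.env.project_id()  # or pass a known ID directly, e.g. project_id=12345
train.run(project_id=project_id, model="YOLO/YOLO26s-det")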
Type: str
Required: Yes
Model identifier in one of two formats:
Pretrained model:
"framework/model_name"Custom checkpoint from Team Files: checkpoint path in Team Files (absolute or relative)
Examples:
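Two illustrative values, continuing from the Quick Example above; the pretrained identifier follows the "framework/model_name" format, and the Team Files path below is hypothetical:

# Pretrained model from the framework's model zoo
train.run(project_id=project_id, model="YOLO/YOLO26s-det")

# Custom checkpoint from Team Files (hypothetical path, shown for illustration)
train.run(project_id=project_id, model="/experiments/my_experiment/checkpoints/best.pt")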
Type: list[str] | None
Optional: Yes (default: all classes from project)
List of class names to train on. Classes that are not in the project meta are automatically filtered out by TrainApi.
If not provided, TrainApi uses all classes from the project.
Example:
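A sketch continuing from the Quick Example above; the classes keyword name and the class names are assumptions for illustration:

# "classes" is an assumed keyword name; class names must exist in the project meta
train.run(project_id=project_id, model="YOLO/YOLO26s-det", classes=["car", "person"])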
Type: RandomSplit | DatasetsSplit | TagsSplit | CollectionsSplit | None
Optional: Yes (default: RandomSplit())
Specify how to split your data into train/val sets.
Available split types:
DatasetsSplit - Split by dataset IDs
RandomSplit - Random split by percentage
TagsSplit - Split by image tags
CollectionsSplit - Split by collection IDs
Examples:
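A sketch using the documented default RandomSplit; the import location of the split classes and the train_val_split keyword name are assumptions of this sketch:

# Import path and "train_val_split" keyword name are assumptions
from supervisely.api.nn.train_api import RandomSplit

# RandomSplit() with default settings matches the documented default behavior
train.run(project_id=project_id, model="YOLO/YOLO26s-det", train_val_split=RandomSplit())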
Type: str | None
Optional: Yes (default: framework defaults)
Hyperparameters as a YAML string.
A full list of available hyperparameters for the selected framework can usually be found in the training app repository in the hyperparameters.yaml file. For example, for the YOLO app you can find the list of hyperparameters here.
Supervisely doesn't modify the hyperparameters input and uses the parameter names as provided by the model authors. If you can't find the parameter you want to use in the default hyperparameters provided by the training app, you can add it manually.
Example:
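A sketch continuing from the Quick Example above; the hyperparameters keyword name is assumed, and the YAML keys below are illustrative YOLO-style names, so take the real ones from the app's hyperparameters.yaml:

# YAML keys are illustrative; use the names from the training app's hyperparameters.yaml
hyperparameters = """
epochs: 50
batch: 16
"""
# "hyperparameters" is an assumed keyword name
train.run(project_id=project_id, model="YOLO/YOLO26s-det", hyperparameters=hyperparameters)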
Type: str | None
Optional: Yes (default: auto-generated)
Name of the experiment.
If not provided, the name will be generated by TrainApp using the following format:
Example:
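A sketch continuing from the Quick Example above; the experiment_name keyword name is an assumption:

# "experiment_name" is an assumed keyword name
train.run(project_id=project_id, model="YOLO/YOLO26s-det", experiment_name="YOLO26s baseline")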
Type: bool
Optional: Yes (default: True)
Automatically convert class shapes for the model task type.
For example, if you have a project with polygons and you want to train a detection model, TrainApp will automatically convert polygons to rectangles for the detection model.
Example:
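A sketch continuing from the Quick Example above; the convert_class_shapes keyword name is an assumption:

# "convert_class_shapes" is an assumed keyword name; True is the documented default
train.run(project_id=project_id, model="YOLO/YOLO26s-det", convert_class_shapes=False)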
Type: bool
Optional: Yes (default: True)
Runs the model benchmark after training and generates an evaluation report.
Learn more about Model Evaluation and Benchmark.
Example:
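A sketch continuing from the Quick Example above; the option name enable_benchmark is taken from the speed-test description below:

# Disable the post-training benchmark (enabled by default)
train.run(project_id=project_id, model="YOLO/YOLO26s-det", enable_benchmark=False)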
Type: bool
Optional: Yes (default: False)
Runs a model speed test during model evaluation. Can be enabled only if the enable_benchmark option is set to True.
Example:
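A sketch continuing from the Quick Example above; the enable_speedtest keyword name is an assumption, and it only takes effect together with enable_benchmark=True:

# "enable_speedtest" is an assumed keyword name
train.run(
    project_id=project_id,
    model="YOLO/YOLO26s-det",
    enable_benchmark=True,
    enable_speedtest=True,
)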
Type: bool
Optional: Yes (default: True)
Cache the project on the agent to avoid downloading it again during subsequent training runs. If the project has changed since the last training run, the cached copy will be updated and synced with the project on the server.
Example:
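A sketch continuing from the Quick Example above; the cache_project keyword name is an assumption:

# "cache_project" is an assumed keyword name; True is the documented default
train.run(project_id=project_id, model="YOLO/YOLO26s-det", cache_project=False)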
Type: bool
Optional: Yes (default: False)
Export model to ONNX format.
If supported by the selected training app, the model will be exported to ONNX format after training. This option does not affect PyTorch checkpoints; they are always preserved.
Example:
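A sketch continuing from the Quick Example above; the export_onnx keyword name is an assumption:

# "export_onnx" is an assumed keyword name; PyTorch checkpoints are kept either way
train.run(project_id=project_id, model="YOLO/YOLO26s-det", export_onnx=True)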
Type: bool
Optional: Yes (default: False)
Export model to TensorRT format.
If supported by the selected training app, the model will be exported to TensorRT format after training. This option does not affect PyTorch checkpoints; they are always preserved.
Example:
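A sketch continuing from the Quick Example above; the export_tensorrt keyword name is an assumption:

# "export_tensorrt" is an assumed keyword name
train.run(project_id=project_id, model="YOLO/YOLO26s-det", export_tensorrt=True)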
Type: bool
Optional: Yes (default: True)
If True, training is started automatically after all settings are applied. If False, training must be started manually from the training app UI by clicking the "Start Training" button.
Example:
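A sketch continuing from the Quick Example above; the start_training keyword name is an assumption. With False, the task opens the training app and waits for the "Start Training" button:

# "start_training" is an assumed keyword name; default is True
train.run(project_id=project_id, model="YOLO/YOLO26s-det", start_training=False)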
Use Case: Run Multiple Experiments and Compare Results
This example demonstrates how to programmatically run multiple training experiments in parallel and compare their performance. This workflow is useful for:
Testing different model architectures on the same dataset
Comparing various hyperparameter configurations
Benchmarking model performance across different experiments
The workflow consists of three main steps:
Run training experiments - Train multiple models in parallel using different agents
Generate evaluation reports - Each experiment automatically produces a benchmark report
Compare results - Use the Model Benchmark Compare application to analyze performance side-by-side
Learn more about Training Experiments and Model Evaluation.
Prerequisites
Create a local.env file with your environment configuration:
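A minimal sketch of what local.env might contain; the values are placeholders, and it is assumed that sly.env.project_id() reads the PROJECT_ID variable while server credentials live in ~/supervisely.env:

# local.env - project context for local development (placeholder value)
PROJECT_ID=12345
# Server credentials are usually kept in ~/supervisely.env:
# SERVER_ADDRESS=https://app.supervisely.com
# API_TOKEN=your-api-token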
Complete example
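A condensed sketch of the workflow, under assumptions: model identifiers and agent IDs are placeholders, the agent_id and enable_benchmark keyword names are not confirmed by this guide, and run() is assumed to return after the training task is started so experiments can be launched back to back. Launching the Model Benchmark Compare application is not shown here.

import os

from dotenv import load_dotenv

import supervisely as sly
from supervisely.api.nn.train_api import TrainApi

if sly.is_development():
    load_dotenv("local.env")
    load_dotenv(os.path.expanduser("~/supervisely.env"))

api = sly.Api.from_env()
project_id = sly.env.project_id()
train = TrainApi(api)

# Model identifiers are illustrative; use any models supported by your training apps
models = ["YOLO/YOLO26s-det", "YOLO/YOLO26m-det"]
# Hypothetical agent IDs, one per experiment, so the trainings run in parallel
agent_ids = [101, 102]

for model, agent_id in zip(models, agent_ids):
    # "agent_id" and "enable_benchmark" keyword names are assumptions of this sketch
    train.run(
        project_id=project_id,
        model=model,
        agent_id=agent_id,
        enable_benchmark=True,  # each experiment produces an evaluation report
    )

# Once all trainings finish, open the Model Benchmark Compare application
# in the Supervisely UI to compare the generated evaluation reports side by side.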
Workflow results
After running the script, you can track the entire workflow through the Supervisely UI:
Training tasks
Monitor parallel training progress and comparison task execution in the Tasks & Apps section:

Multiple training tasks run simultaneously on different agents, reducing total training time. After training completes, the comparison task automatically starts.
Experiments
All training runs are automatically registered in the Experiments section with full tracking and metrics:

Each experiment includes training data, model checkpoints, and automatically generated evaluation reports.
Model benchmark comparison
The final comparison report in Model Benchmark provides comprehensive side-by-side analysis:

Compare key metrics including mAP, precision, recall, inference speed, and per-class performance, making it easy to identify the best-performing model for your use case.