
Training API

Introduction

This guide explains how to programmatically start and manage neural network training in Supervisely using the Training API. You'll learn how to configure training parameters, run multiple experiments in parallel, and automatically compare model performance.

TrainApi overview

TrainApi is a high-level API that starts a training application task programmatically. It allows you to conveniently run a training app by providing parameters in the same structure that a user configures in the Training App GUI (TrainApp).

If you are not yet familiar with Supervisely environment variables, you can read about them here.

Quick Example:

import os
from dotenv import load_dotenv

import supervisely as sly
from supervisely.api.nn.train_api import TrainApi

if sly.is_development():
    load_dotenv("local.env")
    load_dotenv(os.path.expanduser("~/supervisely.env"))

api = sly.Api.from_env()

project_id = sly.env.project_id()

train = TrainApi(api)
train.run(project_id=project_id, model="YOLO/YOLO26s-det")

TrainApp overview

TrainApp in Supervisely is a template for a training application that guides the user through the training settings step by step; a sketch of how these steps map to TrainApi.run() follows the list below.

Steps:

  1. Select Project - what data to train on and whether to cache this data for future use

  2. Select Model - Pretrained model or custom checkpoint that was trained in Supervisely

  3. Select Classes - List of class names from the project

  4. Train/Val split - Split the data into train and validation sets

  5. Hyperparameters - YAML editor with training hyperparameters. Hyperparameters are different for each framework.

  6. Model Benchmark - Run model benchmark and speed test. Can be disabled if not needed.

  7. Export - Export the model to ONNX or TensorRT formats, if supported by the framework.

  8. Start training - Launch the training process.
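
When training is started programmatically, TrainApi.run() accepts parameters that mirror these steps. The sketch below reuses the train and project_id objects from the quick example above; apart from project_id and model, the commented-out keyword names are hypothetical placeholders meant only to illustrate the mapping, not the actual run() signature.

# Sketch only: the commented-out keywords are hypothetical placeholders
# showing how the TrainApp steps could map to run() parameters.
train.run(
    project_id=project_id,               # step 1: Select Project
    model="YOLO/YOLO26s-det",            # step 2: Select Model
    # classes=["car", "person"],         # step 3: Select Classes (hypothetical)
    # train_val_split={"ratio": 0.8},    # step 4: Train/Val split (hypothetical)
    # hyperparameters="epochs: 100",     # step 5: Hyperparameters (hypothetical)
    # model_benchmark=True,              # step 6: Model Benchmark (hypothetical)
    # export=["ONNX"],                   # step 7: Export (hypothetical)
)                                        # step 8: Start training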

How to Start Training

To start training programmatically, call the run() method of the TrainApi class.

It will:

  • Prepare the same app state that you would configure in TrainApp UI

  • Detect a suitable training app for the chosen framework

  • Start the training task on the selected agent

TrainApi.run() parameters

agent_id

Type: int | None

Optional: Yes (default: auto-select)

Agent ID where the training task will be started. If not provided, TrainApi will automatically pick an available agent in the project team.

Example:
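
A minimal sketch, reusing the train and project_id objects from the quick example above and assuming the agent is passed as agent_id:

# Start training on a specific agent instead of letting TrainApi pick one.
# 123 is a placeholder; use an agent ID from your team's Agents page.
train.run(
    project_id=project_id,
    model="YOLO/YOLO26s-det",
    agent_id=123,
)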

Use Case: Run Multiple Experiments and Compare Results

This example demonstrates how to programmatically run multiple training experiments in parallel and compare their performance. This workflow is useful for:

  • Testing different model architectures on the same dataset

  • Comparing various hyperparameter configurations

  • Benchmarking model performance across different experiments

The workflow consists of three main steps:

  1. Run training experiments - Train multiple models in parallel using different agents

  2. Generate evaluation reports - Each experiment automatically produces a benchmark report

  3. Compare results - Use the Model Benchmark Compare application to analyze performance side-by-side

Learn more about Training Experiments and Model Evaluation.

Prerequisites

Create a local.env file with your environment configuration:
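
A minimal local.env for this workflow might look like the following; the values are placeholders, and SERVER_ADDRESS with API_TOKEN can stay in ~/supervisely.env as in the quick example above:

# local.env - placeholder values, replace with your own
PROJECT_ID=12345
# TEAM_ID=8   # optional, only if your script reads the team id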

Complete example
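
Below is a minimal sketch of the first step of this workflow: running several experiments in parallel with the TrainApi.run() call documented above. The model names, agent IDs, and the use of a thread pool are illustrative assumptions; each finished training task produces its evaluation report, and the side-by-side comparison is then generated with the Model Benchmark Compare application, as described in the following sections.

import os
from concurrent.futures import ThreadPoolExecutor

from dotenv import load_dotenv

import supervisely as sly
from supervisely.api.nn.train_api import TrainApi

if sly.is_development():
    load_dotenv("local.env")
    load_dotenv(os.path.expanduser("~/supervisely.env"))

api = sly.Api.from_env()
project_id = sly.env.project_id()

train = TrainApi(api)

# Experiments to compare; model names are illustrative placeholders.
experiments = [
    {"model": "YOLO/YOLO26s-det", "agent_id": 123},  # agent IDs are placeholders
    {"model": "YOLO/YOLO26m-det", "agent_id": 124},  # taken from your team's Agents page
]

def run_experiment(exp):
    # Starts one training task per experiment on the chosen agent.
    return train.run(project_id=project_id, model=exp["model"], agent_id=exp["agent_id"])

# Launch the experiments in parallel so they train on different agents at the same time.
with ThreadPoolExecutor(max_workers=len(experiments)) as executor:
    results = list(executor.map(run_experiment, experiments))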

Workflow results

After running the script, you can track the entire workflow through the Supervisely UI:

Training tasks

Monitor parallel training progress and comparison task execution in the Tasks & Apps section:

Training tasks and comparison

Multiple training tasks run simultaneously on different agents, reducing total training time. After training completes, the comparison task automatically starts.

Experiments

All training runs are automatically registered in the Experiments section with full tracking and metrics:

Experiments list

Each experiment includes training data, model checkpoints, and automatically generated evaluation reports.

Model benchmark comparison

The final comparison report in Model Benchmark provides comprehensive side-by-side analysis:

Model Benchmark comparison report

Compare key metrics including mAP, precision, recall, inference speed, and per-class performance, making it easy to identify the best-performing model for your use case.
