Object detection
A step-by-step tutorial that explains how to integrate a custom object detection neural network into the Supervisely platform.
Introduction
This tutorial will teach you how to integrate your custom object detection model into Supervisely by using the ObjectDetectionTrainDashboad class.
The full code of the object detection sample app can be found here

How to debug this tutorial
Step 1. Prepare the ~/supervisely.env file with credentials. Learn more here.
Step 2. Clone the repository with the source code and create a virtual environment.
git clone https://github.com/supervisely-ecosystem/object-detection-training-template
cd object-detection-training-template
./create_venv.sh
Step 3. Open the repository directory in Visual Studio Code.
code -r .
Step 4. Start debugging src/main.py
Integrate your model
The integration of your own NN with ObjectDetectionTrainDashboad is really simple:
Step 1. Define a PyTorch dataset
Step 2. Define a PyTorch object detection model
Step 3. Define a subclass of ObjectDetectionTrainDashboad and implement the train method
Step 4. Configure your dashboard using parameters and run the app (see the sketch below). That's all.
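To make these steps concrete, here is a minimal sketch of how they could fit together. The dataset and model use plain PyTorch/torchvision; the dashboard-side names (the exact train signature, get_hyperparameters, train_dataloader, log, run, self.model) are assumptions for illustration only, so take the real API from the template source (src/main.py).

```python
import torch
import torchvision
from torch.utils.data import Dataset


# Step 1. A PyTorch dataset that returns (image_tensor, target_dict) pairs,
# the format expected by torchvision detection models.
class MyDetectionDataset(Dataset):
    def __init__(self, items):
        self.items = items  # e.g. a list of (image_tensor, boxes, labels) tuples

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        image, boxes, labels = self.items[idx]
        target = {
            "boxes": torch.as_tensor(boxes, dtype=torch.float32),
            "labels": torch.as_tensor(labels, dtype=torch.int64),
        }
        return image, target


# Step 2. Any PyTorch object detection model, e.g. Faster R-CNN from torchvision.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=2)


# Step 3. Subclass the dashboard and implement the training loop in `train`.
# The helpers and attributes used inside (get_hyperparameters, train_dataloader,
# log, self.model) are hypothetical -- check the template source for the real API.
class CustomTrainDashboard(ObjectDetectionTrainDashboad):
    def train(self):
        hparams = self.get_hyperparameters()  # hypothetical helper
        self.model.train()                    # hypothetical attribute holding the passed model
        optimizer = torch.optim.SGD(self.model.parameters(), lr=hparams.get("lr", 0.005))
        for epoch in range(hparams.get("epochs", 10)):
            # assumes a detection-style dataloader yielding lists of images and targets
            for images, targets in self.train_dataloader:
                loss_dict = self.model(list(images), list(targets))
                loss = sum(loss_dict.values())
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
            self.log(f"epoch {epoch}: loss={loss.item():.4f}")  # hypothetical logger call


# Step 4. Configure the dashboard with parameters and run the app.
dashboard = CustomTrainDashboard(
    model=model,
    plots_titles=["Loss", "mAP"],
)
app = dashboard.run()  # hypothetical entry point; see src/main.py in the template
```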
How to customize the dashboard?
Configuration via parameters
This section provides detailed information about the parameters used to initialize ObjectDetectionTrainDashboad and how to change them.
class ObjectDetectionTrainDashboad:
    def __init__(
        self,
        # --- required ---
        model,
        plots_titles,
        # --- optional ---
        pretrained_weights,
        hyperparameters_categories,
        extra_hyperparams,
        hyperparams_edit_mode,
        show_augmentations_ui,
        extra_augmentation_templates,
        download_batch_size,
        loggers,
    ):
        ...
pretrained_weights: Dict
- it defines the table of pretrained model weights in the UI
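For illustration, one plausible shape for this dict is a set of table columns plus one row per checkpoint; the actual keys and columns are defined by the template, so treat everything below as an assumption.

```python
# Hypothetical schema: table columns plus one row per downloadable checkpoint.
pretrained_weights = {
    "columns": ["Name", "Description", "Path"],
    "rows": [
        ["SSD-lite", "Lightweight baseline", "/checkpoints/ssdlite.pth"],
        ["Faster R-CNN", "ResNet-50 FPN backbone", "/checkpoints/frcnn_r50.pth"],
    ],
}
```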
hyperparameters_categories: List
- list of tab names in the hyperparameters UI.
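For example (the tab names below are illustrative, not the template defaults):

```python
# Each string becomes a tab in the hyperparameters card.
hyperparameters_categories = ["general", "checkpoints", "optimizer", "scheduler"]
```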
extra_hyperparams: Dict
- these hyperparameters will be appended to the end of the hyperparameter list in the tab whose name is passed as the parent key.
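A sketch of the assumed shape: a dict keyed by tab name, where each entry describes one extra control. The item fields (key, title, description, content) and the widget used are assumptions; check the template for the exact format.

```python
from supervisely.app.widgets import InputNumber

# Hypothetical item format: key, title, description and a Supervisely widget.
extra_hyperparams = {
    "general": [
        dict(
            key="warmup_epochs",
            title="Warmup epochs",
            description="Number of warmup epochs before the main schedule",
            content=InputNumber(value=1, min=0, max=10),
        ),
    ],
}
```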
hyperparams_edit_mode: String
- selects how hyperparameters are defined in the UI.
show_augmentations_ui: Bool
- show/hide flag for the augmentations card.
Default: True
extra_augmentation_templates: List
- these augmentation templates will be added to the beginning of the template selector list in the augmentations card.
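A hypothetical example, assuming each template is a dict with a display label and a path to an augmentation config; the exact keys are an assumption.

```python
# Hypothetical template entries added to the top of the selector.
extra_augmentation_templates = [
    {"label": "My light augs", "value": "aug_templates/light.json"},
    {"label": "My heavy augs", "value": "aug_templates/heavy.json"},
]
```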
download_batch_size: int
- how many items to download per batch. Increase this value to speed up downloading of big projects.
Default: 100
loggers: List
- additional user-defined loggers.
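For example, assuming the list accepts standard writer objects such as torch.utils.tensorboard.SummaryWriter (the template may also ship its own logger classes), a logger could be passed like this:

```python
from torch.utils.tensorboard import SummaryWriter

# A TensorBoard writer; tensorboard_runs_dir is created when one is passed (see below).
loggers = [SummaryWriter(log_dir="tensorboard_runs")]
```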
Configuration via method re-implementation
How to change all hyperparameters in the hyperparameters card?
All you need to do is re-define the hyperparameters_ui method in your subclass of ObjectDetectionTrainDashboad.
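A skeleton of what such an override could look like; the returned structure and the widget used are assumptions and should mirror the base implementation in the template.

```python
from supervisely.app.widgets import InputNumber

class CustomTrainDashboard(ObjectDetectionTrainDashboad):
    def hyperparameters_ui(self):
        # Build and return your own hyperparameter controls here.
        # The returned structure must match what the base class expects,
        # so use the original hyperparameters_ui as a reference.
        return {
            "general": [
                dict(key="epochs", title="Epochs", content=InputNumber(value=10, min=1)),
            ],
        }
```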
Additional notes
The environment variable SLY_APP_DATA_DIR in src.globals is used to provide access to app files after the app has finished. If something goes wrong in your training process at any moment, you won't lose checkpoints and other important artifacts; they will be available over SFTP.
By default, the object detection training template app uses the following directory structure from src/sly_globals:
|object-detection-training-template
├── project_dir            # project training data destination folder
├── data_dir               # all training artifacts; this dir will be saved to Team files at `remote_data_dir` at the end of the training process
├── checkpoints_dir        # model checkpoints will be saved here; this dir is included in `data_dir`
└── tensorboard_runs_dir   # this dir will be created if a tensorboard ResultsWriter was passed in the loggers list
remote_data_dir = f"/train_dashboard/{project.name}/runs/{time.strftime('%Y-%m-%d %H:%M:%S')}"
- the destination directory in Team files for all training artifacts.
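As a sketch of how these paths can be derived (not the exact code from src/sly_globals), assuming the standard SDK helper sly.app.get_data_dir(), which reads SLY_APP_DATA_DIR:

```python
import os
import time

import supervisely as sly

# App-local storage backed by SLY_APP_DATA_DIR, so artifacts survive after the app stops.
data_dir = sly.app.get_data_dir()
checkpoints_dir = os.path.join(data_dir, "checkpoints")
tensorboard_runs_dir = os.path.join(data_dir, "tensorboard_runs")
os.makedirs(checkpoints_dir, exist_ok=True)
os.makedirs(tensorboard_runs_dir, exist_ok=True)

# Destination in Team files for all training artifacts
# (`project` is a hypothetical ProjectInfo object fetched earlier in the app).
remote_data_dir = f"/train_dashboard/{project.name}/runs/{time.strftime('%Y-%m-%d %H:%M:%S')}"
```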