Object detection
A step-by-step tutorial explaining how to integrate a custom object detection neural network into the Supervisely platform
Introduction
This tutorial will teach you how to integrate your custom object detection model into Supervisely by using the ObjectDetectionTrainDashboad class.
The full code of the object detection sample app can be found here

How to debug this tutorial
Step 1. Prepare ~/supervisely.env file with credentials. Learn more here.
Step 2. Clone the repository with the source code and create a virtual environment.
Step 3. Open the repository directory in Visual Studio Code.
Step 4. Start debugging src/main.py
Integrate your model
Integrating your own NN with ObjectDetectionTrainDashboad is simple:
Step 1. Define a PyTorch dataset
Step 2. Define a PyTorch object detection model
Step 3. Define a subclass of ObjectDetectionTrainDashboad and implement the train method
Step 4. Configure your dashboard using parameters and run the app. That's all.
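The four steps above can be sketched roughly as follows. This is a simplified, hypothetical outline, not the real Supervisely API: the dataset and model are plain-Python stand-ins for their PyTorch counterparts (a torch.utils.data.Dataset subclass and an nn.Module detector), and MyTrainDashboard only illustrates where your train method would plug into the ObjectDetectionTrainDashboad subclass.

```python
# Hypothetical sketch of the four integration steps.
# In the real app the dataset subclasses torch.utils.data.Dataset,
# the model is a torch.nn.Module detector, and the dashboard
# subclasses ObjectDetectionTrainDashboad from the template.

class MyDetectionDataset:  # Step 1: dataset (stand-in for torch.utils.data.Dataset)
    def __init__(self, items):
        # items: list of (image, target) pairs; target holds boxes and labels
        self.items = items

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        return self.items[idx]


class MyDetector:  # Step 2: model (stand-in for a torch.nn.Module)
    def __call__(self, image):
        # a real model would return predicted boxes, labels and scores
        return {"boxes": [], "labels": [], "scores": []}


class MyTrainDashboard:  # Step 3: stand-in for an ObjectDetectionTrainDashboad subclass
    def __init__(self, model, hyperparams):
        self.model = model
        self.hyperparams = hyperparams

    def train(self):
        # your custom training loop goes here
        epochs = self.hyperparams.get("epochs", 1)
        return f"trained for {epochs} epoch(s)"


# Step 4: configure the dashboard and run
dashboard = MyTrainDashboard(MyDetector(), {"epochs": 2})
print(dashboard.train())
```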
How to customize the dashboard?
Configuration via parameters
This section provides detailed information about the ObjectDetectionTrainDashboad initialization parameters and how to change them.
pretrained_weights: Dict - defines the table of pretrained model weights in the UI
Details
If the provided path doesn't exist in the local filesystem at sly_globals.checkpoints_dir, it will be downloaded from Team files.
You can read more about sly_globals in the Additional notes section
Example
The "Pretrained weights" tab will appear in the model settings card automatically.
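As a rough illustration of what such a table configuration could look like, here is a hypothetical dict: the key names ("columns", "rows", "paths") and all values are assumptions for this sketch, so check the template source for the exact schema expected by ObjectDetectionTrainDashboad.

```python
# Hypothetical illustration only: the actual keys expected by
# ObjectDetectionTrainDashboad may differ - check the template source.
pretrained_weights = {
    "columns": ["Name", "Params (M)", "mAP"],  # table header shown in the UI
    "rows": [
        # one row per checkpoint; values align with "columns"
        ["model_small", "13.7", "37.2"],
        ["model_large", "76.8", "45.1"],
    ],
    "paths": [
        # local path under sly_globals.checkpoints_dir; if a file is
        # missing there, it is downloaded from Team files
        "model_small.pt",
        "model_large.pt",
    ],
}

# each table row should have a matching checkpoint path
assert len(pretrained_weights["rows"]) == len(pretrained_weights["paths"])
```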

hyperparameters_categories: List - list of tab names in the hyperparameters UI.
Details
Default: ['general', 'checkpoints', 'optimizer', 'intervals', 'scheduler']
These names will also be used as parent keys for the hyperparameters from the corresponding tabs. You can add or delete tabs in the hyperparameters card via this parameter.
Example
Before

After

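In code, changing the tab set amounts to passing a different list. The trimmed list below is a hypothetical example; only the default values come from the documentation above.

```python
# Default tab set, as documented above
default_tabs = ['general', 'checkpoints', 'optimizer', 'intervals', 'scheduler']

# Hypothetical custom configuration: drop the scheduler tab.
# These names also become parent keys in the hyperparameters dict.
my_tabs = ['general', 'checkpoints', 'optimizer', 'intervals']

assert set(my_tabs).issubset(default_tabs)
```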
extra_hyperparams: Dict - these hyperparameters will be appended to the end of the hyperparameter list in the tab given by the passed tab name, which is used as the parent key.
Details
Extra hyperparam structure
any_tab_name should be a unique string.
any_key should be a unique string within the corresponding tab, but it can be repeated in another tab. See the example below.
content works correctly only with sly.app.widgets that have a get_value method.
In other cases you have two options:
implement a get_value method for your widget
modify the get_hyperparameters method to support custom widgets
Example:
The General tab 
The Checkpoints tab 
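The structure described above can be sketched like this. The field names ("key", "title", "content") and the FakeInputNumber stand-in are assumptions for illustration; in the real app, content is a sly.app.widgets widget exposing get_value.

```python
# Hypothetical sketch of the extra_hyperparams structure.
# In the real app `content` is a sly.app.widgets widget with a
# get_value() method; here a tiny stand-in plays that role.

class FakeInputNumber:
    def __init__(self, value):
        self._value = value

    def get_value(self):
        return self._value


extra_hyperparams = {
    # any_tab_name: unique tab name, also used as the parent key
    "general": [
        # any_key: unique within this tab, may repeat in another tab
        {"key": "hparam_1", "title": "My hyperparam", "content": FakeInputNumber(100)},
    ],
    "checkpoints": [
        # same key as above, but in a different tab - this is allowed
        {"key": "hparam_1", "title": "Same key, other tab", "content": FakeInputNumber(5)},
    ],
}

# get_hyperparameters() would read each widget through get_value()
values = {tab: {h["key"]: h["content"].get_value() for h in hparams}
          for tab, hparams in extra_hyperparams.items()}
```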
hyperparams_edit_mode: String - the way hyperparameters are defined.
Details
Default: `'ui'`
Supported values: ['ui', 'raw', 'all']
ui - only the section with UI widgets will be shown.
raw - only the section with the raw text editor will be shown.
all - both sections will be shown together.

The hyperparameters from the UI will overwrite hyperparameters with the same names coming from the text editor widget.
For example, if you declare hparam_1 with "general" as the parent key in extra_hyperparams or in the hyperparameters_ui method
and declare the same hyperparameter in the text editor widget,
then when you call the get_hyperparameters method, the hparam_1 value will be equal to 100 (the UI value), not 0.1.
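The precedence rule can be sketched as a simple dict merge. This is a sketch of the described behavior, not the actual get_hyperparameters implementation.

```python
# Hyperparameters parsed from the raw text editor widget
raw_hyperparams = {"general": {"hparam_1": 0.1}}

# Hyperparameters read from the UI widgets
ui_hyperparams = {"general": {"hparam_1": 100}}

# UI values overwrite raw values with the same name:
# apply the raw source first, then let the UI source win
merged = {}
for source in (raw_hyperparams, ui_hyperparams):
    for tab, params in source.items():
        merged.setdefault(tab, {}).update(params)

assert merged["general"]["hparam_1"] == 100  # UI wins, as described above
```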
show_augmentations_ui: Bool - show/hide flag for the augmentations card
Default: True
extra_augmentation_templates: List - these augmentation templates will be added to the beginning of the template selector list in the augmentations card:
Details
You can create your own augmentation template .json using the ImgAug Studio app.
Example:
If you set hyperparams_edit_mode to raw or all, this additional widget will be shown.

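A template entry could look roughly like the following. The field names ("label", "value") and the file path are assumptions for this sketch; check the template source for the exact shape.

```python
# Hypothetical shape of an augmentation template entry; the real field
# names may differ - check the template source. The .json file can be
# produced with the ImgAug Studio app.
extra_augmentation_templates = [
    {
        "label": "My light augs",             # name shown in the selector
        "value": "aug_templates/light.json",  # path to the template file
    },
]
```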
download_batch_size: int - the number of items to download per batch. Increase this value to speed up downloads on big projects.
Default: 100
loggers: List - additional user loggers
Details
Example:
You can log a value to all loggers by calling a common method.
All passed loggers should implement the called method.
If you want to log a value with a specific logger, use self.loggers.YOUR_LOGGER_CLASS
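A minimal sketch of that idea: every passed logger implements the same method (here add_scalar), so a single call can fan out to all of them, while each logger also stays reachable by its class name. Class and method names below are illustrative stand-ins, not the Supervisely API.

```python
# Minimal sketch of the loggers mechanism described above.
# Names (ConsoleLogger, Loggers, add_scalar) are illustrative only.

class ConsoleLogger:
    def __init__(self):
        self.records = []

    def add_scalar(self, tag, value, step):
        self.records.append((tag, value, step))


class Loggers:
    def __init__(self, loggers):
        self._loggers = loggers
        for logger in loggers:
            # expose each logger by class name, e.g. loggers.ConsoleLogger
            setattr(self, type(logger).__name__, logger)

    def log(self, method, *args, **kwargs):
        # call the same method on every passed logger
        for logger in self._loggers:
            getattr(logger, method)(*args, **kwargs)


loggers = Loggers([ConsoleLogger()])
loggers.log("add_scalar", "train/loss", 0.42, step=1)        # all loggers
loggers.ConsoleLogger.add_scalar("train/lr", 0.001, step=1)  # specific logger
```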
Configuration via method re-implementation
How to change all hyperparameters in the hyperparameters card?
All you need to do is re-define the hyperparameters_ui method in your subclass of ObjectDetectionTrainDashboad
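The override could look roughly like this. BaseDashboard is a plain-Python stand-in for ObjectDetectionTrainDashboad, and the returned structure is an assumption for illustration; in the real app hyperparameters_ui builds the widgets for the hyperparameters card.

```python
# Hypothetical sketch: override hyperparameters_ui in your subclass
# to change what the hyperparameters card contains. BaseDashboard
# stands in for ObjectDetectionTrainDashboad; the returned dict of
# widget names is illustrative only.

class BaseDashboard:  # stand-in for ObjectDetectionTrainDashboad
    def hyperparameters_ui(self):
        return {"general": ["epochs", "batch size"]}


class MyDashboard(BaseDashboard):
    def hyperparameters_ui(self):
        # reuse the base card and extend it with your own widget
        params = super().hyperparameters_ui()
        params["general"].append("warmup steps")
        return params
```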
Additional notes
Environment variable SLY_APP_DATA_DIR in src.globals is used to provide access to app files after the app has finished. If something goes wrong in your training process at any moment, you won't lose checkpoints and other important artifacts. They will be available over SFTP.
By default the object detection training template app uses this directory structure from src/sly_globals:
remote_data_dir = f"/train_dashboard/{project.name}/runs/{time.strftime('%Y-%m-%d %H:%M:%S')}" - the destination dir in Team files for all training artifacts.
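The destination path above is an ordinary f-string. Here it is reproduced with project.name replaced by a plain variable so the composition is easy to see; the run timestamp comes from time.strftime.

```python
import time

# How the destination directory in Team files is composed
# (project.name is shown here as a plain variable).
project_name = "my_project"
remote_data_dir = (
    f"/train_dashboard/{project_name}/runs/"
    f"{time.strftime('%Y-%m-%d %H:%M:%S')}"
)
```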