Point tracking
Step-by-step tutorial on how to integrate a custom point tracking neural network into the Supervisely platform, using PIPs as an example.
Introduction
In this tutorial, you will learn how to integrate your custom point tracking model into Supervisely by creating two simple serving apps. First, as an illustration, we will construct a straightforward model that only moves the original point. In the second part, we will use the SOTA model PIPs, which already has most of the necessary functionality implemented.
Implementation details
To integrate your model, you need to subclass `sly.nn.inference.PointTracking` and implement 2 methods:

- `load_on_device` - method for downloading the weights and initializing the model on a specific device. Takes a `model_dir` argument, which is a directory for all model files (like configs, weights, etc.), and a `device` argument - a `torch.device` like `cuda:0` or `cpu`.
- `predict` - the core implementation of model inference. It takes a list of images of `np.ndarray` type, inference settings, and a point to track as arguments, applies the model inference to the images, and returns a list of predictions (both the input point and the predicted points are `sly.nn.PredictionPoint` objects).
Currently, integrating models that can track several points simultaneously is not possible due to the implementation of the `sly.nn.inference.PointTracking` class.
Overall structure
The overall structure of the class we will implement looks like this:
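Since the `supervisely` SDK may not be installed in every environment, the sketch below mimics that structure with a minimal stand-in base class; with the SDK, you would subclass `sly.nn.inference.PointTracking` instead, and it would also provide `serve()` and the REST plumbing. Everything here besides the two required method names is illustrative.

```python
from typing import Any, List

class PredictionPoint:
    """Stand-in for sly.nn.PredictionPoint (illustrative only)."""
    def __init__(self, class_name: str, col: int, row: int):
        self.class_name = class_name
        self.col = col  # x coordinate
        self.row = row  # y coordinate

class PointTracking:
    """Stand-in for sly.nn.inference.PointTracking."""
    def load_on_device(self, model_dir: str, device: str) -> None:
        raise NotImplementedError

    def predict(self, frames: List[Any], settings: dict,
                start_object: "PredictionPoint") -> List["PredictionPoint"]:
        raise NotImplementedError

class MyTracker(PointTracking):
    def load_on_device(self, model_dir: str, device: str = "cpu") -> None:
        # Real code would read weights from model_dir and move them to `device`.
        self.device = device
        self.model = lambda col, row: (col + 1, row)  # dummy "tracker"

    def predict(self, frames, settings, start_object):
        # One prediction per frame after the first (the first frame holds
        # the input point and is not included in the output).
        preds = []
        col, row = start_object.col, start_object.row
        for _ in frames[1:]:
            col, row = self.model(col, row)
            preds.append(PredictionPoint(start_object.class_name, col, row))
        return preds

tracker = MyTracker()
tracker.load_on_device(model_dir="./model", device="cpu")
pts = tracker.predict(frames=[None] * 4, settings={},
                      start_object=PredictionPoint("point", 10, 20))
print([(p.col, p.row) for p in pts])  # → [(11, 20), (12, 20), (13, 20)]
```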
The superclass has a `serve()` method. To run the code on the Supervisely platform, the `m.serve()` method should be executed:

And here is where the beauty comes in. The `serve()` method internally handles everything and deploys your model as a REST API service on the Supervisely platform. This means that other applications are able to communicate with your model and get predictions from it.
So let's implement the class.
Simple model
Getting started
Step 1. Prepare a `~/supervisely.env` file with credentials. Learn more here.
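The `~/supervisely.env` file typically contains your server address and API token (the values below are placeholders):

```
SERVER_ADDRESS="https://app.supervisely.com"
API_TOKEN="<your-api-token>"
```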
Step 2. Create a virtual environment and install `supervisely==6.72.11` in it.
Step-by-step implementation
Defining imports and global variables
1. load_on_device
The following code creates the model according to the `model_settings.yaml` config. The path to the `.yaml` config is passed during initialization. These settings can also be given as a Python dictionary; config in the form of a dictionary becomes available in the `self.custom_inference_settings_dict` attribute. `load_on_device` will also keep the model as `self.model` for further use:
Our `settings.yaml` file:
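The actual contents are model-specific; as an illustration, a settings file for the simple shift tracker might look like this (field names are hypothetical):

```yaml
# Illustrative settings for the simple tracker
shift_x: 1   # how far the point moves per frame, in pixels
shift_y: 0
```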
2. predict
The core method for model inference. Here we will use the defined model and make sure that the predicted points do not fall outside the image bounds. It must return exactly a list of `sly.nn.PredictionPoint` objects for compatibility with Supervisely. Notice that the first frame is not in the list.
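Keeping predictions inside the frame boils down to a small clamping helper, sketched here in plain Python (the function name is illustrative):

```python
def clamp_point(col, row, img_w, img_h):
    """Clip a predicted point so it stays inside the image bounds."""
    col = min(max(col, 0), img_w - 1)
    row = min(max(row, 0), img_h - 1)
    return col, row

print(clamp_point(105, -3, img_w=100, img_h=80))  # → (99, 0)
```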
Usage of our class
Once the class is created, we initialize it and get one test prediction for debugging.
Here are the output predictions of our simple model:
PIPs tracking model
Let's now implement the class for the pre-trained model. The majority of the code used to load and run the model is taken directly from the original repository.
Getting started
Step 1. Prepare a `~/supervisely.env` file with credentials. Learn more here.
Step 2. Clone the repository with the source code and create a virtual environment.

The model can run on the `CPU`, so installing `CUDA` requirements is not necessary.
Step 3. Load model weights.
Step 4. Open the repository directory in Visual Studio Code.
Step 5. Run debug for the `src/main.py` script.
Python script
The integration script is simple:

- Initialize the model.
- Run inference on demo images.
- Add predictions and save the new frames in chronological order.
Step-by-step implementation
Defining imports and global variables
1. load_on_device
The following code creates the model according to the `supervisely/serve/model_settings.yaml` config. The path to the `.yaml` config is passed during initialization. The `saverloader.load` function, provided by the author of the original repository, loads the model state dict from `model_dir`. `load_on_device` will also keep the model as `self.model` and the device as `self.device` for further use:
Here we are downloading the model weights from local storage, but they can also be downloaded by path from Supervisely Team Files.
2. predict
The core method for model inference. Here we prepare the images and run the model on them. The function `sly_functions.run_model` is borrowed from the original repository. However, a few changes can be made to improve quality: preserve the aspect ratio, apply padding before resizing, and make sure that the predicted points do not fall outside the image bounds. We then wrap the model predictions into `sly.nn.PredictionPoint` objects. The method must return exactly a list of `sly.nn.PredictionPoint` objects for compatibility with Supervisely.
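One of the suggested improvements, resizing while preserving the aspect ratio, amounts to scaling by the limiting side and padding the remainder. A sketch of the size computation in plain Python (the function name is illustrative):

```python
def letterbox_size(src_w, src_h, dst_w, dst_h):
    """Return the resized dimensions and the padding needed to reach
    (dst_w, dst_h) without distorting the aspect ratio."""
    scale = min(dst_w / src_w, dst_h / src_h)
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    pad_w, pad_h = dst_w - new_w, dst_h - new_h
    return new_w, new_h, pad_w, pad_h

# A 640x360 frame fits a 512x512 model input as 512x288 plus 224 px of padding.
print(letterbox_size(640, 360, 512, 512))  # → (512, 288, 0, 224)
```

Predicted coordinates then have to be mapped back by subtracting the padding and dividing by the scale.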
Usage of our class
Once the class is created, we initialize it and get one test prediction for debugging.
In the code below, `custom_inference_settings` is used. It allows us to provide custom settings that can be used in `predict()` (see more in the Customized Inference Tutorial).
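The flow of such settings can be illustrated with plain dictionaries: defaults given at init time are available to `predict()`, and per-request settings override them. The class and field names below are hypothetical, not the SDK's:

```python
class TrackerWithSettings:
    def __init__(self, custom_inference_settings: dict):
        # With the SDK, this dict becomes self.custom_inference_settings_dict.
        self.custom_inference_settings_dict = dict(custom_inference_settings)

    def predict(self, frames, settings=None):
        # Per-request settings override the defaults from init time.
        merged = {**self.custom_inference_settings_dict, **(settings or {})}
        step = merged["step"]
        return [i * step for i in range(len(frames))]

t = TrackerWithSettings({"step": 2})
print(t.predict(frames=[None] * 3))                        # → [0, 2, 4]
print(t.predict(frames=[None] * 3, settings={"step": 5}))  # → [0, 5, 10]
```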
Here are the output predictions of our PIPs model:
Run and debug
The beauty of this class is that you can easily debug your code locally in your favorite IDE.
For now, we recommend using Visual Studio Code IDE, because our repositories have prepared settings for convenient debugging in VSCode. It is the easiest way to start.
Local debug
You can run the code locally for debugging. For Visual Studio Code we've created a launch.json
config file that can be selected:
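For reference, a minimal local-debug entry in `launch.json` might look like this; the repository ships its own version, and the paths here are illustrative:

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Local debug",
            "type": "python",
            "request": "launch",
            "program": "${workspaceFolder}/src/main.py",
            "console": "integratedTerminal",
            "envFile": "${workspaceFolder}/local.env"
        }
    ]
}
```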
Debug in Supervisely platform
Once the code seems working locally, it's time to test the code right in the Supervisely platform as a debugging app. For that:
1. If you develop in a Docker container, you should run the container with the `--cap-add=NET_ADMIN` option.
2. Install WireGuard: `sudo apt-get install wireguard iproute2`.
3. Define your `TEAM_ID` in the `local.env` file. The other required env variables are already provided in `.vscode/launch.json` for you.
4. Switch the `launch.json` config to `Advanced debug in Supervisely platform`:
Run the code.
✅ It will deploy the model in the Supervisely platform as a regular serving App that is able to communicate with all other apps in the platform:
Now you can use apps like Apply NN to Images, Apply NN to videos with your deployed model.
Or get the model inference via Python API with the help of sly.nn.inference.Session
class just in one line of code. See Inference API Tutorial.
Release your code as a Supervisely App.
Once you've tested the code, it's time to release it into the platform. It can be released as an App that is shared with the whole Supervisely community, or as your own private App.
Refer to How to Release your App for all releasing details. For a private app check also Private App Tutorial.
Repository structure
The structure of our GitHub repository is the following:
Explanation:
- `supervisely/serve/src/main.py` - main inference script
- `supervisely/serve/src/sly_functions.py` - functions to run the PIPs model based on the original repository code
- `reference_model` - directory with model weights; will be created automatically by `get_reference_model.sh`
- `demo_images` - directory with demo frames for inference
- `supervisely/serve/README.md` - readme of your application; it is the main page of the application in Ecosystem with some images, videos, and how-to-use guides
- `supervisely/serve/config.json` - configuration of the Supervisely application, which defines the name and description of the app, its context menu, icon, poster, and running settings
- `requirements.txt` - all packages needed for debugging
- `local.env` - file with variables used for debugging
- `supervisely/serve/docker` - directory with the custom Dockerfile for this application and the script that builds it and publishes it to the Docker registry
App configuration
App configuration is stored in the `config.json` file. A detailed explanation of all possible fields is covered in this Configuration Tutorial. Let's check the config for our current app:
Here is the explanation for the fields:
- `type` - type of the module in Supervisely Ecosystem
- `version` - version of Supervisely App Engine. Just keep it by default
- `name` - the name of the application
- `description` - the description of the application
- `categories` - these tags are used to place the application in the correct category in Ecosystem
- `session_tags` - these tags will be assigned to every running session of the application. They can be used by other apps to find and filter all running sessions
- `"need_gpu": true` - should be true if you want to use any `cuda` devices
- `"community_agent": false` - means that this app cannot be run on the agents started by the Supervisely team, so users have to connect their own computers and run the app only on their own agents. Only applicable in Community Edition; Enterprise customers use their private instances, so they can ignore this option
- `docker_image` - the Docker container will be started from the defined Docker image; the GitHub repository will be downloaded and mounted inside the container
- `entrypoint` - the command that starts our application in a container
- `port` - port inside the container
- `"headless": true` - means that the app has no user interface
- `allowed_shapes` - shapes that can be tracked with this model. Conversion of figures to a set of points and vice versa is implemented in the base class, so you can keep this field default