Custom inference pipeline

Introduction

Sometimes you need to build a custom data processing pipeline around neural networks. This guide illustrates how to import an image, process it with a detection model, and separate predictions with high and low confidence.

neural network pipelines in Supervisely

In this tutorial, you'll learn how to run inference on deployed models from your code with the sly.nn.inference.Session class and process the images. This class is a convenient wrapper around the low-level API: under the hood, it simply communicates with the serving app via HTTP requests.

The entire integration Python script takes only 👍 95 lines of code (including comments) and can be found in the GitHub repository for this tutorial.

Table of Contents:

Before starting, you have to deploy your model with a Serving App (e.g., Serve YOLOv5).

How to debug this tutorial

Step 1. Prepare ~/supervisely.env file with credentials. Learn more here.
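A minimal `~/supervisely.env` might look like the sketch below. The variable names are the ones the SDK's `sly.Api.from_env()` reads; replace the placeholder values with your own instance address and personal API token:

```
SERVER_ADDRESS="https://app.supervisely.com"
API_TOKEN="<your-api-token>"
```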

Step 2. Clone repository with source code and demo data and create Virtual Environment.

Step 3. Open repository directory in Visual Studio Code.

Step 4. Change ✅ workspace ID ✅ in the local.env file by copying the ID from the context menu of the workspace. A new project with annotated images will be created in the workspace you define. Learn more here.

Copy workspace ID from context menu

Step 5. Start debugging src/main.py

Debug tutorial in Visual Studio Code

Python Code

Import libraries
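The import section is short; a typical version (assuming the `python-dotenv` package is installed for loading `.env` files) looks like:

```python
import os

from dotenv import load_dotenv  # assumption: python-dotenv is installed
import supervisely as sly
```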

Init API client

Init the API client for communicating with the Supervisely instance. First, we load environment variables with credentials and the workspace ID:

(for more info see Basics of authentication tutorial)
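A sketch of the initialization, assuming credentials live in `~/supervisely.env` and the workspace ID in `local.env` (both file locations come from the debug setup above):

```python
import os

from dotenv import load_dotenv
import supervisely as sly

# Load credentials (SERVER_ADDRESS, API_TOKEN) and the workspace ID
load_dotenv(os.path.expanduser("~/supervisely.env"))
load_dotenv("local.env")

api = sly.Api.from_env()
workspace_id = sly.env.workspace_id()
```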

The next lines check that you did everything right: the API client is initialized with correct credentials, and the correct workspace ID is defined in local.env.
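A sanity check along these lines (a sketch, assuming `api` and `workspace_id` from the initialization step) fails fast if the workspace cannot be found:

```python
# Verify that the workspace ID points to an existing workspace
workspace = api.workspace.get_info_by_id(workspace_id)
if workspace is None:
    raise KeyError(f"Workspace with id={workspace_id} not found on the server")
print(f"Connected. Workspace: {workspace.name}")
```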

Initialize sly.nn.inference.Session

First serve the model you want (e.g. Serve YOLOv5) and copy the task_id from the App sessions section in the Supervisely platform:

Copy the Task ID here

Create an Inference Session, a connection to the model:

(for more info see Inference API tutorial)
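A minimal sketch of creating the session, assuming `api` is already initialized; the `task_id` value below is a hypothetical placeholder you must replace with the Task ID copied from the App sessions section:

```python
task_id = 12345  # hypothetical: paste your own Task ID here
session = sly.nn.inference.Session(api, task_id=task_id)

# Inspect the deployed model before running inference
print(session.get_session_info())
```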

Create project

Create an empty project named "Model predictions" with one dataset "Week # 1" in your workspace on the server. If a project with the same name already exists in your workspace, the new one will be automatically renamed (Week # 1_001, Week # 1_002, etc.) to avoid name collisions.
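A sketch of the project and dataset creation, assuming `api` and `workspace_id` from the earlier steps; `change_name_if_conflict=True` is what triggers the automatic renaming on collisions:

```python
# Create the project and a dataset inside it
project = api.project.create(
    workspace_id, "Model predictions", change_name_if_conflict=True
)
dataset = api.dataset.create(project.id, "Week # 1")
print(f"Project id={project.id}, dataset id={dataset.id}")
```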

Create 2 new tags: "high confidence" and "need validation"

Add new tags to the project metadata

Add tag metas to ProjectMeta.

Set up tags in our project on server:
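The three steps above can be sketched as follows, assuming `session`, `api`, and `project` from the previous sections; merging the model's meta with the new tags keeps the predicted classes available in the project:

```python
# 1. Create the two tag metas (no value, used as plain markers)
tag_high = sly.TagMeta("high confidence", sly.TagValueType.NONE)
tag_need = sly.TagMeta("need validation", sly.TagValueType.NONE)

# 2. Add them to the model's ProjectMeta
model_meta = session.get_model_meta()
project_meta = model_meta.add_tag_metas([tag_high, tag_need])

# 3. Push the merged meta to the project on the server
api.project.update_meta(project.id, project_meta.to_json())
```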

Processing images and object detections

In this section we make predictions on images and apply tags based on the prediction confidence. If the confidence of a label is below 0.8, both the label and its image will be tagged "need validation"; otherwise, the image will be tagged "high confidence".
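The tagging decision itself is simple enough to isolate as a pure function. A sketch (the dictionary field names here are illustrative assumptions, not the SDK's actual prediction format):

```python
def split_by_confidence(labels, threshold=0.8):
    """Partition predicted labels into high- and low-confidence groups.

    Each label is assumed to carry a numeric "confidence" field.
    Labels at or above the threshold go to `high`, the rest to `low`.
    """
    high = [label for label in labels if label["confidence"] >= threshold]
    low = [label for label in labels if label["confidence"] < threshold]
    return high, low


predictions = [
    {"class": "person", "confidence": 0.94},
    {"class": "car", "confidence": 0.55},
]
high, low = split_by_confidence(predictions)
print(len(high), len(low))  # → 1 1
```

In the full script, an image would then get the "need validation" tag whenever `low` is non-empty, and "high confidence" otherwise.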

Result images with objects and tags
