
Custom inference pipeline


Last updated 6 months ago


Introduction

Sometimes you need to organize a custom data processing pipeline using neural networks. This guide illustrates how to import an image, process it with a detection model, and separate predictions with high and low confidence.

neural network pipelines in Supervisely

In this tutorial, you'll learn how to run inference with deployed models from your code using the sly.nn.inference.Session class and how to process the resulting images. This class is a convenient wrapper around the low-level API; under the hood it simply communicates with the serving app via HTTP requests.

Table of Contents:
  • Introduction
  • How to debug this tutorial
  • Tutorial
    • 1. Import libraries
    • 2. Init API client
    • 3. Initialize sly.nn.inference.Session
    • 4. Create project
    • 5. Create and add new tags to the project metadata
    • 6. Prepare source images
    • 7. Process images and predictions

How to debug this tutorial

Before starting, you have to deploy your model with a Serving App (e.g. Serve YOLOv5).

Step 1. Prepare the ~/supervisely.env file with credentials. Learn more in Basics of authentication.

Step 2. Clone the repository with source code and demo data, and create a Virtual Environment:

git clone https://github.com/supervisely-ecosystem/example-inference-session
cd example-inference-session
./create_venv.sh

Step 3. Open the repository directory in Visual Studio Code:

code -r .

Step 4. Change the workspace ID in the local.env file by copying the ID from the context menu of the workspace. A new project with annotated images will be created in the workspace you define:

WORKSPACE_ID=680 # ⬅️ change value

Step 5. Start debugging src/main.py

Python Code

Import libraries

import os

import supervisely as sly
from dotenv import load_dotenv

Init API client

Initialize the API client for communicating with your Supervisely instance. First, we load environment variables with credentials and the workspace ID:

load_dotenv("local.env")
load_dotenv(os.path.expanduser("~/supervisely.env"))
api = sly.Api()

The next lines check that everything is set up correctly: the API client is initialized with valid credentials, and the correct workspace ID is defined in local.env.

workspace_id = sly.env.workspace_id()
workspace = api.workspace.get_info_by_id(workspace_id)
if workspace is None:
    print("you should put correct workspaceId value to local.env")
    raise ValueError(f"Workspace with id={workspace_id} not found")

Initialize sly.nn.inference.Session

First, serve the model you want (e.g. Serve YOLOv5) and copy the task_id from the App sessions section in the Supervisely platform. Then create an Inference Session, a connection to the model:

# Get your Serving App's task_id from the Supervisely platform
task_id = 33172

# Create session
session = sly.nn.inference.Session(api, task_id=task_id)

Create project

Create an empty project named "Model predictions" with one dataset "Week # 1" in your workspace on the server. If a project with the same name already exists in your workspace, the new project will be automatically renamed (Model predictions_001, Model predictions_002, etc.) to avoid name collisions.

project_info = api.project.create(workspace_id, "Model predictions", change_name_if_conflict=True)
dataset_info = api.dataset.create(project_info.id, "Week # 1")

print(f"Project has been successfully created, id={project_info.id}")
# Output: Project has been successfully created, id=20924
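The automatic renaming on conflict can be illustrated with a small pure function (a sketch of the naming pattern only, not the SDK's actual implementation; the helper name is hypothetical):

```python
def resolve_name_conflict(name: str, existing_names: set) -> str:
    """Return `name` unchanged if it is free; otherwise append the first
    free numeric suffix (_001, _002, ...) to avoid a collision."""
    if name not in existing_names:
        return name
    i = 1
    while f"{name}_{i:03d}" in existing_names:
        i += 1
    return f"{name}_{i:03d}"

print(resolve_name_conflict("Model predictions", {"Model predictions"}))
# Model predictions_001
```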

Create 2 new tags: "high confidence" and "need validation"

meta_high_confidence = sly.TagMeta("high confidence", sly.TagValueType.NONE)
high_confidence_tag = sly.Tag(meta_high_confidence)

meta_need_validation = sly.TagMeta("need validation", sly.TagValueType.NONE)
need_validation_tag = sly.Tag(meta_need_validation)

Add new tags to the project metadata

Get the model's ProjectMeta from the session and add our tag metas to it:

model_meta = session.get_model_meta()
model_meta = model_meta.add_tag_metas(new_tag_metas=[meta_high_confidence, meta_need_validation])

Set up tags in our project on server:

api.project.update_meta(id=project_info.id, meta=model_meta)
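Conceptually, adding tag metas extends the project meta with new tag definitions. A simplified, SDK-free illustration of one possible merge policy (plain dicts instead of ProjectMeta objects; the helper is hypothetical, and the real SDK may handle duplicate names differently):

```python
def merge_tag_metas(existing: list, new: list) -> list:
    """Append new tag metas whose names are not already present."""
    known = {t["name"] for t in existing}
    return existing + [t for t in new if t["name"] not in known]

meta = [{"name": "confidence"}]
meta = merge_tag_metas(
    meta, [{"name": "high confidence"}, {"name": "need validation"}]
)
# meta now contains "confidence", "high confidence", "need validation"
```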

Prepare image links and classes you want to collect

links = [
    "https://live.staticflickr.com/1578/24294187606_89069ac7dd_k_d.jpg",
    "https://live.staticflickr.com/5491/9127573526_2999fafead_k_d.jpg",
    "https://live.staticflickr.com/6161/6175302372_76c4db94d0_k_d.jpg",
    "https://live.staticflickr.com/5601/15309578219_aa39bbfad2_k_d.jpg",
    "https://live.staticflickr.com/2465/3622848494_bad3b7ebe1_k_d.jpg",
    "https://live.staticflickr.com/557/19806156284_3ebb5a4046_k_d.jpg",
    "https://live.staticflickr.com/8403/8668991964_7969e1be9f_k_d.jpg",
    "https://live.staticflickr.com/1924/43556503550_f79978a134_k_d.jpg",
    "https://live.staticflickr.com/3799/20240807568_fcdab6a529_k_d.jpg",
    "https://live.staticflickr.com/7344/9886706776_16f9656162_k_d.jpg",
]
target_class_names = ["person", "bicycle", "car"]

Processing images and object detections

In this section we make predictions on images and apply tags based on prediction confidence. If the confidence of a label is below 0.8, both the label and the image will be tagged "need validation"; otherwise, the image will be tagged "high confidence".

By setting tags based on the prediction confidence level, this script enables the separation of the dataset into "high confidence" and "need validation" images. This allows for efficient and automated image processing. ✅

CONFIDENCE_THRESHOLD = 0.8

for i, link in enumerate(links):
    # Upload current image from given link to Supervisely server
    image_info = api.image.upload_link(dataset_info.id, f"image_{i}.jpg", link)
    print(f"Image successfully uploaded, id={image_info.id}")

    # Get image inference
    prediction = session.inference_image_url(link)

    # Check confidence of predictions and set relevant tags.
    # If the prediction confidence is lower than the defined threshold,
    # both the image and the current label will be marked with the 'need validation' tag.
    image_need_validation = False
    new_labels = []

    for label in prediction.labels:
        # Skip the label if object class name is not in list of target class names.
        if label.obj_class.name not in target_class_names:
            continue
        confidence_tag = label.tags.get("confidence")
        if confidence_tag.value < CONFIDENCE_THRESHOLD:
            new_label = label.add_tag(need_validation_tag)
            image_need_validation = True
            new_labels.append(new_label)
        else:
            new_labels.append(label)

    prediction = prediction.clone(labels=new_labels)

    if image_need_validation is False:
        prediction = prediction.add_tag(high_confidence_tag)
    else:
        prediction = prediction.add_tag(need_validation_tag)

    api.annotation.upload_ann(image_info.id, prediction) # Upload annotations to server
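The per-image tagging decision in the loop above can be factored into a pure function that is easy to unit-test (a sketch using simplified (class_name, confidence) pairs instead of SDK Label objects; the function name is hypothetical):

```python
def triage_labels(labels, target_classes, threshold=0.8):
    """Split predicted labels into kept labels and a per-image verdict.

    labels: iterable of (class_name, confidence) pairs.
    Returns (kept, image_needs_validation), where each kept label
    carries a needs_validation flag.
    """
    kept = []
    image_needs_validation = False
    for class_name, confidence in labels:
        if class_name not in target_classes:
            continue  # skip classes we are not collecting
        needs_validation = confidence < threshold
        image_needs_validation = image_needs_validation or needs_validation
        kept.append((class_name, confidence, needs_validation))
    return kept, image_needs_validation

kept, flag = triage_labels(
    [("person", 0.95), ("car", 0.55), ("dog", 0.99)],
    target_classes={"person", "bicycle", "car"},
)
# kept == [("person", 0.95, False), ("car", 0.55, True)]; flag is True
```

Keeping this logic separate from the upload code makes it possible to tune the threshold and verify the rule offline before running the pipeline against the server.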

The entire integration Python script takes only 👍 95 lines of code (including comments) and can be found in the GitHub repository for this tutorial.

Result images with objects and tags