App in the Image Labeling Tool


Introduction

Supervisely instance version >= 6.8.54
Supervisely SDK version >= 6.72.200

In this tutorial, the Supervisely Python SDK version is not pinned in the dev_requirements.txt and config.json files. However, when developing your own app, we recommend pinning the SDK version in both files.

Developing a custom app for the Labeling Tool can be useful in the following cases:

  1. When you need to combine manual labeling with algorithmic post-processing of the labels in real-time.

  2. When you need to validate created labels for some specific rules in real-time.

In this tutorial, we'll learn how to develop an application that will process masks in real-time while working with the Image Labeling Tool. The processing will be triggered automatically after the mask is created with the Brush tool (after releasing the left mouse button). The demo app will also have settings for enabling / disabling the processing and adjusting the mask processing settings. We will focus on processing the mask in this tutorial, but it's possible to work with different geometries (points, polygons, rectangles, etc.) in the same way.

We will go through the following steps:

  1. Prepare UI widgets and the application's layout.

  2. Enable advanced debug mode.

  3. Handle the events.

  4. Prepare the config.json file.

  5. Use cache (optional).

  6. Process the mask.

  7. Implement the processing function.

  8. Run the app locally.

  9. Release the app and run it in Supervisely.

  10. Apply optimizations (optional).

Everything you need to reproduce this tutorial (the source code and additional app files) is available on GitHub.

Step 1. Preparing UI widgets

First, we need to import the required packages and modules:

import cv2
import os
import numpy as np
import supervisely as sly
from dotenv import load_dotenv

To be able to change the app's settings we need to add UI widgets to the app's layout. So, we'll need two widgets:

  • Switch widget for enabling / disabling the processing

  • Slider widget for adjusting the mask processing settings

import supervisely.app.development as sly_app_development
from supervisely.app.widgets import Container, Switch, Field, Slider


# Creating widget to turn on/off the processing of labels.
need_processing = Switch(switched=True)
processing_field = Field(
    title="Process masks",
    description="If turned on, then the mask will be processed after every change on left mouse release after drawing",
    content=need_processing,
)

# Creating widget to set the strength of the processing.
dilation_strength = Slider(value=10, min=1, max=50, step=1)
dilation_strength_field = Field(
    title="Dilation",
    description="Select the strength of the dilation operation",
    content=dilation_strength,
)

Now, our widgets are ready and we can create the app's layout:

layout = Container(widgets=[processing_field, dilation_strength_field])
app = sly.Application(layout=layout)

Step 2. Enabling advanced debug mode

In this tutorial, we'll be using advanced debug mode. It allows you to run your code locally from VSCode while the application is linked to the Labeling Tool, so you can see the results of your actions in the Labeling Tool in real-time. Ensure that you have installed the required software from step 3 in this tutorial. Otherwise, you won't be able to debug the app in the Image Labeling Tool.

To use the advanced debug mode, you'll need to prepare two .env files. Learn more about them in the documentation on environment variables.

if sly.is_development():
    load_dotenv("local.env")
    team_id = sly.env.team_id()
    load_dotenv(os.path.expanduser("~/supervisely.env"))
    sly_app_development.supervisely_vpn_network(action="up")
    sly_app_development.create_debug_task(team_id, port="8000")
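
For reference, the two .env files loaded above might look like this. All values are placeholders, not real credentials, and the exact variable names your SDK version expects may differ; check the environment variables documentation:

```
# ~/supervisely.env — personal credentials for your Supervisely instance
SERVER_ADDRESS="https://app.supervisely.com"
API_TOKEN="your-api-token-here"
```

```
# local.env — context for local debugging (IDs from your own instance)
TEAM_ID=8
```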

Step 3. Handling the events

Now we need to handle the events that will be triggered by the Labeling Tool. In this tutorial, we'll use only one event: when the left mouse button is released after drawing a mask. Catching the event is pretty simple:

@app.event(sly.Event.Brush.DrawLeftMouseReleased)
def brush_left_mouse_released(api: sly.Api, event: sly.Event.Brush.DrawLeftMouseReleased):
    sly.logger.info("Left mouse button released after drawing mask with brush")

That's it! Our function will receive the API object and the Event object, which is all we need to process the mask. The API object contains the credentials of the user who is currently working in the Labeling Tool and triggered the event. The Event object contains a lot of context information, such as:

    team_id: int,
    workspace_id: int,
    project_id: int,
    dataset_id: int,
    image_id: int,
    label_id: int,
    object_class_id: int,
    object_class_title: str,
    user_id: int,
    is_fill: bool,
    is_erase: bool,
    mask: np.ndarray,

So it will be easy to get any required information from the Event object like this:

project_id = event.project_id
dataset_id = event.dataset_id

and so on.

Step 4. Preparing config.json file

To launch our app directly in the Labeling Tool, we first need to prepare the config.json file. You can find detailed information about the config.json file in the configuration file documentation. In this tutorial, we will pay attention to one specific key in the file:

"integrated_into": ["image_annotation_tool"],

This key allows us to run the application directly in the Image Labeling Tool.
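
For context, a minimal config.json for an app like this might look as follows. This is a sketch: the name, description, and entrypoint values are illustrative, and only the integrated_into key is the one this tutorial relies on.

```json
{
  "name": "Labeling Tool template",
  "type": "app",
  "categories": ["labeling"],
  "description": "Process masks in real time in the Image Labeling Tool",
  "entrypoint": "python -m uvicorn src.main:app --host 0.0.0.0 --port 8000",
  "integrated_into": ["image_annotation_tool"]
}
```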

Step 5. Using cache (optional)

While working in the Labeling Tool, we expect to see the results of our actions in real-time, so we need to process the mask as fast as possible. That's why it's better to use a cache to avoid unnecessary API calls each time the function is triggered (e.g. for the same project meta). In this tutorial, we'll use very simple caching just as a reference; in a real app, you can implement more advanced caching. We'll need to cache Supervisely ProjectMeta objects (the list of classes and tags in the project).

project_metas = {}

def get_project_meta(api: sly.Api, project_id: int) -> sly.ProjectMeta:
    if project_id not in project_metas:
        project_meta = sly.ProjectMeta.from_json(api.project.get_meta(project_id))
        project_metas[project_id] = project_meta
    else:
        project_meta = project_metas[project_id]
    return project_meta

So, if we already have the project meta in the cache, we'll use it. Otherwise, we'll get it from the API and save it to the cache. This saves us some time when processing the mask.
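
The caching pattern above is easy to verify in isolation. Here is a self-contained sketch where `fetch_meta` is a hypothetical stand-in for the real `api.project.get_meta` call; the counter confirms the backend is hit only once per project:

```python
calls = {"n": 0}  # counts how many times the "API" is actually hit
cache = {}

def fetch_meta(project_id: int) -> dict:
    # Hypothetical stand-in for api.project.get_meta(project_id).
    calls["n"] += 1
    return {"project_id": project_id, "classes": [], "tags": []}

def get_project_meta(project_id: int) -> dict:
    # Same pattern as in the app: fetch once, then serve from the cache.
    if project_id not in cache:
        cache[project_id] = fetch_meta(project_id)
    return cache[project_id]

meta_a = get_project_meta(42)
meta_b = get_project_meta(42)  # served from the cache, no second fetch
print(calls["n"])  # the backend was hit only once
```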

Step 6. Processing the mask

And now we're ready to implement the mask processing. But first, let's do some checks to make sure that we need to use the processing of the mask.

@app.event(sly.Event.Brush.DrawLeftMouseReleased)
def brush_left_mouse_released(api: sly.Api, event: sly.Event.Brush.DrawLeftMouseReleased):
    sly.logger.info("Left mouse button released after drawing mask with brush")
    if not need_processing.is_on():
        # Checking if the processing is turned on in the UI.
        return

    if event.is_erase:
        # If the eraser was used, then we don't need to process the label in this tutorial.
        return

So, if the processing is turned off or the eraser was used, then we don't need to process the mask and we'll just exit the function. And now, let's finally process the mask!

    # Retrieving the ProjectMeta object (from the cache if available).
    project_meta = get_project_meta(api, event.project_id)

    # Retrieving the label's object class from the ProjectMeta object.
    obj_class = project_meta.get_obj_class_by_id(event.object_class_id)

    # Processing the mask.
    new_mask = process(event.mask)

    # Creating a new label with the processed mask and the same object class.
    new_label = sly.Label(geometry=sly.Bitmap(data=new_mask.astype(bool)), obj_class=obj_class)

    # Uploading the new label to the Supervisely platform.
    api.annotation.update_label(event.label_id, new_label)

Let's take a closer look at this code:

  1. We're retrieving the ProjectMeta object from the cache function.

  2. We're retrieving the label's object class from the ProjectMeta object.

  3. We're processing the mask in the process function.

  4. We're creating a new label object with the processed mask and the same object class as the original label.

  5. We're uploading the new label to the Supervisely platform.

Step 7. Implementing the processing function

So, we already have the code for all the application's logic. But we still don't have the code for the processing function. In this tutorial, we'll be using a simple mask transformation just for demonstration purposes. But you can implement any logic you want.

def process(mask: np.ndarray) -> np.ndarray:
    dilation = cv2.dilate(mask.astype(np.uint8), None, iterations=dilation_strength.get_value())
    return dilation

Let's take a closer look at the process function:

  1. We're reading the dilation strength from the Slider widget.

  2. We're converting the mask to the uint8 type, since it comes as a boolean 2D array from the Event object.

  3. We're returning a new mask.
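
To make the effect of the dilation concrete, here is a pure-NumPy sketch (no OpenCV required) of a single dilation step with a cross-shaped neighborhood. It is a simplified stand-in, not the exact cv2.dilate kernel, but it shows how a mask grows:

```python
import numpy as np

def dilate_once(mask: np.ndarray) -> np.ndarray:
    # Grow a boolean mask by one pixel up/down/left/right
    # (a simplified stand-in for one cv2.dilate iteration).
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]   # grow downward
    out[:-1, :] |= mask[1:, :]   # grow upward
    out[:, 1:] |= mask[:, :-1]   # grow rightward
    out[:, :-1] |= mask[:, 1:]   # grow leftward
    return out

mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True            # a single labeled pixel
grown = dilate_once(mask)
print(int(grown.sum()))      # the single pixel grew into a 5-pixel cross
```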

Step 8. Running the app locally

Now that everything is ready, let's run the app and test it in the Labeling Tool. After launching the app from VSCode, you'll need to enter your root password to establish the VPN connection. If everything works as it should, you'll see the following message in the terminal:

INFO:     Application startup complete.

Now follow the steps below to test the app while running it locally:

  1. Open Image Labeling Tool in Supervisely.

  2. Select the Apps tab.

  3. Find the Develop and Debug application with a running marker and click Open.

  4. The app's UI will be opened in the right sidebar.

The application UI including the widgets we created earlier is displayed in the right sidebar. We can turn the processing on and off using the Switch widget and adjust the strength of the processing using the Slider widget. Both events will be visible in the terminal, so it will be easy and convenient to debug the app.

Now we're ready to test our app. Try drawing a mask with the Brush tool and releasing the left mouse button. After that, the mask will be processed and become slightly bigger, as you will see in the Labeling Tool. Let's also check that our widgets are working properly:

  • The Switch widget disables and enables the processing

  • The Slider widget changes the strength of the processing

Step 9. Releasing the app and running it in Supervisely

Once we've tested the application, we can release it and run it in Supervisely. You can find a detailed guide on how to release an app in the documentation, but in this tutorial, we'll just use the following command:

supervisely release

After it's done, you can find your app in the Apps section of the platform and run it in the Labeling Tool without running the code locally. The steps are the same as in the previous step, but this time we'll be launching the actual application. In this tutorial the app's name in config.json is Labeling Tool template, so we'll find it in the list and click Run.

Step 10. Optimizations (optional)

It's important to mention that the app we developed in this tutorial is not optimized for production use. The main reason is delays: when the application receives the mask from the event and starts processing it, the user may already be drawing a new mask. The app would then be processing the old mask while the user is working with a new one, which can lead to unexpected results. In this tutorial, we've implemented a very simple throttling mechanism to avoid this issue. Let's take a closer look at it:

from datetime import datetime

timestamp = None

@app.event(sly.Event.Brush.DrawLeftMouseReleased)
def brush_left_mouse_released(api: sly.Api, event: sly.Event.Brush.DrawLeftMouseReleased):
    sly.logger.info("Left mouse button released after drawing mask with brush")
    if not need_processing.is_on():
        # Checking if the processing is turned on in the UI.
        return

    if event.is_erase:
        # If the eraser was used, then we don't need to process the label in this tutorial.
        return

    t = datetime.now().timestamp()
    global timestamp
    timestamp = t

    # Here goes the processing code..., which produces new_label

    if t == timestamp:
        # No newer event arrived while we were processing, so it's safe to upload.
        api.annotation.update_label(event.label_id, new_label)

So, what's happening here:

  1. When the event is triggered, we get the current timestamp at the beginning of the function.

  2. We do some processing with our mask.

  3. When we're ready to upload the new label to the platform, we're checking if the current timestamp is the same as the one we got at the beginning of the function. If it's not the same, then it means that the user has already drawn a new mask and we don't need to upload the label to the platform.
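
The timestamp here essentially acts as a freshness token. The mechanism can be sketched deterministically with a counter instead of wall-clock time (all names here are hypothetical; the nested call simulates a newer brush event arriving while the first one is still being processed):

```python
import itertools

event_counter = itertools.count()
latest = {"token": None}
uploaded = []  # labels that actually reached the "platform"

def on_brush_released(label_id: str, simulate_new_event=None):
    token = next(event_counter)  # stands in for datetime.now().timestamp()
    latest["token"] = token

    # ... mask processing would happen here ...
    if simulate_new_event is not None:
        simulate_new_event()  # a newer event fires mid-processing

    if token == latest["token"]:
        uploaded.append(label_id)  # still the freshest event: upload
    # otherwise: a newer mask exists, so this stale result is dropped

# The first event is interrupted by a second one before it can upload.
on_brush_released("first", simulate_new_event=lambda: on_brush_released("second"))
print(uploaded)  # only the newest result is uploaded
```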

This simple solution will do its job in most cases. But it's only implemented for demonstration purposes and is not optimized for production use, since it only handles the race inside the application code. When the application calls the API to upload the label, there's still a delay before the request reaches the platform and updates the mask in the Labeling Tool. So if you draw new masks fast enough, you can still get outdated results. For those cases, we recommend implementing a more advanced mechanism for handling queues and delays, but that's out of the scope of this tutorial.

Summary

In this tutorial, we learned how to develop an application for the Image Labeling Tool: how to use UI widgets, handle events, and process masks. We also learned how to use advanced debug mode and how to release the app and run it in Supervisely. We hope this tutorial was helpful and that you can use it as a reference for your own applications.
