In-browser app in the Labeling Tool


Introduction

Requirements: Supervisely instance version >= 6.12.13, Supervisely SDK version >= 6.73.272.

Developing an in-browser custom app for the Labeling Tool can be useful in the following cases:

  1. When you need to combine manual labeling with algorithmic post-processing of the labels in real time.

  2. When you need to validate created labels against specific rules in real time.

In this tutorial, we'll learn how to develop an application that processes masks in real time while working in the Image Labeling Tool. The processing is triggered automatically after a mask is created with the Brush tool (after the left mouse button is released). The demo app will also have controls for enabling/disabling the processing and adjusting its strength.

We will go through the following steps:

Step 0. Project structure.
Step 1. Prepare UI widgets and the application's layout.
Step 2. Handle the events.
Step 3. Prepare the config.json file.
Step 4. Process the mask.
Step 5. Implement the processing function.
Step 6. Debug the app.
Step 7. Release the app and run it in Supervisely.

Everything you need to reproduce this tutorial is available on GitHub: the source code and additional app files. Another example of an app that processes masks in real time can also be found there.

Step 0. Project structure

The Supervisely SDK is not used when the application runs; it is only needed for debugging and releasing the application. At runtime the application uses the sly_sdk module in place of supervisely, so this module must be present in the repository for the application to work. Any in-browser app for the Labeling Tool should have the following structure:

  1. config.json - the configuration file that contains the app's settings.

  2. the directory that contains the source code of the app. In this tutorial, it will be src.

  3. sly_sdk - the module required for releasing and running the application. The newest version of the module can be found in the tutorial's source code on GitHub. This module should not be modified.

  4. requirements.txt - the file that contains the dependencies of the app (an example is sketched after the structure below).

config.json
src/
    main.py
    gui.py
sly_sdk/
requirements.txt
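
A minimal requirements.txt for this tutorial might look like the sketch below. Its contents are an assumption based on the imports used in this tutorial's code (OpenCV and NumPy for mask processing), so adjust and pin the versions for your own app.

# requirements.txt (example, based on the imports used in this tutorial)
opencv-python-headless
numpy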

Step 1. Preparing UI widgets

To be able to change the app's settings, we need to add UI widgets to the app's layout. We'll need two widgets:

  • Switch widget for enabling / disabling the processing

  • Slider widget for adjusting the mask processing settings

Only widgets that are present in sly_sdk.app.widgets can be used for such applications. But you should import them from supervisely.app.widgets in the app's code.

The widget_id argument is required. It should be unique for each widget in the app.

# src/gui.py
from supervisely.app.widgets import Container, Slider, Switch, Field
from supervisely.sly_logger import logger


# Creating widget to turn on/off the processing of labels.
need_processing = Switch(switched=True, widget_id="need_processing_widget")
processing_field = Field(
    title="Process masks",
    description="If turned on, then the mask will be processed after every change on left mouse release after drawing",
    content=need_processing,
    widget_id="processing_field_widget",
)

# Creating widget to set the strength of the processing.
dilation_strength = Slider(value=10, min=1, max=50, step=1, widget_id="dilation_strength_widget")
dilation_strength_field = Field(
    title="Dilation",
    description="Select the strength of the dilation operation",
    content=dilation_strength,
    widget_id="dilation_strength_field_widget",
)

Now, our widgets are ready and we can create the app's layout:

layout = Container(widgets=[processing_field, dilation_strength_field], widget_id="layout_widget")

Step 2. Handling the events

Now we need to handle the events that are triggered by the Labeling Tool. In this tutorial, we'll use only one event, which fires when the left mouse button is released after drawing a mask. Catching the event is pretty simple:

# src/main.py
from datetime import datetime
import cv2
import numpy as np
from sly_sdk.webpy import WebPyApplication
from sly_sdk.sly_logger import logger

from src.gui import layout, dilation_strength, need_processing


app = WebPyApplication(layout)

@app.event(app.Event.FigureGeometrySaved)
def geometry_updated(event: WebPyApplication.Event.FigureGeometrySaved):
    logger.info("Left mouse button released after drawing mask with brush")

That's it! Our function receives the Event object, which is all we need to process the mask. In our case, the event contains a single attribute, figure_id:

figure_id = event.figure_id

Step 3. Preparing config.json file

"type": "client_side_app",
"integrated_into": ["image_annotation_tool"],
"gui_folder_path": "app",
"main_script": "src/main.py",
"src_dir": "src"
  • type: client_side_app - it's a key that tells the platform that this app is an in-browser app.

  • integrated_into: [image_annotation_tool] - it's a key that tells the platform that this app should be integrated into the Image Labeling Tool.

  • gui_folder_path: app - it is a key that tells the platform where the app's layout is located. It can be any non-conflicting path. It will only be used when releasing the application.

  • main_script: src/main.py - it's a key that tells the platform where the main script of the app is located. This file should contain the app variable of the WebPyApplication type.

  • src_dir: src - it's a key that tells the platform where the source code of the app is located. All the modules that you import in the main script should be located in this directory.

Together, these settings allow the application to run directly in the Image Labeling Tool; a complete example config is sketched below.
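
For reference, a complete config.json for such an app might look roughly like the sketch below. Only the five keys discussed above come from this tutorial; the name, description, categories, icon and poster fields are common config.json fields shown here with placeholder values, so replace them with your own.

{
    "name": "Mask post-processing demo",
    "description": "Processes masks in real time in the Image Labeling Tool",
    "categories": ["labeling"],
    "icon": "https://example.com/icon.png",
    "poster": "https://example.com/poster.png",
    "type": "client_side_app",
    "integrated_into": ["image_annotation_tool"],
    "gui_folder_path": "app",
    "main_script": "src/main.py",
    "src_dir": "src"
}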

Step 4. Processing the mask

Now we're ready to implement the mask processing. But first, let's add a few checks to make sure the mask actually needs to be processed.

# src/main.py
# Creating geometry version dictionary to avoid recursion.
last_geometry_version = {}

@app.event(app.Event.FigureGeometrySaved)
def geometry_updated(event: WebPyApplication.Event.FigureGeometrySaved):
    logger.info("Left mouse button released after drawing mask with brush")
    if not need_processing.is_on():
        # Checking if the processing is turned on in the UI.
        return
    # Get figure
    figure_id = event.figure_id
    figure = app.get_figure_by_id(figure_id)

    # app.update_figure_geometry will trigger the same event, so we need to avoid infinite recursion.
    current_geom_version = figure.geometry_version
    last_geom_version = last_geometry_version.get(figure_id, None)
    last_geometry_version[figure_id] = current_geom_version + 2
    if last_geom_version is not None and last_geom_version >= current_geom_version:
        return

So, if the processing is turned off, or if the current geometry version of the figure is less than or equal to the last version we stored, we don't need to process the mask and simply exit the function. Storing current_geom_version + 2 makes the handler skip the event triggered by our own app.update_figure_geometry call, while later edits made by the user (which produce higher versions) are still processed. And now, let's finally process the mask!

    # Get mask from the figure
    figure_geometry = figure.geometry
    mask = figure_geometry["data"]

    # Processing the mask. You need to implement your own logic in the process function.
    new_mask = process(mask)

    # Update the mask in the figure
    app.update_figure_geometry(figure, new_mask)

Let's take a closer look at what happens here:

  1. We're retrieving the geometry from the figure object.

  2. We're processing the mask in the process function.

  3. We're updating the figure geometry directly in the labeling tool.

Step 5. Implementing the processing function

We now have the code for all of the application's logic, but we still need the processing function itself. In this tutorial, we'll use a simple mask transformation for demonstration purposes, but you can implement any logic you want (see the alternative sketch after the list below).

def process(mask: np.ndarray) -> np.ndarray:
    dilation = cv2.dilate(mask.astype(np.uint8), None, iterations=dilation_strength.get_value())
    return dilation

Let's take a closer look at the process function:

  1. We're reading the dilation strength from the Slider widget.

  2. We're converting the mask to the uint8 type since it comes as a boolean 2D array from the figure geometry.

  3. We're returning a new mask.
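
Note that passing None as the kernel to cv2.dilate makes OpenCV use its default 3x3 rectangular structuring element. The process function is also the natural place for your own logic. As a hypothetical variation (not part of the original tutorial), you could smooth the mask and fill small holes with a morphological closing instead of a plain dilation:

def process(mask: np.ndarray) -> np.ndarray:
    # Hypothetical alternative: a morphological closing fills small holes
    # and smooths the mask boundary instead of growing it.
    kernel_size = dilation_strength.get_value()  # reuse the Slider value as the kernel size
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    closed = cv2.morphologyEx(mask.astype(np.uint8), cv2.MORPH_CLOSE, kernel)
    return closed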

Step 6. Debugging the app

Since the app is running in the Labeling Tool, we can't use the standard debugging tools. But we developed an approach that allows you to test and debug the application easily.

When you run the app, there is an advanced setting called Client side app server URL. You can set it to the URL of a server that serves the app's files. We added a .vscode/launch.json file to the repository that lets you run such a server; it reloads each time you make changes in your src directory. So you can run the server, set the URL in the app settings, and then, each time you make changes and want to test them, simply reload the app in the Labeling Tool.

You can find the configuration below:

{
    "name": "Advanced Debug in Supervisely platform",
    "type": "python",
    "request": "launch",
    "module": "uvicorn",
    "args": [
        "sly_sdk.webpy.debug_server:app",
        "--host",
        "0.0.0.0",
        "--port",
        "8000",
        "--ws",
        "websockets",
        "--reload",
        "--reload-dir",
        "src", // config.json[src_dir]
        "--reload-exclude", 
        "app", // config.json[gui_folder_path]
        "--reload-exclude",
        "app/__webpy_script__.py"
    ],
    "jinja": true,
    "justMyCode": false,
    "env": {
        "PYTHONPATH": "${workspaceFolder}:${PYTHONPATH}",
        "LOG_LEVEL": "DEBUG",
        "ENV": "development",
    }
}

Step 7. Releasing the app and running it in Supervisely

Now we can release the app and run it in Supervisely. You can find a detailed guide on how to release an app in the Add private app section of this documentation, but in this tutorial, we'll just use the following command:

supervisely release

After it's done, you can find your app in the Apps section of the platform and run it in the Labeling Tool. Follow the steps below to run the app in Supervisely:

  1. Open Image Labeling Tool in Supervisely.

  2. Select the Apps tab.

  3. Find the application and click Run.

  4. The app's UI will be opened in the right sidebar.

Summary

In this tutorial, we learned how to develop an in-browser application for the Image Labeling Tool: how to use UI widgets, how to handle events, and how to process masks. We also learned how to release the app and run it in Supervisely. We hope this tutorial was helpful and that you'll be able to use it as a reference for your own application.
