From template - simple

A step-by-step tutorial on how to create a custom export app from the SDK export template class `sly.app.Export`.

Introduction

In this tutorial, you will learn how to create a custom export app for exporting your data from the Supervisely platform using the export template class sly.app.Export that we have prepared for you.

We will go through the following steps:

Step 0. Set up the working environment.

Step 1. Write an export script.

Step 2. Debug export app.

Step 3. Advanced debugging.

Step 4. Release and run the app in Supervisely.

Everything you need to reproduce this tutorial is on GitHub: source code and additional app files.

Overview of the simple (illustrative) example we will use in this tutorial

The app we will build packs the following into a downloadable archive:

  • original images

  • annotations in JSON format

Output example:

🗃️<project_id>_<project_name>.tar
┣ 📂ds0
┃ ┣ 🖼️image_1.jpg
┃ ┣ 🖼️image_2.jpg
┃ ┗ 📜labels.json
┗ 📂ds1
  ┣ 🖼️image_1.jpg
  ┣ 🖼️image_2.jpg
  ┗ 📜labels.json

For each dataset, the labels.json file contains annotations for its images: the class name and the bounding-box coordinates [top, left, bottom, right] of each label.

{
    "image_1.jpg": [
        {
            "class_name": "cat",
            "coordinates": [top, left, right, bottom]
        },
        {
            "class_name": "cat",
            "coordinates": [top, left, right, bottom]
        },
        {
            "class_name": "dog",
            "coordinates": [top, left, right, bottom]
        }
        ...
    ],
    "image_2.jpg": [
        ...
    ],
    "image_3.jpg": [
        ...
    ]
}
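
After the archive is downloaded and unpacked, the annotations for a dataset can be read back with plain json. A minimal sketch (the ds0 path is illustrative):

import json

# read the exported annotations for one dataset
with open("ds0/labels.json") as f:
    annotations = json.load(f)

for image_name, labels in annotations.items():
    for label in labels:
        top, left, bottom, right = label["coordinates"]
        print(f"{image_name}: {label['class_name']} ({top}, {left}, {bottom}, {right})")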

Set up the working environment

Before we begin, please clone the template-export-app repository and set up the working environment - here is a link with a description of the steps.

Write an export script

You can find the source code for this example here.

Step 1. Import libraries

import json, os
import supervisely as sly

from dotenv import load_dotenv
from tqdm import tqdm

Step 2. Load environment variables

Load the environment variables for debugging; this has no effect in production.

load_dotenv("local.env")
load_dotenv(os.path.expanduser("~/supervisely.env"))
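
For reference, a local.env for this tutorial might look like the sketch below. The variable names and values are assumptions based on the IDs and the SLY_APP_DATA_DIR path shown throughout this tutorial; replace them with the IDs of your own team, workspace, project and (optionally) dataset.

TEAM_ID=447
WORKSPACE_ID=680
PROJECT_ID=20934
DATASET_ID=64985
SLY_APP_DATA_DIR="results/"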

Step 3. Write Export script

Create a class that inherits from sly.app.Export and write a process method that will handle the team_id, workspace_id, project_id and dataset_id that you specified in local.env. In this example, our class is called MyExport.

sly.app.Export class will handle the export routines for you:

  • it will check that the selected project or dataset exists and that you have access to it,

  • it will upload your resulting data to Team Files and clean up the temporary folder containing the result archive (in the remote container, or on your local hard drive if you are debugging your app),

  • your application must return a string containing the path to the result archive or folder; if you return a path to a folder, it will be archived automatically (see the minimal sketch after this list).
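
To make this contract concrete, here is a minimal sketch of an export class that returns a folder; the class name and folder name are illustrative, and the template takes care of archiving and uploading:

import os
import supervisely as sly

class MinimalExport(sly.app.Export):
    def process(self, context: sly.app.Export.Context) -> str:
        # prepare a temporary folder for the exported files
        result_dir = os.path.join(sly.app.get_data_dir(), "my_export")
        os.makedirs(result_dir, exist_ok=True)

        # ... write the exported files into result_dir here ...

        # return a folder path and it will be archived automatically;
        # returning a path to a ready .tar archive also works
        return result_dir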

sly.app.Export has a Context subclass that contains all the information you need for exporting your data from the Supervisely platform:

  • Team ID - the ID of the team where the project or dataset being exported is located

  • Workspace ID - the ID of the workspace where the project or dataset being exported is located

  • Project ID - the ID of the project being exported

  • Dataset ID - the ID of the dataset being exported (set only if the export is run from the dataset context menu)

The context variable is passed as an argument to the process method of the MyExport class; the context object is created automatically when you execute the export script.

class MyExport(sly.app.Export):
    def process(self, context: sly.app.Export.Context):
        print(context)

Output:

Team ID: 447
Workspace ID: 680
Project ID: 20934
Dataset ID: 64985

Now let's get to the main part of the code.

STORAGE_DIR = sly.app.get_data_dir() # path to directory for temp files and result archive
ANN_FILE_NAME = "labels.json"

class MyExport(sly.app.Export):
    def process(self, context: sly.app.Export.Context):
        # create api object to communicate with Supervisely Server
        api = sly.Api.from_env()

        # get project info from server
        project_info = api.project.get_info_by_id(id=context.project_id)

        # make project directory path
        data_dir = os.path.join(STORAGE_DIR, f"{project_info.id}_{project_info.name}")

        # get project meta
        meta_json = api.project.get_meta(id=context.project_id)
        project_meta = sly.ProjectMeta.from_json(meta_json)

        # Check if the app runs from the context menu of the dataset. 
        if context.dataset_id is not None:
            # If so, get the dataset info from the server.
            dataset_infos = [api.dataset.get_info_by_id(context.dataset_id)]
        else:
            # If it does not, obtain all datasets infos from the current project.
            dataset_infos = api.dataset.get_list(context.project_id)

        # iterate over datasets in project
        for dataset in dataset_infos:
            result_anns = {}

            # get dataset images info
            images_infos = api.image.get_list(dataset.id)

            # track progress using Tqdm
            with tqdm(total=dataset.items_count) as pbar:
                # iterate over images in dataset
                for image_info in images_infos:
                    labels = []

                    # create path for each image and download it from server
                    image_path = os.path.join(data_dir, dataset.name, image_info.name)
                    api.image.download(image_info.id, image_path)

                    # download annotation for current image
                    ann_json = api.annotation.download_json(image_info.id)
                    ann = sly.Annotation.from_json(ann_json, project_meta)

                    # iterate over labels in current annotation
                    for label in ann.labels:
                        # get obj class name
                        name = label.obj_class.name

                        # get bounding box coordinates for label
                        bbox = label.geometry.to_bbox()
                        labels.append(
                            {
                                "class_name": name,
                                "coordinates": [
                                    bbox.top,
                                    bbox.left,
                                    bbox.bottom,
                                    bbox.right,
                                ],
                            }
                        )

                    result_anns[image_info.name] = labels

                    # increment the current progress counter by 1
                    pbar.update(1)

            # create JSON annotation in new format
            filename = os.path.join(data_dir, dataset.name, ANN_FILE_NAME)
            with open(filename, "w") as file:
                json.dump(result_anns, file, indent=2)

        return data_dir

Create a MyExport object and execute its run method to start the export.

app = MyExport()
app.run()

Debug export app

In this tutorial, we will be using the Run & Debug section of VSCode to debug our export app.

The export template has two launch options for debugging: Debug and Advanced Debug. The settings for these options are configured in the launch.json file. Let's start with option #1 - Debug.

launch.json

This option is a good starting point. In this case, the resulting archive or folder with the exported data will remain on your computer, saved at the path defined in the local.env file (SLY_APP_DATA_DIR="results/").

Output of this Python program:

{"message": "Exporting Project: id=20934, name=Model predictions, type=images", "timestamp": "2023-05-08T11:30:06.341Z", "level": "info"}
{"message": "Exporting Dataset: id=64895, name=Week # 1", "timestamp": "2023-05-08T11:30:06.651Z", "level": "info"}
Processing: 100%|████████████████████████████████████████████████████████████████████████████████████| 6/6 [00:06<00:00,  1.12s/it]
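
After the debug run finishes, the exported data stays in the local results/ folder (the path set by SLY_APP_DATA_DIR). A quick way to see what was produced, assuming you run it from the project root:

import os

# walk the local results directory and print every exported file
for root, _, files in os.walk("results"):
    for name in files:
        print(os.path.join(root, name))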

Advanced debugging

In addition to the regular debug option, this template also includes a configuration for advanced debugging.

launch.json

The advanced debugging option is almost identical, but it will upload the result archive or folder to Team Files instead (path to the result archive: /tmp/supervisely/export/Supervisely App/<SESSION ID>/<PROJECT_ID>_<PROJECT_NAME>.tar). This option demonstrates how production apps work in the Supervisely platform.

Output of this Python program:

{"message": "App data directory results/ doesn't exist. Will be made automatically.", "timestamp": "2023-05-08T10:39:15.626Z", "level": "info"}
{"message": "Exporting Project: id=20934, name=Model predictions, type=images", "timestamp": "2023-05-08T10:39:17.918Z", "level": "info"}
{"message": "Exporting Dataset: id=64895, name=Week # 1", "timestamp": "2023-05-08T10:39:18.209Z", "level": "info"}
{"message": "progress", "event_type": "EventType.PROGRESS", "subtask": "Processing", "current": 0, "total": 6, "timestamp": "2023-05-08T10:39:19.478Z", "level": "info"}
{"message": "progress", "event_type": "EventType.PROGRESS", "subtask": "Processing", "current": 1, "total": 6, "timestamp": "2023-05-08T10:39:20.592Z", "level": "info"}
...
{"message": "progress", "event_type": "EventType.PROGRESS", "subtask": "Processing", "current": 6, "total": 6, "timestamp": "2023-05-08T10:39:25.918Z", "level": "info"}
{"message": "progress", "event_type": "EventType.PROGRESS", "subtask": "Uploading '20934_Model predictions.tar'", "current": 0, "total": 4567476, "current_label": "0.0 B", "total_label": "4.4 MiB", "timestamp": "2023-05-08T10:39:26.135Z", "level": "info"}
{"message": "progress", "event_type": "EventType.PROGRESS", "subtask": "Uploading '20934_Model predictions.tar'", "current": 1048576, "total": 4567476, "current_label": "1.0 MiB", "total_label": "4.4 MiB", "timestamp": "2023-05-08T10:39:26.676Z", "level": "info"}
...
{"message": "progress", "event_type": "EventType.PROGRESS", "subtask": "Uploading '20934_Model predictions.tar'", "current": 4567476, "total": 4567476, "current_label": "4.4 MiB", "total_label": "4.4 MiB", "timestamp": "2023-05-08T10:39:36.578Z", "level": "info"}
{"message": "Remote file: id=1273718, name=20934_Model predictions.tar", "timestamp": "2023-05-08T10:39:37.451Z", "level": "info"}

Release and run the app in Supervisely

Submitting an app to the Supervisely Ecosystem isn't as simple as pushing code to a GitHub repository, but it's not as complicated as you might think either.

Please follow this link for instructions on adding your app. We have produced a step-by-step guide on how to add your application to the Supervisely Ecosystem.
