Spatial labels on images

How to create bounding boxes, polygons, masks, points and polylines in Python


Introduction

In this tutorial, you will learn how to programmatically create classes and labels of different shapes and upload them to the Supervisely platform. Supervisely supports several types of shapes (geometries) for image annotation:

  • bounding box (rectangle)

  • polygon

  • mask (also known as bitmap)

  • polyline

  • point

  • keypoints (also known as graph, skeleton, landmarks) - will be covered in other tutorials

  • cuboids - will be covered in other tutorials

Learn more about Supervisely Annotation JSON format here.
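
If you are curious what this JSON looks like, any SDK annotation object can be serialized with to_json(). Below is a minimal, self-contained sketch; the class name and coordinates are made up purely for illustration:

import json
import supervisely as sly

# one rectangle label on a 600x800 image, serialized to Supervisely annotation JSON
demo_class = sly.ObjClass("demo", sly.Rectangle)
demo_label = sly.Label(sly.Rectangle(top=10, left=20, bottom=300, right=400), demo_class)
demo_ann = sly.Annotation(img_size=[600, 800], labels=[demo_label])
print(json.dumps(demo_ann.to_json(), indent=2))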

Bounding box, polygon and masks

How to debug this tutorial

Everything you need to reproduce this tutorial is on GitHub: source code, Visual Studio Code configuration, and a shell script for creating the virtual environment.

Step 1. Prepare the ~/supervisely.env file with credentials. Learn more here.

Step 2. Clone the repository with source code and demo data, and create the Virtual Environment.

git clone https://github.com/supervisely-ecosystem/spatial-labels
cd spatial-labels
./create_venv.sh

Step 3. Open the repository directory in Visual Studio Code.

code -r .

Step 4. Change the workspace ID in the local.env file by copying the ID from the context menu of the workspace. A new project with annotated images will be created in the workspace you define:

WORKSPACE_ID=506 # ⬅️ change value

Step 5. Start debugging src/main.py.

Python Code

Import libraries

import os
import cv2
import supervisely as sly
from dotenv import load_dotenv

Init API client

Initialize the API client for communicating with the Supervisely instance. First, we load environment variables with the credentials and workspace ID:

load_dotenv("local.env")
load_dotenv(os.path.expanduser("~/supervisely.env"))
api = sly.Api.from_env()

The next lines check that everything is set up correctly: the API client is initialized with valid credentials, and the workspace ID in local.env points to an existing workspace.

workspace_id = sly.env.workspace_id()
workspace = api.workspace.get_info_by_id(workspace_id)
if workspace is None:
    print("you should put correct workspaceId value to local.env")
    raise ValueError(f"Workspace with id={workspace_id} not found")

Create project

Create an empty project named "Demo" with a single dataset "berries" in your workspace on the server. If a project with the same name already exists in the workspace, the new project will be automatically renamed (Demo_001, Demo_002, etc.) to avoid name collisions.

project = api.project.create(workspace.id, name="Demo", change_name_if_conflict=True)
dataset = api.dataset.create(project.id, name="berries")
print(f"Project has been sucessfully created, id={project.id}")

Create annotation classes

strawberry = sly.ObjClass("strawberry", sly.Rectangle, color=[0, 0, 255])
raspberry = sly.ObjClass("raspberry", sly.Polygon, color=[0, 255, 0])
blackberry = sly.ObjClass("blackberry", sly.Bitmap, color=[255, 255, 0])
berry_center = sly.ObjClass("berry_center", sly.Point, color=[0, 255, 255])
separator = sly.ObjClass("separator", sly.Polyline)  # color will be generated randomly

A color will be generated automatically if the class is created without the color argument.
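
For example, you can inspect the color that was assigned to the separator class; the value below is just an illustration, since it changes from run to run:

print(separator.color)  # e.g. [148, 33, 202] (a randomly generated RGB triplet)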

The next step is to create ProjectMeta - a collection of annotation classes and tags that will be available for labeling in the project.

project_meta = sly.ProjectMeta(
    obj_classes=[strawberry, raspberry, blackberry, berry_center, separator]
)

And finally, we need to set up classes in our project on server:

api.project.update_meta(project.id, project_meta.to_json())
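
As an optional sanity check, you can read the meta back from the server and rebuild a ProjectMeta object from it (a small sketch reusing the project created above):

# confirm that the classes are now registered on the server
meta_json = api.project.get_meta(project.id)
print(sly.ProjectMeta.from_json(meta_json))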

Create rectangle

Strawberry will be labeled with a bounding box.

bbox = sly.Rectangle(top=127, left=1726, bottom=1087, right=2560)
label1 = sly.Label(geometry=bbox, obj_class=strawberry)
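
A quick sketch of a few derived properties of the rectangle that are often handy; all values are pixel row/column coordinates:

print(bbox.height, bbox.width)           # size of the bounding box
print(bbox.center.row, bbox.center.col)  # its center point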

Create polygon

Raspberry will be labeled with a polygon.

polygon = sly.Polygon(
    exterior=[
        [941, 663],
        [976, 874],
        [934, 1096],
        [819, 1196],
        [698, 1228],
        [527, 1081],
        [439, 1090],
        [331, 980],
        [359, 808],
        [452, 698],
        [549, 612],
        [762, 564],
        [879, 605],
    ]
)
label2 = sly.Label(geometry=polygon, obj_class=raspberry)
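
Polygons may also contain holes, which are passed via the interior argument as a list of rings. A minimal sketch with made-up coordinates, assuming the same [row, col] list format works for interior as it does for exterior:

# a square polygon with a square hole in the middle (hypothetical coordinates)
polygon_with_hole = sly.Polygon(
    exterior=[[100, 100], [100, 900], [900, 900], [900, 100]],
    interior=[[[400, 400], [400, 600], [600, 600], [600, 400]]],
)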

Create masks

Every blackberry will be labeled with a mask, so we are going to create three masks (one per blackberry) from the black-and-white images listed in the code below:

Supervisely SDK allows creating masks from NumPy arrays with the following values:

  • 0 - nothing, 1 - pixels of target mask

  • 0 - nothing, 255 - pixels of target mask

  • False - nothing, True - pixels of target mask

The mask has to be the same size as the image.

labels_masks = []
for mask_path in [
    "data/masks/Blackberry_01.png",
    "data/masks/Blackberry_02.png",
    "data/masks/Blackberry_03.png",
]:
    # read only first channel of an image
    image_black_and_white = cv2.imread(mask_path)[:, :, 0]
    
    # supports masks with values (0, 1) or (0, 255) or (False, True)
    mask = sly.Bitmap(image_black_and_white)
    label = sly.Label(geometry=mask, obj_class=blackberry)
    labels_masks.append(label)
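
If a mask already lives in memory as a NumPy array (for example, produced by a model), you can build the Bitmap directly without reading a file. A minimal sketch with a synthetic boolean array:

import numpy as np

# a made-up 600x800 boolean mask with a rectangular block of "object" pixels
mask_np = np.zeros((600, 800), dtype=bool)
mask_np[150:400, 200:550] = True
mask_from_array = sly.Bitmap(mask_np)
label_from_array = sly.Label(geometry=mask_from_array, obj_class=blackberry)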

Create image annotation

image_path = "data/berries-01.jpg"
height, width = cv2.imread(image_path).shape[0:2]

# result image annotation
all_labels = [label1, label2]
all_labels.extend(labels_masks)
ann = sly.Annotation(img_size=[height, width], labels=all_labels)
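
Before uploading, you can optionally render the labels onto a copy of the image for a quick visual check. A rough sketch using the SDK's Annotation.draw; note that cv2 loads images as BGR, so the preview colors will not exactly match the class RGB colors:

# draw all labels in place on a copy of the image and save a preview file
preview = cv2.imread(image_path)
ann.draw(preview)
cv2.imwrite("data/berries-01-preview.jpg", preview)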

Upload image with annotation

Upload image to the dataset on server:

image_name = sly.fs.get_file_name_with_ext(image_path)
image_info = api.image.upload_path(dataset.id, image_name, image_path)
print(f"Image has been sucessfully uploaded, id={image_info.id}")

Upload annotation to the image on server:

api.annotation.upload_ann(image_info.id, ann)
print(f"Annotation has been sucessfully uploaded to the image {image_name}")

Create points

Let's create points for every berry on the second image and place them at the centers of the berries.

labels_points = []
for [row, col] in [
    [1313, 313],
    [1714, 1061],
    [1318, 1851],
    [554, 1912],
    [190, 808],
    [941, 1094],
]:
    point = sly.Point(row, col)
    label = sly.Label(geometry=point, obj_class=berry_center)
    labels_points.append(label)

Create polyline

polyline = sly.Polyline(
    [[883, 443], [1360, 803], [1395, 1372], [928, 1676], [458, 1372], [552, 554]]
)
label_line = sly.Label(geometry=polyline, obj_class=separator)

Upload the second image with annotation

image_path = "data/berries-02.jpg"
height, width = cv2.imread(image_path).shape[0:2]

# result image annotation
ann = sly.Annotation(img_size=[height, width], labels=[*labels_points, label_line])

# upload image to the dataset on server
image_name = sly.fs.get_file_name_with_ext(image_path)
image_info = api.image.upload_path(dataset.id, image_name, image_path)
print(f"Image has been sucessfully uploaded, id={image_info.id}")

# upload annotation to the image on server
api.annotation.upload_ann(image_info.id, ann)
print(f"Annotation has been sucessfully uploaded to the image {image_name}")

Recap

In this tutorial we learned how to:

  • quickly configure Python development for Supervisely

  • create a project and a dataset with classes of different shapes

  • initialize rectangles, masks, polygons, polylines, and points

  • construct a Supervisely annotation and upload it with an image to the server


In the GitHub repository for this tutorial, you will find the full Python script.