Keypoints (skeletons)

How to create keypoints annotation in Python using Supervisely


Last updated 1 year ago


Introduction

In this tutorial we will show you how to use the sly.GraphNodes class to create data annotations for pose estimation / keypoints detection tasks. The tutorial illustrates a basic upload-download scenario:

  • create a project and dataset on the server

  • upload an image

  • programmatically create an annotation and upload it to the image

  • download the image and annotation

ℹ️ Everything you need to reproduce this tutorial is on GitHub: source code, Visual Studio Code configuration, and a shell script for creating the virtual environment.

How to debug this tutorial

Step 1. Prepare the ~/supervisely.env file with credentials.

Step 2. Clone the repository with the source code and demo data, and create the virtual environment:

git clone https://github.com/supervisely-ecosystem/keypoints-labeling-example
cd keypoints-labeling-example
./create_venv.sh

Step 3. Open the repository directory in Visual Studio Code:

code -r .

Step 4. Start debugging src/main.py.

Python Code

Importing Necessary Libraries

Import necessary libraries:

import supervisely as sly
from supervisely.geometry.graph import Node, KeypointsTemplate
import os
import json
from dotenv import load_dotenv

Before we start creating our project, let's learn how to create a keypoints template, which we are going to use in our project.

Working With Keypoints Template

We will need an image to create and visualize our keypoints template.

Image for building keypoints template:

Create keypoints template:

# initialize template
template = KeypointsTemplate()
# add nodes
template.add_point(label="nose", row=635, col=427)
template.add_point(label="left_eye", row=597, col=404)
template.add_point(label="right_eye", row=685, col=401)
template.add_point(label="left_ear", row=575, col=431)
template.add_point(label="right_ear", row=723, col=425)
template.add_point(label="left_shoulder", row=502, col=614)
template.add_point(label="right_shoulder", row=794, col=621)
template.add_point(label="left_elbow", row=456, col=867)
template.add_point(label="right_elbow", row=837, col=874)
template.add_point(label="left_wrist", row=446, col=1066)
template.add_point(label="right_wrist", row=845, col=1073)
template.add_point(label="left_hip", row=557, col=1035)
template.add_point(label="right_hip", row=743, col=1043)
template.add_point(label="left_knee", row=541, col=1406)
template.add_point(label="right_knee", row=751, col=1421)
template.add_point(label="left_ankle", row=501, col=1760)
template.add_point(label="right_ankle", row=774, col=1765)
# add edges
template.add_edge(src="left_ankle", dst="left_knee")
template.add_edge(src="left_knee", dst="left_hip")
template.add_edge(src="right_ankle", dst="right_knee")
template.add_edge(src="right_knee", dst="right_hip")
template.add_edge(src="left_hip", dst="right_hip")
template.add_edge(src="left_shoulder", dst="left_hip")
template.add_edge(src="right_shoulder", dst="right_hip")
template.add_edge(src="left_shoulder", dst="right_shoulder")
template.add_edge(src="left_shoulder", dst="left_elbow")
template.add_edge(src="right_shoulder", dst="right_elbow")
template.add_edge(src="left_elbow", dst="left_wrist")
template.add_edge(src="right_elbow", dst="right_wrist")
template.add_edge(src="left_eye", dst="right_eye")
template.add_edge(src="nose", dst="left_eye")
template.add_edge(src="nose", dst="right_eye")
template.add_edge(src="left_eye", dst="left_ear")
template.add_edge(src="right_eye", dst="right_ear")
template.add_edge(src="left_ear", dst="left_shoulder")
template.add_edge(src="right_ear", dst="right_shoulder")

Visualize your keypoints template:

template_img = sly.image.read("images/girl.jpg")
template.draw(image=template_img, thickness=7)
sly.image.write("images/template.jpg", template_img)

Explore Keypoints Template in JSON Format

You can also convert your template to JSON:

template_json = template.to_json()
Example of the template in JSON format:
{
  "nodes": {
    "nose": {
      "label": "nose",
      "loc": [635, 427],
      "color": "#0000FF"
    },
    "left_eye": {
      "label": "left_eye",
      "loc": [597, 404],
      "color": "#0000FF"
    },
    "right_eye": {
      "label": "right_eye",
      "loc": [685, 401],
      "color": "#0000FF"
    },
    "left_ear": {
      "label": "left_ear",
      "loc": [575, 431],
      "color": "#0000FF"
    },
    "right_ear": {
      "label": "right_ear",
      "loc": [723, 425],
      "color": "#0000FF"
    },
    "left_shoulder": {
      "label": "left_shoulder",
      "loc": [502, 614],
      "color": "#0000FF"
    },
    "right_shoulder": {
      "label": "right_shoulder",
      "loc": [794, 621],
      "color": "#0000FF"
    },
    "left_elbow": {
      "label": "left_elbow",
      "loc": [456, 867],
      "color": "#0000FF"
    },
    "right_elbow": {
      "label": "right_elbow",
      "loc": [837, 874],
      "color": "#0000FF"
    },
    "left_wrist": {
      "label": "left_wrist",
      "loc": [446, 1066],
      "color": "#0000FF"
    },
    "right_wrist": {
      "label": "right_wrist",
      "loc": [845, 1073],
      "color": "#0000FF"
    },
    "left_hip": {
      "label": "left_hip",
      "loc": [557, 1035],
      "color": "#0000FF"
    },
    "right_hip": {
      "label": "right_hip",
      "loc": [743, 1043],
      "color": "#0000FF"
    },
    "left_knee": {
      "label": "left_knee",
      "loc": [541, 1406],
      "color": "#0000FF"
    },
    "right_knee": {
      "label": "right_knee",
      "loc": [751, 1421],
      "color": "#0000FF"
    },
    "left_ankle": {
      "label": "left_ankle",
      "loc": [501, 1760],
      "color": "#0000FF"
    },
    "right_ankle": {
      "label": "right_ankle",
      "loc": [774, 1765],
      "color": "#0000FF"
    }
  },
  "edges": [
    {
      "src": "left_ankle",
      "dst": "left_knee",
      "color": "#00FF00"
    },
    {
      "src": "left_knee",
      "dst": "left_hip",
      "color": "#00FF00"
    },
    {
      "src": "right_ankle",
      "dst": "right_knee",
      "color": "#00FF00"
    },
    {
      "src": "right_knee",
      "dst": "right_hip",
      "color": "#00FF00"
    },
    {
      "src": "left_hip",
      "dst": "right_hip",
      "color": "#00FF00"
    },
    {
      "src": "left_shoulder",
      "dst": "left_hip",
      "color": "#00FF00"
    },
    {
      "src": "right_shoulder",
      "dst": "right_hip",
      "color": "#00FF00"
    },
    {
      "src": "left_shoulder",
      "dst": "right_shoulder",
      "color": "#00FF00"
    },
    {
      "src": "left_shoulder",
      "dst": "left_elbow",
      "color": "#00FF00"
    },
    {
      "src": "right_shoulder",
      "dst": "right_elbow",
      "color": "#00FF00"
    },
    {
      "src": "left_elbow",
      "dst": "left_wrist",
      "color": "#00FF00"
    },
    {
      "src": "right_elbow",
      "dst": "right_wrist",
      "color": "#00FF00"
    },
    {
      "src": "left_eye",
      "dst": "right_eye",
      "color": "#00FF00"
    },
    {
      "src": "nose",
      "dst": "left_eye",
      "color": "#00FF00"
    },
    {
      "src": "nose",
      "dst": "right_eye",
      "color": "#00FF00"
    },
    {
      "src": "left_eye",
      "dst": "left_ear",
      "color": "#00FF00"
    },
    {
      "src": "right_eye",
      "dst": "right_ear",
      "color": "#00FF00"
    },
    {
      "src": "left_ear",
      "dst": "left_shoulder",
      "color": "#00FF00"
    },
    {
      "src": "right_ear",
      "dst": "right_shoulder",
      "color": "#00FF00"
    }
  ]
}
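With the template in this dict form, a quick sanity check is to verify that every edge references a defined node. A minimal sketch in plain Python (the template_json here is a hand-built subset of the full template above, for illustration):

```python
# Sketch: validate a keypoints template dict with the "nodes"/"edges" layout shown above.
template_json = {
    "nodes": {
        "nose": {"label": "nose", "loc": [635, 427], "color": "#0000FF"},
        "left_eye": {"label": "left_eye", "loc": [597, 404], "color": "#0000FF"},
    },
    "edges": [
        {"src": "nose", "dst": "left_eye", "color": "#00FF00"},
    ],
}

def validate_template(tpl: dict) -> list:
    """Return the indices of edges whose src or dst is not a defined node."""
    nodes = set(tpl["nodes"])
    return [
        i for i, e in enumerate(tpl["edges"])
        if e["src"] not in nodes or e["dst"] not in nodes
    ]

print(validate_template(template_json))  # [] means every edge endpoint is defined
```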

Now that we have successfully created the keypoints template, we can start creating a keypoints annotation for our project.

Programmatically Create Keypoints Annotation

Authenticate and connect to Supervisely:

load_dotenv(os.path.expanduser("~/supervisely.env"))
api = sly.Api.from_env()
my_teams = api.team.get_list()
team = my_teams[0]
workspace = api.workspace.get_list(team.id)[0]

Input image:

Create a project and dataset:

project = api.project.create(workspace.id, "Human Pose Estimation", change_name_if_conflict=True)
dataset = api.dataset.create(project.id, "Person with dog", change_name_if_conflict=True)
print(f"Project {project.id} with dataset {dataset.id} are created")

Now let's create an annotation class using our keypoints template as a geometry config (unlike other Supervisely geometry classes, sly.GraphNodes requires a geometry config to be passed; it is necessary for object class initialization):

person = sly.ObjClass("person", geometry_type=sly.GraphNodes, geometry_config=template)
project_meta = sly.ProjectMeta(obj_classes=[person])
api.project.update_meta(project.id, project_meta.to_json())

You can also go to the Supervisely platform and check that a class with the shape "Keypoints" was successfully added to your project:

Upload the image:

image_info = api.image.upload_path(
    dataset.id, name="person_with_dog.jpg", path="images/person_with_dog.jpg"
)

Build keypoints graph:

nodes = [
    sly.Node(label="nose", row=146, col=670),
    sly.Node(label="left_eye", row=130, col=644),
    sly.Node(label="right_eye", row=135, col=701),
    sly.Node(label="left_ear", row=137, col=642),
    sly.Node(label="right_ear", row=142, col=705),
    sly.Node(label="left_shoulder", row=221, col=595),
    sly.Node(label="right_shoulder", row=226, col=738),
    sly.Node(label="left_elbow", row=335, col=564),
    sly.Node(label="right_elbow", row=342, col=765),
    sly.Node(label="left_wrist", row=429, col=555),
    sly.Node(label="right_wrist", row=438, col=784),
    sly.Node(label="left_hip", row=448, col=620),
    sly.Node(label="right_hip", row=451, col=713),
    sly.Node(label="left_knee", row=598, col=591),
    sly.Node(label="right_knee", row=602, col=715),
    sly.Node(label="left_ankle", row=761, col=573),
    sly.Node(label="right_ankle", row=766, col=709),
]
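Since every node carries (row, col) pixel coordinates, simple geometry can be derived from the list before uploading, for example the bounding box enclosing all keypoints. A minimal sketch in plain Python (the coordinate pairs are copied from the nodes defined above):

```python
# Sketch: bounding box around a set of keypoints given as (row, col) pairs.
keypoints = [
    (146, 670), (130, 644), (135, 701), (137, 642), (142, 705),
    (221, 595), (226, 738), (335, 564), (342, 765), (429, 555),
    (438, 784), (448, 620), (451, 713), (598, 591), (602, 715),
    (761, 573), (766, 709),
]

def bounding_box(points):
    """Return (top, left, bottom, right) enclosing all (row, col) points."""
    rows = [r for r, _ in points]
    cols = [c for _, c in points]
    return min(rows), min(cols), max(rows), max(cols)

print(bounding_box(keypoints))  # (130, 555, 766, 784)
```

This kind of check is handy for confirming that all keypoints fall inside the image before uploading the annotation.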

Label the image:

input_image = sly.image.read("images/person_with_dog.jpg")
img_height, img_width = input_image.shape[:2]
label = sly.Label(sly.GraphNodes(nodes), person)
ann = sly.Annotation(img_size=[img_height, img_width], labels=[label])
api.annotation.upload_ann(image_info.id, ann)

You can check in the Annotation Tool that the keypoints annotation was successfully created:

Download data:

image = api.image.download_np(image_info.id)
ann_json = api.annotation.download_json(image_info.id)

Draw annotation:

ann = sly.Annotation.from_json(ann_json, project_meta)
output_path = "images/person_with_dog_labelled.jpg"
ann.draw_pretty(image, output_path=output_path, thickness=7)
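The downloaded ann_json is a plain dict, so it can also be inspected without the SDK. A hedged sketch, using a hand-built stand-in that mirrors the top-level "size" and "objects" keys of the Supervisely image annotation format (field values here are illustrative, not taken from the actual download):

```python
# Sketch: summarize a downloaded annotation JSON dict.
ann_json = {
    "size": {"height": 800, "width": 1200},
    "objects": [
        {"classTitle": "person", "geometryType": "graph"},
    ],
}

def summarize(ann: dict) -> dict:
    """Count objects per class and report the image size as (height, width)."""
    counts = {}
    for obj in ann["objects"]:
        counts[obj["classTitle"]] = counts.get(obj["classTitle"], 0) + 1
    return {"size": (ann["size"]["height"], ann["size"]["width"]), "counts": counts}

print(summarize(ann_json))  # {'size': (800, 1200), 'counts': {'person': 1}}
```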

Result:
