Point Clouds (LiDAR)

Introduction

In this tutorial we will focus on working with Point Clouds and Point Cloud Episodes using Supervisely SDK.

You will learn how to:

  1. Upload point clouds and photo context (related images) to Supervisely.

  2. Get information about point clouds and their related images.

  3. Download point clouds and related images back to your machine.

  4. Work with Point Cloud Episodes.

📗 Everything you need to reproduce this tutorial is on GitHub: source code and demo data.

How to debug this tutorial

Step 1. Prepare ~/supervisely.env file with credentials. Learn more here.

Step 2. Clone repository with source code and demo data and create Virtual Environment.

git clone https://github.com/supervisely-ecosystem/tutorial-pointclouds.git

cd tutorial-pointclouds

./create_venv.sh

Step 3. Open repository directory in Visual Studio Code.

code -r .

Step 4. Change workspace ID in local.env file by copying the ID from the context menu of the workspace.

WORKSPACE_ID=654 # ⬅️ change value

Step 5. Start debugging src/main.py.

Import libraries

import os
import json
from pathlib import Path
from dotenv import load_dotenv
import supervisely as sly

Init API client

First, we load environment variables with credentials and initialize the API for communicating with the Supervisely instance.

if sly.is_development():
    load_dotenv("local.env")
    load_dotenv(os.path.expanduser("~/supervisely.env"))
api = sly.Api()

Get variables from environment

In this tutorial, you will need a workspace ID that you can get from environment variables. Learn more here.

workspace_id = sly.env.workspace_id()

Create new project and dataset

Create new project.

Source code:

project = api.project.create(
    workspace_id,
    name="Point Clouds Tutorial",
    type=sly.ProjectType.POINT_CLOUDS,
    change_name_if_conflict=True,
)
print(f"Project ID: {project.id}")

Output:

# Project ID: 16197

Create new dataset.

Source code:

dataset = api.dataset.create(project.id, name="dataset_1")

print(f"Dataset ID: {dataset.id}")

Output:

# Dataset ID: 54539

Upload point clouds and photo context to Supervisely

Upload single point cloud.

Source code:

pcd_file = "src/input/pcd/000000.pcd"
pcd_info = api.pointcloud.upload_path(dataset.id, name="pcd_0.pcd", path=pcd_file)
print(f'Point cloud "{pcd_info.name}" uploaded to Supervisely with ID:{pcd_info.id}')

Output:

# Point cloud "pcd_0.pcd" uploaded to Supervisely platform with ID:17539453

Now you can explore and label it in the Supervisely labeling tool.

Extrinsic and intrinsic matrices

If you have photo context captured together with the LiDAR data, you can attach the photo to the point cloud. To do that, we need two additional matrices. They are used to match 3D coordinates in the point cloud to 2D coordinates in the photo context:
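
In the standard pinhole-camera form (shown here for reference; the exact values come from your sensor calibration), the two matrices have this structure:

\text{intrinsicMatrix} =
\begin{pmatrix}
f_x & 0   & c_x \\
0   & f_y & c_y \\
0   & 0   & 1
\end{pmatrix}
\qquad
\text{extrinsicMatrix} =
\begin{pmatrix}
r_{11} & r_{12} & r_{13} & t_1 \\
r_{21} & r_{22} & r_{23} & t_2 \\
r_{31} & r_{32} & r_{33} & t_3
\end{pmatrix}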

Parameter meanings

  • fx, fy are the focal lengths expressed in pixel units

  • cx, cy are the coordinates of the principal point, which is usually at the image center

  • r_ij and t_i from the extrinsicMatrix are the rotation and translation parameters

Multiplying these matrices with an XYZ coordinate in 3D space gives the coordinates of the point (x=u, y=v) in the photo context:
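
In matrix form this is the standard pinhole projection, where w is the point's depth along the camera axis:

w \begin{pmatrix} u \\ v \\ 1 \end{pmatrix}
= \text{intrinsicMatrix} \cdot \text{extrinsicMatrix} \cdot
\begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}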

Uploading a context photo to Supervisely.

To attach a photo, you need to provide the matrices in a meta dict with the deviceId and sensorsData fields. The matrices must be included in the meta dict as flattened lists.

Example of a meta dict:

# src/input/cam_info/000000.json
{
    "deviceId": "CAM_2",
    "sensorsData": {
        "extrinsicMatrix": [
            0.007533745,
            -0.9999714,
            -0.000616602,
            -0.004069766,
            0.01480249,
            0.0007280733,
            -0.9998902,
            -0.07631618,
            0.9998621,
            0.00752379,
            0.01480755,
            -0.2717806,
        ],
        "intrinsicMatrix": [721.5377, 0, 609.5593, 0, 721.5377, 172.854, 0, 0, 1],
    }
}
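
If your calibration lives in NumPy arrays rather than a ready-made JSON file, a minimal sketch of building such a meta dict could look like this (the calibration values below are purely illustrative):

import numpy as np

# Illustrative calibration values -- replace them with your sensor's real calibration.
R = np.eye(3)                                # 3x3 rotation matrix
t = np.array([0.0, 0.0, 0.0])                # translation vector
K = np.array([
    [721.5377, 0.0, 609.5593],
    [0.0, 721.5377, 172.854],
    [0.0, 0.0, 1.0],
])                                           # 3x3 intrinsic matrix

extrinsic = np.hstack([R, t.reshape(3, 1)])  # 3x4 [R | t] matrix

cam_info = {
    "deviceId": "CAM_2",
    "sensorsData": {
        # Flatten row-major into plain lists, as in the JSON example above.
        "extrinsicMatrix": extrinsic.flatten().tolist(),
        "intrinsicMatrix": K.flatten().tolist(),
    },
}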

Full code for uploading and attaching the context image

Source code:

# input files:
img_file = "src/input/img/000000.png"
cam_info_file = "src/input/cam_info/000000.json"

# 0. Read cam_info with matrices (a meta dict).
with open(cam_info_file, "r") as f:
    cam_info = json.load(f)

# 1. Upload the image to Supervisely. The method returns a hash for the image.
img_hash = api.pointcloud.upload_related_image(img_file)
# 2. Create img_info that matches the image to the point cloud by its ID.
img_info = {"entityId": pcd_info.id, "name": "img_0.png", "hash": img_hash, "meta": cam_info}
# 3. Run the API command to attach the image
api.pointcloud.add_related_images([img_info])

print("Context image has been uploaded.")

Output:

# Context image has been uploaded.

More about the format of a photo context: Supervisely annotation JSON format

More about calibration and matrix transformations: OpenCV 3D Camera Calibration Tutorial.
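
As a quick sanity check of the calibration, you can project a 3D point into the photo context yourself. The sketch below assumes the cam_info dict loaded in the previous step; the point coordinates are purely illustrative:

import numpy as np

# Reshape the flattened lists from the meta dict back into matrices.
extrinsic = np.array(cam_info["sensorsData"]["extrinsicMatrix"]).reshape(3, 4)
intrinsic = np.array(cam_info["sensorsData"]["intrinsicMatrix"]).reshape(3, 3)

point = np.array([10.0, 1.0, -0.5, 1.0])  # homogeneous point in point cloud coordinates
u, v, w = intrinsic @ extrinsic @ point   # pinhole projection
print(f"The point projects to pixel ({u / w:.1f}, {v / w:.1f}) in the photo context")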

Upload a list of point clouds and context images.

✅ The Supervisely API allows uploading multiple point clouds in a single request. The code sample below sends fewer requests, which leads to a significant speed-up compared to uploading files one by one.

Source code:

# Upload a batch of point clouds and related images
paths = ["src/input/pcd/000001.pcd", "src/input/pcd/000002.pcd"]
img_paths = ["src/input/img/000001.png", "src/input/img/000002.png"]
cam_paths = ["src/input/cam_info/000001.json", "src/input/cam_info/000002.json"]

pcd_infos = api.pointcloud.upload_paths(dataset.id, names=["pcd_1.pcd", "pcd_2.pcd"], paths=paths)
img_hashes = api.pointcloud.upload_related_images(img_paths)
img_infos = []
for i, cam_info_file in enumerate(cam_paths):
    # reading cam_info
    with open(cam_info_file, "r") as f:
        cam_info = json.load(f)
    img_info = {
        "entityId": pcd_infos[i].id,
        "name": f"img_{i}.png",
        "hash": img_hashes[i],
        "meta": cam_info,
    }
    img_infos.append(img_info)
result = api.pointcloud.add_related_images(img_infos)
print("Batch uploading has finished:", result)

Output:

# Batch uploading has finished: {'success': True}

Get info by name

Get information about a point cloud from Supervisely by its name.

Source code:

pcd_info = api.pointcloud.get_info_by_name(dataset.id, name="pcd_0.pcd")
print(pcd_info)

Output:

PointcloudInfo(
    id=17553684,
    frame=None,
    description="",
    name="pcd_0.pcd",
    team_id=440,
    workspace_id=662,
    project_id=16108,
    dataset_id=54365,
    link=None,
    hash="rxl9ioCcNobe1z7q1dA6idsebCM77G0wlrZd1Be28ng=",
    path_original="/h5un6l2bnaz1vj8a9qgms4-public/point_clouds/f/x/kC/5JwCwSNouz7u3sNVDWOIURf44HRAridOKsf3lDGjo9bEHcj22gCejQIULbZHblG9Ns6GWD4Vmc3I0KdBagpmZKovKikN50Ij7utyw5aUaCTtM10sLiX4BVqPRssx.pcd",
    cloud_mime="image/pcd",
    figures_count=0,
    objects_count=0,
    tags=[],
    meta={},
    created_at="2023-01-08T07:15:50.332Z",
    updated_at="2023-01-08T07:15:50.332Z",
)

Get info by ID

You can also get information about a point cloud from Supervisely by its ID.

Source code:

pcd_info = api.pointcloud.get_info_by_id(pcd_info.id)
print("Point cloud name:", pcd_info.name)

Output:

# Point cloud name: pcd_0.pcd

Get information about context images

Get information about related context images. For example, these can be photos from the front or back cameras of a vehicle.

Source code:

img_infos = api.pointcloud.get_list_related_images(pcd_info.id)
img_info = img_infos[0]
print(img_info)

Output:

{'pathOriginal': '/h5un6l2bnaz1vj8a9qgms4-public/images/original/S/j/hJ/PwhtY7x4zRQ5jvNETPgFMtjJ9bDOMkjJelovMYLJJL2wxsGS9dvSjQC428ORi2qIFYg4u1gbiN7DsRIfO3JVBEt0xRgNc0vm3n2DTv8UiV9HXoaCp0Fy4IoObKMg.png',
 'id': 473302,
 'entityId': 17557533,
 'createdAt': '2023-01-09T08:50:33.225Z',
 'updatedAt': '2023-01-09T08:50:33.225Z',
 'meta': {'deviceId': 'cam_2'},
 'fileMeta': {'mime': 'image/png',
  'size': 893783,
  'width': 1224,
  'height': 370},
 'hash': 'vxA+emfDNUkFP9P6oitMB5Q0rMlnskmV2jvcf47OjGU=',
 'link': None,
 'preview': '/previews/q/ext:jpeg/resize:fill:50:0:0/q:50/plain/h5un6l2bnaz1vj8a9qgms4-public/images/original/S/j/hJ/PwhtY7x4zRQ5jvNETPgFMtjJ9bDOMkjJelovMYLJJL2wxsGS9dvSjQC428ORi2qIFYg4u1gbiN7DsRIfO3JVBEt0xRgNc0vm3n2DTv8UiV9HXoaCp0Fy4IoObKMg.png',
 'fullStorageUrl': 'https://dev.supervisely.com/h5un6l2bnaz1vj8a9qgms4-public/images/original/S/j/hJ/PwhtY7x4zRQ5jvNETPgFMtjJ9bDOMkjJelovMYLJJL2wxsGS9dvSjQC428ORi2qIFYg4u1gbiN7DsRIfO3JVBEt0xRgNc0vm3n2DTv8UiV9HXoaCp0Fy4IoObKMg.png',
 'name': 'img0.png'}

Get list of all point clouds in the dataset

You can list all point clouds in the dataset.

Source code:

pcd_infos = api.pointcloud.get_list(dataset.id)
print(f"Dataset contains {len(pcd_infos)} point clouds")

Output:

# Dataset contains 3 point clouds

Download point clouds and context images from Supervisely

Download a point cloud

Download a point cloud from Supervisely to a local directory by its ID.

Source code:

save_path = "src/output/pcd_0.pcd"
api.pointcloud.download_path(pcd_info.id, save_path)
print(f"Point cloud has been successfully downloaded to '{save_path}'")

Output:

# Point cloud has been successfully downloaded to 'src/output/pcd_0.pcd'

Download a related context image from Supervisely to a local directory by the image ID.

Source code:

save_path = "src/output/img_0.png"
img_info = api.pointcloud.get_list_related_images(pcd_info.id)[0]
api.pointcloud.download_related_image(img_info["id"], save_path)
print(f"Context image has been successfully downloaded to '{save_path}'")

Output:

# Context image has been successfully downloaded to 'src/output/img_0.png'

Working with Point Cloud Episodes

Working with Point Cloud Episodes is similar, with the following differences:

  1. There is api.pointcloud_episode for working with episodes.

  2. Create new projects with type sly.ProjectType.POINT_CLOUD_EPISODES.

  3. Put the frame index in meta while uploading a pcd: meta = {"frame": idx}.

Note: in Supervisely each episode is treated as a dataset. Therefore, create a separate dataset every time you want to add a new episode.

Create new project and dataset

Create new project.

Source code:

project = api.project.create(
    workspace_id,
    name="Point Cloud Episodes Tutorial",
    type=sly.ProjectType.POINT_CLOUD_EPISODES,
    change_name_if_conflict=True,
)
print(f"Project ID: {project.id}")

Output:

# Project ID: 16197

Create new dataset.

Source code:

dataset = api.dataset.create(project.id, "dataset_1")
print(f"Dataset ID: {dataset.id}")

Output:

# Dataset ID: 54539

Upload one point cloud to Supervisely.

Source code:

meta = {"frame": 0}  # "frame" is a required field for Episodes
pcd_info = api.pointcloud_episode.upload_path(dataset.id, "pcd_0.pcd", "src/input/pcd/000000.pcd", meta=meta)
print(f'Point cloud "{pcd_info.name}" (frame={meta["frame"]}) uploaded to Supervisely')

Output:

# Point cloud "pcd_0.pcd" (frame=0) uploaded to Supervisely

Upload an entire point cloud episode to Supervisely.

Source code:

def read_cam_info(cam_info_file):
    with open(cam_info_file, "r") as f:
        cam_info = json.load(f)
    return cam_info


# 1. get paths (sorted so that point clouds, images, and cam_info files stay aligned by frame index)
input_path = "src/input"
pcd_files = sorted(Path(f"{input_path}/pcd").glob("*.pcd"))
img_files = sorted(Path(f"{input_path}/img").glob("*.png"))
cam_info_files = sorted(Path(f"{input_path}/cam_info").glob("*.json"))

# 2. get names and metas
pcd_metas = [{"frame": i} for i in range(len(pcd_files))]
img_metas = [read_cam_info(cam_info_file) for cam_info_file in cam_info_files]
pcd_names = list(map(os.path.basename, pcd_files))
img_names = list(map(os.path.basename, img_files))

# 3. upload
pcd_infos = api.pointcloud_episode.upload_paths(dataset.id, pcd_names, pcd_files, metas=pcd_metas)
img_hashes = api.pointcloud.upload_related_images(img_files)
img_infos = [
    {"entityId": pcd_infos[i].id, "name": img_names[i], "hash": img_hashes[i], "meta": img_metas[i]}
    for i in range(len(img_hashes))
]
api.pointcloud.add_related_images(img_infos)

print("Point Clouds Episode has been uploaded to Supervisely")

Output:

# Point Clouds Episode has been uploaded to Supervisely

Now you can explore and label it in the Supervisely labeling tool for Episodes.
