3D segmentation masks projection on 2D photo context image

How to transfer segmentation masks from 3D point cloud to 2D photo context image

Introduction

Previously, we made a tutorial on how to transfer segmentation masks from a 2D photo context image to a 3D point cloud. This time we will do the opposite: transfer segmentation masks from a 3D point cloud to a 2D photo context image.

In this tutorial, we will show an example of transferring segmentation masks from a 3D point cloud to a 2D photo context image. The segmented image will be uploaded to the Supervisely Platform, where its segmentation masks can then be exported in any popular format. We will take a point cloud, photo context image and camera calibration parameters from the KITTI dataset as an example, but this approach can be generalized to any data. Supervisely's 3D Point Cloud labeling tool and Image labeling tool will be used for working with the point cloud and the photo context image respectively.

The main steps of this tutorial are the following:

  • prepare input data: a 3D point cloud with segmentation mask, a photo context image, KITTI's sensor calibration files

  • project LiDAR 3D points on photo context image, get projections of masked points

  • build convex hull based on projections of masked points to create 2D segmentation mask on photo context image

Everything you need to reproduce this tutorial is on GitHub: source code, Dockerfile, demo data.

Input data overview: 3D point cloud with segmentation mask, photo context image, camera calibration parameters

Firstly, we will need a 3D point cloud with a segmentation mask:

input point cloud

Secondly, we will need a 2D photo context image related to this point cloud:

photo context image

Finally, we will need camera calibration parameters to project LiDAR 3D points on the 2D photo context image. The KITTI dataset provides several sensor calibration files:

  • calib_cam_to_cam.txt - contains matrices for camera-to-camera calibration

  • calib_velo_to_cam.txt - contains matrices for velodyne-to-camera registration

The camera-to-camera calibration file contains the following data (source: KITTI README):

  • S_xx: 1x2 size of image xx before rectification

  • K_xx: 3x3 calibration matrix of camera xx before rectification

  • D_xx: 1x5 distortion vector of camera xx before rectification

  • R_xx: 3x3 rotation matrix of camera xx (extrinsic)

  • T_xx: 3x1 translation vector of camera xx (extrinsic)

  • S_rect_xx: 1x2 size of image xx after rectification

  • R_rect_xx: 3x3 rectifying rotation to make image planes co-planar

  • P_rect_xx: 3x4 projection matrix after rectification

For our task, we will need only P_rect_xx, R_rect_xx, R_xx and T_xx matrices.

The velodyne-to-camera registration file contains the following data:

  • R: 3x3 rotation matrix

  • T: 3x1 translation vector

This data represents the velodyne coordinate frame in camera coordinates. We will need the rotation matrix and translation vector in order to transform points from velodyne coordinates into the camera coordinate system.

Environment preparation and libraries import

For running the code provided in this tutorial, you will need some Python modules: supervisely, open3d and alphashape. You can use this Dockerfile for convenience:

Import necessary libraries, load Supervisely account credentials and set image display parameters:
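Here is a minimal sketch of this step (the credentials file path, as well as the cv2, matplotlib and dotenv imports, are assumptions that may differ in your setup):

```python
import os

import alphashape
import cv2
import matplotlib.pyplot as plt
import numpy as np
import open3d as o3d
import supervisely as sly
from dotenv import load_dotenv

# load Supervisely credentials (server address and API token) from an env file;
# the path below is an assumption - adjust it to wherever you keep your credentials
load_dotenv(os.path.expanduser("~/supervisely.env"))
api = sly.Api.from_env()

# make inline figures large enough to inspect point projections
plt.rcParams["figure.figsize"] = (20, 10)
```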

Download input point cloud and its annotation

Download input point cloud and get indexes of masked points:
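A possible sketch of this step is shown below; the point cloud ID is a placeholder, and the way the masked point indexes are extracted from the downloaded annotation depends on how the 3D mask is stored, so that part is left as an assumption:

```python
POINTCLOUD_ID = 123456  # placeholder: ID of the labeled point cloud in Supervisely

# download the point cloud file locally and read it with open3d
local_pcd_path = "data/input.pcd"
api.pointcloud.download_path(POINTCLOUD_ID, local_pcd_path)
pcd = o3d.io.read_point_cloud(local_pcd_path)
points = np.asarray(pcd.points)  # (N, 3) LiDAR points in velodyne coordinates

# download the annotation and collect the indexes of points covered by the 3D mask
ann_json = api.pointcloud.annotation.download(POINTCLOUD_ID)
masked_idx = ...  # extract the point indexes of the mask figure from ann_json
```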

Get sensor calibration parameters

We already covered sensor calibration parameters in our previous tutorial, but we will repeat the key points here for convenience.

The KITTI paper describes the transformation from LiDAR to camera $i$ as follows, where each transformation matrix has been converted to its homogeneous representation. The difference here is that we have changed the notation and added the transformation to the desired camera reference.

$$\tilde{y} = P^{\text{cam}_i}_{\text{rect}_i} R^{\text{rect}_i}_{\text{ref}_i} T^{\text{ref}_i}_{\text{ref}_0} T^{\text{ref}_0}_{\text{velo}} \tilde{x}, \qquad \text{where } \tilde{x} = [x, y, z, 1]^T$$

$$\tilde{y} = (\tilde{u}, \tilde{v}, z, 1)$$

For convenience, we will denote the transformation from LiDAR to camera $i$ as Isaac Berrios proposed in his sensor fusion tutorial:

$$T^{\text{cam}_i}_{\text{velo}} = P^{\text{cam}_i}_{\text{rect}_i} R^{\text{rect}_i}_{\text{ref}_i} T^{\text{ref}_i}_{\text{ref}_0} T^{\text{ref}_0}_{\text{velo}}$$

Where:

  • $T^{\text{ref}_0}_{\text{velo}}$ - LiDAR to camera reference: transforms a 3D point relative to the LiDAR to a 3D point relative to the camera.

  • $T^{\text{ref}_i}_{\text{ref}_0}$ - rigid body transformation from camera 0 to camera $i$.

  • $R^{\text{rect}_i}_{\text{ref}_i}$ - camera $i$ to rectified camera $i$ reference.

  • $P^{\text{cam}_i}_{\text{rect}_i}$ - rectified camera $i$ to 2D camera $i$ (u, v, z) coordinate space.

  • $T^{\text{cam}_i}_{\text{velo}}$ - 3D LiDAR space to 2D camera $i$ (u, v, z) coordinate space.

Here (u, v, z) are the final camera coordinates after the rectification and projection transforms. In order to transform from homogeneous image coordinates $\tilde{y}$ to true (u, v, z) image coordinates $y$, we need to normalize by the depth $z$ and drop the trailing 1:

$$y = \left( \frac{\tilde{u}}{z}, \frac{\tilde{v}}{z}, z \right)$$
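Below is a minimal sketch of how these matrices can be read from the KITTI calibration files and chained into the LiDAR-to-image transform described above; the parsing helper and the camera index are assumptions:

```python
def read_calib_file(path: str) -> dict:
    """Parse a KITTI calibration file into a dict of flat numpy arrays."""
    calib = {}
    with open(path) as f:
        for line in f:
            if ":" not in line:
                continue
            key, value = line.split(":", 1)
            try:
                calib[key.strip()] = np.array([float(x) for x in value.split()])
            except ValueError:
                pass  # skip non-numeric entries such as the calibration date
    return calib

cam2cam = read_calib_file("calib_cam_to_cam.txt")
velo2cam = read_calib_file("calib_velo_to_cam.txt")

i = 2  # assumption: use camera 2 (left color camera)

# P_rect_i: 3x4 projection matrix after rectification
P_rect = cam2cam[f"P_rect_{i:02d}"].reshape(3, 4)

# R_rect_i expanded to a 4x4 homogeneous rotation
R_rect = np.eye(4)
R_rect[:3, :3] = cam2cam[f"R_rect_{i:02d}"].reshape(3, 3)

# T_ref0_refi: rigid body transform from camera 0 to camera i (from R_xx and T_xx)
T_ref0_refi = np.eye(4)
T_ref0_refi[:3, :3] = cam2cam[f"R_{i:02d}"].reshape(3, 3)
T_ref0_refi[:3, 3] = cam2cam[f"T_{i:02d}"]

# T_velo_ref0: velodyne -> camera 0 reference frame (from calib_velo_to_cam.txt)
T_velo_ref0 = np.eye(4)
T_velo_ref0[:3, :3] = velo2cam["R"].reshape(3, 3)
T_velo_ref0[:3, 3] = velo2cam["T"]

# full LiDAR -> image-plane transform, a 3x4 matrix
T_velo_cam = P_rect @ R_rect @ T_ref0_refi @ T_velo_ref0
```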

Project LiDAR 3D points on 2D photo context image

The next step is to project LiDAR 3D points onto the photo context image and get the projections of the points which belong to the segmented area in the point cloud (mask projections).
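A possible sketch of this projection step, reusing `points`, `masked_idx` and `T_velo_cam` from the previous snippets:

```python
# select the 3D points that belong to the segmented area
masked_points = points[masked_idx]

# convert to homogeneous coordinates and apply the LiDAR -> image transform
masked_hom = np.hstack([masked_points, np.ones((len(masked_points), 1))])
proj = (T_velo_cam @ masked_hom.T).T  # (M, 3) homogeneous image coords (u~, v~, z)

# keep only points in front of the camera, then normalize by depth to get pixels
proj = proj[proj[:, 2] > 0]
mask_uv = proj[:, :2] / proj[:, 2:3]  # (u, v) pixel coordinates of the mask projections
```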

masked points projections

Build 2D segmentation mask from 3D point projections

In order to create a segmentation mask from the 3D point projections, we are going to build a convex hull - the smallest convex polygon that encloses all the points. We found the alphashape implementation to be the most effective, but it is also possible to use the cv2 and scipy convex hull implementations.
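A minimal sketch of this step with alphashape is shown below; with alpha set to 0 the result is the convex hull, while larger values produce a tighter (concave) outline. The image size is an assumption based on typical KITTI frames:

```python
from shapely.geometry import Polygon

# build the hull around the 2D projections of the masked points
hull = alphashape.alphashape([tuple(p) for p in mask_uv], 0.0)

if isinstance(hull, Polygon):
    polygon_points = np.array(hull.exterior.coords)

    # rasterize the hull into a binary mask of the photo context image size
    img_h, img_w = 375, 1242  # assumption: size of the KITTI photo context image
    mask = np.zeros((img_h, img_w), dtype=np.uint8)
    cv2.fillPoly(mask, [polygon_points.astype(np.int32)], 1)
```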

result

Upload image and its mask annotation to Supervisely platform

The final step is to upload the result to the Supervisely platform, which enables convenient export of segmentation masks and other data operations.

You will need team and workspace IDs.

Here is how to get your team ID:

Here is how to get your workspace ID:
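Below is a hedged sketch of the upload step; the IDs, names and paths are placeholders, and the exact SDK calls may differ slightly depending on your supervisely version:

```python
TEAM_ID = 1       # placeholder: your team ID
WORKSPACE_ID = 2  # placeholder: your workspace ID

# create a project and dataset for the photo context image
project = api.project.create(WORKSPACE_ID, "photo context segmentation", change_name_if_conflict=True)
dataset = api.dataset.create(project.id, "ds0", change_name_if_conflict=True)

# define a class for the transferred mask and save it to the project meta
obj_class = sly.ObjClass("mask", sly.Bitmap)
meta = sly.ProjectMeta(obj_classes=[obj_class])
api.project.update_meta(project.id, meta.to_json())

# upload the photo context image
image_info = api.image.upload_path(dataset.id, "photo_context.png", "data/photo_context.png")

# wrap the binary mask into an annotation and upload it
label = sly.Label(sly.Bitmap(mask.astype(bool)), obj_class)
ann = sly.Annotation(img_size=(img_h, img_w), labels=[label])
api.annotation.upload_ann(image_info.id, ann)
```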

Result:

segmented photo context image

Data export

Now that the image has been uploaded to the Supervisely platform, you can easily export the image mask annotations in any suitable format using the corresponding apps: Export as masks, Export to COCO mask, Export to Pascal VOC, Export to Cityscapes, Export to YOLOv8 format.

Conclusion

In this tutorial, we used 3D mask guidance, sensor calibration matrices and the convex hull algorithm to project a 3D segmentation mask onto a 2D photo context image. This approach can be useful when you need to label both 3D point clouds and the corresponding photo context images: instead of manually labeling both, you can label only the point clouds and transfer the 3D mask annotations to the images.

Acknowledgement

This tutorial is based on great work by Isaac Berrios.
