How to transfer segmentation masks from 3D point cloud to 2D photo context image
Previously we made a tutorial on how to transfer segmentation masks from a 2D photo context image to a 3D point cloud. This time we will do the opposite - transfer segmentation masks from a 3D point cloud to a 2D photo context image.
In this tutorial, we will show an example of transferring segmentation masks from a 3D point cloud to a 2D photo context image. The segmented image will be uploaded to the Supervisely platform, after which it is possible to export the image segmentation masks in any popular format. We will take the point cloud, photo context image and camera calibration parameters from the KITTI dataset as an example, but this approach can be generalized to any data. The Supervisely Python SDK will be used for working with the point cloud and the photo context image.
The main steps of this tutorial are the following:
prepare input data: a 3D point cloud with segmentation mask, a photo context image, KITTI's sensor calibration files
project LiDAR 3D points on photo context image, get projections of masked points
build convex hull based on projections of masked points to create 2D segmentation mask on photo context image
Everything you need to reproduce this tutorial is available in the tutorial repository: source code, Dockerfile, demo data.
For running the code provided in this tutorial, you will need some Python modules: supervisely, open3d and alphashape. You can use the Dockerfile from the repository for convenience.
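If you prefer a manual setup instead of the Dockerfile, the modules can be installed with pip (a minimal example; exact versions are not pinned here):

```bash
pip install supervisely open3d alphashape
```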
Firstly, we will need a 3D point cloud with a segmentation mask:
Secondly, we will need a 2D photo context image related to this point cloud:
Finally, we will need camera calibration parameters to project LiDAR 3D points onto the 2D photo context image. The KITTI dataset provides several sensor calibration files:
calib_cam_to_cam.txt - contains matrices for camera-to-camera calibration
calib_velo_to_cam.txt - contains matrices for velodyne-to-camera registration
The file for camera-to-camera calibration contains the following data:
S_xx: 1x2 size of image xx before rectification
K_xx: 3x3 calibration matrix of camera xx before rectification
D_xx: 1x5 distortion vector of camera xx before rectification
R_xx: 3x3 rotation matrix of camera xx (extrinsic)
T_xx: 3x1 translation vector of camera xx (extrinsic)
S_rect_xx: 1x2 size of image xx after rectification
R_rect_xx: 3x3 rectifying rotation to make image planes co-planar
P_rect_xx: 3x4 projection matrix after rectification
For our task, we will need only P_rect_xx, R_rect_xx, R_xx and T_xx matrices.
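To illustrate how these matrices can be read, here is a small parsing sketch. The helper name, the choice of camera 2 (the left color camera in KITTI) and the error handling are our assumptions for illustration, not the exact code from the repository:

```python
import numpy as np

def read_calib_file(path):
    """Read a KITTI calibration txt file into a dict of numpy arrays."""
    data = {}
    with open(path, "r") as f:
        for line in f:
            if ":" not in line:
                continue
            key, value = line.split(":", 1)
            try:
                data[key] = np.array([float(x) for x in value.split()])
            except ValueError:
                # skip non-numeric entries such as the calibration date
                continue
    return data

cam_calib = read_calib_file("calib_cam_to_cam.txt")
P_rect_02 = cam_calib["P_rect_02"].reshape(3, 4)  # projection matrix after rectification (camera 2)
R_rect_02 = cam_calib["R_rect_02"].reshape(3, 3)  # rectifying rotation (camera 2)
R_02 = cam_calib["R_02"].reshape(3, 3)            # extrinsic rotation (camera 2)
T_02 = cam_calib["T_02"].reshape(3, 1)            # extrinsic translation (camera 2)
```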
The file for velodyne-to-camera registration contains the following data:
R: 3x3 rotation matrix
T: 3x1 translation vector
This data serves as a representation of the velodyne coordinate frame in camera coordinates. We will need the rotation matrix and the translation vector in order to transform a point in velodyne coordinates into the camera coordinate system.
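For instance, R and T can be combined into a single 4x4 homogeneous rigid-body transform (a sketch that reuses the read_calib_file helper assumed above):

```python
import numpy as np

# parse calib_velo_to_cam.txt with the read_calib_file helper from the previous sketch
velo_calib = read_calib_file("calib_velo_to_cam.txt")
R = velo_calib["R"].reshape(3, 3)  # rotation: velodyne -> camera reference frame
T = velo_calib["T"].reshape(3, 1)  # translation: velodyne -> camera reference frame

# 4x4 homogeneous transform from the velodyne frame to the camera reference frame
T_velo_to_cam = np.eye(4)
T_velo_to_cam[:3, :3] = R
T_velo_to_cam[:3, 3:] = T
```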
Import necessary libraries, load and set image display parameters. Then download the input point cloud and get the indexes of the masked points:
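Here is a minimal sketch of what this step could look like. The demo file names and the way the 3D mask indexes are stored are assumptions made for illustration; the exact code and data are in the tutorial repository:

```python
import json

import matplotlib.pyplot as plt
import numpy as np
import open3d as o3d
import supervisely as sly

# set image display parameters
plt.rcParams["figure.figsize"] = (20, 10)

# connect to the Supervisely instance (SERVER_ADDRESS and API_TOKEN are expected in the environment)
api = sly.Api.from_env()

# load the input point cloud (a local demo file name is assumed here)
pcd = o3d.io.read_point_cloud("demo_data/point_cloud.pcd")
points = np.asarray(pcd.points)

# indexes of the points that belong to the 3D segmentation mask
# (assumed to be stored as a JSON list of point indexes in the demo data)
with open("demo_data/mask_point_indexes.json") as f:
    mask_indexes = np.array(json.load(f), dtype=int)

# the photo context image related to this point cloud (assumed demo file name)
photo_context_image = sly.image.read("demo_data/photo_context.png")
```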
We have already covered the topic of sensor calibration parameters in our previous tutorial, but we will duplicate it here for convenience.
The KITTI documentation describes the transformation from LiDAR to camera $i$ as a composition of transformation matrices, where each matrix has been converted to its homogeneous representation. The difference here is that we have changed the notation and added the transformation to the desired camera reference.
For convenience, we will denote the transformation from LiDAR to camera $i$ in the same way as Isaac Berrios does in his sensor fusion tutorial, where:
LiDAR to camera reference → transforms a 3D point relative to the LiDAR to a 3D point relative to the Camera.
Rigid body transformation from camera 0 to camera i.
Camera i to rectified camera i reference.
Rectified camera i to 2D camera i (u, v, z) coordinate space.
3D LiDAR space to 2D camera i (u, v, z) coordinate space.
Where $(u, v, z)$ are the final camera coordinates after the rectification and projection transforms. In order to transform from the homogeneous image coordinates $\tilde{y}$ to the true $(u, v, z)$ image coordinates $y$, we will need to normalize by the depth and drop the 1:
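Written out explicitly, with illustrative symbols that follow the order of the list above rather than the original notation, the full projection and the normalization look like this:

$$\tilde{y} \;=\; \mathbf{T}_{\mathrm{velo}\rightarrow\mathrm{cam}_i}\,x \;=\; \mathbf{P}_{\mathrm{rect}_i}\;\mathbf{R}_{\mathrm{rect}_i}\;\mathbf{T}_{\mathrm{cam}_0\rightarrow\mathrm{cam}_i}\;\mathbf{T}_{\mathrm{velo}\rightarrow\mathrm{cam}_{\mathrm{ref}}}\;x$$

$$y \;=\; (u,\,v,\,z) \;=\; \left(\tilde{y}_1/\tilde{y}_3,\;\; \tilde{y}_2/\tilde{y}_3,\;\; \tilde{y}_3\right)$$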
Next step - project the LiDAR 3D points onto the photo context image and get the projections of the points that belong to the segmented area in the point cloud (mask projections).
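A sketch of this projection step, reusing the matrices and variables from the previous sketches (the choice of camera 2 and the bounds check are illustrative, not the tutorial's exact code):

```python
import numpy as np

# convert the rectifying rotation and the camera 0 -> camera 2 extrinsics
# (from the calibration sketches above) to homogeneous 4x4 matrices
R_rect_h = np.eye(4)
R_rect_h[:3, :3] = R_rect_02

T_cam0_to_cam2 = np.eye(4)
T_cam0_to_cam2[:3, :3] = R_02
T_cam0_to_cam2[:3, 3:] = T_02

# full LiDAR -> image projection matrix (3x4)
proj_mat = P_rect_02 @ R_rect_h @ T_cam0_to_cam2 @ T_velo_to_cam

# project all LiDAR points: (N, 3) -> homogeneous (4, N) -> image plane (3, N)
points_h = np.hstack([points, np.ones((points.shape[0], 1))]).T
proj = proj_mat @ points_h
proj[:2] /= proj[2]  # normalize by depth
u, v, z = proj[0], proj[1], proj[2]

# keep only points in front of the camera and inside the image bounds
img_h, img_w = photo_context_image.shape[:2]
in_view = (z > 0) & (u >= 0) & (u < img_w) & (v >= 0) & (v < img_h)

# select the projections of the masked points only
is_masked = np.zeros(points.shape[0], dtype=bool)
is_masked[mask_indexes] = True
mask_proj = np.stack([u, v], axis=1)[in_view & is_masked]
```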
In order to create a segmentation mask from the 3D point projections, we are going to build a convex hull - the smallest convex set that encloses all the points, forming a convex polygon. We found the alphashape implementation of the convex hull to be the most effective, but it is also possible to use the cv2 and scipy convex hull implementations.
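One possible way to build the hull with alphashape and rasterize it into a binary mask (the alpha value, the use of cv2.fillPoly and the assumption that the hull is a single polygon are ours):

```python
import alphashape
import cv2
import numpy as np

# build a hull around the 2D projections of the masked points;
# alpha = 0 gives the convex hull, larger values allow a tighter (concave) outline
hull = alphashape.alphashape([tuple(p) for p in mask_proj], 0.0)

# rasterize the hull polygon into a binary mask of the image size
# (assuming the result is a single shapely Polygon)
polygon = np.array(hull.exterior.coords, dtype=np.int32)
mask_2d = np.zeros((img_h, img_w), dtype=np.uint8)
cv2.fillPoly(mask_2d, [polygon], 255)
```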
Final step - upload the result to the Supervisely platform. This will create opportunities for convenient export of segmentation masks and other data operations.
You will need team and workspace IDs.
Here is how to get your team ID:
Here is how to get your workspace ID:
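A sketch of the upload step with the Supervisely SDK; the workspace ID, the project, dataset and class names are placeholders, and the exact code may differ from the tutorial repository:

```python
import supervisely as sly

api = sly.Api.from_env()
workspace_id = 123  # your workspace ID (placeholder)

# create a project and a dataset for the result
project = api.project.create(workspace_id, "photo-context-masks", change_name_if_conflict=True)
dataset = api.dataset.create(project.id, "ds0", change_name_if_conflict=True)

# define the project meta with a single bitmap class for the transferred mask
obj_class = sly.ObjClass("mask", sly.Bitmap)
meta = sly.ProjectMeta(obj_classes=[obj_class])
api.project.update_meta(project.id, meta.to_json())

# upload the photo context image
image_info = api.image.upload_np(dataset.id, "photo_context.png", photo_context_image)

# wrap the binary 2D mask into an annotation and upload it
label = sly.Label(sly.Bitmap(mask_2d.astype(bool)), obj_class)
ann = sly.Annotation(img_size=photo_context_image.shape[:2], labels=[label])
api.annotation.upload_ann(image_info.id, ann)
```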
Result:
Now that the image has been uploaded to the Supervisely platform, you can easily export the image mask annotations in any suitable format using the corresponding export apps.
In this tutorial, we used 3D mask guidance, sensor calibration matrices and a convex hull algorithm to project a 3D segmentation mask onto a 2D photo context image. This approach can be useful when there is a need to label both 3D point clouds and the corresponding photo context images: instead of manually labeling both the point clouds and the images, you can label only the point clouds and transfer the 3D mask annotations to the images.
This tutorial is based on the sensor fusion tutorial by Isaac Berrios.