If you have a context photo captured together with the LiDAR scan, you can attach it to the point cloud. To do that, you need two additional matrices. They are used for matching 3D coordinates in the point cloud to 2D coordinates in the photo context:
Parameter meanings:
fx, fy are the focal lengths expressed in pixel units
cx, cy are the coordinates of the principal point, usually at the image center
rij and ti from the extrinsicMatrix are the rotation and translation parameters
The dot product of these matrices with an XYZ coordinate in 3D space gives the coordinates of a point (x=u, y=v) in the photo context:
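A minimal NumPy sketch of this projection, assuming a 3x3 intrinsic matrix and a 3x4 extrinsic matrix [R | t]; all numeric values below are hypothetical placeholders:

import numpy as np

# Hypothetical intrinsic parameters: focal lengths and principal point in pixels
fx, fy, cx, cy = 1266.4, 1266.4, 816.3, 491.5
intrinsic = np.array([
    [fx, 0.0, cx],
    [0.0, fy, cy],
    [0.0, 0.0, 1.0],
])

# Hypothetical extrinsic matrix [R | t]: identity rotation, small translation
extrinsic = np.hstack([np.eye(3), np.array([[0.0], [0.0], [1.5]])])

# Homogeneous 3D point taken from the point cloud
xyz = np.array([2.0, 1.0, 10.0, 1.0])

# Project into the image plane and normalize by the third component
u, v, w = intrinsic @ extrinsic @ xyz
u, v = u / w, v / w
print(f"Pixel coordinates in the photo context: u={u:.1f}, v={v:.1f}")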
Uploading a context photo to Supervisely
To attach a photo, you need to provide the matrices in a meta dict with the deviceId and sensorsData fields. The matrices must be included in the meta dict as flattened lists.
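As an illustration, a cam_info meta dict could look like the sketch below; the exact key names inside sensorsData and all numeric values are assumptions for illustration, not taken from a real sensor:

cam_info = {
    "deviceId": "CAM_2",  # hypothetical identifier of the camera that took the photo
    "sensorsData": {
        # 3x3 intrinsic matrix flattened row by row: fx, 0, cx, 0, fy, cy, 0, 0, 1
        "intrinsicMatrix": [1266.4, 0, 816.3, 0, 1266.4, 491.5, 0, 0, 1],
        # 3x4 extrinsic matrix [R | t] flattened row by row: r11, r12, r13, t1, ..., r33, t3
        "extrinsicMatrix": [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1.5],
    },
}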
The full code for uploading and attaching the context image
Source code:
# Input files
img_file = "src/input/img/000000.png"
cam_info_file = "src/input/cam_info/000000.json"

# 0. Read cam_info with the matrices (a meta dict)
with open(cam_info_file, "r") as f:
    cam_info = json.load(f)

# 1. Upload the image to Supervisely; this returns a hash for the image
img_hash = api.pointcloud.upload_related_image(img_file)

# 2. Create img_info, which matches the image to the point cloud by its ID
img_info = {"entityId": pcd_info.id, "name": "img_0.png", "hash": img_hash, "meta": cam_info}

# 3. Run the API command to attach the image
api.pointcloud.add_related_images([img_info])

print("Context image has been uploaded.")
Output:
# Context image has been uploaded.
More about the format of a photo context: Supervisely annotation JSON format
✅ The Supervisely API allows uploading multiple point clouds in a single request. The code sample below sends fewer requests, which leads to a significant speed-up compared to uploading items one by one.
Source code:
# Upload a batch of point clouds and related images
paths = ["src/input/pcd/000001.pcd", "src/input/pcd/000002.pcd"]
img_paths = ["src/input/img/000001.png", "src/input/img/000002.png"]
cam_paths = ["src/input/cam_info/000001.json", "src/input/cam_info/000002.json"]

pcd_infos = api.pointcloud.upload_paths(dataset.id, names=["pcd_1.pcd", "pcd_2.pcd"], paths=paths)
img_hashes = api.pointcloud.upload_related_images(img_paths)

img_infos = []
for i, cam_info_file in enumerate(cam_paths):
    # Read cam_info with the matrices for this image
    with open(cam_info_file, "r") as f:
        cam_info = json.load(f)
    img_info = {
        "entityId": pcd_infos[i].id,
        "name": f"img_{i}.png",
        "hash": img_hashes[i],
        "meta": cam_info,
    }
    img_infos.append(img_info)

result = api.pointcloud.add_related_images(img_infos)
print("Batch uploading has finished:", result)
Output:
# Batch uploading has finished: {'success': True}
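To double-check that the context images were attached, you can list the related images of one of the uploaded point clouds (the same call is used below in the download section); a short sketch:

related_images = api.pointcloud.get_list_related_images(pcd_infos[0].id)
print(f"Point cloud '{pcd_infos[0].name}' has {len(related_images)} related image(s)")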
Get information about Point Clouds and related context Images
Get info by name
Get information about a point cloud from Supervisely by its name.
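A minimal sketch, assuming api.pointcloud exposes the standard get_info_by_name helper found on other Supervisely entity APIs; the point cloud name here is hypothetical:

# Assumption: get_info_by_name is available on api.pointcloud
pcd_info = api.pointcloud.get_info_by_name(dataset.id, name="pcd_0.pcd")
print(f'Point cloud "{pcd_info.name}" has id={pcd_info.id}')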
To get the list of all point clouds in the dataset:

pcd_infos = api.pointcloud.get_list(dataset.id)
print(f"Dataset contains {len(pcd_infos)} point clouds")
Output:
# Dataset contains 3 point clouds
Download point clouds and context images from Supervisely
Download a point cloud
Download a point cloud from Supervisely to a local directory by id.
Source code:
save_path ="src/output/pcd_0.pcd"api.pointcloud.download_path(pcd_info.id, save_path)print(f"Point cloud has been successfully downloaded to '{save_path}'")
Output:
# Point cloud has been successfully downloaded to 'src/output/pcd_0.pcd'
Download a related context image
Download a related context image from Supervisely to a local directory by image id.
Source code:
save_path ="src/output/img_0.png"img_info = api.pointcloud.get_list_related_images(pcd_info.id)[0]api.pointcloud.download_related_image(img_info["id"], save_path)print(f"Context image has been successfully downloaded to '{save_path}'")
Output:
# Context image has been successfully downloaded to 'src/output/img_0.png'
Working with Point Cloud Episodes
Working with Point Cloud Episodes is similar, with the following differences:
Use api.pointcloud_episode for working with episodes.
Create new projects with type sly.ProjectType.POINT_CLOUD_EPISODES.
Put the frame index in meta while uploading a pcd: meta = {"frame": idx}.
Note: in Supervisely, each episode is treated as a dataset. Therefore, create a separate dataset every time you want to add a new episode, as in the sketch below.
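A minimal sketch of creating an episodes project and a dedicated dataset for one episode; the workspace object, the project name, and the dataset name are hypothetical:

# Hypothetical names; `workspace` is assumed to have been obtained earlier
project = api.project.create(
    workspace.id,
    "Point Cloud Episodes tutorial",
    type=sly.ProjectType.POINT_CLOUD_EPISODES,
    change_name_if_conflict=True,
)
# One dataset per episode
dataset = api.dataset.create(project.id, "episode_0", change_name_if_conflict=True)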
meta ={"frame":0}# "frame" is a required field for Episodespcd_info = api.pointcloud_episode.upload_path(dataset.id, "pcd_0.pcd", "src/input/pcd/000000.pcd", meta=meta)print(f'Point cloud "{pcd_info.name}" (frame={meta["frame"]}) uploaded to Supervisely')
Output:
# Point cloud "pcd_0.pcd" (frame=0) uploaded to Supervisely
Upload an entire point cloud episode to the Supervisely platform.
Source code:
def read_cam_info(cam_info_file):
    with open(cam_info_file, "r") as f:
        cam_info = json.load(f)
    return cam_info

# 1. Get paths (sorted, so that frames, images, and camera infos stay aligned)
input_path = "src/input"
pcd_files = sorted(Path(f"{input_path}/pcd").glob("*.pcd"))
img_files = sorted(Path(f"{input_path}/img").glob("*.png"))
cam_info_files = sorted(Path(f"{input_path}/cam_info").glob("*.json"))

# 2. Get names and metas
pcd_metas = [{"frame": i} for i in range(len(pcd_files))]
img_metas = [read_cam_info(cam_info_file) for cam_info_file in cam_info_files]
pcd_names = list(map(os.path.basename, pcd_files))
img_names = list(map(os.path.basename, img_files))

# 3. Upload
pcd_infos = api.pointcloud_episode.upload_paths(dataset.id, pcd_names, pcd_files, metas=pcd_metas)
img_hashes = api.pointcloud.upload_related_images(img_files)
img_infos = [
    {"entityId": pcd_infos[i].id, "name": img_names[i], "hash": img_hashes[i], "meta": img_metas[i]}
    for i in range(len(img_hashes))
]
api.pointcloud.add_related_images(img_infos)
print("Point Clouds Episode has been uploaded to Supervisely")
Output:
# Point Clouds Episode has been uploaded to Supervisely