
Spatial labels on volumes

How to create Mask3D annotations on volumes in Python


Last updated 7 days ago


Introduction

In this tutorial, you will learn how to programmatically create 3D annotations for volumes and upload them to the Supervisely platform.

Supervisely supports several types of shapes/geometries for volume annotation; here we will focus on the primary type: Mask3D.

You can explore other types as well, like Mask (also known as Bitmap), Bounding Box (Rectangle), and Polygon. However, you'll find more information about them in other articles.

Learn more about the Supervisely Annotation in JSON format for volumes.

Read about our enterprise-grade DICOM labeling toolbox in the blog post "Best DICOM & NIfTI annotation tools for Medical Imaging AI" to learn about all the advantages of our platform.

Labeling toolbox

Prepare data for annotations

You can create 3D annotation shapes in several ways:

  1. NRRD files

    The easiest way to create a Mask3D annotation in Supervisely is to use an NRRD file containing a 3D figure whose dimensions correspond to those of the volume.

  2. NumPy Arrays

    Another simple way to create a Mask3D annotation is to use a NumPy array, where values of 1 represent the object and values of 0 represent empty space.

    On the right side, you can see a volume with a pink cuboid. Let's represent this volume as a NumPy array.

    figure_array = np.zeros((3, 4, 2))

    To draw a pink cuboid on it, you need to assign a value of 1 to the necessary cells. In the code below, each cell is indicated by three axes [axis_0, axis_1, axis_2].

    figure_array[0, 1, 0] = 1 
    figure_array[0, 2, 0] = 1
    figure_array[1, 1, 0] = 1
    figure_array[1, 2, 0] = 1

    In the Python code example section, we will create a NumPy array that represents a foreign body in the lung as a sphere.

  3. Images

    You can also use flat mask annotations, such as black-and-white images, to create a Mask3D. You just need to know which plane and slice the mask refers to.

    If your flat annotation doesn't match the dimensions of the plane, you also need to know its PointLocation: the coordinates of the mask's top-left corner on the slice, i.e. the mask's initial position on the canvas or image. This lets the mask be applied to the image correctly.

    plane = 'axial'
    slice_index = 69
    point_location = [36, 91]
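To make the plane/slice/PointLocation idea concrete, here is a minimal pure-NumPy sketch of placing a 2D mask onto one slice of a 3D volume. The shapes and the axis used for axial slices are assumptions for illustration only; on the platform, this placement is handled by the SDK (see `Mask3D.add_mask_2d` in the full example below).

```python
import numpy as np

# Assumed volume shape for illustration: two in-plane axes and 139 axial slices.
volume_shape = (512, 512, 139)
mask3d = np.zeros(volume_shape, dtype=np.bool_)

# A 2D mask, e.g. loaded from a black-and-white image (here just a filled block).
mask2d = np.ones((100, 120), dtype=np.bool_)

slice_index = 69    # which axial slice the mask belongs to
origin = (36, 91)   # top-left corner of the mask on that slice (PointLocation)

# Place the 2D mask onto the chosen slice at the given origin.
r, c = origin
mask3d[r : r + mask2d.shape[0], c : c + mask2d.shape[1], slice_index] = mask2d

print(int(mask3d.sum()))  # 12000 voxels, all on slice 69
```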

Python code example

Import libraries and init API client

import os
import numpy as np
import cv2
from dotenv import load_dotenv
import supervisely as sly



# To init API for communicating with Supervisely Instance. 
# It needs to load environment variables with credentials and workspace ID
if sly.is_development():
    load_dotenv("local.env")
    load_dotenv(os.path.expanduser("~/supervisely.env"))

api = sly.Api()

# Check that you did everything right - the API client is initialized with the correct
# credentials and the correct workspace ID is defined in the `local.env` file
workspace_id = sly.env.workspace_id()
workspace = api.workspace.get_info_by_id(workspace_id)
if workspace is None:
    sly.logger.warning("You should put correct WORKSPACE_ID value to local.env")
    raise ValueError(f"Workspace with id={workspace_id} not found")

Create project and upload volumes

Create an empty project with the name "Volumes Demo" with one dataset "CTChest" in your workspace on the server. If a project with the same name exists in your workspace, it will be automatically renamed (Volumes Demo_001, Volumes Demo_002, etc.) to avoid name collisions.

# create empty project and dataset on server
project_info = api.project.create(
    workspace.id,
    name="Volumes Demo",
    type=sly.ProjectType.VOLUMES,
    change_name_if_conflict=True,
)
dataset_info = api.dataset.create(project_info.id, name="CTChest")

sly.logger.info(
    f"Project with id={project_info.id} and dataset with id={dataset_info.id} have been successfully created"
)

# upload NRRD volume as ndarray into dataset
volume_info = api.volume.upload_nrrd_serie_path(
    dataset_info.id,
    name="CTChest.nrrd",
    path="data/CTChest_nrrd/CTChest.nrrd",
)

Create annotations and upload into the volume


# create annotation classes
lung_class = sly.ObjClass("lung", sly.Mask3D, color=[111, 107, 151])
body_class = sly.ObjClass("body", sly.Mask3D, color=[209, 192, 129])
tumor_class = sly.ObjClass("tumor", sly.Mask3D, color=[255, 153, 204])

# update project meta with new classes
api.project.append_classes(project_info.id, [lung_class, tumor_class, body_class])

################################  1  NRRD file    ######################################

mask3d_path = "data/mask/lung.nrrd"

# create 3D Mask annotation for 'lung' using NRRD file with 3D object
lung_mask = sly.Mask3D.create_from_file(mask3d_path)
lung = sly.VolumeObject(lung_class, mask_3d=lung_mask)

###############################  2  NumPy array    #####################################

# create 3D Mask annotation for 'tumor' using NumPy array
tumor_mask = sly.Mask3D(generate_tumor_array())
tumor = sly.VolumeObject(tumor_class, mask_3d=tumor_mask)

##################################  3  Image    ########################################

image_path = "data/mask/body.png"

# create 3D Mask annotation for 'body' using image file
mask = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
# create an empty mask with the same dimensions as the volume
body_mask = sly.Mask3D(np.zeros(volume_info.file_meta["sizes"], np.bool_))
# fill this mask with the 2D image mask for the desired plane.
# to avoid errors, use the constants: sly.Plane.AXIAL, sly.Plane.CORONAL, sly.Plane.SAGITTAL
body_mask.add_mask_2d(mask, plane_name=sly.Plane.AXIAL, slice_index=69, origin=[36, 91])
body = sly.VolumeObject(body_class, mask_3d=body_mask)

# create volume annotation object
volume_ann = sly.VolumeAnnotation(
    volume_info.meta,
    objects=[lung, tumor, body],
    spatial_figures=[lung.figure, tumor.figure, body.figure],
)

# upload VolumeAnnotation
api.volume.annotation.append(volume_info.id, volume_ann)
sly.logger.info(
    f"Annotation has been successfully uploaded to the volume {volume_info.name} in dataset with ID={volume_info.dataset_id}"
)

Auxiliary function for generating tumor NumPy array:


def generate_tumor_array():
    """
    Generate a NumPy array representing the tumor as a sphere
    """
    width, height, depth = (512, 512, 139)  # volume shape
    center = np.array([128, 242, 69])  # sphere center in the volume
    radius = 25
    x, y, z = np.ogrid[:width, :height, :depth]
    # Calculate the squared distances from each point to the center
    squared_distances = (x - center[0]) ** 2 + (y - center[1]) ** 2 + (z - center[2]) ** 2
    # Create a boolean mask by checking if squared distances are less than or equal to the square of the radius
    tumor_array = squared_distances <= radius**2
    tumor_array = tumor_array.astype(np.uint8)
    return tumor_array
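As a quick sanity check (not part of the original tutorial), you can verify that the generated mask really is a radius-25 sphere by comparing its voxel count with the analytic sphere volume:

```python
import numpy as np

def generate_tumor_array():
    """Same sphere generator as above, repeated here so the check is self-contained."""
    width, height, depth = (512, 512, 139)
    center = np.array([128, 242, 69])
    radius = 25
    x, y, z = np.ogrid[:width, :height, :depth]
    squared_distances = (x - center[0]) ** 2 + (y - center[1]) ** 2 + (z - center[2]) ** 2
    return (squared_distances <= radius**2).astype(np.uint8)

tumor = generate_tumor_array()
print(tumor.shape)  # (512, 512, 139) - matches the volume shape

voxels = int(tumor.sum())
expected = 4 / 3 * np.pi * 25**3  # analytic volume of a radius-25 sphere, ~65450
print(abs(voxels - expected) / expected < 0.02)  # the lattice count stays close
```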

Download existing annotations, manipulate the geometry & upload the result

volume_id = int(os.getenv("VOLUME_ID"))  # environment variables are strings, so cast to int
project_meta = sly.ProjectMeta.from_json(api.project.get_meta(volume_info.project_id))
key_id_map = sly.KeyIdMap()

################################## 4 Download Ann ########################################

# download json annotation and deserialize it
ann_json = api.volume.annotation.download(volume_id)
ann = sly.VolumeAnnotation.from_json(ann_json, project_meta, key_id_map)

# load spatial geometries
for figure in ann.spatial_figures:
    api.volume.figure.load_sf_geometry(figure, key_id_map)


##########################  5 Alter Geometries & reupload  ##############################

new_sfs = []
for figure in ann.spatial_figures:
    # invert the mask
    inverted_mask_array = np.invert(figure.geometry.data)

    # create a new object with the inverted mask
    new_geometry = sly.Mask3D.clone(figure.geometry)
    new_geometry.data = inverted_mask_array

    # add the new figure to the list of spatial figures
    new_sfs.append(sly.VolumeFigure.clone(figure, geometry=new_geometry))

# clone the annotation with the new spatial figures
new_ann = sly.VolumeAnnotation.clone(ann, spatial_figures=new_sfs)

# upload the new annotation
api.volume.annotation.append(volume_id, new_ann, key_id_map)

Convert Mask3D geometries into meshes

Spatial figures can be easily converted into meshes:

ann_json = api.volume.annotation.download(volume_id)
ann = sly.VolumeAnnotation.from_json(ann_json, project_meta, key_id_map)

for figure in ann.spatial_figures:
    # load the spatial geometry first
    api.volume.figure.load_sf_geometry(figure, key_id_map)

    # option 1: convert to python Trimesh object
    mesh = sly.volume.volume.convert_3d_geometry_to_mesh(figure.geometry)

    # option 2: export to file
    out_path = str(figure.geometry.sly_id) + ".stl"  # or ".obj"
    sly.volume.volume.export_3d_as_mesh(figure.geometry, out_path)

If you need to use custom conversion parameters, they can be passed into each method.

Example for python object:

mesh = sly.volume.volume.convert_3d_geometry_to_mesh(
    figure.geometry,
    spacing=(0.9, 0.9, 1.5),
    level=0.8,
    apply_decimation=True,
    decimation_fraction=0.4,
)

Example for file export:

out_path = str(figure.geometry.sly_id) + ".stl"  # or ".obj"
conversion_kwargs = {
    "spacing": (0.9, 0.9, 1.5),
    "level": 0.8,
    "apply_decimation": True,
    "decimation_fraction": 0.4,
}
sly.volume.volume.export_3d_as_mesh(figure.geometry, out_path, kwargs=conversion_kwargs)

How to debug this tutorial

Step 1. Prepare the ~/supervisely.env file with credentials. Learn more here.

Step 2. Clone the repository with source code and demo data, and create a Virtual Environment:

git clone https://github.com/supervisely-ecosystem/dicom-spatial-figures
cd dicom-spatial-figures
./create_venv.sh

Step 3. Open the repository directory in Visual Studio Code.

code -r .

Step 4. Change the workspace ID in the local.env file by copying the ID from the context menu of the workspace. A new project with an annotated volume will be created in the workspace you define:

WORKSPACE_ID=696 # ⬅️ change value

Step 5. Start debugging src/main.py

To sum up

In this tutorial, we learned:

  • What types of annotations exist for volumes

  • How to create a project and dataset, and upload a volume

  • How to create 3D annotations and upload them into a volume

  • How to configure Python development for Supervisely

  • How to download and manipulate spatial geometries

Everything you need to reproduce this tutorial is on GitHub: source code, Visual Studio Code configuration, and a shell script for creating the virtual environment.

You can find an example NRRD file at data/mask/lung.nrrd in the GitHub repository for this tutorial.

You can find an example image file at data/mask/body.png in the GitHub repository for this tutorial.

In the GitHub repository for this tutorial, you will find the full Python script.

