Labeling point clouds for object detection is a daunting task. The data is massive, and its 3D nature makes it harder for an annotator to find the objects of interest than in image labeling. To facilitate the point cloud labeling process, I explored and developed a simple algorithm that takes advantage of the maturity of image object detection to simplify the labeling of 3D objects. The idea is that for a dataset consisting of image-point cloud pairs, like the one obtained with my robot, we can apply object detection to the images and then create 3D frustums from the 2D bounding boxes.

By combining a 2D bounding box with the intrinsic and extrinsic camera parameters, we can back-project the corners of the bounding box to get 3D points. These points define the vertices of a 3D frustum in world space. Essentially, each frustum can be thought of as the 3D field of view of a camera: the bounding box in the 2D image acts as the aperture of this camera, and the frustum encapsulates all the 3D points that the camera could potentially observe. In other words, the algorithm metaphorically places multiple virtual cameras in 3D space, each focusing on a specific object identified by the 2D object detector.
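A minimal sketch of this back-projection, assuming a pinhole camera model with intrinsics `K` and world-to-camera extrinsics `R`, `t` (the function and parameter names here are illustrative, not taken from my actual code):

```python
import numpy as np

def backproject_pixel(u, v, depth, K, R, t):
    """Back-project pixel (u, v) at a given depth to a 3D world point.

    K: 3x3 camera intrinsics; R, t: world-to-camera rotation and translation.
    """
    # Pixel coordinates -> normalized ray in the camera frame
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Scale the ray so its z-component equals the chosen depth
    p_cam = ray * depth
    # Camera frame -> world frame (invert the extrinsics)
    return R.T @ (p_cam - t)
```

Applying this to the four corners of a bounding box, at a near and a far depth, yields the eight vertices of the frustum.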

Let’s break it down:

**2D object detection:** I used the YOLO (You Only Look Once) model (v8), a real-time object detection system, to detect objects in the image. Detected objects are represented by 2D bounding boxes, which are then used to generate 3D frustums in the next step.

**3D frustum generation:** A frustum is the portion of a solid that lies between one or two parallel planes cutting through it. Here, we use the 2D bounding boxes to generate 3D frustums in the point cloud. Each frustum corresponds to an object detected in the 2D image and contains the relevant 3D points of the point cloud.
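The frustum construction can be sketched as follows, assuming pinhole intrinsics `K`, world-to-camera extrinsics `R`, `t`, and hand-picked near/far depths (all names are illustrative):

```python
import numpy as np

def frustum_vertices(bbox, K, R, t, d_min, d_max):
    """Build the 8 world-space vertices of a frustum from a 2D bounding box.

    bbox: (x1, y1, x2, y2) in pixels; d_min, d_max: near and far depths
    that bound the frustum along the viewing direction.
    """
    x1, y1, x2, y2 = bbox
    corners = [(x1, y1), (x2, y1), (x2, y2), (x1, y2)]
    K_inv = np.linalg.inv(K)
    verts = []
    for d in (d_min, d_max):          # near plane first, then far plane
        for (u, v) in corners:
            ray = K_inv @ np.array([u, v, 1.0])  # pixel -> camera ray
            verts.append(R.T @ (ray * d - t))    # camera -> world frame
    return np.array(verts)  # shape (8, 3)
```

The near/far depths can come from sensor range limits or from the rough extent of the scene.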

**Point cloud segmentation:** The algorithm then filters the points inside each frustum to create smaller pieces of the point cloud. Each piece contains the 3D points corresponding to one object detected in the 2D image.

**Visualization and saving:** The final step of the algorithm is to visualize these small chunks and optionally save them for further processing or labeling.
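As an example of the saving step, each segmented chunk can be written to a standard point cloud format. Here is a small, dependency-free ASCII PLY writer (a sketch; in practice a library such as Open3D can do this for you):

```python
import numpy as np

def save_ply(points, path):
    """Write an (N, 3) array of 3D points to an ASCII PLY file."""
    points = np.asarray(points, dtype=float)
    header = "\n".join([
        "ply",
        "format ascii 1.0",
        f"element vertex {len(points)}",
        "property float x",
        "property float y",
        "property float z",
        "end_header",
    ])
    body = "\n".join(" ".join(f"{c:.6f}" for c in p) for p in points)
    with open(path, "w") as f:
        f.write(header + "\n" + body + "\n")
```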

After saving the filtered point clouds, they can be uploaded to an annotation tool for easy labeling.

The algorithm involves several techniques that may require a basic understanding of image processing and 3D geometry. Let’s go through them:

- 2D-to-3D back-projection: Back-projection is a technique for converting 2D image points into 3D points, given the camera parameters and a depth.
- 3D frustum generation: I used the corners of the 2D bounding box together with a minimum and a maximum depth to generate the eight vertices of the 3D frustum. Then I back-projected those vertices into 3D space using the camera intrinsics and the rotation and translation between camera and world coordinates.
- Point cloud filtering: To segment the point cloud, we need to identify the points that fall inside each frustum. For this, I used the Delaunay triangulation of the frustum's vertices, then checked whether each point of the point cloud lies inside the triangulation. The points that do are part of the frustum and are used to create the smaller point cloud pieces.
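The Delaunay-based filtering step can be sketched with SciPy: `Delaunay.find_simplex` returns -1 for points outside the convex hull of the frustum vertices, which gives a cheap inside/outside test (function names below are illustrative):

```python
import numpy as np
from scipy.spatial import Delaunay

def points_in_frustum(points, frustum_verts):
    """Return the subset of `points` that lie inside the convex frustum.

    frustum_verts: the 8 world-space vertices of the frustum.
    find_simplex returns -1 for points outside the triangulation.
    """
    tri = Delaunay(frustum_verts)
    mask = tri.find_simplex(np.asarray(points)) >= 0
    return np.asarray(points)[mask]
```

Because a frustum is convex, testing against the Delaunay triangulation of its vertices is equivalent to a convex hull membership test.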

To access the full code and point cloud examples, check out my repository:

For a full demo, check out the video up top.
