point cloud - acfr/snark GitHub Wiki


rationale

This library is intended to complement rather than replace PCL and similar libraries.

classes

voxel_grid

Voxel grid of limited extents.

voxel_map

Hash map-based voxel grid with unlimited extents (and thus all the advantages and disadvantages of hashed containers).

partition

Partition a point cloud based on adjacency.

voted_tracking

Given two partitioned point clouds, make their labels consistent, using a simple voting algorithm.
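
A rough sketch of what such voting looks like (matching points by voxel index is an assumption made here for brevity; it is not necessarily how the class matches points):

 from collections import Counter, defaultdict

 def vote_labels(previous, current, resolution=0.2):
     """previous, current: lists of ((x, y, z), partition_id) pairs."""
     voxel = lambda p: tuple(int(c // resolution) for c in p)
     previous_label = {voxel(p): label for p, label in previous}  # last point per voxel wins
     votes = defaultdict(Counter)            # current label -> votes for previous labels
     for p, label in current:
         if voxel(p) in previous_label:
             votes[label][previous_label[voxel(p)]] += 1
     # relabel each current partition with the previous label it overlaps the most
     relabel = {label: counter.most_common(1)[0][0] for label, counter in votes.items()}
     return [(p, relabel.get(label, label)) for p, label in current]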

utilities

points-slice

Take a stream of points, output them with appended distance to a given plane defined by 3 points or a point and normal.
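
The appended value is just the signed point-to-plane distance. A minimal sketch of the underlying computation (argument handling here is illustrative only, not points-slice's actual interface):

 import numpy as np

 def plane_normal(a, b, c):
     """Unit normal of the plane through the three points a, b, c."""
     n = np.cross(np.asarray(b) - np.asarray(a), np.asarray(c) - np.asarray(a))
     return n / np.linalg.norm(n)

 def distance_to_plane(p, point_on_plane, normal):
     """Signed distance from p to the plane given by a point and a normal."""
     n = np.asarray(normal, dtype=float)
     n = n / np.linalg.norm(n)
     return float(np.dot(np.asarray(p) - np.asarray(point_on_plane), n))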

points-to-voxels

Take a stream of points, output voxel indices, centroids, and point count for a voxel grid with the given origin and resolution. Indices are integers i, j, k relative to the voxel map origin.

Voxelisation refers to the following process: divide a three-dimensional space into rectangular prisms of a chosen fixed size (e.g. cubes with the same resolution in x, y and z, or each dimension with its own resolution, so that the space is divided into tightly packed, axis-aligned rectangular prisms). Individual 3D points with arbitrary coordinates are then fed in; each point falls within one and only one voxel (rectangular prism), while a voxel may contain zero, one or many points. For each voxel, the contents are summarised by the three-dimensional mean (centre of gravity) of all points that lie within it, as distinct from the voxel centre.

This has the effect that the summarised voxelised points can still represent complex geometry (up to the Nyquist sampling rate at that resolution), and there is no aliasing (stair-step patterns) due to the voxelisation. points-to-voxels outputs the means, whereas points-to-voxel-indices outputs all the original points with integer i,j,k voxel associations, for further processing.
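
A plain sketch of this process (an illustration only, not the snark implementation):

 from collections import defaultdict

 def voxelise(points, origin=(0.0, 0.0, 0.0), resolution=(0.2, 0.2, 0.2)):
     """points: iterable of (x, y, z); returns {(i, j, k): (centroid, count)}."""
     sums = defaultdict(lambda: [0.0, 0.0, 0.0, 0])
     for p in points:
         # integer voxel index relative to the grid origin
         index = tuple(int((c - o) // r) for c, o, r in zip(p, origin, resolution))
         s = sums[index]
         for d in range(3): s[d] += p[d]
         s[3] += 1
     # summarise each voxel by the mean of its points, not by the voxel centre
     return {index: (tuple(c / s[3] for c in s[:3]), s[3]) for index, s in sums.items()}

Roughly speaking, points-to-voxel-indices appends the keys (i,j,k) to each original point, while points-to-voxels outputs one record per voxel with its index, centroid and point count.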

For example, assume we would like to visualise timestamped raw velodyne data published on a socket. Velodyne data is large, so to reduce it we would like to voxelize it first.

Acquire velodyne data and convert to csv:

 netcat localhost 1234 | velodyne-to-csv --fields=x,y,z,scan

Note: at any step below, you can always check whether the data is there and what its format is, using head:

 netcat localhost 1234 | velodyne-to-csv --fields=x,y,z,scan | head

Voxelize:

 netcat localhost 1234 | velodyne-to-csv --fields=x,y,z,scan \
                       | points-to-voxels --fields=x,y,z,block | head

View:

 netcat localhost 1234 | velodyne-to-csv --fields=x,y,z,scan \
                       | points-to-voxels --fields=x,y,z,block \
                       | view-points --fields=,,,x,y,z,,block

Now, ASCII will be too slow for velodyne data; therefore, once you have made sure that your command line works as expected, switch to binary (in --binary=3d,ui, 3d stands for three doubles and ui for a four-byte unsigned integer):

 netcat localhost 1234 | velodyne-to-csv --fields=x,y,z,scan --binary \
                       | points-to-voxels --fields=x,y,z,block --binary=3d,ui \
                       | view-points --fields=,,,x,y,z,,block --binary=3ui,3d,2ui

points-to-voxel-indices

Take a stream of points, decorate with voxel indices for a voxel grid with the given origin and resolution. Indices are integers i, j, k relative to the voxel map origin (a command line parameter).

Assume we have two different point clouds from the same location:

  • plain.csv with columns x,y,z
  • coloured.csv with columns x,y,z,r,g,b
We would like to append colours to the points of plain.csv. For simplicity's sake, for each point of plain.csv we will take the colour of the first point from coloured.csv that falls in the same voxel.

Voxelize:

 cat plain.csv | points-to-voxel-indices --resolution=0.2 > plain.indexed.csv
 cat coloured.csv | points-to-voxel-indices --resolution=0.2 > coloured.indexed.csv

Append colours (in fact append the whole point from coloured.indexed.csv):

 cat plain.indexed.csv | csv-join --fields=,,,i,j,k "coloured.indexed.csv;fields=,,,,,,i,j,k" --first-matching > plain.joined.csv

Note that if there are no points from coloured.csv in the vicinity of a point from plain.csv, that point naturally will be missing from plain.joined.csv.

plain.joined.csv will have columns x,y,z,i,j,k,x,y,z,r,g,b,i,j,k, where the first x,y,z,i,j,k come from plain.indexed.csv and the remaining x,y,z,r,g,b,i,j,k from coloured.indexed.csv.
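
In effect, the join above does the following (a sketch of the idea only, not of csv-join's implementation):

 def colour_by_voxel(plain_indexed, coloured_indexed):
     """plain_indexed: rows of (x, y, z, i, j, k);
        coloured_indexed: rows of (x, y, z, r, g, b, i, j, k)."""
     first = {}
     for row in coloured_indexed:
         first.setdefault(tuple(row[6:9]), row)    # keep the first coloured point per voxel
     joined = []
     for row in plain_indexed:
         match = first.get(tuple(row[3:6]))
         if match is not None:                     # points without a matching voxel are dropped
             joined.append(tuple(row) + tuple(match))
     return joined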

If you would like to clean up the final file:

 cat plain.joined.csv | cut --delimiter=, --fields=1-3,10-12 > plain.coloured.csv

points-to-partitions

Take a stream of points, partition them, and output the points with appended partition ids.
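
A common way to partition by adjacency, sketched below (not necessarily snark's exact neighbourhood rules): flood-fill occupied voxels into connected components and give every point the id of its component.

 from collections import deque

 def partition_points(points, resolution=0.2):
     """points: list of (x, y, z); returns one partition id per point."""
     voxel = lambda p: tuple(int(c // resolution) for c in p)
     component = {voxel(p): None for p in points}   # voxel index -> component id
     next_id = 0
     for start in component:
         if component[start] is not None: continue
         component[start] = next_id
         queue = deque([start])
         while queue:                               # breadth-first flood fill over 26-neighbourhoods
             i, j, k = queue.popleft()
             for di in (-1, 0, 1):
                 for dj in (-1, 0, 1):
                     for dk in (-1, 0, 1):
                         n = (i + di, j + dj, k + dk)
                         if component.get(n, 0) is None:
                             component[n] = next_id
                             queue.append(n)
         next_id += 1
     return [component[voxel(p)] for p in points]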

Continuing the example with velodyne data published on a socket, view partitioned velodyne data:

 netcat localhost 1234 | velodyne-to-csv --fields=x,y,z,scan \
                       | points-to-partitions --fields=x,y,z,block \
                       | view-points --fields=x,y,z,block,id

Once you have checked that the above command line works, make it binary for performance:

 netcat localhost 1234 | velodyne-to-csv --fields=x,y,z,scan --binary \
                       | points-to-partitions --fields=x,y,z,block --binary=3d,ui \
                       | view-points --fields=x,y,z,block,id --binary=3d,2ui

points-track-partitions

Take a stream of blocks of points with ids (e.g. individually partitioned scans from velodyne) and make the partition ids consistent across scans, using a simple voting algorithm.

Continuing the example above, view tracked partitions:

 netcat localhost 1234 | velodyne-to-csv --fields=x,y,z,scan \
                       | points-to-partitions --fields=x,y,z,block \
                       | points-track-partitions --fields=x,y,z,block,id \
                       | view-points --fields=x,y,z,block,id

Once you have checked that the above command line works, make it binary for performance:

 netcat localhost 1234 | velodyne-to-csv --fields=x,y,z,scan --binary \
                       | points-to-partitions --fields=x,y,z,block --binary=3d,ui \
                       | points-track-partitions --fields=x,y,z,block,id --binary=3d,2ui \
                       | view-points --fields=x,y,z,block,id --binary=3d,2ui

points-detect-change

Detect change (additions and subtractions) between two point clouds, using ray tracing:

Load a reference point cloud in polar coordinates from a file; for each point on stdin, output whether or not it is blocked along its ray by points of the reference point cloud, e.g.

 cat points.csv | points-detect-change reference_points.csv > points.marked-as-blocked.csv
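
A much-simplified sketch of the kind of comparison involved (binning by bearing and elevation is illustrative only; the actual tool and the paper referenced below treat beam geometry properly):

 import math

 def to_polar(p):
     x, y, z = p
     r = math.sqrt(x * x + y * y + z * z)
     return math.atan2(y, x), math.asin(z / r), r          # bearing, elevation, range

 def blocked(query_points, reference_points, angular_bin=math.radians(0.5)):
     """Return, per query point, whether a reference point lies closer along the same ray."""
     nearest = {}                                          # (bearing bin, elevation bin) -> min range
     for p in reference_points:
         bearing, elevation, r = to_polar(p)
         key = (int(bearing // angular_bin), int(elevation // angular_bin))
         nearest[key] = min(nearest.get(key, r), r)
     flags = []
     for p in query_points:
         bearing, elevation, r = to_polar(p)
         key = (int(bearing // angular_bin), int(elevation // angular_bin))
         flags.append(key in nearest and nearest[key] < r)
     return flags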

See also: http://www.acfr.usyd.edu.au/papers/icra13-underwood-changedetection.shtml
