PDAL

PDAL is the Point Data Abstraction Library, a companion to GDAL and MDAL. It processes point cloud data, such as that derived from lidar or Structure from Motion (SfM), in a manner similar to LAStools. PDAL can be run from the command line or accessed from Python with the python-pdal package. It is included in the lab's lidar processing Anaconda environment.

Getting Started

  1. Watch Pingel's introduction to PDAL from his Advanced Remote Sensing (NR 6014) course.
  2. Install the lab's lidar processing Python environment, which includes pdal. To activate it, open an Anaconda command prompt and run:
activate lidar
  3. Follow the quickstart guide to see how to use pdal from the command line. For example, to run a ground classification with some SMRF settings:
pdal ground -i input.las -o output.las --slope .2 --max_window_size 5 --cell_size 1
  4. For more flexibility with inputs and outputs, use pdal translate. This lets you apply a set of filters defined in a JSON file while still naming the input and output files on the command line. For instance, you might develop a classification pipeline like this one and invoke it like so:
pdal translate -i input.laz -o classified.laz --json classifier.json
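The classifier.json file itself is not reproduced here, but a minimal ground-classification pipeline for use with translate might look like the following sketch (the SMRF settings shown are illustrative assumptions, not the course's values). Because the reader and writer are omitted, translate fills them in from -i and -o:

[
    {
        "type": "filters.assign",
        "assignment": "Classification[:]=0"
    },
    {
        "type": "filters.smrf",
        "slope": 0.2,
        "window": 16.0,
        "threshold": 0.45,
        "cell": 1.0
    }
]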
  5. Like translate, pdal pipeline follows a "script" written in a JSON file that describes the flow of input, processing, and output, including all settings; the JSON file also serves to document the methodology. An example that creates a DTM, with all settings specified in the JSON file, is:
pdal pipeline dtm.json
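As a sketch of what a dtm.json file might contain (the filenames and parameter values here are assumptions for illustration): read a LAZ file, classify ground with SMRF, keep only ground returns, and grid them to a 1 m GeoTIFF with writers.gdal:

[
    "input.laz",
    {
        "type": "filters.smrf"
    },
    {
        "type": "filters.range",
        "limits": "Classification[2:2]"
    },
    {
        "type": "writers.gdal",
        "filename": "dtm.tif",
        "gdaldriver": "GTiff",
        "output_type": "idw",
        "resolution": 1.0
    }
]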
  6. You can also write Python code that uses PDAL to bring data in and out of Python. See this notebook for examples of syntax or the more complete pdal_recipes.
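As a minimal sketch of that workflow (the filenames and filter choices below are assumptions), the python-pdal bindings let you chain readers, filters, and writers with the | operator and then execute the resulting pipeline:

import pdal

# Read a LAZ file, classify ground with SMRF, keep ground returns,
# and write the result to a new file.
pipeline = (
    pdal.Reader("input.laz")
    | pdal.Filter.smrf()
    | pdal.Filter.range(limits="Classification[2:2]")
    | pdal.Writer.las(filename="ground_only.laz")
)
n_points = pipeline.execute()   # returns the number of points processed
arr = pipeline.arrays[0]        # structured numpy array of the output points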

Examples

Remove (crop out) all points above 750 meters from a point cloud.

pdal translate -i input.las -o cropped.las -f filters.range --filters.range.limits="Z[:750]"

Read data into a dataframe using PDAL and Python

import pdal
import pandas as pd

fn_in = 'data/USGS_LPC_VA_FEMA-NRCS_SouthCentral_2017_D17_17SNB23501450.laz'
pipeline = pdal.Reader(fn_in).pipeline()
pipeline.execute()
arr = pipeline.arrays[0]  # structured numpy array of point records
df = pd.DataFrame(arr)    # convert to a pandas DataFrame
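To go the other direction, continuing from the example above, the structured array (possibly after editing values in numpy or pandas) can be handed to a writer stage and written back to disk; the output filename here is made up:

# Write the array back out to a LAZ file.
pipeline = pdal.Writer.las(filename='subset.laz').pipeline(arr)
pipeline.execute()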

Tutorials

References