
From Point to Plot - Processing LiDAR datasets using GIS

  • Writer: Arpit Shah
  • Feb 27, 2023
  • 11 min read


Figure 1: Laser Beam in a Lab Environment. Image Source: news.mit.edu

  1. Introduction


Laser beams are fascinating, aren't they? Focused and Incisive. A higher form of Intelligence, perhaps.

You can fight with it in a movie. Dance to its neon hue at shows. Shine it on a night sky to see if it reaches the clouds...or beyond. Or at the very least, flash it on sportspersons to distract them in crunch match-situations😁. It feels liberating that one can use and admire the applications of a technology without needing to know how it is formed or interacts with the surroundings.


Light Amplification by Stimulated Emission of Radiation. That is LASER for you - I was unaware myself till I sat down to pen this post. And monochromatic, directional and coherent is why a laser beam feels different from ordinary light - it has a single wavelength (and hence a single colour), travels in a narrow beam, and its waves stay in step with one another. No wonder these high-intensity beams evoke a strong feeling.

And feelings do matter. Laser pulses can feel the bare earth, the terrain and our surroundings - natural and built-up features alike - in ways that many other modes of illumination fall short of.
Figure 2: A LiDAR Point Cloud would look similar, albeit denser. Image Source: Brecht Denil on Unsplash

And this is what LiDAR, i.e. Light Detection and Ranging - a Remote Sensing technique which uses laser as an active mode of illumination - takes advantage of: the sensor can fire laser pulses at very high rates (150 kHz) and obtain dense returns (150 points per square foot). Upon stitching these reflections into a Point Cloud, one can generate three-dimensional digital models of the study area with high spatial resolution.

Tip: In case you have an ancestral home and worry that it will get demolished - and with it, your fond memories of the place will be lost forever - then get it LiDAR-scanned so that it can be recreated in the future! Notre Dame Cathedral has benefitted from this; so can you.

 

Those who are familiar with my previous posts would know that I tend to share an elaborate context before diving into the main subject matter. In case you would like to skip to it, here are the hyperlinks to the workflow sections-



If you prefer viewing the demonstrations, here is a compiled walkthrough of all three workflows-

Video 1: Narrated one-hour video demonstration of all three workflows for processing LiDAR data covered in this post

Video Timestamps


00:05 - Case Details


00:19 - Caselet 1 - Extracting 3D Building Footprint from LiDAR Imagery

00:23 - C1 - Workflow 1 : Setting up & exploring the dataset

03:43 - C1 - Workflow 2 : Classifying the LiDAR Imagery Dataset

10:44 - C1 - Workflow 3: Extracting Buildings Footprint

14:12 - C1 - Workflow 4: Cleaning up the Buildings Footprint

17:25 - C1 - Workflow 5: Extracting 'Realistic' 3D Building Footprint


20:47 - Caselet 2 - Extracting 3D Roof Forms from LiDAR Imagery

20:51 - C2 - Workflow 1 : Setting up the Data & Creating Elevation Layers

30:16 - C2 - Workflow 2 : Creating 3D Buildings Footprint

33:54 - C2 - Workflow 3 : Checking Accuracy of Building Footprints & Fixing Errors


42:06 - Caselet 3 - Classifying Power Lines using Deep Learning model on LiDAR Dataset

42:10 - C3 - Workflow 1 : Setting up and Exploring the Dataset

46:23 - C3 - Workflow 2 : Training the DL Classification Model using a Sample Dataset

51:31 - C3 - Workflow 3 : Examining the Output of the Sample-Trained DL Classification Model

53:27 - C3 - Workflow 4 : Training the DL Classification Model using a Large Dataset

58:12 - C3 - Workflow 5 : Extracting Power Lines from the LiDAR Point Cloud Output


59:46 - Summary Note & Contact Us


 

Laser is a form of amplified radiation and the pulses are discharged at rapid rates. LiDAR sensors operate in the near-infrared, visible and ultraviolet range of the electromagnetic spectrum - the same range as Solar Radiation, albeit at much higher intensity. The wavelength which a LiDAR sensor utilizes depends on the application. For example, Topographic surveys on land typically use laser sensors which emit Near-infrared (NIR) radiation, as it is imperceptible to the human eye and does not interfere with other sensors, whereas Bathymetric surveys to measure seafloor elevation make use of Green light, as it penetrates water with relative ease.


Here are some of the top applications of LiDAR.

Figure 3: LiDAR sensors emit radiation in the Near-infrared, Visible and Ultraviolet range of the electromagnetic spectrum. Source: Adapted from NASA ARSET
 

With LiDAR, one is able to generate high-resolution, three-dimensional digital models of the study area. Elevation data (Z) is what transforms a 2D image into 3D and lends context and depth to the surface. The density of LiDAR point returns and the precision of their Z values influence how closely the 3D rendition mimics reality.


There are a few types of Elevation Models - two of the most commonly-used are:


  1. Digital Surface Model (DSM) - As LiDAR sensors are placed above the surface, the first point returns carry information on how the laser pulses interacted with natural or built-up objects above the bare earth (provided such features exist in the first place). A DSM is a 3D representation of the surface with all natural and built-up features intact (refer to the left image in Figure 4 below). 3D modelling of over-ground assets such as buildings, bridges, solar panels and power lines requires this all-important type of Elevation Model


  2. Digital Elevation Model (DEM) - A substantial share of the emitted laser pulses do not hit objects or features above the surface of the earth. Rather, they proceed unimpeded to the bare earth, interact with it, and some of that energy bounces back towards the sensor - naturally, these returns take marginally longer to arrive than the first returns which interact with features above the bare earth (see the short time-of-flight sketch below). It is from the aggregation of such last returns that the Digital Elevation Model is formed. Essentially, a DEM is a 3D rendition of the bare earth surface, stripped of all natural and man-made structures above it (refer to the right image in Figure 4 below)
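
To make the time-of-flight idea concrete, here is a tiny Python sketch (not part of the post's workflow, and the timings are made up) showing how a return's two-way travel time converts to range, and why the later-arriving last return corresponds to a lower surface:

```python
# Minimal time-of-flight sketch: range = speed_of_light * travel_time / 2 (two-way trip).
# The travel times below are illustrative values for a single pulse over a tall tree.
C = 299_792_458.0  # speed of light in m/s

def range_from_travel_time(travel_time_s: float) -> float:
    """Distance from the sensor to the reflecting surface, assuming a nadir-pointing pulse."""
    return C * travel_time_s / 2.0

t_first = 6.6600e-6   # first return: canopy top (hypothetical timing)
t_last  = 6.7934e-6   # last return: bare earth, arrives marginally later (hypothetical)

canopy_range = range_from_travel_time(t_first)   # ~998 m below the sensor
ground_range = range_from_travel_time(t_last)    # ~1018 m below the sensor
print(f"Feature height = {ground_range - canopy_range:.1f} m")   # roughly 20 m
```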

Figure 4: Digital Surface Model or DSM (left) and Digital Elevation Model or DEM (right). Source: An Introduction to LiDAR for Archaeology - AOC Archaeology Group 2015

This specific ability to map the bare earth is what makes the DEM so useful. For example, it is utilized at archaeological sites to detect historical remnants that have been overlaid by newer features with the passage of time. A DSM will not be able to show you that - refer to Figure 4, where the prehistoric ramparts and ditches in Shropshire, England have been obscured by vegetation.


Figure 5: The height of a tree can be derived by subtracting the last (fourth) return from the first return. Source: Geospatial Romania

If not for techniques such as LiDAR and GPR, archaeologists would have to strip much of the site's vegetation before getting a whiff of what lies underneath. Besides detecting the existence of such remnants, one is also able to assess their nature and extent - information that is invaluable for planning excavation operations cost-effectively.
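
Figure 5's idea - subtract the bare-earth elevation from the first-return surface to obtain feature heights - boils down to a simple raster difference. Here is a minimal, hypothetical sketch in Python (the array values are made up; in practice the DSM and DEM would be read from GeoTIFFs with a library such as rasterio):

```python
import numpy as np

# Hypothetical 3x3 elevation grids in metres (illustrative values only).
dsm = np.array([[110.2, 112.8, 111.5],
                [109.9, 118.4, 117.0],
                [110.1, 110.3, 110.0]])   # first-return surface: tops of trees/roofs
dem = np.array([[110.0, 110.1, 110.2],
                [109.8, 109.9, 110.0],
                [110.0, 110.2, 109.9]])   # last-return bare earth

# Per-cell height of above-ground features (a normalised DSM / canopy height model).
feature_height = dsm - dem
print(feature_height.round(1))
```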


There are several other applications and sectors which utilize these high-resolution bare-earth elevation models, such as Road construction, Railway projects, Forestry, Wind Turbine erection, and Landslide and Volcano Deformation studies.


A variant of the DEM - the DTM or Digital Terrain Model - is utilized for workflows such as Shoreline Analysis.

 

There are four distinct modes to acquire LiDAR data. The Laser-emitting sensor can be-


  1. Spaceborne i.e. placed on Satellites

Figure 6: Satellite-based LiDAR. Source: intechopen.com
  2. Airborne i.e. placed on Aircraft and Drones

Figure 7: Airborne LiDAR. Source: researchgate.net
  3. Stationary Terrestrial i.e. placed stationary at surface-level (on-ground or perched)

Figure 8: Stationary Terrestrial LiDAR. Source: Earth Observatory of Singapore, NTU

  4. Mobile Terrestrial i.e. placed on automobiles and multi-terrain vehicles (Trivia: the iPhone 12 Pro is equipped with a LiDAR sensor and here's how it helped a blind person navigate.)

Figure 9: Mobile Terrestrial LiDAR depiction. Source: Geospatial World & Counterpoint Research respectively
 

LiDAR output isn't as refined as it appears in Figure 4. Raw LiDAR output looks like this-

Video 2: Raw LiDAR output is just a dense cluster of point returns

Raw LiDAR acquisitions are just a dense cluster of returns, technically known as a Point Cloud; on their own, they are not sufficient to create a seamless depiction of the surface as seen in Figure 4.


Which raises the question - how did the transformation occur?


The magic lies in predicting values for the gaps in the geospatial dataset through statistical methods - in particular, a technique known as Interpolation. Know more about it here.
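
To give a feel for what Interpolation does, here is a small, self-contained Python sketch - using synthetic points rather than the post's dataset - that grids scattered ground returns into a continuous elevation surface with SciPy:

```python
import numpy as np
from scipy.interpolate import griddata

# Synthetic "ground returns": random (x, y) locations over a gently sloping terrain.
rng = np.random.default_rng(42)
xy = rng.uniform(0, 100, size=(500, 2))                # 500 scattered return locations
z = 50 + 0.1 * xy[:, 0] + rng.normal(0, 0.2, 500)      # elevation with a little noise

# A regular 1 m grid covering the study area.
grid_x, grid_y = np.meshgrid(np.arange(0, 100, 1.0), np.arange(0, 100, 1.0))

# Estimate the elevation at every grid cell from the scattered points.
dem = griddata(xy, z, (grid_x, grid_y), method="linear")
print(dem.shape)   # (100, 100) raster-like array; NaN outside the points' convex hull
```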

Processing LiDAR data is just as interesting as acquiring it - the Point Cloud can be refined to generate bare-earth elevation models as well as to detect and classify the natural and built-up features over it.

Let me demonstrate LiDAR data processing for you through three interesting workflows-

 
  2. Caselet 1 - Extracting 3D Building Footprint from LiDAR Data


A Building Footprint is a dataset which contains geospatial information of the built-up infrastructure in the study area. I have demonstrated the utility of this dataset in the Rooftop Solar Potential, Line-of-Sight and Automated Features Extraction posts previously - it can be used in several other workflows such as those involving Urban Planning and Risk Management.


In the video below, I will demonstrate the extraction of 3D Building Footprint from the LiDAR Point Cloud using the powerful geospatial software ArcGIS Pro. Broadly, the process involves first filtering out the less important portions of the raw output (ground point returns, noise) so that what is left behind are the point returns over the tops of buildings and built-up infrastructure. Why is just the top section left behind? Because the LiDAR data has been acquired using an airborne sensor.


Thereafter, I will set footprint-generation parameters in a specific geoprocessing tool and extract individual building shapes with length, breadth and height information (X, Y, Z) from the Point Cloud. Subsequently, I will make use of the ground point returns to generate a Digital Elevation Model, which I will pair with the generated Building Footprint in order to create a more realistic-looking Digital Twin of the study area.
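
For those who prefer scripting over the ArcGIS Pro interface, the sequence roughly corresponds to a chain of geoprocessing tools like the one sketched below. This is an untested outline rather than the exact steps from the video: the paths are placeholders, the parameter values are indicative, and toolbox aliases may differ slightly across ArcGIS Pro versions.

```python
import arcpy  # requires ArcGIS Pro with the 3D Analyst extension licensed

las = r"C:\data\study_area.lasd"   # placeholder LAS dataset path

# 1. Classify ground returns, then building returns above them.
arcpy.ddd.ClassifyLasGround(las, method="STANDARD")
arcpy.ddd.ClassifyLasBuilding(las, min_height="2 Meters", min_area="10 SquareMeters")

# 2. Bare-earth DEM from the ground-classified points (LAS class code 2).
ground_lyr = arcpy.management.MakeLasDatasetLayer(las, "ground_lyr", class_code=[2])
arcpy.conversion.LasDatasetToRaster(ground_lyr, r"C:\data\dem.tif", "ELEVATION",
                                    "BINNING AVERAGE LINEAR", "FLOAT", "CELLSIZE", 1)

# 3. Rasterize the building-classified points (class code 6) and trace footprints.
bldg_lyr = arcpy.management.MakeLasDatasetLayer(las, "bldg_lyr", class_code=[6])
arcpy.management.LasPointStatsAsRaster(bldg_lyr, r"C:\data\bldg.tif",
                                       "PREDOMINANT_CLASS", "CELLSIZE", 1)
arcpy.conversion.RasterToPolygon(r"C:\data\bldg.tif", r"C:\data\footprints.shp",
                                 "SIMPLIFY")
```

Polygons traced from a raster this way come out jagged, which is why the video dedicates a separate workflow to cleaning up and regularizing the Building Footprint.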

Video 3: Extracting 3D Building Footprint from LiDAR Point Cloud using GIS

While this workflow is mostly automated, one also needs to manually inspect the output, iterate on the parameters, and edit the defective building shapes. Overall, the processing chain is wholesome - blending technology with human ingenuity.


Slider 1: Raw LiDAR data versus Processed Output

 
  3. Caselet 2 - Extracting 3D Roof Forms from LiDAR Data


In this workflow, besides repeating the data visualization and Digital Elevation Model generation steps on the LiDAR Point Cloud of another study area, I will utilize a specific geoprocessing tool to extract just the Roof Forms from the Building Footprint layer. These extracted features can be used, for example, by local governments and municipal corporations to understand the level of infrastructure development in a neighborhood. Additionally, I will deploy statistics (Root Mean Square Error analysis - RMSE) to assess the accuracy of the extracted Roof Forms.


What will the Roof Form elevations be compared against in order to assess their accuracy?


Recollect that the First Returns of LiDAR data can be used to generate a Digital Surface Model (the dimensional data of the surface with all the natural and built-up features over it). I will use this DSM dataset to statistically assess the extracted Roof Form elevations. Besides, I shall also demonstrate the use of editing tools to manually repair a couple of inaccurate roofs (those with high RMSEs - such as the ones marked in red and orange in the right image of Slider 2 below).
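
The RMSE check itself reduces to a one-line comparison between the extracted roof elevations and the DSM sampled at the same locations. Here is a minimal sketch with made-up numbers (ArcGIS Pro computes and stores this per building in the actual workflow):

```python
import numpy as np

# Hypothetical elevations, in metres, at a handful of roof locations.
roof_z = np.array([12.4, 9.8, 15.1, 7.6, 11.0])   # elevations from the extracted roof forms
dsm_z  = np.array([12.1, 9.9, 14.2, 7.5, 11.3])   # DSM (first-return) elevations at the same spots

rmse = np.sqrt(np.mean((roof_z - dsm_z) ** 2))
print(f"RMSE = {rmse:.2f} m")   # ~0.45 m here; high values flag roofs needing manual repair
```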

Video 4: Extracting 3D Roof Forms from LiDAR Point Cloud using GIS

Commonly-used Processing Chains, such as the one demonstrated in this workflow, are often pre-packaged by Esri, the GIS software developer. This makes it very convenient for users who can execute the sequential steps in a semi-automated manner, not only saving time but also reducing the chances of errors and omissions.


Slider 2: Raw LiDAR data versus Processed Output

 
  4. Caselet 3 - Identifying Powerlines using Deep Learning on LiDAR Data


Power Transmission Infrastructure, by virtue of being critical for residential and industrial purposes, needs to be routinely inspected for damages and obstructions (natural and man-made). I had received an actual requirement for this very workflow during the Coronavirus pandemic - the prospective client wanted to be automatically alerted, through the use of a Deep Learning algorithm on drone data, whether there was vegetation growing around the powerline or whether a hawker had set up shop underneath the transmission tower. If only I had known this workflow at that point in time!😑


I will apply Esri's Deep Learning algorithm (previously demonstrated in this post) to detect and classify a LiDAR point cloud, with the intention of identifying those point returns that have bounced off the powerlines (not the transmission tower or any other built-up or natural feature). I have tried to understand the technicalities involved and the geoprocessing parameters in depth so that I can explain them clearly during the demonstration (I have used a proprietary Sunflowers-in-a-Park analogy😊).


In case you would like to understand the fundamental concept driving Deep Learning - Artificial Neural Networks - here's a lucid video explainer.


The processing chain in this workflow entails training a Deep Learning model on a cross-section of the LiDAR Point Cloud which has already been classified (be it as powerline or some other built-up or natural feature). Then, I will test the trained model on another previously classified cross-section of the Point Cloud (the validation dataset) - the objective is to see how well the model has learnt, i.e. whether it is able to correctly classify the Point Cloud of an unknown area.


The proportion of actual powerline points that the model classifies correctly is technically known as Recall - you will observe from the demonstration that my Deep Learning model had a good Recall, but it wasn't top-notch. This is due to the lenient parameters I had set in order to reduce processing time, as well as the consumer-grade 2GB GPU that I was using. It goes without saying that the stronger the computing resources, the better the Deep Learning model will learn and the faster it will be able to process data.
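
As a quick illustration of the metric, Recall for the powerline class is simply the share of true powerline points that the model labels as powerline. Below is a toy Python sketch with made-up class codes (14 is the ASPRS LAS class code for wire conductors):

```python
import numpy as np

true_class = np.array([14, 14, 14, 14, 1, 1, 6, 14, 1, 6])   # ground-truth labels (made up)
pred_class = np.array([14, 14,  1, 14, 1, 1, 6, 14, 1, 1])   # model's predicted labels (made up)

is_powerline = (true_class == 14)
recall = np.mean(pred_class[is_powerline] == 14)
print(f"Powerline recall = {recall:.2f}")   # 4 of 5 true powerline points found -> 0.80
```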


In order to highlight this aspect, I have used another Deep Learning Model in the latter half of the demonstration video for validation purposes - this model was trained using an industrial-grade 24GB GPU. As anticipated, the Recall was much better (refer Slider 3 below) and most of the point returns were classified accurately.

Video 5: Using Deep Learning Algorithm to identify Powerlines from LiDAR Point Cloud using GIS

Slider 3: Raw LiDAR data versus Processed Output

One can't help but be mesmerized by the prowess of Deep Learning (a subset of Machine Learning) and imagine the vast number of applications where it can be used, in isolation or in conjunction with other complex technologies, to solve real-world problems.

 
  5. Conclusion

I hope you enjoyed reading this post and had a chance to see the video demonstrations. It took me a while to prepare everything - slowly, steadily and lazily over a period of three months. I first got acquainted with LiDAR through this article, which highlighted how the technology helped discover sites of archaeological relevance. Some time ago, I had even received an enquiry from an architectural firm that wanted to LiDAR-map some of the Hindu temples in the state of Karnataka, India, with the objective of unravelling new facets of the design and structure of these ancient places of worship - an opportunity I wish had surfaced now, when my firm is equipped to execute such projects. Feel free to reach out with your LiDAR data acquisition or processing requirements.

Figure 10: LiDAR point cloud and the 3D Building Footprint which was generated using it
 

ABOUT US


Intelloc Mapping Services, Kolkata | Mapmyops.com offers Mapping services that can be integrated with Operations Planning, Design and Audit workflows. These include but are not limited to Drone Services, Subsurface Mapping Services, Location Analytics & App Development, Supply Chain Services, Remote Sensing Services and Wastewater Treatment. The services can be rendered pan-India and will aid your organization to meet its stated objectives pertaining to Operational Excellence, Sustainability and Growth.


Broadly, the firm's area of expertise can be split into two categories - Geographic Mapping and Operations Mapping. The Infographic below highlights our capabilities-

Mapmyops (Intelloc Mapping Services) - Range of Capabilities and Problem Statements that we can help address

Our Mapping for Operations-themed workflow demonstrations can be accessed from the firm's Website / YouTube Channel, and an overview can be obtained from this brochure. Happy to address queries and respond to documented requirements. Custom Demonstrations, Training & Trials are facilitated only on a paid basis. Looking forward to being of service.


Regards,

Mapmyops | Intelloc Mapping Services
