Arpit Shah

From Point to Plot - LiDAR Data Processing Workflows

Updated: Aug 29


Figure 1: Laser Beam in a Lab. Source: News.MIT.edu

Laser beams are fascinating, aren't they? Focused and incisive. Somewhat infinite. A higher form of intelligence, perhaps. You can fight with one in a movie. Dance along its neon hue at shows. Try shining it high up into the night sky and see if it reaches the clouds or somewhere beyond. Or, at the very least, use it to distract footballers in crunch match situations. I'm not sure if the latter has any impact, though.


The best part is that one can admire the beauty of a phenomenon without knowing how it is formed, what components are involved, how it interacts with its surroundings, and so on. Ignorance is bliss.


'Light Amplification by Stimulated Emission of Radiation'. That's LASER for you. I only came to know this while preparing this article. Here's more wretched knowledge for the uninitiated - Monochromatic, Directional, Coherent. If I had a scientific temperament, I'd have opened this article with those three words to describe a laser's unique properties, instead of the feelings these high-intensity beams evoke in me.


Feelings! Pointless emotions in the scheme of things....


Absolutely NOT.

Feelings matter. Laser pulses can feel the bare earth, the terrain, our surroundings - natural and man-made objects - in ways that many imagery acquisition modes fall short of. And they generate innumerable data 'points' too, quite literally.
Figure 2: Representation of a Point Cloud. Source: Photo by Brecht Denil on Unsplash

LiDAR - Light / Laser Detection and Ranging - is a remote sensing technique that takes advantage of the property of 'Return'.

When a laser pulse interacts with a surface and reflects, the signal received back by the sensor is called a Return.


Throw pulses at the rate of half a million per second, obtain dense point returns (around twenty points per square foot, though this varies), stitch these points together and create a highly accurate and clear digital 3D model of the area of interest. (Tip: if you have an old ancestral home and worry that it will one day be demolished, taking your fond memories with it, get it LiDAR scanned so that the experience can be recreated anytime in the future. If Notre Dame Cathedral can benefit from it, so can your home!)
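The 'Ranging' part, incidentally, is simple time-of-flight arithmetic: the sensor measures how long each pulse takes to come back and halves the round trip. A minimal illustrative sketch in Python - the pulse timing below is made up purely for demonstration:

```python
# Illustrative time-of-flight ranging: distance = (speed of light x round-trip time) / 2
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def range_from_return(round_trip_seconds: float) -> float:
    """Distance from sensor to target, given the pulse's round-trip travel time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse that comes back after ~6.67 microseconds travelled roughly 1 km each way
print(f"{range_from_return(6.67e-6):.1f} m")  # ~999.8 m
```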


 

Those who are familiar with my recent work on this blog would know that I have a habit of sharing elaborate context before diving into the main aspects of the article. In case you wish to jump straight into the workflows, here's a comprehensive video explaining 3 distinct Caselets involving LiDAR data processing on ArcGIS Pro, with narration -

Video 1: Processing LiDAR Data - 3 Caselets - Detailed 1 hour video with narration


The three LiDAR data processing caselets covered are -

- Extracting 3D Building Footprints from a LiDAR Imagery Dataset

- Extracting Roof Forms (extension to the Footprint workflow)

- Classifying Power Lines using Deep Learning (DL) on a LiDAR Dataset


Many thanks to Esri's Learn ArcGIS team for preparing the tutorials and developing the methodology.


The Caselet names above are hyperlinked to related sections within the article. The video progresses as per the following flow:


00:05 - Case Details


00:19 - Caselet 1 - Extracting 3D Building Footprint from LiDAR Imagery Dataset

00:23 - C1 - Workflow 1 : Setting up & exploring the dataset

03:43 - C1 - Workflow 2 : Classifying the LiDAR Imagery Dataset

10:44 - C1 - Workflow 3: Extracting Buildings Footprint

14:12 - C1 - Workflow 4: Cleaning up the Buildings Footprint

17:25 - C1 - Workflow 5: Extracting 'Realistic' 3D Building Footprint


20:47 - Caselet 2 - Extracting Roof Forms from LiDAR Imagery Dataset

20:51 - C2 - Workflow 1 : Setting up the Data & Creating Elevation Layers

30:16 - C2 - Workflow 2 : Creating 3D Buildings Footprint

33:54 - C2 - Workflow 3 : Checking Accuracy of Building Footprints & Fixing Errors


42:06 - Caselet 3 - Classifying Power Lines using Deep Learning (DL) on LiDAR Dataset

42:10 - C3 - Workflow 1 : Setting up and Exploring the Dataset

46:23 - C3 - Workflow 2 : Training the DL Classification Model using a Sample Dataset

51:31 - C3 - Workflow 3 : Examining the Output of the Sample-Trained DL Classification Model

53:27 - C3 - Workflow 4 : Training the DL Classification Model using a Large Dataset

58:12 - C3 - Workflow 5 : Extracting Power Lines from the LiDAR Point Cloud Output


59:46 - Summary Note & Contact Us

 

Getting back to developing the context.


LiDAR is an active remote sensing technique, i.e. the instrument carries its own source of electromagnetic radiation - the laser emitter.


LiDAR emissions operate in the near-infrared to visible to ultraviolet range of the electromagnetic spectrum. In a way, this range is similar to the one used to form optical satellite imagery - a passive remote sensing technique, where there is no active illumination source and the readings depend on natural energy reflectances, e.g. reflected sunlight. The key difference here is the intensity of the illumination - laser light is amplified radiation and is discharged in rapid pulses during LiDAR data acquisition.

Figure 3: Depiction indicating LiDAR sensor emits radiation in the NIR, Visible & UV range of the electromagnetic spectrum. Source: Adapted from NASA ARSET

Even within LiDAR's electromagnetic operating range, there are application-specific considerations. LiDAR used for topographic (land) surveys often employs near-infrared (NIR) radiation owing to its advantages over land, while LiDAR used for bathymetric (marine) surveys often employs green light, part of the visible spectrum, to measure seafloor and riverbed elevations, as green light penetrates water with relative ease.

 

Elevation is such an important aspect in 3D Imagery.


In mapping terms, elevation or height is represented by the letter Z. It lends context to the surface under observation. The density of LiDAR points and the precision of their Z values determine the accuracy of the 3D rendition of that surface. LiDAR's key advantage is that we can extract very high-resolution elevation models from it.


Two Elevation Models of significance are -


a) DSM - Digital Surface Model - all natural and man-made objects are captured, akin to taking a photograph (albeit without color). The First Return received by the LiDAR sensor (on the left in the image below) is used to build the DSM.


b) DEM - Digital Elevation Model - the laser pulses do not all reflect as First Returns. A large share of pulses penetrate natural and man-made structures, interact with the Bare Earth surface and reflect from it (to the right in the image below). The data for creating a DEM is therefore captured in the Subsequent Returns, and the visual is a rendition of how the area of interest would look had it been stripped of all its natural and man-made structures. This specific property - that of Bare Earth capture - is particularly useful in Archaeology, where historical remnants are often obscured by newer natural and/or man-made features. For example, the Bare Earth view below shows prehistoric ramparts and ditches in Shropshire, England - which would otherwise have been obscured by vegetation, as is evident in the First Return (DSM) image. Typically, archaeologists would have to manually strip large sections of a site of its vegetation before they get a whiff of the hidden treasure trove. With LiDAR, they can gauge the availability, extent and type of archaeological evidence well in advance, which greatly aids cost-effective digging operations subsequently.

Figure 4: Depiction and Comparison of Two Types of LiDAR Return. Source: An Introduction to LiDAR for Archaeology - AOC Archaeology Group 2015
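For those who like to see this distinction in data terms, first and last returns can be separated directly from a point cloud file. Below is a minimal sketch using the open-source laspy library - the file name is a placeholder, and this is independent of the ArcGIS Pro workflows shown in the videos:

```python
# Separating first returns (DSM candidates) from last returns (bare-earth / DEM candidates)
# using the open-source laspy library. The file name is a placeholder.
import laspy
import numpy as np

las = laspy.read("area_of_interest.las")

return_number = np.asarray(las.return_number)
number_of_returns = np.asarray(las.number_of_returns)

first_mask = return_number == 1                  # rooftops, canopy tops, etc.
last_mask = return_number == number_of_returns   # pulses that travelled furthest down

print(f"Total points:  {len(las.points):,}")
print(f"First returns: {first_mask.sum():,}  -> candidates for the DSM")
print(f"Last returns:  {last_mask.sum():,}  -> candidates for the DEM (after ground filtering)")
```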
 

LiDAR data acquisition needn't always be done using the airborne method. There are three distinct ways to capture high-resolution geo-datasets with LiDAR - from sensors installed on:

a) Airborne mediums such as Aircraft, Helicopters & Drones

Figure 5: Airborne LiDAR Representation. Source: GIM International & ResearchGate

b) Stationary Terrestrial LiDAR installed at ground level or at a specific height (e.g. when the data needs to be captured from an angle other than vertical, such as during construction).


Figure 6: Stationary LiDAR System. Source: Earth Observatory of Singapore, NTU

c) Mobile Terrestrial LiDAR systems set up on a vehicle (e.g. autonomous vehicles use these scanners to obtain a detailed understanding of the terrain whilst navigating; recent iPhone models have LiDAR sensors too, and the applications can be quite useful).


Figure 7: Mobile Terrestrial LiDAR depiction. Source: Geospatial World & Counterpoint Research respectively

Besides Sensor Orientation as explained above, there are other ways to classify LiDAR operations too. Refer here to read more about it.

 

To clarify, a fresh LiDAR acquisition isn't as refined as it appears in Figure 4.

Raw LiDAR output actually looks like this -

Video 2: LiDAR Raw Output (in Grey) is nothing but a dense agglomeration of Points


It is a dense assemblage of 'Points' i.e. Point Reflectances. This representation of LiDAR output in its raw form is called the Point Cloud.


Q: So you may wonder, even if the density of these LiDAR points is very high, how does one create a refined, seamless surface image like the DEM & DSM shown above (Figure 4)?


A: By predicting the missing elevation values in the gaps between the points. In Mapping / GIS (Geographic Information System) terms, the points are stitched together in the LiDAR data processing phase using a technique known as 'Interpolation'.


Learn more about Interpolation and some of the methods used here.
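To make the idea concrete, here is a small, self-contained Python sketch that grids scattered elevation points into a continuous surface using SciPy's griddata (linear interpolation; IDW, natural neighbor and kriging are other commonly used methods). The points are synthetic, purely for illustration:

```python
# Interpolating scattered elevation points (x, y, z) onto a regular grid,
# i.e. "stitching" a point cloud into a continuous surface. Synthetic data for illustration.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(42)
x = rng.uniform(0, 100, 5_000)           # easting (m)
y = rng.uniform(0, 100, 5_000)           # northing (m)
z = 50 + 5 * np.sin(x / 10) + 0.02 * y   # fake elevations (m)

# 1 m resolution target grid
grid_x, grid_y = np.meshgrid(np.arange(0, 100, 1.0), np.arange(0, 100, 1.0))

# Linear interpolation fills the gaps between points; 'nearest' and 'cubic' are alternatives
surface = griddata((x, y), z, (grid_x, grid_y), method="linear")

print(surface.shape)                         # (100, 100) raster-like elevation grid
print(np.nanmin(surface), np.nanmax(surface))
```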

 

LiDAR Data Processing


Processing LiDAR data is as interesting a concept as acquiring it. Dense LiDAR point clouds are meaningless if one isn't able to process them to extract features, classify terrain, chart elevation models and so on. For example, I could read up on all the theory on Imaging Spectroscopy (Hyperspectral Imaging), but understanding it via a practical data processing demonstration makes the concept and its potential benefits much clearer.


Hence, I thought of preparing 3 distinct workflow demonstrations, beginning with -


Caselet 1 - Extracting 3D Building Footprint from LiDAR Imagery Dataset


Refer Elaborate Case Video with Narration below-

Video 3: Narrated Demonstration of using LiDAR Point Clouds to extract 3D Building Footprint


The Building Footprints layer is vital for several workflows. Essentially, this layer contains all the buildings in a particular neighborhood, region or city; hence the consolidated view is called a 'Footprint'.


I have previously used this all-important Building Footprints layer on this blog in the Estimating Rooftop Solar Potential workflow as well as in the Line-of-Sight workflow (particularly relevant in security applications). Besides that, there are several other applications - urban planning, flood impact assessment, infrastructure studies, and so on. In a previous blog entry, I had used Esri's Deep Learning framework to extract a 2D Buildings Footprint near an airport in Madrid.


This time, the Buildings Footprint layer extracted is three-dimensional (3D) in nature. Besides ArcGIS Pro's smooth 3D data visualization and exploration capabilities captured in the video, which should aid your understanding of LiDAR, it is the method used to extract the Buildings Footprint from LiDAR point cloud files that captivated my attention.


Broadly, the process involved gnawing away all the irrelevant data from the raw LiDAR output (Ground Points, Low Noise, High Noise) so that what is ultimately left are the LiDAR points mostly covering the top sections of buildings or similar-sized infrastructure (why just the top section? - because the data had been acquired using airborne LiDAR).


Thereafter, the power of ArcGIS Pro's geoprocessing tools kicks in: one can set parameters and extract building shapes with accurate X, Y & Z values (length, breadth & height). What's more, the 'irrelevant' data that was detected and isolated early on actually proves useful - the Ground Points are used to create a high-resolution DEM (Digital Elevation Model, i.e. bare earth) of much higher quality than the default global 3D elevation in ArcGIS Pro (which is useful for macro-analysis only). The hi-res DEM facilitates an even more true-to-life visualization of the Buildings Footprint, not to mention the positive impact it has on the accuracy of the elevation (Z) values of the footprint layer.
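For readers who prefer scripting over the Ribbon, the same broad sequence can also be driven through ArcPy. The sketch below is my approximation of the steps described above rather than a verbatim copy of the Esri tutorial; the dataset paths, class codes and parameter values are placeholders, and exact tool options may vary with your ArcGIS Pro version and 3D Analyst licensing:

```python
# Approximate ArcPy sketch of the Caselet 1 sequence (3D Analyst extension required).
# Paths and parameter values are placeholders; tool options may differ by ArcGIS Pro version.
import arcpy

arcpy.CheckOutExtension("3D")
lasd = r"C:\data\portland.lasd"  # placeholder LAS dataset

# 1. Classify ground and noise so they can be set aside
arcpy.ddd.ClassifyLasGround(lasd, method="STANDARD")
arcpy.ddd.ClassifyLasNoise(lasd, method="ISOLATION", edit_las="CLASSIFY")
# (the noise tool may need extra neighbourhood parameters depending on the method chosen)

# 2. Classify the remaining above-ground returns that look like buildings
arcpy.ddd.ClassifyLasBuilding(lasd, min_height="2 Meters", min_area="4 SquareMeters")

# 3. Ground points (class 2) -> high-resolution bare-earth DEM
ground_layer = arcpy.management.MakeLasDatasetLayer(lasd, "ground_only", class_code=[2])
arcpy.conversion.LasDatasetToRaster(ground_layer, r"C:\data\dem_1m.tif",
                                    value_field="ELEVATION",
                                    sampling_type="CELLSIZE", sampling_value=1)

# 4. Building points (class 6) -> class raster -> polygons -> regularized footprints
bldg_layer = arcpy.management.MakeLasDatasetLayer(lasd, "buildings_only", class_code=[6])
arcpy.ddd.LasPointStatsAsRaster(bldg_layer, r"C:\data\bldg_class.tif",
                                method="PREDOMINANT_CLASS",
                                sampling_type="CELLSIZE", sampling_value=1)
arcpy.conversion.RasterToPolygon(r"C:\data\bldg_class.tif", r"C:\data\bldg_raw.shp",
                                 simplify="NO_SIMPLIFY")
arcpy.ddd.RegularizeBuildingFootprint(r"C:\data\bldg_raw.shp", r"C:\data\footprints.shp",
                                      method="RIGHT_ANGLES", tolerance=1)
```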


Not every step in the process is automated, though. There arises a need to manually inspect the output, analyze the data to know what parameters to use while running the geoprocessing tools, repair defective building footprints with other geo-tools in the software package, and so on. Overall, the entire processing chain is very wholesome - a blend of technology with human ingenuity.


Raw LiDAR Data to Processed Output Depiction for Caselet 1


Slider 1: Depiction of Raw v/s Processed Output for Caselet 1 - Extracting Buildings Footprint from LiDAR

 

Caselet 2 - Extracting Roof Forms from LiDAR Imagery Dataset


Refer Elaborate Case Video with Narration below-

Video 4: Narrated Demonstration of using LiDAR Point Clouds to extract 3D Building Roof Forms


This is, in a way, an extension to the first caselet on extracting Building Footprints. We actually use a geoprocessing tool on a Buildings Footprint layer to extract the Roof Forms of the buildings. Besides the LiDAR data visualization and DEM creation already covered in the previous caselet, what's unique about this caselet is how we use statistics (RMSE) to assess the accuracy of the roof elevation values. You may wonder what the roof's elevation values are compared against to arrive at the RMSE.


A. They are compared with the DSM's elevation values! Remember, LiDAR data acquisition includes the First Returns that help in extracting the Digital Surface Model. The DSM contains the elevation values (Z) of the surface with all the natural and man-made structures intact. Thus, we've processed one output in a particular way (Building Footprints to Roof Forms) and then used an innately derived output (the DSM, which is more accurate) to spot statistical errors in the former. Thereafter, we've used handy manual editing tools within ArcGIS Pro to correct roofs with undesirable RMSEs (red and orange in color to the right of the slider depiction below) so that the roof footprint, and consequently the elevation values, become more accurate.
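The accuracy check itself boils down to a familiar formula: RMSE = sqrt(mean((roof Z - DSM Z)^2)). Here is a tiny, generic sketch of that comparison - the values are made up, and in the actual caselet the computation is handled by the pre-packaged ArcGIS Pro tasks:

```python
# Generic RMSE check: roof elevation attribute vs. the DSM sampled at the same locations.
# Values are made up for illustration; the real workflow does this inside ArcGIS Pro.
import numpy as np

roof_z = np.array([41.8, 37.2, 55.0, 29.4, 48.1])  # Z from the extracted roof-form features (m)
dsm_z = np.array([42.1, 37.0, 57.6, 29.5, 48.0])   # Z sampled from the DSM at the same rooftops (m)

rmse = np.sqrt(np.mean((roof_z - dsm_z) ** 2))
print(f"RMSE = {rmse:.2f} m")

# A simple triage, akin to the colour-coded output in the video: flag roofs whose
# individual error exceeds a tolerance so they can be repaired manually.
tolerance = 1.0  # metres (illustrative)
flagged = np.abs(roof_z - dsm_z) > tolerance
print("Roofs needing manual repair:", np.where(flagged)[0].tolist())
```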


Another useful aspect to know is that certain commonly used workflows, such as this one and several others, are pre-packaged by the GIS software developer, Esri. This makes it very convenient for users: one doesn't need to memorize a workflow that often comprises several sequential steps, and can progress in a systematic and semi-automated way, reducing the chances of errors and omissions.


These Roof Forms can be used by municipal corporations, for example, to understand infrastructure development in the city, among other applications (similar to the Buildings Footprint applications mentioned in the previous caselet's explainer directly above this section).


Raw LiDAR Data to Processed Output Depiction for Caselet 2

Slider 2: Depiction of Raw v/s Processed Output for Caselet 2 - Extracting Roof Forms from LiDAR

 

Caselet 3 - Classifying Power Lines using Deep Learning (DL) on LiDAR Dataset


Refer Elaborate Case Video with Narration below-

Video 5: Narrated Demonstration of using Deep Learning Framework on LiDAR Point Clouds to identify Powerlines from Unseen Data.


I had actually received a project requirement 2+ years ago from a client who would have benefited from this workflow - he was heading the Power Transmission vertical at a large renewables company here in India. Power transmission infrastructure, being critical infrastructure for residents and commercial users, needs to be routinely inspected for damage, breakages or obstructions in its path (such as vegetation growing around it, or even hawkers setting up shop underneath, and so on).


Had I known this workflow back then, it'd have made for an even more convincing pitch to the customer. The project never materialized due to the raging Coronavirus situation in the country at that point in time (despite assurances from my proficient drone OEM that they'd attempt to address the client's unique processing requirement besides supplying the requisite drone unit).


In this caselet, I've used Esri's deep learning libraries (the same ones I had used to extract a 2D Building Footprint over an area of interest in Madrid) to classify the LiDAR points that represent power lines.

For those interested in understanding a fundamental concept in Deep Learning - that of Artificial Neural Networks - here's a lucid video explanation by Esri, USA which I'd recommend you watch.


I've also personally put in significant effort to understand the technicalities of the deep learning concept and the parameters to be used in the geoprocessing tools, so that I could explain them clearly in my video demonstration. This is partly why this caselet took me the longest to prepare: there was considerable learning I had to do myself before narrating it thoughtfully - I've used a 'sunflowers-in-a-park' analogy often - so that less technically savvy viewers can grasp the subject matter with relative ease. Hence, I won't write an elaborate technical note below; I'd much prefer that you watch the video to understand the concept at work.


In a nutshell, for those familiar with deep learning methodology, the process in this workflow entails training a deep learning classification model using a small stretch of the power-line 3D scene - a LiDAR point cloud that has already been classified (i.e. a sample dataset where power lines and other features in the extent have already been identified and labelled). We then test the trained model's efficacy on another similar-sized stretch of the power-line 3D scene - the validation dataset, also previously classified - which has different characteristics to the earlier small dataset. This difference is imperative for a validation dataset, as the objective is to see how well a model trained in one environment can still classify effectively in an environment significantly different from the one it originally learnt in.
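For those who would like to see roughly what this looks like in code, Esri's arcgis.learn Python API exposes point cloud classification models such as PointCNN. The sketch below is my approximation of the train-then-validate idea described above, not the exact tutorial steps; the paths, batch size and epoch count are placeholders, and method availability may differ across API versions:

```python
# Approximate sketch of training a point cloud classification model with Esri's
# arcgis.learn API (PointCNN). Paths and hyperparameters are placeholders, and the
# actual caselet drives the equivalent geoprocessing tools from within ArcGIS Pro.
from arcgis.learn import prepare_data, PointCNN

# Exported training blocks (the small, pre-classified stretch of the power-line scene)
data = prepare_data(r"C:\data\powerline_training",
                    dataset_type="PointCloud", batch_size=2)

model = PointCNN(data)
model.fit(epochs=10)          # more epochs / larger batches call for a stronger GPU

# Validation metrics indicate how well the model generalizes to the held-out stretch
# (assumes the API's precision/recall helper; check your arcgis.learn version)
print(model.compute_precision_recall())

model.save("powerline_pointcnn")  # the saved model can then classify unseen LAS tiles
```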


So essentially, we see how well the model trained on the original small dataset detects power-line points in a different, similar-sized dataset (i.e. the validation dataset). Please note that we've not used rigid DL algorithm parameters, as running DL algorithms is time-consuming, with the run time directly proportional to a) the size of the dataset and b) the rigidity of the parameters set.


The rate of accurate detections (Recall) indicates how well the DL model is 'learning'. Our model learned fairly well - but because we were using a small dataset and a consumer-grade GPU, the model's output did not have top-notch accuracy: it failed to detect a significant number of power-line points, all very vital if we need to do a study with real-life consequences. While the first two caselets had a nice blend of automatic and manual components in their workflows, the contrast in this caselet is much starker - the stronger the computing resources (GPU), i.e. the automation, the better the DL model will learn (and the quicker it will run), as we can then use rigid parameters and large datasets to train it. This doesn't discount the fact that one has to understand the finer aspects of the data acquisition as well as the objective of the workflow (to detect power lines, in our case) very well, because the technical parameters set in the DL classification geoprocessing tool influence the outcome in a big way.
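For clarity, recall here is simply the fraction of true power-line points that the model actually labelled as power line. A toy computation with made-up labels:

```python
# Toy recall computation on made-up class labels (1 = power line, 0 = everything else).
import numpy as np

true_labels = np.array([1, 1, 1, 1, 0, 0, 1, 0, 1, 1])   # ground truth for 10 points
pred_labels = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 1])   # what the model predicted

true_positives = np.sum((pred_labels == 1) & (true_labels == 1))
false_negatives = np.sum((pred_labels == 0) & (true_labels == 1))

recall = true_positives / (true_positives + false_negatives)
print(f"Recall = {recall:.2f}")   # 5 of 7 power-line points found -> 0.71
```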


Thus, in the latter half of the video, I did not actually use the DL classification model we had trained to detect and classify power-line points in the unseen / unclassified data (another large stretch of the 3D LiDAR scene). Instead, I used the DL classification model readily available in the tutorial, which was trained using an industrial-grade 24 GB GPU - and hence has a higher rate of accurate detections, i.e. it learned better and is more effective - to detect and classify points in the unseen dataset. The result was particularly awesome to see (towards the right of the slider view below) - I presume >95% of the points were classified accurately.


One can't help but be mesmerized by the effectiveness of Deep Learning (a variant of Machine Learning) and the vast number of applications in which it could be used - in isolation, or in conjunction with Artificial Intelligence and several other complex, technical, ingenious and perhaps as-yet-unheard-of technologies - to solve real-life problems with a great deal of effectiveness.


Raw LiDAR Data to Processed Output Depiction - Caselet 3


Slider 3: Depiction of Raw v/s Processed Output for Caselet 3 - Classifying Power Lines using Deep Learning from LiDAR

 

I hope you enjoyed reading this article and had a chance to see the videos. It took me enormously long to compile - slowly, steadily and lazily over a period of three months! I first learnt about LiDAR when I happened to read a BBC article explaining how the technology was used to discover new sites of archaeological relevance. I had actually received a project enquiry from a real-estate firm in Karnataka that wanted to LiDAR-map the Hindu temples in that state - hoping to unravel new facets of the architectural magnificence of these beautiful structures - an opportunity I had to reluctantly turn down at the time. My firm is in a much better position now to execute LiDAR projects - the technology and the understanding of it are much more mainstream today - although LiDAR is still significantly more expensive than traditional survey methods. Here's a video highlighting the Top 5 Applications of LiDAR, which you may consider viewing as well.


Do reach out to us with your LiDAR data acquisition or LiDAR data processing requirements (email mentioned below).


Thanks for reading & watching!

Figure 8: Map used as the headline of this article - shows the LiDAR points as well as the 3D Footprint generated from them. Source: Mapmyops
 

ABOUT US


Intelloc Mapping Services | Mapmyops.com is based in Kolkata, India and engages in providing Mapping solutions that can be integrated with Operations Planning, Design and Audit workflows. These include but are not limited to - Drone Services, Subsurface Mapping Services, Location Analytics & App Development, Supply Chain Services & Remote Sensing Services. The services can be rendered pan-India, some even globally, and will help an organization meet its stated objectives, especially pertaining to Operational Excellence, Cost Reduction, Sustainability and Growth.


Broadly, our area of expertise can be split into two categories - Geographic Mapping and Operations Mapping. The Infographic below highlights our capabilities.

Mapmyops (Intelloc Mapping Services) - Range of Capabilities and Problem Statements that we can help address

Our 'Mapping for Operations'-themed workflow demonstrations can be accessed from the firm's Website / YouTube Channel, and an overview can be obtained from this flyer. Happy to address queries and respond to documented requirements. Custom Demonstrations, Training & Trials are facilitated only on a paid basis. Looking forward to being of service.


Regards,
