From Point to Plot - Processing LiDAR datasets using GIS
- Arpit Shah

- Feb 27, 2023
- 8 min read
Updated: Dec 10, 2025

Introduction
Laser beams are fascinating, aren’t they? Focused and incisive. A higher form of intelligence, perhaps.
You can fight with them in a movie. Dance in their neon hues at concerts. Shine one into the night sky to see if it touches the clouds...or beyond. Or at the very least, flash it at sportspersons during crunch moments to distract them😁. It’s liberating that one can admire a technology without needing to know how it works.
Light Amplification by Stimulated Emission of Radiation—that’s what LASER stands for. I didn’t know either until I sat down to write this post. A laser beam feels distinct from ordinary visible light because it is monochromatic, directional, and coherent: a single wavelength (hence a single colour), a tightly focused beam, and waveforms that stay perfectly in sync. No wonder these high-intensity beams evoke such a strong feeling.
And feelings do matter. Laser pulses can sense the bare earth, terrain, and surrounding features—natural or built-up—in ways that most other illumination techniques cannot.

This is precisely what LiDAR (Light Detection and Ranging) takes advantage of. A LiDAR sensor emits laser pulses at extremely high rates (up to 150 kHz) and records dense returns (up to 150 points per square foot). When these returns are stitched together into a Point Cloud, they form a detailed 3D model of the landscape.
Tip: If you have an ancestral home and worry that it may be demolished someday—along with your memories—get it LiDAR scanned. Notre Dame Cathedral benefitted from LiDAR scans, and so can your cherished places.
Those familiar with my previous posts would know that I enjoy providing context before diving into the workflow. If you'd like to skip ahead, here are the three sections:
Extracting 3D Building Footprint from LiDAR data
Extracting 3D Roof Forms (an extension of the previous workflow)
Classifying Power Lines using Deep Learning on LiDAR data
Or, if you prefer demonstrations, here is a visual walkthrough of all three:
Video Timestamps
00:05 - Case Details
00:19 - Caselet 1 - Extracting 3D Building Footprint from LiDAR Imagery
00:23 - C1 - Workflow 1: Setting up & exploring the dataset
03:43 - C1 - Workflow 2: Classifying the LiDAR Imagery Dataset
10:44 - C1 - Workflow 3: Extracting Buildings Footprint
14:12 - C1 - Workflow 4: Cleaning up the Buildings Footprint
17:25 - C1 - Workflow 5: Extracting 'Realistic' 3D Building Footprint
20:47 - Caselet 2 - Extracting 3D Roof Forms from LiDAR Imagery
20:51 - C2 - Workflow 1: Setting up the Data & Creating Elevation Layers
30:16 - C2 - Workflow 2: Creating 3D Buildings Footprint
33:54 - C2 - Workflow 3: Checking Accuracy of Building Footprints & Fixing Errors
42:06 - Caselet 3 - Classifying Power Lines using Deep Learning model on LiDAR Dataset
42:10 - C3 - Workflow 1: Setting up and Exploring the Dataset
46:23 - C3 - Workflow 2: Training the DL Classification Model using a Sample Dataset
51:31 - C3 - Workflow 3: Examining the Output of the Sample-Trained DL Classification Model
53:27 - C3 - Workflow 4: Training the DL Classification Model using a Large Dataset
58:12 - C3 - Workflow 5: Extracting Power Lines from the LiDAR Point Cloud Output
59:46 - Summary Note & Contact Us
Credits: Esri Learn ArcGIS
LiDAR uses amplified light, with pulses discharged at rapid rates. It operates in the near-infrared, visible, and ultraviolet wavelengths—the same regions covered by solar radiation, but at far greater intensity.

The wavelength chosen depends on the application. For example,
Topographic surveys typically use near-infrared (invisible to humans and sensor-friendly).
Bathymetric surveys rely on green light, which penetrates water better.
Here are some of the top applications of LiDAR, in case you are interested.
LiDAR enables the creation of high-resolution 3D digital models. Elevation (the Z value) is what transforms a flat 2D image into a rich 3D surface. The density of returns and the precision of the Z values determine how closely the model resembles reality.
Two of the most common elevation models are:
Digital Surface Model (DSM) - Captures the first returns—laser hits on treetops, roofs, and other above-ground objects. Useful for mapping buildings, bridges, power lines, solar panels, and more. Essentially, a DSM is a 3D rendition of the surface with all natural and built-up features intact (refer to the left image in Figure 4 below).
Digital Elevation Model (DEM) - Constructed from the last returns—those that reach the ground after bypassing vegetation and structures. Crucial for archaeology, hydrology, engineering, and terrain studies. Essentially, a DEM is a 3D rendition of the bare-earth surface, stripped of all natural and man-made structures (refer to the right image in Figure 4 below). A minimal first-return/last-return filter is sketched just after this list.
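For readers who like to see this in code, here is a minimal sketch of the first-return/last-return split using the open-source laspy library (not the ArcGIS Pro tool chain shown later in this post); the file name is a placeholder and the point cloud is assumed to be a standard LAS file.

# Minimal sketch: separating first and last returns with laspy.
# "survey_tile.las" is a hypothetical file name.
import laspy
import numpy as np

las = laspy.read("survey_tile.las")
rn = np.asarray(las.return_number)
nr = np.asarray(las.number_of_returns)

first_mask = rn == 1     # first returns: canopy tops, rooftops -> feeds a DSM
last_mask = rn == nr     # last returns: mostly ground hits -> feeds a DEM (after ground filtering)

print(f"First returns: {first_mask.sum():,} | Last returns: {last_mask.sum():,}")
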

The DEM’s ability to reveal what lies beneath vegetation is particularly valuable. For example, it is used at archaeological sites to detect historical remnants hidden beneath natural features. In the example above, prehistoric ramparts and ditches in Shropshire, England, are invisible in the DSM but revealed in the DEM.

Without LiDAR or Ground-Penetrating Radar (GPR), archaeologists would have to remove large patches of vegetation before discovering buried structures. DEMs allow them to plan excavations efficiently and cost-effectively.
Bare-earth models also support road and railway planning, forestry management, wind turbine siting, landslide studies, and volcano deformation mapping.
A variant of DEM, the Digital Terrain Model (DTM), is often used for analyses such as shoreline change detection.
To acquire LiDAR data, sensors can be mounted in four ways:
Spaceborne - placed on satellites

Airborne - placed on aircraft and drones

Stationary terrestrial - placed on tripods or other fixed positions at surface level (on the ground or perched)

Mobile terrestrial - placed on cars, vans, and multi-terrain vehicles (Trivia: iPhone Pro models from the iPhone 12 Pro onwards also carry a LiDAR sensor—famously used by a blind person to navigate a street)

Raw LiDAR output ≠ ready-to-use data (the kind you see in Figure 4). It actually looks like this-
Raw LiDAR is just a chaotic cloud of points, not the smooth surfaces you saw in Figure 4.
So how does it transform?
Through interpolation, classification, filtering, and other statistical and geospatial operations. If you'd like to know more about interpolation, refer to this content.
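To make the interpolation step a bit more concrete, here is a minimal sketch that grids ground-classified returns into a regular elevation surface using SciPy. The file name, the ASPRS ground class code (2), and the 1 m cell size are assumptions, and production tools (including those used in the workflows below) apply far more sophisticated methods.

# Minimal interpolation sketch: scattered ground returns -> regular DEM grid.
import laspy
import numpy as np
from scipy.interpolate import griddata

las = laspy.read("survey_tile.las")                # hypothetical tile
ground = np.asarray(las.classification) == 2       # ASPRS class 2 = ground
x = np.asarray(las.x)[ground]
y = np.asarray(las.y)[ground]
z = np.asarray(las.z)[ground]

# Build a regular 1 m target grid covering the tile's extent
xi = np.arange(x.min(), x.max(), 1.0)
yi = np.arange(y.min(), y.max(), 1.0)
xg, yg = np.meshgrid(xi, yi)

# Linearly interpolate elevation onto the grid (NaN where no nearby points)
dem = griddata((x, y), z, (xg, yg), method="linear")
print("DEM grid shape:", dem.shape)
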
Processing LiDAR data is just as fascinating as acquiring it. Raw point clouds can be processed to generate bare-earth elevation models as well as to detect and classify the natural and built-up features above them.
Let me demonstrate three processing workflows.
Caselet 1 - Extracting 3D Building Footprint from LiDAR Data
A building footprint stores X, Y, and Z information for built-up structures. It is widely used for rooftop solar potential assessment, line-of-sight studies, feature extraction, urban planning, risk assessment, and other studies.
In the video below, I demonstrate the extraction of a 3D building footprint from the LiDAR point cloud using the powerful geospatial platform, Esri's ArcGIS Pro. I will:
Filter ground returns and noise from the raw Point Cloud.
Preserve only the top-of-building returns (since the LiDAR is airborne).
Use geoprocessing tools to automatically extract building outlines with height.
Generate a DEM from ground returns.
Combine the footprint and DEM to create a realistic digital twin (a rough sketch of the height computation follows this list).
This workflow is semi-automated but still requires manual inspection and corrections—a blend of computation and human judgment.
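The video relies on ArcGIS Pro's geoprocessing tools for these steps. Purely as a rough open-source analogue of the "footprint with height" idea, here is a sketch that bins already-classified returns into a grid and differences roof and ground elevations; the tile name, 1 m cell size, and the assumption that the cloud carries ASPRS classes 2 (ground) and 6 (building) are all mine.

# Rough open-source analogue of the footprint-height step on a pre-classified tile.
import laspy
import numpy as np

las = laspy.read("classified_tile.las")            # hypothetical input
x, y, z = np.asarray(las.x), np.asarray(las.y), np.asarray(las.z)
cls = np.asarray(las.classification)

cell = 1.0                                         # 1 m grid (assumption)
ix = ((x - x.min()) / cell).astype(int)
iy = ((y - y.min()) / cell).astype(int)
shape = (iy.max() + 1, ix.max() + 1)

roof = np.full(shape, -np.inf)                     # highest building return per cell
b = cls == 6
np.maximum.at(roof, (iy[b], ix[b]), z[b])

ground = np.full(shape, np.inf)                    # lowest ground return per cell
g = cls == 2
np.minimum.at(ground, (iy[g], ix[g]), z[g])

height = roof - ground                             # approximate building height per cell
height[~np.isfinite(height)] = np.nan              # cells lacking either kind of return
print("Tallest structure (m):", np.nanmax(height))
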
Slider 1: Raw LiDAR data vs Processed Output
Caselet 2 - Extracting 3D Roof Forms from LiDAR Data
Here, besides repeating the data visualization and DEM generation steps shown previously, I will utilize a specific geoprocessing tool to extract just the Roof Forms from the Building Footprint layer. These extracted features can be used by municipal corporations, for example, to understand the adoption of rooftop solar panels in a neighborhood.
Broadly, the processing chain in this workflow entails:
Visualising the Point Cloud
Generating a DEM
Using a dedicated algorithm to extract only the roof forms
Computing Root Mean Square Error (RMSE) to assess accuracy (a minimal RMSE sketch follows this list)
Comparing roof elevations with the DSM (from first returns)
Manually editing inaccurate roofs (those with high RMSEs - such as the ones marked in red and orange on the right image in Slider 2 below).
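The RMSE check itself is conceptually simple: take the difference between each modelled roof elevation and the corresponding DSM elevation, square, average, and take the root. A minimal sketch with illustrative numbers:

# Minimal RMSE sketch: modelled roof elevations vs DSM elevations at the same points.
# The values below are purely illustrative.
import numpy as np

roof_model = np.array([12.4, 9.8, 15.1, 7.6])   # roof-form elevations (m)
dsm_sample = np.array([12.1, 9.9, 14.2, 7.5])   # DSM elevations at the same locations (m)

rmse = np.sqrt(np.mean((roof_model - dsm_sample) ** 2))
print(f"RMSE = {rmse:.2f} m")                    # high values flag roofs needing manual edits
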
Esri packages many of these processing chains into its GIS software, which makes it very convenient for users: they can execute the sequential steps in a semi-automated manner, saving time and reducing the chances of errors and omissions.
Slider 2: Raw LiDAR data vs Processed Output
Caselet 3 - Identifying Powerlines using Deep Learning on LiDAR Data
Power transmission infrastructure requires frequent inspection. During the COVID-19 pandemic, I even received a requirement from a client who wanted automatic alerts—e.g., vegetation encroachment or hawker activity—around powerlines using Deep Learning on drone data. If only I had known this workflow then 😑.
In this demonstration, I use Esri’s Deep Learning tools (previously demonstrated here) to classify a LiDAR point cloud, with the aim of identifying the point returns that have bounced off the power lines, not off the transmission towers or any adjacent built-up or natural features.
(Note: Since my own Deep Learning knowledge is still nascent, I've tried to understand the technicalities and the parameters used in the geoprocessing steps in depth so that I can explain them clearly during the video demonstration, where I use a proprietary Sunflowers-in-a-Park analogy 😊. If you would like to understand the fundamental concept driving Deep Learning, Artificial Neural Networks, here's a lucid video explainer.)
The processing chain in this workflow entails:
Training a model on a labelled cross-section of the Point Cloud
Validating it on another classified section
Detecting only the point returns belonging to powerlines
Measuring performance using Recall
Due to my modest 2 GB GPU and intentionally lenient parameters, my model's recall was decent but not exceptional. Later in the demonstration, I validate the results using another model trained on a 24 GB industrial GPU, which performs significantly better (refer to Slider 3 below).
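For reference, recall here is simply the fraction of true power-line points that the model manages to catch. A minimal sketch with illustrative labels (1 = power line, 0 = everything else):

# Minimal recall sketch on illustrative point labels.
import numpy as np

y_true = np.array([1, 1, 1, 0, 0, 1, 0, 1])      # ground-truth classes
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 1])      # model predictions

tp = np.sum((y_pred == 1) & (y_true == 1))       # power-line points correctly found
fn = np.sum((y_pred == 0) & (y_true == 1))       # power-line points missed
recall = tp / (tp + fn)
print(f"Recall = {recall:.2f}")                  # 4 of 5 power-line points found -> 0.80
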
Slider 3: Raw LiDAR data vs Processed Output
One can't help but be amazed by the prowess of Deep Learning (a subset of Machine Learning) and its far-reaching applications when combined with advanced sensing technologies.
Conclusion
I hope you enjoyed reading this post and watching the demonstrations. Preparing everything took time—slowly, steadily, and lazily over three months.
I was introduced to LiDAR through an article describing how archaeologists used it to uncover hidden historical structures. Later, an architectural firm approached me to LiDAR-map Hindu temples in Karnataka to study their ancient architectural nuances—an opportunity I wish had come now, when my firm is equipped to take up such assignments.
Feel free to reach out if you have LiDAR acquisition or processing requirements.

ABOUT US - OPERATIONS MAPPING SOLUTIONS FOR ORGANIZATIONS
Intelloc Mapping Services, Kolkata | Mapmyops.com offers a suite of Mapping and Analytics solutions that seamlessly integrate with Operations Planning, Design, and Audit workflows. Our capabilities include — but are not limited to — Drone Services, Location Analytics & GIS Applications, Satellite Imagery Analytics, Supply Chain Network Design, Subsurface Mapping and Wastewater Treatment. Projects are executed pan-India, delivering actionable insights and operational efficiency across sectors.
My firm's services can be split into two categories - Geographic Mapping and Operations Mapping. Our range of offerings are listed in the infographic below-

A majority of our Mapping for Operations-themed workflows (50+) can be accessed from this website's landing page. We respond well to documented queries/requirements. Demonstrations/PoCs can be facilitated on a paid basis. Looking forward to being of service.
Regards,




