Into the World of Imaging Spectroscopy
Updated: Sep 1, 2022
The investigative aspect of Imagery Analytics is, I suppose, what draws enthusiasts to this field: scanning images to extract something meaningful, be it objects, materials or processes.
In this article, you will read about a powerful form of capturing and analyzing imagery - Imaging Spectroscopy, or what is known in common parlance as Hyperspectral Imaging. I'll summarize my understanding of imaging fundamentals first before delving into the subject matter, and end with an elaborate video recording that offers practical insights into its processing operations and usage.
Imagery can be captured using two mechanisms: a) a Passive sensor, which captures radiation reflected from an object, the illumination source being sunlight, or b) an Active sensor, which carries its own source of illumination and captures its reflected radiation (examples: a camera flash, radar's radio waves).
Now, the natural-colored photograph that we routinely see is captured by a passive sensor and is depicted in three bands - Red, Green and Blue (RGB), which constitute what are known as the primary colors. Using a particular blend / combination of the primary colors, one can reproduce almost all the colors within the visible spectrum, i.e. radiation wavelengths roughly between 400 and 700 nm (nanometers), which the human eye can detect. One can use these images to detect a variety of objects, materials and processes, as we all do in some way, every day.
Alongside the visible spectrum (RGB), satellites carrying passive sensors can also capture wavelengths slightly beyond it, such as Near-Infrared, Short-Wave Infrared, Ultra-Blue etc. Sunlight contains these wavelengths - it is just that we cannot see them with the naked eye.
Imagery captured by such satellites is called Multispectral imagery (comprising reflected wavelengths from the visible spectrum plus slightly beyond on either side). This type of imagery typically contains 10+ bands, each covering a range of wavelengths (e.g. 200-350 nm, 500-600 nm etc.). The more bands there are in a particular image, the more information we have at our disposal during processing and analysis.
To depict imagery on a screen, we have to use Band Combinations. In simple words, Band Combinations are 'settings' to visualize imagery in a certain color combination (wavelength combination). To analyze the image in multiple ways, we have to deploy various types of Band Combinations, i.e. Band Manipulation. This is done to highlight or suppress certain imagery characteristics and move us closer to identifying what we have set out to find.
Band Combinations are set using three channels in Multispectral imagery. To visualize the image in natural colors (RGB mode), the Band Combination would entail Red occupying Channel 1, Green occupying Channel 2 and Blue occupying Channel 3. The Red wavelength is present in a particular band range which we'd have to select, and likewise for the other two wavelengths (colors).
For example, in Sentinel-2 satellite imagery: Red occupies Band 4, Green Band 3 and Blue Band 2. (Bands are not standardized across all satellites.)
Think of a luggage lock - Inserting 4, 3, 2 as the combination would unlock the suitcase for you giving you access to the natural colored photograph within.
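To make the 'lock combination' concrete, here is a minimal numpy sketch of stacking bands into the three display channels. The arrays are random stand-ins; a real workflow would read the actual band rasters from a Sentinel-2 product.

```python
import numpy as np

# Hypothetical stand-ins for a tiny 4x4 scene's reflectance bands
# (B2 = Blue, B3 = Green, B4 = Red in the Sentinel-2 convention).
rng = np.random.default_rng(0)
bands = {n: rng.random((4, 4)) for n in (2, 3, 4)}

def compose(c1, c2, c3):
    """Place three bands into the display channels (Channel 1, 2, 3)."""
    return np.dstack([bands[c1], bands[c2], bands[c3]])

# Natural color: the 4, 3, 2 'combination' unlocks the RGB photograph.
rgb = compose(4, 3, 2)
print(rgb.shape)  # (4, 4, 3): rows, columns, display channels
```

Swapping the band numbers passed to `compose` is all that band manipulation amounts to at the array level.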
Read this article to know more about some of the commonly used multispectral band combinations in Sentinel-2 satellite imagery, and how manipulating the bands can help us detect specific surface characteristics - such as moisture content, healthy or unhealthy vegetation, geological features etc. - which are not discernible in common optical imagery captured using wavelengths from the visible spectrum.
In my previous articles on this blog, I've analyzed multispectral imagery to assess blast damage, detect seaweed, detect a glacial fault, map waterbodies, map crop types, map forest fires and visualize pollution. You can read some of these to know more about satellite imagery and its multispectral nature.
Please note that Multispectral imagery is different from SAR - Synthetic Aperture Radar Imagery.
SAR (Radar imagery) has several notable advantages over Multispectral imagery (optical imagery), primarily because it uses its own source of illumination - radio waves from an active sensor.
You can read some of my Imagery analytics work using SAR satellite imagery published on this blog from this link (All excluding Multispectral).
Some of you may wonder-
Q: Why are there 3 channels in Multispectral imagery?
A: To mimic the human eye, whose retina contains 3 classes of cone photo-receptors. These are adept at recognizing Long, Medium and Short wavelengths respectively. Red has the longest wavelength, followed by Green, while Blue has the shortest. Hence, R, G and B are input into the 3 channel slots respectively to visualize the image in natural color mode.
To understand the concept of channels better, you may read these informative articles - 1 & 2.
Q: In how many ways can we visualize a Multispectral image?
A: We can input into the three channels (some of the ways I know and have used; this may not be exhaustive) - 1) any of the available wavelength bands, 2) only one or two bands, leaving the remaining channel(s) empty, 3) a new value derived using Band Maths (for example, Band 1 + Band 8), or 4) any variation combining options 1), 2) and 3). There is therefore a multitude of ways (band combinations) to analyze an image. The skill lies in identifying the method which gives us the best chance of detecting the particular object, material or process.
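The channel-filling options above reduce to simple array operations. A hedged numpy illustration with synthetic bands (band numbers follow the Sentinel-2 convention used earlier; the values are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(1)
b1, b4, b8 = (rng.random((4, 4)) for _ in range(3))  # stand-in bands

# Option 3: Band Maths - derive a new value, e.g. Band 1 + Band 8.
derived = b1 + b8          # a display tool would rescale this to 0-1

# Option 2: leave a channel empty (all zeros here) for a dual-band view.
empty = np.zeros_like(b4)

# Option 4: combine the options across the three channels.
composite = np.dstack([derived, b4, empty])
print(composite.shape)     # (4, 4, 3)
```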
Q: Which factors determine the ideal method we should use for our Multispectral imagery analytics study?
A: The most important parameter is how our subject of interest (a material's surface, in most cases) interacts with the passive source of illumination - sunlight. However, this is but one aspect of the complete process involved.
To elaborate, there are three aspects which we have to take into consideration - a) how the illumination radiation interacts with particles while entering the atmosphere, b) how the illumination radiation interacts with our subject of interest (matter) on the earth's surface, and c) how the reflected radiation interacts with atmospheric particles en route to the spaceborne satellite sensor.
For a) and c), we have to factor in the extent of radiation absorption and radiation scattering, as depicted in the visual below -
For aspect b), we have to factor in the following parameters - 1) Reflective properties of the object (whether the surface is rough or smooth), 2) Geometric Effect of reflection (angle of illumination and of reflection) and 3) Bio / Geo / Chemical characteristics of the object under consideration (factors such as moisture content, mineral properties, size of object etc.).
Researchers study these interactions between illumination and atmosphere, and between illumination and matter, extensively to determine the best possible ways to analyze the imagery and extract the desired information. Their final output takes the form of validated methods of band manipulation, imagery post-processing / correction methodologies, and suitable band combinations.
The three parameters pertaining to aspect b) mentioned above can be better understood from these visuals -
3) Bio / Geo / Chemical properties
These are the characteristics of the substance which influence how illumination interacts with it. For example, certain bio/geo/chemical properties of soil are moisture, organic content, mineral composition, grain size etc. Similarly, certain properties of vegetation which influence how illumination interacts with it are moisture, chlorophyll content, species, phenology etc.
To show you an example, below is the band combination (Infrared) which is commonly used to assess the health of vegetation. You'll know straightaway by looking at the image that this is not a natural color image - band manipulation has been done to emphasize the chlorophyll content (more red means more chlorophyll, implying healthier vegetation).
Band 8 was input in Channel 1, Band 4 in Channel 2 and Band 3 in Channel 3 of this Sentinel-2 imagery to visualize it in this specific manner.
I felt it was important to explain imaging fundamentals in as much detail as I've done (halfway into this article). This should help readers understand the concept behind Imaging Spectroscopy (Hyperspectral Imaging) with more clarity, and in what ways it is different from Multispectral Imaging.
Let's jump into it, beginning with its definition, as described in the introductory course on Hyperspectral Imaging by EO College -
"Imaging spectroscopy refers to imaging sensors measuring the spectrum of solar radiation reflected by Earth surface materials in many contiguous wavebands, on the ground as well as air or spaceborne.../...instead of the usual three RGB channels there are up to hundreds of bands that allow identification and often quantification of materials based on the shape of the spectral curve. "
Contrary to what you may perceive, it is important to highlight that the topic of this article - imaging spectroscopy - is not a new concept / technology. In fact, the first imaging spectrometers (devices used to capture hyperspectral imagery) became operational as far back as 1982. However, these were installed on research flights (airborne) to capture footage over relatively small areas at select locations, and I suppose only a handful of researchers would have had access to the readings to conduct further investigations. Only after spectrometers were launched onboard satellites in the 2000s did the technology, and the usage of its output, become mainstream.
As with any new technology, later versions iron out the original's flaws and benefit from the growth of the ecosystem around the concept, and so is the case with Imaging Spectroscopy.
Since 2019, multiple new variants of spaceborne Hyperspectral sensors have been launched and newer algorithms have been developed which promise to usher in a new era of mapping, like never before, the geochemical, biochemical and biophysical properties of the Earth's surface and atmosphere.
Q: So how is Hyperspectral imagery superior to Multispectral imagery?
A: Each surface material has a unique spectral signature (consider it a fingerprint of sorts) - a graph showing the energy reflectance values at various wavelength ranges, i.e. indicating how the surface material has interacted with the sun's illumination. (This phenomenon is explained in more detail in the next section.)
The narrower the range of wavelengths in a band, the finer its spectral resolution. Finer spectral resolution implies a more granular categorization of a material's reflectance readings (i.e. the spectra). This is one of the superior aspects of Hyperspectral Imaging.
Additionally, from the image below, you will gather that while multispectral bands are segregated into a certain number of non-contiguous blocks, hyperspectral imagery has contiguous bands, which give a continuous reading of the reflected radiance throughout the 'visible spectrum and slightly beyond' range. This uninterrupted view (also known as the Hyperspectral Cube) is another superior aspect of Hyperspectral when compared to Multispectral imagery. It is of much value in applications which require researchers to detect objects or processes from subtle differences in the reflectance signal (more on this in the next sections).
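The hyperspectral cube is essentially a 3-D array: two spatial axes plus one contiguous spectral axis. A small numpy sketch (the dimensions, band count and wavelength range are illustrative assumptions, not any specific sensor's figures):

```python
import numpy as np

# A hyperspectral cube: rows x columns x contiguous spectral bands.
rows, cols, n_bands = 50, 50, 200
wavelengths = np.linspace(400, 2400, n_bands)       # band centres in nm
cube = np.random.default_rng(2).random((rows, cols, n_bands))

# One pixel's full, uninterrupted spectrum - its spectral signature view.
spectrum = cube[10, 20, :]
print(spectrum.shape)                               # (200,)

# A multispectral sensor would instead sample a handful of separated
# blocks, e.g. four non-contiguous bands out of the same range:
multispectral_view = cube[:, :, [10, 40, 80, 150]]
```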
To summarize, higher spectral resolution and uninterrupted readings due to contiguous bands are what largely account for the difference between Hyperspectral and Multispectral Imaging.
However, please note that excess information is not always good information. In truth, it is a double-edged sword. Because the bands are so closely knit in Hyperspectral, the non-useful bands generate significant noise in the imagery whilst the selected useful bands are being analyzed. This is very problematic for researchers and entails significant correction / post-processing to filter it away. It goes to show that sometimes less information is better (less choice, less noise and more clarity).
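One common way to tame this band redundancy and noise is dimensionality reduction; Principal Component Analysis is my choice of technique here for illustration (the article's correction methods are not specified). A minimal numpy sketch on synthetic data - pixel counts, band counts and noise level are all made-up assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n_pixels, n_bands = 1000, 200
# Synthetic cube flattened to (pixels, bands): a few real spectral
# signals mixed across all bands, plus small sensor-like noise.
signal = rng.random((n_pixels, 5)) @ rng.random((5, n_bands))
data = signal + 0.01 * rng.standard_normal((n_pixels, n_bands))

centred = data - data.mean(axis=0)
# SVD-based PCA: keep the leading components, discard the noisy tail.
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
k = 5
reduced = centred @ Vt[:k].T        # (pixels, k) compact representation
explained = (S[:k] ** 2).sum() / (S ** 2).sum()
print(round(explained, 3))          # most variance survives in k bands
```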
Operationally, the cost of Hyperspectral data is also high, and processing and analyzing it is more difficult than Multispectral data. However, with more spaceborne missions and better algorithms being developed, one expects these deficiencies to ease over time.
Q: Can you elaborate spectral signature in more detail, though?
A: The concept of a spectral signature may be confusing to some. To explain using a simple example - imagine a mixture comprising multiple substances. Using a measurement device, we obtain readings of the mixture, i.e. how its contents behave across a range of temperatures. Let's call the chart below the temperature signature spectra of the mixture.
From the depiction below, can you tell how many substances are present in the mixture, and which substance is H2O (water)?
There are five substances, and Substance C is H2O (water). Most of you would have been able to identify H2O very quickly because its temperature signature is very well known: H2O exists in a solid state below 0 degrees Celsius, in a liquid state between 0 and 100 degrees Celsius, and in a gaseous state above 100 degrees Celsius.
A spectral signature is the same concept - just that instead of temperature, we see how objects respond to illumination radiation, measured across wavelengths in nm (nanometers).
For example, in the visual below, you can see the spectral signatures of different types of vegetation. At lower wavelengths, the reflectance properties of the leaf pigments play a role in determining whether the vegetation is healthy, stressed or dry. Healthy vegetation absorbs more illumination, i.e. reflects less (which is natural when you think of it - healthy plants would tend to absorb sunlight for photosynthesis).
The same principle applies to the vegetation's water content properties, which respond to higher wavelengths of illumination - dry and stressed vegetation reflects significantly more illumination than healthy vegetation does.
To drive home the point: it is easier to delineate various types of vegetation from Hyperspectral imagery because it captures information in contiguous bands, i.e. we can see the spectral response of the object across the full wavelength range of the visible spectrum and slightly beyond.
When it comes to Multispectral imagery, the spectral signature view is not uninterrupted - it may show you data between 400 and 500 nm, then from 750 to 950 nm, then from 1100 to 1250 nm, and so on. Now imagine that you were tasked with delineating the various types of vegetation using only the reflectance spectra between the 1100 and 1250 nm wavelength range from the visual above. It would be tremendously complicated. Contiguous information capture allows us to understand how the material responds to illumination across the complete wavelength range, thereby giving us a greater chance to successfully demarcate and detect.
In contrast, refer to the image on your left, which shows the spectral signatures of Open and Coastal Water. Because water reflects only a tiny amount of illumination, and only at lower wavelengths, we can use even Multispectral imagery to detect and distinguish these two types of water: their spectral signature differential is contained in a narrow wavelength range. The vegetation types, on the other hand, had very complex, intertwining spectral signatures across the spectrum and would have needed Hyperspectral imaging for accurate distinguishing / detection.
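One widely used way to compare a pixel's spectrum against known reference signatures such as these is the Spectral Angle Mapper (my pick of a standard hyperspectral technique; it is not prescribed by this article). A minimal sketch with hypothetical reference spectra - every number below is an illustrative assumption, not a measured value:

```python
import numpy as np

def spectral_angle(spectrum, reference):
    """Angle (radians) between two spectra; smaller = more similar."""
    cos = np.dot(spectrum, reference) / (
        np.linalg.norm(spectrum) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Hypothetical reference signatures (reflectance at a few wavelengths).
refs = {
    "healthy veg": np.array([0.05, 0.08, 0.05, 0.50, 0.45, 0.30]),
    "dry veg":     np.array([0.10, 0.15, 0.20, 0.35, 0.40, 0.42]),
    "water":       np.array([0.08, 0.06, 0.04, 0.01, 0.01, 0.01]),
}

pixel = np.array([0.06, 0.09, 0.06, 0.48, 0.44, 0.31])  # unknown pixel
best = min(refs, key=lambda name: spectral_angle(pixel, refs[name]))
print(best)  # -> healthy veg
```

A nice property of the angle measure is that it is insensitive to overall brightness, so shadowed and sunlit pixels of the same material still match.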
I must emphasize that the spectral reflectance signature is a combination of multiple factors (reflective, geometric & biogeochemical) and should not be attributed to a single factor or seen in isolation.
Below is a short demo video which shows how vegetation readings are impacted by three select properties (two of them biogeochemical and one geometric) and how the spectra respond to various iterations of these properties.
Just as we know that a higher-resolution camera on our phones implies cleaner and sharper images, not all types of Hyperspectral imagery are equivalent. Our requirements and the material's properties determine the type of Hyperspectral imagery best suited, and / or the scanning system we'd have to deploy to generate it.
Some Hyperspectral imagery is best captured using ground-based sensors in field / laboratory settings (for objects with very complicated / intertwining spectral reflectance signatures, under high-resolution settings), whereas other Hyperspectral imagery is preferably captured by spaceborne sensors onboard satellites (wider coverage, lower resolution, much lower cost, for objects with less complicated spectral reflectance signatures). In between comes Hyperspectral imagery captured from airborne sensors onboard research flights (less coverage but at a higher resolution and a higher cost, for objects with mid-to-high complexity spectral reflectance signatures).
However, please note that the concept of 'resolution' of sensor in imagery is not as simple as that of the mobile phone camera terminology we are used to.
Satellite Sensor Resolution has four aspects-
a) Spectral Resolution - the range of illumination wavelengths to which the sensor is sensitive (from 400 nm to 2400 nm, for example) and how narrowly that range is divided into bands
b) Spatial Resolution - this is most like the mobile phone camera resolution we are all familiar with: the measure of the smallest feature which the sensor can detect (30 metres, 5 cm etc.)
c) Temporal Resolution - how quickly the sensor can revisit the same object. Spaceborne sensors are onboard satellites which revolve around the planet. For certain earth observation requirements, it is important to have datasets captured at fairly regular intervals to maintain continuity and prevent extensive distortion. Think of a use case which requires frequent hyperspectral data during the monsoons (rice cultivation, for example). Higher temporal resolution is important here because during the monsoons there will inevitably be cloud cover which impacts the sensor readings; more frequent revisits imply a greater chance for the sensor to get a reading over the cultivation region on days with limited cloud cover.
d) Radiometric Resolution - this concerns the quality / strength of the sensor in capturing the reflected signals accurately, i.e. with limited noise. Please note that this is not referring to the scanning system deployed by the sensor (there are various methods of scanning objects to obtain the best results - see the video here if you wish to know more).
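To put a number on radiometric resolution, it is commonly expressed as bit depth: each extra bit doubles the count of distinct intensity levels a sensor reading can hold. A quick arithmetic illustration (Sentinel-2's MSI, for instance, records 12-bit data):

```python
# Radiometric resolution as bit depth: levels = 2 ** bits.
for bits in (8, 12, 16):
    levels = 2 ** bits
    print(f"{bits}-bit sensor: {levels} distinguishable levels")
# 8-bit: 256, 12-bit: 4096, 16-bit: 65536
```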
As you would imagine, we can't have the best of all worlds in our ideal Hyperspectral sensor. There are trade-offs to consider, and the usage requirements determine the sensor configuration. A higher spatial resolution requirement means we'd have to capture imagery over our area of interest more slowly (we'd have to opt for an airborne sensor rather than a spaceborne one, for example), whereas a lower spatial resolution requirement implies scope for higher temporal resolution (quick coverage at more frequent intervals from spaceborne rather than airborne). Similarly, there are trade-offs between spectral and spatial resolution, and between the other forms of resolution as well.
Next, I would particularly like you to see how Hyperspectral imagery is captured using on-ground techniques (in-field and in-laboratory settings). This should give you a very interesting insight into the importance placed on the correct 'process' for capturing imagery data, which is as important as, if not more than, the technology involved in capturing it.
(Source of Videos: HYPERedu, EnMAP education initiative)
a) Field Settings
b) Laboratory Settings
Another laboratory-setting video can be viewed here. In case you would like to see a video on airborne sensors, you can view it here. Please note that many applications require a combination of Hyperspectral data from multiple sources (field and satellite, or lab and airborne, for example) to accurately validate the findings and derive the output. Again, this goes to show the importance of method in such high-tech maneuvers.
Q. So what are the applications of Hyperspectral Imagery? Where can it be used?
A. In many ways, you can treat Hyperspectral Imagery as a superior alternative to Multispectral Imagery for complex Earth Observation projects, in general.
Be it Land Monitoring (Crop classification, Mineral identification) or Coastal / Water Monitoring (Coral reef, Ocean color), Disaster Mapping (Drought, Deforestation, Volcanic activity etc.) or Atmospheric Monitoring (Weather Patterns, Pollution, Ozone Layer).
The concept of Hyperspectral can be (and has been) successfully extended to other fields of work, such as the bio-medical sector, where spectroscopy is deployed as a useful, non-invasive measure to detect malignant, diseased cells. This is possible because certain illumination wavelengths penetrate the human skin, whereafter the principles of reflectance readings / spectral signatures take over for detection.
To read more about the Earth Observation use-cases, refer to the documents on Land Management and Coastal & Water Applications. To read about the Hyperspectral missions and data availability, refer to the document here.
You would realize that the concept of Hyperspectral is technology-intensive. For the sake of being concise, I couldn't delve deeper into several aspects - not that I know all of them fully well either. However, should you be interested in learning more, please feel free to enroll in EO College's MOOC - Beyond the Visible - Introduction to Hyperspectral Remote Sensing - or refer to the webinars by NASA ARSET. I'm certain that you'll love the learning experience, and I am thankful to them for contributing to mine as well.
Before you go, if you liked what you've read about Hyperspectral so far - you'd love watching this recorded video which gives a practical demonstration of Hyperspectral imagery processing, visualization and analysis. Do have a look and drop in your feedback in the comments section.
Thanks for Watching!
Intelloc Mapping Services | Mapmyops is engaged in providing Mapping products & services to organizations. These facilitate Operations improvement, planning & monitoring workflows and include but are not limited to Drone Services, Geo-Applications & Imagery Analytics.
Write to us on firstname.lastname@example.org.