Atmospheric Correction

Atmospheric correction is the process of removing the influence of the atmosphere from remotely sensed images so that the data accurately represent the true reflectance of the Earth's surface.

When a satellite sensor captures an image, the radiation reaching the sensor is affected by gases, water vapor, aerosols, and dust in the atmosphere. These factors scatter and absorb light, changing the brightness and color of the features seen in the image.

Although these atmospheric effects are part of the recorded signal, they can distort surface reflectance values, especially when images are compared across different dates or sensors. Therefore, corrections are necessary to make data consistent and physically meaningful.


🔹 Why Do We Need Atmospheric Correction?

  1. To retrieve true surface reflectance – It separates the surface signal from atmospheric influence.

  2. To ensure comparability – Enables comparing images from different times, seasons, or sensors.

  3. To improve visual quality – Removes haze and increases image contrast.

  4. For accurate quantitative analysis – Essential for calculating vegetation, water, or urban indices (e.g., NDVI, NDWI).

  5. For change detection and mosaicking – Ensures that images have uniform brightness and color.

  6. For ground validation – Required when comparing satellite data with field reflectance measurements.
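
Point 4 above is where correction matters most in practice: indices such as NDVI are computed from reflectance, so any residual atmospheric offset biases them. A minimal sketch of the NDVI formula (the band values are illustrative, not from any specific sensor):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Illustrative surface reflectances: vegetation is bright in NIR, dark in red.
print(ndvi(0.45, 0.05))  # close to 0.8 for dense vegetation
```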


🔹 Atmospheric Effects on Satellite Images

  1. Scattering – Occurs when particles or gas molecules redirect light.

    • Rayleigh scattering: caused by gas molecules and particles much smaller than the wavelength of light (affects short, blue wavelengths most).

    • Mie scattering: caused by particles comparable in size to the wavelength, such as dust and smoke (affects longer visible and near-infrared wavelengths).

    • Non-selective scattering: caused by large particles such as water droplets (affects all wavelengths roughly equally, which is why clouds and fog appear white).

  2. Absorption – Certain gases (like ozone, carbon dioxide, and water vapor) absorb specific wavelengths, reducing the energy reaching the sensor.

  3. Path Radiance / Haze – Scattered light that reaches the sensor without reflecting from the ground. It adds a bright veil over the image, especially in blue bands, and reduces contrast.

  4. Transmittance – The fraction of light that successfully travels through the atmosphere from the Sun to the surface and back to the sensor.
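
The wavelength dependence of Rayleigh scattering above can be made concrete: scattered intensity falls off with the fourth power of wavelength, which is why blue bands are the haziest. A small sketch (the wavelengths, in micrometres, are illustrative):

```python
def rayleigh_relative(wavelength_um, reference_um=0.45):
    """Rayleigh scattering intensity relative to a reference wavelength.

    Scattered intensity is proportional to wavelength**-4 (the lambda^-4 law).
    """
    return (reference_um / wavelength_um) ** 4

# Near-infrared light (0.85 um) scatters only about 8% as much as blue (0.45 um):
print(round(rayleigh_relative(0.85), 2))  # -> 0.08
```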


🔹 Key Concepts and Terminologies

| Term | Meaning |
| --- | --- |
| Radiance | The total light energy received by the sensor. |
| Reflectance | The fraction of incident light reflected by a surface (the quantity we want to retrieve). |
| Path Radiance | Unwanted light scattered into the sensor's line of sight, causing haze. |
| Transmittance | The efficiency of the atmosphere in letting light pass through. |
| Aerosols | Tiny suspended particles that scatter and absorb radiation; a major source of atmospheric distortion. |
| Haze | The visual result of atmospheric scattering; reduces image clarity. |
| Calibration | Conversion of raw digital numbers (DNs) to physical units such as radiance or reflectance. |
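
Calibration is usually a two-step linear conversion: DNs to at-sensor radiance via a band-specific gain and offset, then radiance to top-of-atmosphere (TOA) reflectance. A sketch with made-up coefficients (real gains, offsets, and ESUN values come from the image metadata and the sensor handbook):

```python
import math

def dn_to_radiance(dn, gain, offset):
    """Linear calibration: radiance = gain * DN + offset."""
    return gain * dn + offset

def radiance_to_toa_reflectance(radiance, esun, sun_elev_deg, d=1.0):
    """TOA reflectance from radiance (d = Earth-Sun distance in AU)."""
    theta_s = math.radians(90.0 - sun_elev_deg)  # solar zenith angle
    return (math.pi * radiance * d ** 2) / (esun * math.cos(theta_s))

# Illustrative coefficients only -- not from any real sensor.
L = dn_to_radiance(120, gain=0.8, offset=-2.0)               # -> 94.0
rho = radiance_to_toa_reflectance(L, esun=1550.0, sun_elev_deg=60.0)
print(round(rho, 3))  # -> 0.22
```

Note that this yields TOA reflectance; the atmospheric correction methods below go one step further, from TOA to surface reflectance.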

🔹 Common Atmospheric Correction Methods

Atmospheric correction can be performed using image-based or model-based methods.

1. Image-Based Methods

These rely only on the image itself and do not require external atmospheric data.

a) Histogram Minimum / Dark Pixel Subtraction

  • Assumes that some pixels (deep water, shadows, dark rocks) should have nearly zero reflectance.

  • The minimum DN value in each band is treated as atmospheric haze.

  • That value is subtracted from all pixels in the band.

  • Simple and fast, but can be inaccurate if no truly dark object exists.
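
The steps above can be sketched in a few lines (the tiny 2×2 band is illustrative):

```python
import numpy as np

def dark_pixel_subtraction(band):
    """Treat the band's minimum DN as uniform additive haze and subtract it."""
    band = np.asarray(band, dtype=float)
    haze = band.min()                    # assumes the darkest pixel should be ~0
    return np.clip(band - haze, 0, None), haze

corrected, haze = dark_pixel_subtraction([[12, 40], [95, 12]])
print(haze)        # estimated haze offset: 12.0
print(corrected)   # the darkest pixels are now 0
```

In practice the offset is estimated separately for each band, since scattering is strongest at short wavelengths.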

b) Regression Method

  • Plots pixel values from a short wavelength band (affected by scattering) against a long wavelength band (less affected).

  • The intercept of the line indicates atmospheric path radiance.

  • That offset is subtracted from the image.

  • Works well for homogeneous areas but depends on proper band selection.
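
A sketch of the regression idea with synthetic data (the 15 DN haze offset is fabricated for illustration):

```python
import numpy as np

def haze_offset_by_regression(long_band, short_band):
    """Regress short-wavelength DNs on a nearly haze-free long-wavelength band.

    The intercept of the fitted line estimates the additive path radiance
    in the short-wavelength band.
    """
    slope, intercept = np.polyfit(np.ravel(long_band), np.ravel(short_band), 1)
    return intercept

# Synthetic scene: short band = 0.9 * long band + 15 DN of haze.
long_band = np.array([10.0, 50.0, 90.0, 130.0])
short_band = 0.9 * long_band + 15.0

offset = haze_offset_by_regression(long_band, short_band)
print(round(offset, 1))          # recovers the 15 DN offset
corrected = short_band - offset  # haze removed from every pixel
```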

c) Empirical Line Method (ELM)

  • Uses ground reference reflectance measurements (from field spectrometer or known targets).

  • Establishes a direct relationship between sensor radiance and true surface reflectance.

  • Most accurate among empirical methods if ground data are available.

  • Commonly used for airborne or hyperspectral imagery.
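
Under the hood, ELM is a per-band linear fit between at-sensor radiance and field-measured reflectance, typically using one bright and one dark calibration target. A sketch with hypothetical target measurements:

```python
import numpy as np

def empirical_line(radiance_targets, reflectance_targets):
    """Fit reflectance = a * radiance + b from reference targets."""
    a, b = np.polyfit(radiance_targets, reflectance_targets, 1)
    return a, b

# Hypothetical field measurements for a dark and a bright target:
radiance = [20.0, 180.0]      # at-sensor radiance of the two targets
reflectance = [0.02, 0.55]    # field-measured surface reflectance

a, b = empirical_line(radiance, reflectance)

# Apply the fitted line to any pixel radiance to recover surface reflectance:
pixel = 100.0
print(round(a * pixel + b, 3))  # -> 0.285
```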


2. Model-Based (Radiative Transfer) Methods

These methods use physical models of atmospheric behavior and require information about the atmospheric conditions during image capture.

Key Models:

  • LOWTRAN 7 – Early model for visible to thermal IR regions.

  • MODTRAN 4 – Advanced model for a wide spectral range.

  • 6S (Second Simulation of the Satellite Signal in the Solar Spectrum) – Widely used open-source model.

  • ATCOR (Atmospheric and Topographic Correction) – Commercial software used in ERDAS Imagine.

  • FLAASH (Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes) – For hyperspectral and multispectral data.

  • ATREM (Atmospheric REMoval) – For hyperspectral imagery.

Inputs Required:

  • Scene location (latitude and longitude)

  • Date and time of image capture

  • Sensor altitude and scene elevation

  • Atmospheric model (e.g., tropical, mid-latitude summer)

  • Visibility or aerosol optical depth

  • Water vapor and ozone concentration

These models simulate how light interacts with the atmosphere and remove its effect to retrieve surface reflectance.
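
Whatever the model, the final step is the same inversion: the model supplies the path radiance, transmittance, and ground irradiance for the scene, and surface reflectance follows from a simplified radiative transfer equation. A sketch with hypothetical model outputs (it assumes a Lambertian surface and ignores adjacency effects):

```python
import math

def surface_reflectance(l_sensor, l_path, t_up, e_ground):
    """Invert a simplified radiative transfer equation for one pixel and band.

    l_sensor : at-sensor radiance
    l_path   : path radiance scattered into the view (from the model)
    t_up     : ground-to-sensor transmittance (from the model)
    e_ground : total solar irradiance reaching the ground (from the model)
    """
    return math.pi * (l_sensor - l_path) / (t_up * e_ground)

# Hypothetical model outputs -- real values come from 6S, MODTRAN, etc.
rho = surface_reflectance(l_sensor=80.0, l_path=15.0, t_up=0.85, e_ground=900.0)
print(round(rho, 3))  # -> 0.267
```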


🔹 Additional Step: Cloud Masking

Before atmospheric correction, clouds and their shadows must be identified and masked out, since they distort spectral values.
This step uses cloud detection algorithms (e.g., Fmask, QA bands) to remove cloudy pixels from analysis.
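
A minimal sketch of QA-band masking (the bit position of the cloud flag varies by product, so bit 3 here is an assumption, not a universal convention):

```python
import numpy as np

def mask_clouds(image, qa_band, cloud_bit=3):
    """Set pixels flagged as cloud in the QA bit-field to NaN."""
    cloudy = ((qa_band >> cloud_bit) & 1) == 1
    return np.where(cloudy, np.nan, image.astype(float))

image = np.array([[0.20, 0.90], [0.30, 0.25]])
qa = np.array([[0, 0b1000], [0, 0]])   # bit 3 set -> cloudy pixel
print(mask_clouds(image, qa))          # the cloudy pixel becomes NaN
```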


🔹 When Is Atmospheric Correction Necessary?

Required When:

  • Comparing multiple scenes (multi-temporal analysis)

  • Performing change detection studies

  • Creating mosaics of multiple images

  • Calculating accurate surface reflectance or biophysical parameters

Not Always Necessary When:

  • Working with a single scene for visual interpretation

  • Using ratio-based indices (e.g., NDVI) for qualitative work, since ratios partly cancel atmospheric effects (though additive path radiance can still bias them)



🔹 Comparison of Methods

| Method | Type | Requires External Data? | Accuracy | Typical Use |
| --- | --- | --- | --- | --- |
| Dark Pixel Subtraction | Image-based | No | Low–Medium | Quick correction, simple projects |
| Histogram Minimum | Image-based | No | Low–Medium | Basic haze removal |
| Regression Method | Image-based | No | Medium | Scenes with dark objects |
| Empirical Line Method | Image-based | Yes (ground reflectance) | High | Airborne or field-calibrated data |
| Radiative Transfer Models (e.g., ATCOR, MODTRAN, 6S) | Model-based | Yes (atmospheric parameters) | Very High | Professional quantitative studies |


Atmospheric correction is a critical preprocessing step in remote sensing.
It ensures that image brightness truly represents the Earth's surface rather than the atmosphere above it.
Choosing the right method depends on your data availability, required accuracy, and application type — from simple visual enhancement to advanced quantitative analysis.
