
Geometric Correction



When satellite or aerial images are captured, they often contain distortions (errors in shape, scale, or position) caused by many factors, such as Earth's curvature and rotation, satellite motion, and terrain height (relief).
These distortions mean the image is not properly aligned with real-world coordinates (latitude and longitude).

👉 Geometric correction is the process of removing these distortions so that every pixel in the image correctly represents its location on the Earth's surface.

After geometric correction, the image becomes geographically referenced and can be used with maps and GIS data.

Types of Geometric Correction

1. Systematic Correction

Systematic errors are predictable and can be modeled mathematically.
They occur due to the geometry and movement of the satellite sensor or the Earth.

Common systematic distortions:

  • Scan skew – caused by the forward motion of the platform while the mirror completes each scan line.

  • Mirror velocity variation – the scanning mirror does not sweep at a perfectly constant speed, distorting pixel spacing along the scan.

  • Cross-track (panoramic) distortion – pixels at the scan edges cover more ground than pixels at nadir, stretching the image across the scan direction.

  • Earth rotation skew – Earth rotates while the sensor scans, shifting positions.

  • Platform altitude variation – changes in satellite height.

  • Platform velocity variation – changes in satellite speed.

Correction method:
Systematic errors are corrected using mathematical models derived from the satellite's orbital geometry and sensor parameters.
This process is largely automated; orthorectification goes further, also removing terrain-relief displacement using a Digital Elevation Model (DEM).
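As an illustration, the Earth-rotation skew listed above is fully predictable from orbital parameters. The sketch below (Python; the latitude and scan-line period are hypothetical values, not those of any particular sensor) estimates the westward offset a systematic model would remove line by line:

```python
import math

# Minimal sketch: estimating the westward shift per scan line caused by
# Earth's rotation. The scan-line period and latitude are illustrative
# assumptions, not values for any particular sensor.

OMEGA_E = 7.2921159e-5   # Earth's angular velocity (rad/s)
R_EARTH = 6378137.0      # equatorial radius (m)

def earth_rotation_shift(latitude_deg, line_period_s):
    """Ground distance (m) the Earth rotates eastward during one scan line."""
    surface_speed = OMEGA_E * R_EARTH * math.cos(math.radians(latitude_deg))
    return surface_speed * line_period_s

# Hypothetical scanner at 40 deg latitude with a 70 ms scan-line period:
print(f"Offset per line: {earth_rotation_shift(40.0, 0.070):.1f} m")
# A systematic model removes this predictable skew line by line.
```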

2. Non-Systematic Correction

Non-systematic (random) errors are unpredictable, caused by factors such as sensor drift, platform attitude changes, or human error.
They cannot be modeled in advance, so they are corrected empirically using ground reference points.

Non-systematic correction involves aligning image coordinates with real-world coordinates or with another image.

Two main approaches:

(a) Image-to-Ground Correction (Georeferencing)

  • The image is aligned to real-world ground coordinates (latitude/longitude).

  • Requires Ground Control Points (GCPs): known locations visible on both the image and a reference map.

(b) Image-to-Image Correction (Registration)

  • Used when two or more images of the same area (different times/sensors) must match perfectly.

  • One image acts as the reference, and the other is adjusted to match it.
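As a minimal illustration of image-to-image registration, the sketch below assumes the two scenes differ only by a translation and uses scikit-image's phase_cross_correlation to estimate it; real registration usually also needs GCPs or feature matching to handle rotation and scale:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

# Minimal sketch of image-to-image registration, assuming the two scenes
# differ only by a translation. The synthetic scenes below are stand-ins.

reference = np.random.rand(256, 256)          # stand-in reference scene
moving = nd_shift(reference, (4.0, -2.0))     # same scene, offset by (4, -2)

# Estimated shift needed to register `moving` onto `reference`.
offset, error, _ = phase_cross_correlation(reference, moving)
registered = nd_shift(moving, offset)         # apply the correction

print("Correction (row, col):", offset)       # approx. [-4.  2.]
```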

Coordinate Transformation

This step mathematically links image coordinates (rows and columns) to map coordinates (X, Y).

A polynomial transformation is typically used; the order of the polynomial determines how complex a distortion it can model (a worked sketch of fitting one follows the examples below).


👉 Examples:

  • 1st order (affine): needs at least 3 GCPs → corrects translation, rotation, scaling, and skew.

  • 2nd order: needs at least 6 GCPs → can correct moderate curvilinear distortions.

  • 3rd order: needs at least 10 GCPs → handles more complex distortions.

In practice, more than the minimum number of GCPs is collected, and the polynomial coefficients are fitted by least squares.
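To make the GCP-fitting step concrete, here is a minimal sketch of estimating a 1st-order (affine) transform by least squares; all coordinate values are made up for illustration:

```python
import numpy as np

# Minimal sketch: fitting a 1st-order (affine) transform from GCPs by
# least squares. All coordinates below are illustrative, not real data.

# Each GCP pairs image coordinates (col, row) with map coordinates (X, Y).
gcps_image = np.array([[10, 20], [200, 35], [50, 180], [220, 210]], float)
gcps_map = np.array([[500100., 4200050.], [500950., 4200010.],
                     [500280., 4199300.], [501030., 4199180.]])

# Design matrix for X = a0 + a1*col + a2*row (the same form is used for Y).
cols, rows = gcps_image[:, 0], gcps_image[:, 1]
A = np.column_stack([np.ones_like(cols), cols, rows])

coef_x, *_ = np.linalg.lstsq(A, gcps_map[:, 0], rcond=None)
coef_y, *_ = np.linalg.lstsq(A, gcps_map[:, 1], rcond=None)

def image_to_map(col, row):
    """Apply the fitted affine transform to one pixel position."""
    return (coef_x[0] + coef_x[1] * col + coef_x[2] * row,
            coef_y[0] + coef_y[1] * col + coef_y[2] * row)

print(image_to_map(100, 100))  # predicted (X, Y) for pixel (100, 100)
```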

Accuracy Assessment:

Accuracy of geometric correction can be measured by Root Mean Square Error (RMSE):

\[
RMSE = \sqrt{\frac{D_1^2 + D_2^2 + D_3^2 + \cdots + D_n^2}{n}}
\]

Where D_i is the distance (residual) between the corrected position of the i-th point and its true ground location.
A smaller RMSE means higher geometric accuracy.
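In code, the RMSE computation is a short reduction over the GCP residuals; the coordinate values below are illustrative:

```python
import numpy as np

# Minimal sketch: RMSE over GCP residuals. `predicted` comes from the
# fitted transform, `actual` are the known map positions; the numbers
# here are illustrative.

predicted = np.array([[500102., 4200047.], [500948., 4200013.]])
actual = np.array([[500100., 4200050.], [500950., 4200010.]])

distances = np.linalg.norm(predicted - actual, axis=1)  # D_i for each GCP
rmse = np.sqrt(np.mean(distances ** 2))
print(f"RMSE: {rmse:.2f} map units")  # smaller means higher accuracy
```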

Resampling

When an image is geometrically corrected or transformed, the pixel grid changes.
Resampling determines what new pixel values to assign in the corrected image.

In simple words:
It's the process of fitting old pixels into a new coordinate grid after correction.

Because the input and output grids rarely match exactly, resampling decides which value each new pixel should take; the sketch after the list below compares the common methods.

Common Resampling Methods:

  1. Nearest Neighbour (NN):

    • Takes the value of the closest original pixel.

    • Simple and fast.

    • Best for categorical data (like land use classes).

    • May look blocky.

  2. Bilinear Interpolation:

    • Uses a distance-weighted average of the 4 nearest pixels.

    • Produces smoother images.

    • Suitable for continuous data (like temperature, elevation).

  3. Cubic Convolution:

    • Uses the 16 nearest pixels with a weighted (cubic) average.

    • Produces very smooth and visually appealing images.

    • Best for display and analysis of continuous data.
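The differences between the three methods are easy to see in code. The sketch below uses scipy.ndimage.zoom, whose order parameter selects the interpolation (0 = nearest neighbour, 1 = bilinear, 3 = cubic spline, a close stand-in for cubic convolution); the small class map is illustrative:

```python
import numpy as np
from scipy.ndimage import zoom

# Minimal sketch comparing the three resampling methods. The `order`
# argument selects the interpolation: 0 = nearest neighbour,
# 1 = bilinear, 3 = cubic spline (a stand-in for cubic convolution).
# The tiny land-use class map below is illustrative.

classes = np.array([[1, 1, 2],
                    [1, 3, 2],
                    [3, 3, 2]], dtype=float)

nn = zoom(classes, 2, order=0)        # preserves class labels exactly
bilinear = zoom(classes, 2, order=1)  # smooth, but invents in-between values
cubic = zoom(classes, 2, order=3)     # smoothest; for continuous data only

print(np.unique(nn))        # [1. 2. 3.] -- safe for categorical data
print(np.unique(bilinear))  # fractional "classes" appear -- not safe
```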




Miscellaneous Pre-Processing Steps

1. Subsetting

Selecting or cutting out a smaller portion of a large image (based on AOI – Area of Interest).

  • Helps reduce file size.

  • Makes processing faster.
    Example: Cropping a satellite image to only your study district.
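Once the AOI's pixel bounds are known, subsetting reduces to array slicing; the image size and bounds in this sketch are hypothetical:

```python
import numpy as np

# Minimal sketch: subsetting as array slicing. Assume the full scene is
# already in a NumPy array and the AOI's pixel bounds were worked out
# beforehand; the sizes and bounds are hypothetical.

image = np.random.randint(0, 255, size=(3000, 3000), dtype=np.uint8)

row_min, row_max = 1200, 1800   # AOI bounds in pixel coordinates
col_min, col_max = 400, 1000

subset = image[row_min:row_max, col_min:col_max]
print(subset.shape)  # (600, 600) -- smaller array, faster to process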

2. Mosaicking

Combining two or more overlapping satellite images to form one continuous image covering a larger area.

  • Useful when one scene doesn't cover the full study region.

  • Must ensure brightness matching between scenes so the seam lines are not visible (see the sketch below).
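A minimal sketch of the idea, assuming two scenes that overlap by a fixed number of columns: a linear feather across the overlap blends brightness so no hard seam remains. All sizes and values are illustrative:

```python
import numpy as np

# Minimal sketch: merging two horizontally overlapping scenes with a
# linear feather across the overlap so brightness blends smoothly.
# Scene values, sizes, and the overlap width are illustrative.

left = np.full((100, 60), 120.0)    # stand-in scene A
right = np.full((100, 60), 150.0)   # stand-in scene B (brighter)
overlap = 20                        # columns the two scenes share

# Weight ramps from 1 to 0 for scene A across the overlap columns.
w = np.linspace(1.0, 0.0, overlap)
blend = left[:, -overlap:] * w + right[:, :overlap] * (1.0 - w)

mosaic = np.hstack([left[:, :-overlap], blend, right[:, overlap:]])
print(mosaic.shape)  # (100, 100) -- one continuous image, no hard seam
```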


Summary

| Step | Purpose | Example / Key Point |
|------|---------|---------------------|
| Geometric Correction | Align image with real-world coordinates | Corrects distortions |
| Systematic Correction | Fix predictable errors | Uses sensor models, orthorectification |
| Non-Systematic Correction | Fix random errors | Uses GCPs for georeferencing |
| Coordinate Transformation | Converts pixel to map coordinates | Uses polynomial equations |
| Resampling | Assigns pixel values in new grid | NN, bilinear, cubic methods |
| Subsetting | Extracts part of an image | Focus on study area |
| Mosaicking | Combines multiple scenes | Creates larger continuous image |

