
Geometric Correction



When satellite or aerial images are captured, they often contain distortions (errors in shape, scale, or position) caused by many factors — like Earth's curvature, satellite motion, terrain height (relief), or the Earth's rotation.
These distortions leave the image misaligned with real-world coordinates (latitude and longitude).

👉 Geometric correction is the process of removing these distortions so that every pixel in the image correctly represents its location on the Earth's surface.

After geometric correction, the image becomes geographically referenced and can be used with maps and GIS data.

Types of Geometric Correction

1. Systematic Correction

Systematic errors are predictable and can be modeled mathematically.
They occur due to the geometry and movement of the satellite sensor or the Earth.

Common systematic distortions:

  • Scan skew – caused by the forward motion of the platform while the sensor scans, which skews each scan line.

  • Mirror velocity variation – the scanning mirror does not sweep at a perfectly constant speed, distorting pixel spacing along the scan.

  • Cross-track (panoramic) distortion – pixels far from nadir cover larger ground areas, stretching the image across the scan direction.

  • Earth rotation skew – Earth rotates while the sensor scans, shifting positions.

  • Platform altitude variation – changes in satellite height.

  • Platform velocity variation – changes in satellite speed.

Correction method:
Systematic errors are corrected using mathematical models or formulas derived from satellite geometry and sensor parameters.
This process is often automated; orthorectification extends it by using a Digital Elevation Model (DEM) to remove terrain-relief displacement as well.

2. Non-Systematic Correction

Non-systematic (random) errors are unpredictable — caused by sensor drift, attitude changes, or human error.
They cannot be modeled in advance with formulas and must instead be corrected empirically using ground reference points.

Correction involves aligning image coordinates with real-world coordinates or with another image.

Two main approaches:

(a) Image-to-Ground Correction (Georeferencing)

  • The image is aligned to real-world ground coordinates (latitude/longitude).

  • Requires Ground Control Points (GCPs)—known locations visible on both the image and a map.

(b) Image-to-Image Correction (Registration)

  • Used when two or more images of the same area (different times/sensors) must match perfectly.

  • One image acts as the reference, and the other is adjusted to match it.

Coordinate Transformation

This step mathematically links image coordinates (rows and columns) to map coordinates (X, Y).

A polynomial transformation is used, where the order of the polynomial defines the complexity of the correction.


👉 Examples:

  • 1st order (affine): minimum 3 GCPs → corrects translation, rotation, scaling, and skew.

  • 2nd order: minimum 6 GCPs → corrects moderate curvilinear distortions.

  • 3rd order: minimum 10 GCPs → handles more complex distortions.

In practice, more GCPs than the minimum are collected and the polynomial is fitted by least squares, which averages out the errors in individual points.
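As a sketch of how the 1st-order (affine) case works, the six coefficients (three per axis) can be fitted from GCPs by least squares. The GCP coordinates below are synthetic values invented for illustration:

```python
import numpy as np

# Hypothetical GCPs: (col, row) positions in the image ...
gcps_image = np.array([[10.0, 20.0], [400.0, 35.0], [50.0, 380.0], [420.0, 400.0]])
# ... and their known (X, Y) map coordinates.
gcps_map = np.array([[500018.0, 4199960.5], [500796.5, 4199950.0],
                     [500062.0, 4199242.5], [500800.0, 4199221.0]])

# Design matrix for a 1st-order polynomial: X = a0 + a1*col + a2*row (same form for Y).
A = np.column_stack([np.ones(len(gcps_image)), gcps_image])

# Least-squares fit of the affine coefficients, one set per map axis.
coeff_x, *_ = np.linalg.lstsq(A, gcps_map[:, 0], rcond=None)
coeff_y, *_ = np.linalg.lstsq(A, gcps_map[:, 1], rcond=None)

def pixel_to_map(col, row):
    """Apply the fitted affine transform to an image coordinate."""
    return (coeff_x[0] + coeff_x[1] * col + coeff_x[2] * row,
            coeff_y[0] + coeff_y[1] * col + coeff_y[2] * row)

print(pixel_to_map(10.0, 20.0))  # lands on (or very near) the first GCP's map coords
```

With only the minimum 3 GCPs the fit passes through every point exactly; extra GCPs give the least-squares solver room to average out measurement errors.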

Accuracy Assessment:

Accuracy of geometric correction can be measured by Root Mean Square Error (RMSE):

\[
RMSE = \sqrt{\frac{D_1^2 + D_2^2 + \dots + D_n^2}{n}}
\]

where D_i is the distance (residual) between the i-th corrected point and its true ground location, and n is the number of check points.
A smaller RMSE means higher geometric accuracy.
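The RMSE formula above can be computed directly from the residuals. The predicted and true positions below are made-up numbers purely for illustration:

```python
import numpy as np

# Hypothetical check points: corrected (predicted) vs. true map positions.
predicted = np.array([[100.0, 200.0], [150.0, 250.0], [300.0, 120.0]])
true_pos  = np.array([[101.0, 199.0], [149.0, 251.0], [301.0, 121.0]])

# D_i = Euclidean distance between each corrected point and its true location.
distances = np.linalg.norm(predicted - true_pos, axis=1)

# RMSE = sqrt of the mean of the squared distances.
rmse = np.sqrt(np.mean(distances ** 2))
print(rmse)  # each point is off by sqrt(2), so RMSE = sqrt(2) ≈ 1.414
```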

Resampling

When an image is geometrically corrected or transformed, the pixel grid changes.
Resampling determines what new pixel values to assign in the corrected image.

In simple words:
It's the process of fitting old pixels into a new coordinate grid after correction.

Because the input and output grids rarely match exactly, resampling decides which value each new pixel should take.

Common Resampling Methods:

  1. Nearest Neighbour (NN):

    • Takes the value of the closest original pixel.

    • Simple and fast.

    • Best for categorical data (like land use classes).

    • May look blocky.

  2. Bilinear Interpolation:

    • Uses a distance-weighted average of the 4 nearest pixels.

    • Produces smoother images.

    • Suitable for continuous data (like temperature, elevation).

  3. Cubic Convolution:

    • Uses a weighted average of the 16 nearest pixels (a 4 × 4 window).

    • Produces very smooth and visually appealing images.

    • Best for display and analysis of continuous data.
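The difference between the first two methods can be shown on a tiny grid. This is a minimal sketch (a real workflow would use a raster library's resampling options), with helper names of my own choosing:

```python
import numpy as np

# A 2 x 2 "image" of original pixel values.
grid = np.array([[10.0, 20.0],
                 [30.0, 40.0]])

def nearest_neighbour(img, r, c):
    # Take the value of the single closest original pixel.
    return img[int(round(r)), int(round(c))]

def bilinear(img, r, c):
    # Distance-weighted average of the 4 surrounding pixels.
    r0, c0 = int(np.floor(r)), int(np.floor(c))
    dr, dc = r - r0, c - c0
    return ((1 - dr) * (1 - dc) * img[r0, c0] +
            (1 - dr) * dc       * img[r0, c0 + 1] +
            dr       * (1 - dc) * img[r0 + 1, c0] +
            dr       * dc       * img[r0 + 1, c0 + 1])

print(nearest_neighbour(grid, 0.4, 0.4))  # 10.0 — the closest pixel wins outright
print(bilinear(grid, 0.5, 0.5))           # 25.0 — a smooth blend of all four
```

Nearest neighbour preserves the original values exactly (which is why it suits categorical data), while bilinear invents intermediate values, smoothing the result.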




Miscellaneous Pre-Processing Steps

1. Subsetting

Selecting or cutting out a smaller portion of a large image (based on AOI – Area of Interest).

  • Helps reduce file size.

  • Makes processing faster.
    Example: Cropping a satellite image to only your study district.
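In array terms, subsetting is just a slice. The scene and AOI bounds below are invented for illustration (a real image would be read with a raster library):

```python
import numpy as np

# A hypothetical 1000 x 1000 single-band scene.
scene = np.arange(1000 * 1000).reshape(1000, 1000)

# AOI defined by row/column bounds (e.g. derived from the study district).
row_min, row_max = 200, 500
col_min, col_max = 300, 700

# Subsetting = slicing out the AOI.
subset = scene[row_min:row_max, col_min:col_max]
print(subset.shape)  # (300, 400) — far smaller than the full scene
```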

2. Mosaicking

Combining two or more overlapping satellite images to form one continuous image covering a larger area.

  • Useful when one scene doesn't cover the full study region.

  • Must ensure brightness matching between scenes.
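One simple way to sketch mosaicking: place each scene on a shared grid (NaN where a scene has no data) and average in the overlap, which also softens brightness seams. The scene layout and values here are hypothetical:

```python
import numpy as np

# Two overlapping scenes on a shared 4 x 6 grid; NaN marks "no data".
canvas_a = np.full((4, 6), np.nan)
canvas_b = np.full((4, 6), np.nan)
canvas_a[:, :4] = 100.0   # left scene covers columns 0-3
canvas_b[:, 2:] = 110.0   # right scene covers columns 2-5 (overlap in 2-3)

# Average wherever both scenes have data; otherwise take whichever exists.
stack = np.stack([canvas_a, canvas_b])
mosaic = np.nanmean(stack, axis=0)

print(mosaic[0])  # columns 0-1: 100, overlap 2-3: 105, columns 4-5: 110
```

Averaging in the overlap is only one option; production tools also offer "first scene wins" or feathered blends.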


Step | Purpose | Example / Key Point
--- | --- | ---
Geometric Correction | Align image with real-world coordinates | Corrects distortions
Systematic Correction | Fix predictable errors | Uses sensor models, orthorectification
Non-Systematic Correction | Fix random errors | Uses GCPs for georeferencing
Coordinate Transformation | Converts pixel to map coordinates | Uses polynomial equations
Resampling | Assigns pixel values in new grid | NN, Bilinear, Cubic methods
Subsetting | Extracts part of an image | Focus on study area
Mosaicking | Combines multiple scenes | Creates larger continuous image

