
Geometric Correction



When satellite or aerial images are captured, they often contain distortions (errors in shape, scale, or position) caused by factors such as Earth's curvature, satellite motion, terrain height (relief), and the Earth's rotation.
These distortions leave the image misaligned with real-world coordinates (latitude and longitude).

👉 Geometric correction is the process of removing these distortions so that every pixel in the image correctly represents its location on the Earth's surface.

After geometric correction, the image becomes geographically referenced and can be used with maps and GIS data.

Types 

1. Systematic Correction

Systematic errors are predictable and can be modeled mathematically.
They occur due to the geometry and movement of the satellite sensor or the Earth.

Common systematic distortions:

  • Scan skew – due to the motion of the sensor as it scans the Earth.

  • Mirror velocity variation – scanning mirror moves at a variable speed.

  • Cross-track distortion – image stretching across the scan direction.

  • Earth rotation skew – Earth rotates while the sensor scans, shifting positions.

  • Platform altitude variation – changes in satellite height.

  • Platform velocity variation – changes in satellite speed.

Correction method:
Systematic errors are corrected using mathematical models or formulas derived from satellite geometry and sensor parameters.
This process is often automated and is part of orthorectification, which adjusts images for terrain relief using a Digital Elevation Model (DEM).

2. Non-Systematic Correction

Non-systematic (random) errors are unpredictable, caused by sensor drift, platform attitude changes, or operator error.
They cannot be modeled in advance, so they are corrected empirically using ground reference points.

Non-systematic correction involves aligning image coordinates with real-world coordinates or with another image.

Two main approaches:

(a) Image-to-Ground Correction (Georeferencing)

  • The image is aligned to real-world ground coordinates (latitude/longitude).

  • Requires Ground Control Points (GCPs)—known locations visible on both the image and a map.

(b) Image-to-Image Correction (Registration)

  • Used when two or more images of the same area (different times/sensors) must match perfectly.

  • One image acts as the reference, and the other is adjusted to match it.

Coordinate Transformation

This step mathematically links image coordinates (rows and columns) to map coordinates (X, Y).

A polynomial transformation is used, where the order of the polynomial defines the complexity of the correction. In general, an nth-order polynomial needs at least (n+1)(n+2)/2 GCPs, and extra GCPs are usually collected so the fit can be checked with RMSE.


👉 Examples:

  • 1st order (affine): needs 3 GCPs → corrects translation, rotation, scaling, and skew.

  • 2nd order: needs 6 GCPs → can correct moderate curvilinear distortions.

  • 3rd order: needs 10 GCPs → handles more complex distortions.
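As a rough sketch of how the 1st-order (affine) case is fitted, the snippet below solves the six polynomial coefficients from a set of GCPs by least squares. The GCP coordinates here are invented purely for illustration:

```python
import numpy as np

# Hypothetical GCPs: image coordinates (col, row) and their map coordinates (X, Y).
image_pts = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], dtype=float)
map_pts = np.array([[500000, 4000000],
                    [500990, 4000010],
                    [499990, 4000990],
                    [500980, 4001000]], dtype=float)

# 1st-order (affine) model: X = a0 + a1*col + a2*row,  Y = b0 + b1*col + b2*row.
# Build the design matrix [1, col, row] and solve both coefficient sets at once
# (exact with 3 GCPs, least-squares fit with more).
A = np.column_stack([np.ones(len(image_pts)), image_pts])
coeffs, *_ = np.linalg.lstsq(A, map_pts, rcond=None)

def image_to_map(col, row):
    """Apply the fitted affine transformation to an image coordinate."""
    return tuple(np.array([1.0, col, row]) @ coeffs)

print(image_to_map(50, 50))  # map coordinate of the image centre
```

With exactly 3 GCPs the system is solved exactly; with more, least squares minimizes the residual distances at the GCPs, which is what the RMSE accuracy check summarizes.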

Accuracy Assessment:

Accuracy of geometric correction can be measured by Root Mean Square Error (RMSE):

\[
RMSE = \sqrt{\frac{D_1^2 + D_2^2 + \dots + D_n^2}{n}}
\]

where D_i is the distance (residual) between the corrected position of point i and its true location, and n is the number of check points.
A smaller RMSE means higher geometric accuracy.
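The RMSE computation itself is a one-liner; this minimal sketch uses invented residual distances (in pixels):

```python
import math

# Hypothetical residual distances between corrected GCP positions
# and their true map locations, in pixels.
residuals = [0.8, 1.2, 0.5, 0.9, 1.1]

# RMSE = sqrt( (D1^2 + D2^2 + ... + Dn^2) / n )
rmse = math.sqrt(sum(d * d for d in residuals) / len(residuals))
print(round(rmse, 3))
```

A common rule of thumb is to aim for an RMSE below half a pixel, though the acceptable value depends on the application.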

Resampling

When an image is geometrically corrected or transformed, the pixel grid changes.
Resampling determines what new pixel values to assign in the corrected image.

In simple words:
It's the process of fitting old pixels into a new coordinate grid after correction.

Because the input and output grids rarely match exactly, resampling decides which value each new pixel should take.

Common Resampling Methods:

  1. Nearest Neighbour (NN):

    • Takes the value of the closest original pixel.

    • Simple and fast.

    • Best for categorical data (like land use classes).

    • May look blocky.

  2. Bilinear Interpolation:

    • Uses a distance-weighted average of the 4 nearest pixels.

    • Produces smoother images.

    • Suitable for continuous data (like temperature, elevation).

  3. Cubic Convolution:

    • Uses 16 nearest pixels with weighted averages.

    • Produces very smooth and visually appealing images.

    • Best for display and analysis of continuous data.
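A minimal sketch of the first two methods on a toy array (values invented) shows how they differ: nearest neighbour snaps to one original pixel, while bilinear blends the four surrounding ones.

```python
import numpy as np

# A tiny 3x3 "image" of continuous values (e.g. reflectance).
img = np.array([[10.0, 20.0, 30.0],
                [40.0, 50.0, 60.0],
                [70.0, 80.0, 90.0]])

def nearest(img, r, c):
    """Nearest neighbour: take the value of the closest original pixel."""
    return img[int(round(r)), int(round(c))]

def bilinear(img, r, c):
    """Bilinear: distance-weighted average of the 4 surrounding pixels."""
    r0, c0 = int(np.floor(r)), int(np.floor(c))
    r1, c1 = min(r0 + 1, img.shape[0] - 1), min(c0 + 1, img.shape[1] - 1)
    fr, fc = r - r0, c - c0
    top = img[r0, c0] * (1 - fc) + img[r0, c1] * fc
    bot = img[r1, c0] * (1 - fc) + img[r1, c1] * fc
    return top * (1 - fr) + bot * fr

print(nearest(img, 0.4, 0.6))   # snaps to pixel (0, 1) -> 20.0
print(bilinear(img, 0.5, 0.5))  # blend of the 4 top-left pixels -> 30.0
```

Note how nearest neighbour preserves original pixel values exactly (why it suits categorical data), while bilinear creates new in-between values.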




Miscellaneous Pre-Processing Steps

1. Subsetting

Selecting or cutting out a smaller portion of a large image (based on AOI – Area of Interest).

  • Helps reduce file size.

  • Makes processing faster.
    Example: Cropping a satellite image to only your study district.
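Since a raster is just an array, subsetting amounts to slicing out the AOI's row/column window. A minimal sketch with an assumed 100x100 scene and invented AOI bounds:

```python
import numpy as np

# Hypothetical full scene as a raster array (rows, cols).
scene = np.arange(100 * 100).reshape(100, 100)

# Subsetting: slice out the AOI's pixel window (bounds are assumed here;
# in practice they come from converting the AOI's map coordinates to pixels).
row_min, row_max = 20, 60
col_min, col_max = 30, 80
subset = scene[row_min:row_max, col_min:col_max]

print(subset.shape)  # (40, 50) -- a smaller array, faster to process
```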

2. Mosaicking

Combining two or more overlapping satellite images to form one continuous image covering a larger area.

  • Useful when one scene doesn't cover the full study region.

  • Must ensure brightness matching between scenes.
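A toy sketch of mosaicking two overlapping strips, averaging the overlap to soften the brightness seam (scene values and layout are invented):

```python
import numpy as np

# Output grid: 4 rows x 10 columns. Two hypothetical scenes cover
# columns 0-5 and 4-9, so columns 4-5 overlap.
mosaic = np.zeros((4, 10))
count = np.zeros((4, 10))

left = np.full((4, 6), 100.0)   # slightly brighter scene
right = np.full((4, 6), 90.0)

# Accumulate each scene into its output window, then divide by the
# coverage count: overlapping pixels become the average of both scenes.
for tile, c0 in [(left, 0), (right, 4)]:
    mosaic[:, c0:c0 + tile.shape[1]] += tile
    count[:, c0:c0 + tile.shape[1]] += 1
mosaic /= count

print(mosaic[0, 2], mosaic[0, 5], mosaic[0, 8])  # 100.0 95.0 90.0
```

Real mosaicking software uses more careful blending (feathering, histogram matching), but the idea of reconciling brightness in the overlap is the same.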


| Step | Purpose | Example / Key Point |
|------|---------|---------------------|
| Geometric Correction | Align image with real-world coordinates | Corrects distortions |
| Systematic Correction | Fix predictable errors | Uses sensor models, orthorectification |
| Non-Systematic Correction | Fix random errors | Uses GCPs for georeferencing |
| Coordinate Transformation | Converts pixel to map coordinates | Uses polynomial equations |
| Resampling | Assigns pixel values in new grid | NN, Bilinear, Cubic methods |
| Subsetting | Extracts part of an image | Focus on study area |
| Mosaicking | Combines multiple scenes | Creates larger continuous image |

