
Platforms in Remote Sensing


In remote sensing, a platform is the physical structure or vehicle that carries a sensor (camera, scanner, radar, etc.) to observe and collect information about the Earth's surface.

Platforms are classified mainly by their altitude and mobility:

Ground-Based Platforms

  • Definition: Sensors mounted on the Earth's surface or very close to it.

  • Examples: Tripods, towers, ground vehicles, handheld instruments.

  • Applications:

    • Calibration and validation of satellite data

    • Detailed local studies (e.g., soil properties, vegetation health, air quality)

  • Strength: High spatial detail but limited coverage.

Airborne Platforms

  • Definition: Sensors carried by aircraft, balloons, or drones (UAVs).

  • Altitude: From tens of meters (low-flying UAVs) up to ~20 km (high-altitude aircraft and balloons).

  • Examples:

    • Airplanes with multispectral scanners

    • UAVs with high-resolution cameras or LiDAR

    • High-altitude balloons (stratospheric platforms)

  • Applications:

    • Local-to-regional mapping

    • Disaster assessment (floods, landslides, forest fires)

    • Precision agriculture, urban planning

  • Strength: Flexible deployment, high spatial resolution, and the ability to fly below cloud cover when optical satellites cannot image the surface (a quick resolution sketch follows below).
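
To make the resolution advantage concrete: for a simple frame camera, the ground sample distance (GSD, meters per pixel) is pixel pitch × flying height ÷ focal length. The sketch below uses hypothetical camera values (3.9 µm pixels, 8.8 mm lens) purely for illustration; they are not tied to any specific instrument mentioned here.

    def gsd_m(pixel_pitch_um: float, focal_length_mm: float, altitude_m: float) -> float:
        """Ground sample distance (meters per pixel) for a nadir-pointing frame camera."""
        return (pixel_pitch_um * 1e-6) * altitude_m / (focal_length_mm * 1e-3)

    # Hypothetical small-format camera: 3.9 um pixels, 8.8 mm lens (illustrative values only)
    print(f"UAV at 120 m:     GSD = {gsd_m(3.9, 8.8, 120):.3f} m")    # ~0.053 m (about 5 cm)
    print(f"Aircraft at 3 km: GSD = {gsd_m(3.9, 8.8, 3000):.2f} m")   # ~1.33 m

The same camera that resolves about 5 cm from a drone resolves only about 1.3 m from a survey aircraft, which is why UAVs dominate very fine-scale mapping.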

Spaceborne Platforms (Satellites)

  • Definition: Satellites carrying remote sensing sensors orbiting the Earth.

  • Altitude: Typically 200 km – 36,000 km.

  • Types of Orbits / Platforms:

    1. Low Earth Orbit (LEO)

      • Altitude: ~200–1000 km

      • Examples: Landsat, Sentinel, SPOT, IRS

      • Application: Land cover mapping, agriculture, disaster monitoring

    2. Medium Earth Orbit (MEO)

      • Altitude: ~2,000–35,786 km (GPS satellites orbit at about 20,200 km)

      • Example: Navigation satellites (GPS, GLONASS)

      • Application: Positioning and navigation

    3. Geostationary Orbit (GEO)

      • Altitude: ~36,000 km (about 35,786 km above the equator); the orbital period matches Earth's rotation (a quick numerical check follows after this list)

      • Example: GOES, INSAT, Meteosat

      • Application: Weather and climate monitoring, continuous observation

  • Strength: Large area coverage, repetitive observations, long-term monitoring.
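
As a quick numerical check of these orbit classes, Kepler's third law gives the period of a circular orbit as T = 2π√(a³/μ), where a is the orbital radius and μ is Earth's gravitational parameter. The short Python sketch below uses standard reference constants and representative altitudes (e.g., 705 km for a Landsat-class orbit); it is an illustration, not part of any mission specification.

    import math

    MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
    R_EARTH = 6.371e6           # mean Earth radius, m

    def orbital_period_hours(altitude_km: float) -> float:
        """Circular-orbit period from Kepler's third law: T = 2*pi*sqrt(a^3/mu)."""
        a = R_EARTH + altitude_km * 1e3          # orbital radius, m
        return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 3600.0

    for name, alt_km in [("LEO (Landsat-class, 705 km)", 705),
                         ("MEO (GPS, ~20,200 km)", 20200),
                         ("GEO (~35,786 km)", 35786)]:
        print(f"{name}: {orbital_period_hours(alt_km):.2f} h")

This prints roughly 1.6 h (about 99 minutes), 12 h, and 24 h respectively, which is why a LEO imager revisits a site every few days, GPS satellites circle the Earth twice a day, and a GEO satellite appears fixed over one point on the equator.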


Summary

  • Ground-based → local, detailed, used for calibration.

  • Airborne → regional, flexible, high-resolution (e.g., UAV, aircraft).

  • Spaceborne → global/regional, systematic, used for large-scale monitoring (a coverage sketch follows below).
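
A simple geometric sketch shows why altitude drives this trade-off: a nadir-looking sensor with full field of view FOV images a ground strip roughly 2 × height × tan(FOV/2) wide. The 15° field of view below is an assumed value chosen because it roughly matches a Landsat-class imager; the platform heights are representative, not taken from the text above.

    import math

    def footprint_km(altitude_km: float, fov_deg: float) -> float:
        """Across-track ground footprint of a nadir-looking sensor: 2*h*tan(FOV/2)."""
        return 2 * altitude_km * math.tan(math.radians(fov_deg / 2))

    # The same 15-degree field of view carried on four different platforms
    for name, alt_km in [("Tower (20 m)", 0.02), ("UAV (120 m)", 0.12),
                         ("Aircraft (3 km)", 3.0), ("LEO satellite (705 km)", 705.0)]:
        print(f"{name}: footprint = {footprint_km(alt_km, 15.0):.2f} km")

The tower sees a strip a few meters wide, the UAV a few tens of meters, the aircraft under a kilometer, and the satellite about 185 km (close to Landsat's actual swath width) — the coverage-versus-detail trade-off summarized above.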
