
Supervised Classification


Supervised classification is a digital image classification method where the analyst guides the classification process by defining classes of interest and providing representative training samples.
The classifier uses these training samples to learn the spectral signatures of each class and then assigns every pixel in the image to the most appropriate class.

This method relies heavily on prior knowledge of the study area.

How Supervised Classification Works

✔ Step 1: Define Information Classes

These are real-world land-cover classes such as:

  • water

  • forest

  • agriculture

  • urban

  • barren land

✔ Step 2: Select Training Areas

Training areas (also called ROIs—Regions of Interest) are chosen on the image where the analyst is confident about the land-cover type.

✔ Step 3: Extract Spectral Signatures

The classifier calculates:

  • mean

  • variance

  • covariance

  • pixel distribution

for each class across different spectral bands.
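
To make this step concrete, here is a minimal Python/NumPy sketch (independent of any particular remote-sensing package) that computes a spectral signature, i.e. the per-band mean vector and band-to-band covariance matrix, from training pixels. The class names and reflectance values are invented purely for illustration.

```python
import numpy as np

def class_signature(samples):
    """Spectral signature of one class.

    samples: array of shape (n_pixels, n_bands) holding the training
    pixels for the class. Returns the per-band mean vector and the
    band-to-band covariance matrix.
    """
    mean = samples.mean(axis=0)          # per-band mean
    cov = np.cov(samples, rowvar=False)  # band-to-band covariance
    return mean, cov

# Hypothetical 3-band training pixels (reflectance values are made up)
water = np.array([[0.02, 0.05, 0.01],
                  [0.03, 0.06, 0.02],
                  [0.02, 0.04, 0.01],
                  [0.03, 0.05, 0.02]])
forest = np.array([[0.04, 0.08, 0.45],
                   [0.05, 0.09, 0.50],
                   [0.04, 0.07, 0.48],
                   [0.05, 0.08, 0.52]])

signatures = {name: class_signature(s)
              for name, s in [("water", water), ("forest", forest)]}
```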

✔ Step 4: Apply Decision Rules

The classification algorithm uses statistical rules to assign each pixel to a class.

✔ Step 5: Produce Classified Output

The final output is a thematic map showing land-cover classes.

When to Use Supervised Classification

Use supervised classification when:

  • You have prior knowledge of the landscape.

  • Ground truth or ancillary data is available (GPS points, survey data).

  • You can identify distinct, homogeneous training sites for each class.

  • The objective is to extract specific land-cover categories.

Information Class vs Spectral Class

Understanding the difference between these two is essential:

Information Class

  • Defined by the analyst based on real-world concepts.

  • Examples: village, river, wetland, cropland.

  • Represents semantic categories used for mapping and interpretation.

Spectral Class

  • Group of pixels that are spectrally similar, based on reflectance values.

  • Identified statistically by the software.

  • May not always match real-world categories exactly.

📌 Mapping involves matching spectral classes to information classes.

Supervised Training

Supervised training involves:

  • Manually selecting representative pixel samples

  • Ensuring the samples capture the full spectral variability of each class
    (e.g., different shades of vegetation or soil types)

  • Evaluating spectral signatures using

    • histograms

    • scatter plots

    • spectral profiles

    • separability indices (e.g., Jeffries–Matusita)
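
As an illustration of a separability index, the Jeffries–Matusita (JM) distance between two Gaussian class signatures can be computed from their means and covariance matrices; it ranges from 0 (classes are inseparable) to 2 (fully separable). The NumPy sketch below assumes well-conditioned (invertible) covariance matrices and is not tied to any specific software.

```python
import numpy as np

def jeffries_matusita(mean1, cov1, mean2, cov2):
    """Jeffries-Matusita separability between two Gaussian signatures.

    Returns a value between 0 (identical) and 2 (fully separable).
    """
    diff = mean1 - mean2
    cov_avg = (cov1 + cov2) / 2.0
    # Bhattacharyya distance between the two class distributions
    term1 = 0.125 * diff @ np.linalg.inv(cov_avg) @ diff
    term2 = 0.5 * np.log(np.linalg.det(cov_avg) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    b = term1 + term2
    return 2.0 * (1.0 - np.exp(-b))
```

A JM value above roughly 1.8 is commonly taken to indicate good separability between a pair of training classes.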

✔ Characteristics

  • Analyst-controlled

  • Knowledge-driven

  • Often more accurate

  • Requires skill in selecting high-quality training data

Classification Decision Rules (Supervised)

Decision rules determine how the classifier decides which class a pixel belongs to.

They fall into two broad groups:

Parametric Decision Rules

Parametric classifiers assume that the pixel values of each class follow a normal (Gaussian) distribution.

These rules rely on statistical measures such as:

  • class mean

  • variance

  • covariance

  • probability density functions

Minimum Distance Classifier

  • Computes the spectral distance (typically Euclidean, sometimes Mahalanobis) between each pixel and each class mean.

  • Assigns each pixel to the class whose mean is closest.

  • Simple and fast, but may misclassify pixels where classes overlap spectrally, since the basic Euclidean form ignores class variance.
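
A minimal sketch of minimum-distance classification using Euclidean distance (function and variable names are illustrative, not from any particular package):

```python
import numpy as np

def minimum_distance_classify(image, class_means):
    """Assign each pixel to the class with the nearest mean (Euclidean).

    image: array of shape (rows, cols, n_bands)
    class_means: array of shape (n_classes, n_bands)
    Returns a (rows, cols) array of class indices.
    """
    pixels = image.reshape(-1, image.shape[-1])
    # Distance from every pixel to every class mean
    dists = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    labels = dists.argmin(axis=1)  # the closest class mean wins
    return labels.reshape(image.shape[:2])

# Tiny made-up example: a 2x2 image with 3 bands and two class means
img = np.random.rand(2, 2, 3)
means = np.array([[0.1, 0.1, 0.1],
                  [0.8, 0.8, 0.8]])
print(minimum_distance_classify(img, means))
```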

Maximum Likelihood Classifier (MLC)

  • Most widely used supervised classifier.

  • Considers:

    • class mean

    • variance

    • covariance

    • overall probability distribution

  • Assigns each pixel to the class with the highest likelihood of membership.

  • Requires good training data; performs best when classes are normally distributed.
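
In essence, MLC evaluates a Gaussian log-likelihood for every class, using the class mean and covariance estimated from the training data, and picks the class with the highest score. The sketch below assumes equal prior probabilities and invertible covariance matrices; all names and numbers are illustrative.

```python
import numpy as np

def max_likelihood_classify(pixels, means, covs):
    """Gaussian maximum likelihood classification (equal priors assumed).

    pixels: (n_pixels, n_bands); means/covs: one mean vector and one
    covariance matrix per class. Returns the most likely class index
    for each pixel.
    """
    scores = []
    for mean, cov in zip(means, covs):
        inv = np.linalg.inv(cov)
        diff = pixels - mean
        # Log-likelihood up to a constant: -0.5*ln|C| - 0.5*(x-m)^T C^-1 (x-m)
        mahal = np.einsum('ij,jk,ik->i', diff, inv, diff)
        scores.append(-0.5 * np.log(np.linalg.det(cov)) - 0.5 * mahal)
    return np.argmax(np.stack(scores, axis=1), axis=1)

# Made-up two-band, two-class example
means = [np.array([0.1, 0.2]), np.array([0.6, 0.7])]
covs = [np.eye(2) * 0.01, np.eye(2) * 0.02]
print(max_likelihood_classify(np.array([[0.12, 0.18],
                                        [0.65, 0.72]]), means, covs))
```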

Nonparametric Decision Rules

Nonparametric rules do not assume any specific statistical distribution, which makes them useful when pixel distributions are irregular or non-normal.

Parallelepiped Classifier

  • Creates rectangular "boxes" (parallelepipeds) from the minimum and maximum training values in each band.

  • A pixel is assigned to a class if its values fall within the box.

  • Fast, but may leave pixels:

    • unclassified (if no box contains the pixel)

    • ambiguously classified (if pixel falls in more than one box)
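
A minimal sketch of the parallelepiped rule, where each class "box" is defined by per-band minimum and maximum values taken from its training data (all names and numbers here are hypothetical). Pixels outside every box stay unclassified; ties are resolved here by taking the first matching class, which is only one of several possible conventions.

```python
import numpy as np

def parallelepiped_classify(pixels, band_mins, band_maxs):
    """Parallelepiped classification from per-class band min/max limits.

    pixels: (n_pixels, n_bands)
    band_mins, band_maxs: (n_classes, n_bands) limits from training data.
    Returns -1 for unclassified pixels.
    """
    labels = np.full(pixels.shape[0], -1)
    # inside[i, c] is True if pixel i falls within the box of class c
    inside = ((pixels[:, None, :] >= band_mins[None, :, :]) &
              (pixels[:, None, :] <= band_maxs[None, :, :])).all(axis=2)
    for i, row in enumerate(inside):
        hits = np.flatnonzero(row)
        if hits.size:            # otherwise the pixel stays unclassified
            labels[i] = hits[0]  # first matching box wins on overlap
    return labels

# Two-band, two-class example
mins = np.array([[0.0, 0.0], [0.5, 0.5]])
maxs = np.array([[0.3, 0.3], [0.9, 0.9]])
print(parallelepiped_classify(np.array([[0.1, 0.2],
                                        [0.7, 0.8],
                                        [0.4, 0.4]]), mins, maxs))
# -> [ 0  1 -1]: the last pixel lies outside every box
```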

Feature Space Classifier

  • Plots pixel values in a multi-dimensional feature space.

  • Uses polygons in the feature space to define classes.

  • More flexible and often more accurate than the parallelepiped classifier.

  • Good for visually evaluating class separability.
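
As a rough illustration of the idea, the sketch below tests whether pixels fall inside a hand-drawn polygon in a two-band feature space. It uses matplotlib's Path for the point-in-polygon test; the polygon vertices and band values are made up.

```python
import numpy as np
from matplotlib.path import Path

# Hypothetical polygon drawn in a two-band feature space (e.g., red vs. NIR)
forest_polygon = Path([(0.02, 0.35), (0.08, 0.60), (0.15, 0.55), (0.10, 0.30)])

pixels = np.array([[0.05, 0.45],   # inside the polygon -> "forest"
                   [0.40, 0.10]])  # outside -> not "forest"
print(forest_polygon.contains_points(pixels))  # [ True False]
```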


