
Supervised Classification

Image Classification in Remote Sensing

Image classification in remote sensing involves categorizing pixels in an image into thematic classes to produce a map. This process is essential for land use and land cover mapping, environmental studies, and resource management. The two primary methods for classification are Supervised and Unsupervised Classification. Here's a breakdown of these methods and the key stages of image classification.


1. Types of Classification

Supervised Classification

In supervised classification, the analyst manually defines classes of interest (known as information classes), such as "water," "urban," or "vegetation," and identifies training areas (sections of the image that are representative of these classes). Using these training areas, the algorithm learns the spectral characteristics of each class and applies them to classify the entire image; a short code sketch of this workflow follows the list below.

  • When to Use Supervised Classification:
    - You have prior knowledge about the classes.
    - You can verify the training areas with ground truth data.
    - You can identify distinct, homogeneous regions for each class.
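To make this concrete, here is a minimal sketch of the supervised workflow. All array names, shapes, and values are illustrative assumptions, and scikit-learn's GaussianNB stands in for any supervised classifier:

```python
# Minimal supervised-classification sketch (illustrative data and names).
import numpy as np
from sklearn.naive_bayes import GaussianNB  # stand-in for any supervised classifier

# Hypothetical inputs: a small 3-band image and analyst-collected training samples.
image = np.random.rand(100, 100, 3)      # (rows, cols, bands)
train_pixels = np.random.rand(60, 3)     # spectral values drawn from training areas
train_labels = np.repeat([0, 1, 2], 20)  # 0 = water, 1 = urban, 2 = vegetation

# Learn the spectral characteristics of each class from the training areas ...
clf = GaussianNB().fit(train_pixels, train_labels)

# ... then apply them to classify every pixel in the image.
classified = clf.predict(image.reshape(-1, 3)).reshape(image.shape[:2])
```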

Unsupervised Classification

Unsupervised classification, on the other hand, uses the spectral properties of the image data to automatically group pixels with similar spectral characteristics into spectral classes. These classes are later labeled by the analyst based on the spectral patterns and ground-truth information; a corresponding sketch follows the list below.

  • When to Use Unsupervised Classification:
    - You have limited prior knowledge about the image's content.
    - You need a large number of classes or wish to explore the data's spectral characteristics.
    - You want to explore unknown regions quickly.
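As a counterpart, the sketch below groups pixels into spectral classes with k-means clustering (scikit-learn's KMeans); the image array and the number of clusters are illustrative assumptions:

```python
# Minimal unsupervised-classification sketch (illustrative data and names).
import numpy as np
from sklearn.cluster import KMeans  # a common choice for spectral clustering

image = np.random.rand(100, 100, 3)  # (rows, cols, bands)
pixels = image.reshape(-1, 3)

# Group pixels with similar spectral characteristics into spectral classes ...
kmeans = KMeans(n_clusters=5, n_init=10).fit(pixels)
spectral_classes = kmeans.labels_.reshape(image.shape[:2])
# ... which the analyst then labels (e.g., cluster 3 -> "water") using
# spectral patterns and ground-truth information.
```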

2. Key Stages of Image Classification

Image classification follows a systematic series of stages to produce accurate thematic maps.

  1. Raw Data Collection: Initial, unprocessed image data is collected.
  2. Preprocessing: Prepares the data for analysis by correcting atmospheric effects, removing noise, and aligning geometry. This stage is essential to ensure data accuracy.
  3. Signature Collection: In supervised classification, the analyst collects samples, called signatures, representing each class. These signatures capture the typical spectral characteristics for each category.
  4. Signature Evaluation: The quality and distinctiveness of the signatures are evaluated to ensure that they are statistically separable and represent their classes accurately (see the sketch after this list).
  5. Classification: Using the collected signatures, the classification algorithm assigns each pixel to a specific class, producing the classified map.
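To make stages 3 and 4 concrete, here is a minimal sketch: hypothetical training samples per class are summarized into signatures (mean and covariance), and a crude mean-separation check stands in for formal separability measures such as transformed divergence:

```python
# Illustrative signature collection (stage 3) and evaluation (stage 4).
import numpy as np

# Hypothetical training samples: {class name: (n_samples, n_bands) array}.
samples = {
    "water":      np.random.rand(50, 3) * 0.2,
    "vegetation": np.random.rand(50, 3) * 0.2 + 0.5,
}

# Stage 3: summarize each signature by its mean vector and covariance matrix.
signatures = {
    name: (s.mean(axis=0), np.cov(s, rowvar=False))
    for name, s in samples.items()
}

# Stage 4: a crude separability check -- the distance between class means,
# standing in for formal measures such as transformed divergence.
mu_water, _ = signatures["water"]
mu_veg, _ = signatures["vegetation"]
print("mean separation:", np.linalg.norm(mu_water - mu_veg))
```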

3. Information Class vs. Spectral Class

  • Information Class: An information class represents real-world categories, such as water bodies, urban areas, or vegetation, specified by the analyst for extraction from the image.
  • Spectral Class: A spectral class is determined by the clustering of pixels with similar spectral (color or brightness) values. These classes are automatically identified based on statistical similarities in pixel values across multiple spectral bands.

4. Supervised vs. Unsupervised Training

To classify an image, a system needs to be trained to recognize patterns.

  • Supervised Training:
    - Controlled by the analyst, who selects representative pixels and instructs the system on what each class should look like.
    - Often more accurate, but requires skill and understanding of the region.
  • Unsupervised Training:
    - The computer automatically groups pixels based on spectral properties, with the analyst specifying the desired number of classes.
    - This approach requires less skill but may be less accurate.

5. Classification Decision Rules in Supervised Classification

In supervised classification, different decision rules guide the process of assigning pixels to classes. Here are some common ones:

Parametric Decision Rules

These rules assume that pixel values follow a normal distribution, which allows the system to use statistical measures for classification. Both rules below are sketched in code after the list.

  • Minimum Distance Classifier:
    - Calculates the distance between a candidate pixel and the mean of each class signature.
    - Assigns the pixel to the class with the shortest distance (e.g., Euclidean or Mahalanobis distance).
  • Maximum Likelihood Classifier:
    - Considers both variance and covariance within class signatures.
    - Assumes a normal distribution and assigns pixels to the class with the highest probability of belonging.
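Both parametric rules can be sketched as follows. The class means, covariances, and the candidate pixel are illustrative assumptions; the constant term of the Gaussian log-likelihood is dropped because it does not affect which class scores highest:

```python
# Minimum distance vs. maximum likelihood on hypothetical class statistics.
import numpy as np

means = {"water":      np.array([0.1, 0.1, 0.3]),
         "vegetation": np.array([0.2, 0.5, 0.4])}
covs = {name: np.eye(3) * 0.01 for name in means}  # illustrative covariances

pixel = np.array([0.15, 0.45, 0.42])  # candidate pixel to classify

# Minimum distance rule: shortest Euclidean distance to a class mean wins.
md_class = min(means, key=lambda c: np.linalg.norm(pixel - means[c]))

# Maximum likelihood rule: highest Gaussian log-likelihood wins, using each
# class's full covariance matrix (constant term omitted).
def log_likelihood(x, mu, cov):
    diff = x - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (logdet + diff @ np.linalg.solve(cov, diff))

ml_class = max(means, key=lambda c: log_likelihood(pixel, means[c], covs[c]))
print(md_class, ml_class)
```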

Nonparametric Decision Rules

These rules do not assume a specific distribution; the parallelepiped rule is sketched in code after the list below.

  • Parallelepiped Classifier:
    - Uses minimum and maximum values for each class and assigns pixels within these limits to the corresponding class.
  • Feature Space Classifier:
    - Analyzes classes based on polygons within a feature space, which is often more accurate than the parallelepiped method.
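A minimal sketch of the parallelepiped rule follows, assuming hypothetical per-band class limits. Note that boxes may overlap (this sketch simply returns the first match) and that pixels falling outside every box remain unclassified:

```python
# Parallelepiped rule on hypothetical per-band minimum/maximum limits.
import numpy as np

# Assumed class limits, e.g. derived from training signatures: (min, max) per band.
limits = {
    "water":      (np.array([0.0, 0.0, 0.2]), np.array([0.2, 0.2, 0.5])),
    "vegetation": (np.array([0.1, 0.4, 0.3]), np.array([0.3, 0.7, 0.6])),
}

def parallelepiped(pixel):
    """Return the first class whose box contains the pixel, else None."""
    for name, (lo, hi) in limits.items():
        if np.all(pixel >= lo) and np.all(pixel <= hi):
            return name
    return None  # unclassified

print(parallelepiped(np.array([0.15, 0.5, 0.45])))  # -> "vegetation"
```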


Summary Table

| Aspect | Supervised Classification | Unsupervised Classification |
|--------|---------------------------|-----------------------------|
| Definition | Uses predefined classes and training areas. | Uses statistical groupings based on spectral properties. |
| Classes | Information classes: known classes defined by the analyst. | Spectral classes: classes identified by the system. |
| Training Process | Analyst selects and verifies classes. | System automatically groups pixels; analyst labels classes. |
| Best Use Case | When classes are known, distinct, and verifiable with ground truth. | When classes are unknown or when exploratory analysis is needed. |
| Accuracy and Skill Requirement | High accuracy; requires skill and knowledge. | Generally lower accuracy; requires less skill. |
| Decision Rules | Minimum Distance, Maximum Likelihood, Parallelepiped, Feature Space. | Classes grouped by spectral similarity. |




