
Hybrid Classification and Post-Classification Smoothing


Hybrid classification is an approach that combines supervised and unsupervised classification techniques in a single workflow.
It is designed to exploit the strengths of each method while compensating for the weaknesses of the other.

What Is Hybrid Classification?

Hybrid classification blends:

  • Unsupervised classification (e.g., ISODATA, K-means)

  • Supervised classification (e.g., Maximum Likelihood, SVM)

✔ Concept

  1. First, an unsupervised algorithm groups pixels into spectral clusters without prior knowledge.

  2. These clusters are then labeled or merged into meaningful land-cover classes using supervised training data.

✔ Why use hybrid methods?

  • Unsupervised classification captures natural spectral groupings.

  • Supervised classification improves accuracy by using reference samples.

  • Together, they reduce errors caused by poor training data or complex landscapes.

✔ Key Terminology

  • Cluster: a group of pixels with similar spectral characteristics.

  • Signature training: deriving class signatures by assigning labels to clusters.

  • Spectral homogeneity: similarity within a cluster.

  • Class merging: combining multiple clusters into one land-cover type.

Steps in Hybrid Classification

Step 1: Unsupervised Clustering

Algorithms like ISODATA or K-means group pixels into 20–50 clusters based only on spectral properties.
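The clustering step can be sketched with a minimal K-means implementation. This is an illustrative NumPy sketch on synthetic two-group spectral data, not the ISODATA routine of any particular GIS package, and it uses only two clusters for brevity:

```python
import numpy as np

# Synthetic (n_pixels, n_bands) spectral data: two well-separated groups
# standing in for, e.g., water-like and vegetation-like spectra.
rng = np.random.default_rng(0)
pixels = np.vstack([
    rng.normal(0.1, 0.02, (50, 4)),
    rng.normal(0.5, 0.02, (50, 4)),
])

def kmeans(data, k, n_iter=20, seed=0):
    """Minimal K-means: alternate nearest-centre assignment and mean update."""
    rng = np.random.default_rng(seed)
    # Initialise centres from randomly chosen data points.
    centers = data[rng.choice(len(data), k, replace=False)]
    for _ in range(n_iter):
        # Assign each pixel to its nearest cluster centre.
        dists = np.linalg.norm(data[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centre as the mean of its assigned pixels.
        centers = np.array([data[labels == i].mean(axis=0) for i in range(k)])
    return labels, centers

labels, centers = kmeans(pixels, k=2)
```

In practice the full image would be reshaped from (rows, cols, bands) to (n_pixels, n_bands) first, and far more clusters (the 20–50 mentioned above) would be requested so that the analyst can merge them later.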

Step 2: Assign Clusters to Land-Cover Classes

Use training samples, field data, or expert knowledge to assign each cluster to:

  • water

  • forest

  • agriculture

  • built-up

  • soil
    (or other classes)
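Step 2 amounts to a lookup table from cluster IDs to class names. The mapping and the toy raster below are hypothetical; a real mapping comes from training samples, field data, or interpretation:

```python
import numpy as np

# Hypothetical analyst-defined mapping from spectral clusters to classes.
# Note several clusters can map to one class (class merging).
cluster_to_class = {
    0: "water", 1: "water",
    2: "forest", 3: "forest",
    4: "agriculture",
    5: "built-up",
    6: "soil",
}

# A toy 3x3 raster of cluster IDs from the unsupervised step.
cluster_map = np.array([
    [0, 1, 2],
    [3, 4, 4],
    [5, 6, 6],
])

# Relabel each pixel with its land-cover class.
class_map = np.array([[cluster_to_class[c] for c in row] for row in cluster_map])
```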

Step 3: Supervised Refinement

Run a supervised classifier (e.g., Maximum Likelihood) using the cluster-based signatures.
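As a sketch of the Maximum Likelihood decision rule, the snippet below builds per-class Gaussian signatures (mean vector and covariance matrix) from synthetic cluster-derived samples and assigns a pixel to the class with the highest log-likelihood. The class names and data are illustrative; the constant term common to every class is dropped from the log-likelihood:

```python
import numpy as np

# Synthetic training samples standing in for cluster-based signatures.
rng = np.random.default_rng(1)
train = {
    "water":  rng.normal(0.05, 0.01, (40, 3)),
    "forest": rng.normal(0.40, 0.03, (40, 3)),
}

# Per-class signature: mean vector and covariance matrix.
sigs = {c: (s.mean(axis=0), np.cov(s, rowvar=False)) for c, s in train.items()}

def ml_classify(pixel, sigs):
    """Assign the class whose Gaussian log-likelihood is highest."""
    best, best_ll = None, -np.inf
    for c, (mu, cov) in sigs.items():
        diff = pixel - mu
        # Log-likelihood up to a constant shared by all classes.
        ll = -0.5 * (np.log(np.linalg.det(cov))
                     + diff @ np.linalg.solve(cov, diff))
        if ll > best_ll:
            best, best_ll = c, ll
    return best

pred = ml_classify(np.array([0.06, 0.05, 0.04]), sigs)
```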

Step 4: Merge & Edit Classes

Check for:

  • confused clusters

  • isolated patches

  • mixed clusters

Such clusters are merged or adjusted before producing the final output.

Step 5: Final Classified Image

A clean, corrected classification map is produced.

Advantages of Hybrid Classification

  • Improves accuracy in heterogeneous landscapes

  • Handles mixed pixels better

  • Reduces reliance on perfect training samples

  • Captures subtle spectral differences

  • Reduces spectral confusion between similar classes

✔ Suitable for:

  • complex landscapes

  • urban environments

  • vegetation mosaics

  • large areas with limited training data

Limitations

  • More interactive and time-consuming

  • Requires expertise for cluster labeling

  • Too many clusters can make interpretation difficult

Post-Classification Smoothing

After classification, the resulting land-cover map often has:

  • salt-and-pepper noise

  • scattered small patches

  • isolated mislabeled pixels

Post-classification smoothing removes these artifacts to produce a cleaner, generalized map.

What Is Post-Classification Smoothing?

It is the process of cleaning and refining a classified image by applying spatial filters or majority rules to reduce noise and improve map readability.

✔ Why is smoothing needed?

Pixel-based classifiers label each pixel independently, ignoring the spatial relationships between neighboring pixels.
This results in:

  • random noisy pixels

  • speckled appearances

  • unrealistic boundaries

Smoothing creates spatially coherent regions.

Common Smoothing Techniques

A. Majority (Mode) Filter

  • A moving window (3×3, 5×5) scans the classified image.

  • Each pixel is replaced by the most common class in the window.

  • Removes small patches and isolated noise.
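A 3×3 majority filter can be sketched in a few lines of NumPy. This toy version leaves edge pixels unchanged; production tools handle image borders explicitly:

```python
import numpy as np

def majority_filter(classed):
    """Replace each interior pixel with the most common class in its 3x3 window."""
    out = classed.copy()
    rows, cols = classed.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            window = classed[r - 1:r + 2, c - 1:c + 2].ravel()
            vals, counts = np.unique(window, return_counts=True)
            out[r, c] = vals[counts.argmax()]  # most common class wins
    return out

# A forest block (class 1) with one isolated noise pixel (class 0).
noisy = np.ones((5, 5), dtype=int)
noisy[2, 2] = 0
smoothed = majority_filter(noisy)
```

After filtering, the lone class-0 pixel is outvoted by its eight class-1 neighbors and disappears.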

B. Median Filter

  • Replaces each pixel with the median class value in the window, rather than the most common class.

  • Preserves edges better, but is only meaningful when the class codes have a natural order.

C. Morphological Operations

  • Opening: removes small isolated pixels.

  • Closing: fills gaps in homogeneous regions.
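The two operations can be sketched on a single-class binary mask with a 3×3 structuring element. This toy version ignores image borders; libraries such as scipy.ndimage provide production implementations:

```python
import numpy as np

def erode(mask):
    """A pixel survives only if its whole 3x3 neighborhood is set."""
    out = np.zeros_like(mask)
    for r in range(1, mask.shape[0] - 1):
        for c in range(1, mask.shape[1] - 1):
            out[r, c] = mask[r - 1:r + 2, c - 1:c + 2].all()
    return out

def dilate(mask):
    """A pixel is set if any pixel in its 3x3 neighborhood is set."""
    out = np.zeros_like(mask)
    for r in range(1, mask.shape[0] - 1):
        for c in range(1, mask.shape[1] - 1):
            out[r, c] = mask[r - 1:r + 2, c - 1:c + 2].any()
    return out

def opening(mask):   # erosion then dilation: removes small isolated pixels
    return dilate(erode(mask))

def closing(mask):   # dilation then erosion: fills small gaps
    return erode(dilate(mask))

# A lone noisy pixel disappears under opening.
speckle = np.zeros((7, 7), dtype=bool)
speckle[3, 3] = True
```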

D. Region Growing

  • Groups contiguous pixels belonging to the same class into larger coherent regions.

E. Elimination of Small Patches

  • Removes polygons smaller than a defined threshold (e.g., <1 hectare).
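Patch elimination can be sketched as a connected-component pass: flood-fill each 4-connected patch of a class and reassign patches below a pixel-count threshold. The class codes, fill rule, and threshold below are illustrative; real tools usually express the threshold in map units such as hectares:

```python
import numpy as np

def remove_small_patches(classed, target, fill, min_pixels):
    """Reassign 4-connected patches of `target` smaller than `min_pixels` to `fill`."""
    out = classed.copy()
    visited = np.zeros(classed.shape, dtype=bool)
    rows, cols = classed.shape
    for r0 in range(rows):
        for c0 in range(cols):
            if classed[r0, c0] != target or visited[r0, c0]:
                continue
            # Flood-fill one connected patch of the target class.
            stack, patch = [(r0, c0)], []
            visited[r0, c0] = True
            while stack:
                r, c = stack.pop()
                patch.append((r, c))
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if (0 <= nr < rows and 0 <= nc < cols
                            and not visited[nr, nc]
                            and classed[nr, nc] == target):
                        visited[nr, nc] = True
                        stack.append((nr, nc))
            if len(patch) < min_pixels:       # patch below the size threshold
                for r, c in patch:
                    out[r, c] = fill
    return out

# 0 = forest, 1 = water: one 2-pixel water sliver and one 9-pixel water body.
classed = np.zeros((6, 6), dtype=int)
classed[0, 0:2] = 1
classed[3:6, 3:6] = 1
cleaned = remove_small_patches(classed, target=1, fill=0, min_pixels=4)
```

The small sliver is absorbed into the surrounding forest, while the larger water body survives intact.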

Results of Smoothing

  • More realistic class shapes

  • Reduced classification noise

  • Better readability for maps

  • Improved accuracy for urban and natural landscapes

  • Cleaner boundaries between classes

