Supervised classification is a digital image classification method where the analyst guides the classification process by defining classes of interest and providing representative training samples.
The classifier uses these training samples to learn the spectral signatures of each class and then assigns every pixel in the image to the most appropriate class.
This method relies heavily on prior knowledge of the study area.
How Supervised Classification Works
✔ Step 1: Define Information Classes
These are real-world land-cover classes such as:
- water
- forest
- agriculture
- urban
- barren land
✔ Step 2: Select Training Areas
Training areas (also called ROIs—Regions of Interest) are chosen on the image where the analyst is confident about the land-cover type.
✔ Step 3: Extract Spectral Signatures
The classifier calculates:
- mean
- variance
- covariance
- pixel distribution
for each class across different spectral bands.
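These per-class statistics can be sketched with NumPy. The array below stands in for training pixels extracted from an analyst's ROIs; the band values and the class name are purely illustrative.

```python
import numpy as np

# Hypothetical training samples for one class ("forest"):
# rows are training pixels, columns are spectral bands.
forest_pixels = np.array([
    [0.05, 0.08, 0.04, 0.45],
    [0.06, 0.09, 0.05, 0.50],
    [0.04, 0.07, 0.04, 0.42],
])

mean_vector = forest_pixels.mean(axis=0)          # per-band class mean
variances = forest_pixels.var(axis=0, ddof=1)     # per-band sample variance
cov_matrix = np.cov(forest_pixels, rowvar=False)  # band-to-band covariance

print(mean_vector)       # the spectral signature used by the decision rule
print(cov_matrix.shape)  # (4, 4): one row/column per band
```

The mean vector and covariance matrix together form the class "signature" that the decision rules in the following steps consume.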
✔ Step 4: Apply Decision Rules
The classification algorithm uses statistical rules to assign each pixel to a class.
✔ Step 5: Produce Classified Output
The final output is a thematic map showing land-cover classes.
When to Use Supervised Classification
Use supervised classification when:
- You have prior knowledge of the landscape.
- Ground truth or ancillary data is available (GPS points, survey data).
- You can identify distinct, homogeneous training sites for each class.
- The objective is to extract specific land-cover categories.
Information Class vs Spectral Class
Understanding the difference between these two is essential:
✔ Information Class
- Defined by the analyst based on real-world concepts.
- Examples: village, river, wetland, cropland.
- Represents semantic categories used for mapping and interpretation.
✔ Spectral Class
- A group of pixels that are spectrally similar, based on reflectance values.
- Identified statistically by the software.
- May not always match real-world categories exactly.
Note: Mapping involves matching spectral classes to information classes.
Supervised Training
Supervised training involves:
- Manually selecting representative pixel samples
- Ensuring the samples capture the full spectral variability of each class (e.g., different shades of vegetation or soil types)
- Evaluating spectral signatures using:
  - histograms
  - scatter plots
  - spectral profiles
  - separability indices (e.g., Jeffries–Matusita)
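The Jeffries–Matusita index mentioned above can be computed directly from two class signatures. The sketch below uses the standard formulation via the Bhattacharyya distance between two Gaussian class models; the two-band "water" and "forest" signatures are made-up illustrative values.

```python
import numpy as np

def jeffries_matusita(m1, c1, m2, c2):
    """Jeffries–Matusita separability between two class signatures.

    m1, m2: class mean vectors; c1, c2: class covariance matrices.
    Returns a value in [0, 2]; values near 2 indicate good separability.
    """
    c = (c1 + c2) / 2.0
    diff = m1 - m2
    # Bhattacharyya distance between the two Gaussian class models
    b = (diff @ np.linalg.solve(c, diff)) / 8.0 + 0.5 * np.log(
        np.linalg.det(c) / np.sqrt(np.linalg.det(c1) * np.linalg.det(c2))
    )
    return 2.0 * (1.0 - np.exp(-b))

# Hypothetical two-band signatures for "water" and "forest"
jm = jeffries_matusita(
    np.array([0.02, 0.01]), np.diag([1e-4, 1e-4]),
    np.array([0.05, 0.45]), np.diag([1e-4, 2e-3]),
)
print(round(jm, 3))  # 2.0 → the two classes are highly separable
```

In practice, class pairs with JM values well below 2 signal that the training samples for those classes overlap spectrally and should be refined.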
✔ Characteristics
- Analyst-controlled
- Knowledge-driven
- Often more accurate than unsupervised classification
- Requires skill in selecting high-quality training data
Classification Decision Rules (Supervised)
Decision rules determine how the classifier decides which class a pixel belongs to.
They fall into two broad groups:
Parametric Decision Rules
Parametric classifiers assume pixel values follow a normal (Gaussian) distribution.
These rules rely on statistical measures such as:
- class mean
- variance
- covariance
- probability density functions
✔ Minimum Distance Classifier
- Computes the Euclidean or Mahalanobis distance between each pixel and each class mean.
- Assigns the pixel to the closest class mean.
- Simple and fast, but may misclassify overlapping classes.
✔ Maximum Likelihood Classifier (MLC)
- Most widely used supervised classifier.
- Considers:
  - class mean
  - variance
  - covariance
  - overall probability distribution
- Assigns each pixel to the class with the highest likelihood of membership.
- Requires good training data; performs best when classes are normally distributed.
Nonparametric Decision Rules
Do not assume any specific statistical distribution; useful when pixel distributions are irregular.
✔ Parallelepiped Classifier
- Creates "boxes" using min–max values for each band.
- A pixel is assigned to a class if its values fall within that class's box.
- Fast, but may leave pixels:
  - unclassified (if no box contains the pixel)
  - ambiguously classified (if the pixel falls in more than one box)
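The box rule and both failure modes can be sketched directly. Resolving ambiguity by "first matching box" is just one simple tie rule chosen for this illustration; the box limits and pixel values are hypothetical.

```python
import numpy as np

UNCLASSIFIED = -1

def parallelepiped_classify(pixels, box_mins, box_maxs):
    """Assign each pixel to the first class whose min-max box contains it.

    box_mins, box_maxs: (n_classes, n_bands). Pixels in no box stay -1.
    """
    labels = np.full(len(pixels), UNCLASSIFIED)
    for k, (lo, hi) in enumerate(zip(box_mins, box_maxs)):
        inside = np.all((pixels >= lo) & (pixels <= hi), axis=1)
        labels[inside & (labels == UNCLASSIFIED)] = k
    return labels

# Hypothetical two-band boxes for water (class 0) and vegetation (class 1)
mins = np.array([[0.00, 0.00], [0.04, 0.30]])
maxs = np.array([[0.04, 0.10], [0.10, 0.60]])
pixels = np.array([[0.02, 0.05],   # inside box 0
                   [0.06, 0.40],   # inside box 1
                   [0.50, 0.90]])  # in no box -> unclassified
print(parallelepiped_classify(pixels, mins, maxs))  # [ 0  1 -1]
```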
✔ Feature Space Classifier
- Plots pixel values in a multi-dimensional feature space.
- Uses analyst-drawn polygons in the feature space to define classes.
- More flexible and accurate than the parallelepiped classifier.
- Good for visually evaluating class separability.