Multispectral Imaging Using Discrete Detectors and Scanning Mirrors (Across-Track Scanner or Whisk Broom Scanner)

Multispectral Imaging: This technique captures images of the Earth's surface using detectors that are sensitive to different wavelength bands of electromagnetic radiation. This allows various features and materials to be identified by their spectral signatures.

Discrete Detectors: These are individual sensing elements, arranged singly or in small arrays, with each detector responsible for measuring the radiation within a specific wavelength band.

Scanning Mirrors: These are optical components that are used to deflect the incoming radiation onto the discrete detectors. By moving the mirrors, the sensor can scan across the scene, capturing data from different points.

Across-Track Scanner or Whisk Broom Scanner: This refers to the scanning mechanism in which the mirror sweeps the field of view from side to side, perpendicular to the direction of flight. Successive sweeps, combined with the platform's forward motion, build up a swath of data covering a wide area on the ground.

Remote Sensing Terminologies

A. Rotating Mirror

  • Definition: A mechanical component in some satellite-based remote sensing systems that rotates (or oscillates) to scan the Earth's surface. It directs radiation reflected or emitted from the ground onto the detectors, enabling the collection of data over a wide area.
  • Purpose: To increase the coverage area of the sensor, allowing for rapid data acquisition.

B. Internal Detectors

  • Definition: Sensors within a remote sensing instrument that convert electromagnetic radiation into electrical signals. These signals are then processed to produce images or data.
  • Purpose: To capture and measure the intensity of radiation reflected or emitted from the Earth's surface.

C. Instantaneous Field of View (IFOV)

  • Definition: The cone angle within which incident radiation is focused onto a detector at any instant; it governs the smallest area on the ground that can be resolved by the sensor at a given time.
  • Purpose: To determine the spatial resolution of the sensor, indicating the level of detail it can capture.

D. Ground Resolution Cell Viewed (GRCV)

  • Definition: The area on the ground corresponding to the IFOV of a sensor at a specific altitude.
  • Purpose: To measure the size of the smallest distinguishable feature on the Earth's surface.
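
The relationship between IFOV, flying height, and the ground resolution cell can be made concrete with a short calculation: for a small IFOV of β radians at altitude H, the cell diameter is approximately D = H × β. A minimal sketch in Python, using illustrative (roughly Landsat-like, but not sensor-specific) values:

```python
import math

def ground_resolution_cell(altitude_m: float, ifov_rad: float) -> float:
    """Approximate ground resolution cell diameter (m) at nadir.

    The exact width of the viewed cone is 2 * H * tan(beta / 2),
    which for the small angles of spaceborne scanners converges
    to the familiar approximation D = H * beta.
    """
    return 2 * altitude_m * math.tan(ifov_rad / 2)

# Illustrative values only, not a sensor specification.
H = 705_000          # platform altitude in metres
beta = 0.0425e-3     # IFOV in radians (0.0425 mrad)

print(f"GRC diameter: {ground_resolution_cell(H, beta):.1f} m")
# -> about 30 m
```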

E. Angular Field of View (AFOV)

  • Definition: The angle between the extreme rays of the field of view of a sensor.
  • Purpose: To determine the extent of the area that can be observed by the sensor at a given distance.

F. Swath

  • Definition: The width of the area on the ground that a sensor can cover in a single pass.
  • Purpose: To measure the lateral coverage of the sensor, indicating the efficiency of data collection.
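
Swath width follows from the angular field of view in the same way the ground resolution cell follows from the IFOV: for a total AFOV of θ at altitude H, the swath is approximately 2H·tan(θ/2). A small sketch with illustrative numbers (the altitude and angle below are assumptions, not a sensor specification):

```python
import math

def swath_width(altitude_m: float, afov_deg: float) -> float:
    """Ground swath width (m) for a sensor with a total angular
    field of view of afov_deg, scanning symmetrically about nadir."""
    half_angle = math.radians(afov_deg) / 2
    return 2 * altitude_m * math.tan(half_angle)

# Illustrative order-of-magnitude values: 705 km altitude, 14.9° AFOV.
print(f"Swath: {swath_width(705_000, 14.9) / 1000:.0f} km")
# -> about 184 km
```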

How it works:

  1. Radiation Collection: The scanning mirror deflects incoming radiation from the Earth's surface onto the array of discrete detectors.
  2. Spectral Separation: Each detector measures the radiation within its specific wavelength band, capturing information about different materials and features.
  3. Scanning: The scanning mirror moves across the scene, allowing the sensor to collect data from multiple points.
  4. Data Processing: The collected data is processed to create multispectral images that can be analyzed to identify and classify features based on their spectral signatures.
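
The four steps above can be mimicked with a toy simulation: each mirror sweep records one across-track line for every detector, and the platform's forward motion supplies successive lines. All band labels, dimensions, and scene data below are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

n_lines, n_pixels = 100, 200             # along-track x across-track
bands = ["blue", "green", "red", "nir"]  # one discrete detector per band

# Hypothetical scene: true radiance in each band (invented data).
scene = rng.random((len(bands), n_lines, n_pixels))

# Build the image cube line by line, as a whisk broom scanner does:
# the mirror sweeps across-track, sampling each pixel of one line
# for all detectors, then the platform advances one line along-track.
cube = np.empty_like(scene)
for line in range(n_lines):          # platform motion (along-track)
    for px in range(n_pixels):       # mirror sweep (across-track)
        for b in range(len(bands)):  # discrete detectors (spectral)
            cube[b, line, px] = scene[b, line, px]

assert np.array_equal(cube, scene)
print("Multispectral cube:", dict(zip(["bands", "lines", "pixels"], cube.shape)))
```

In real systems the innermost loop is effectively parallel: all detectors sample simultaneously during a single mirror sweep, so one sweep yields one line of pixels in every spectral band at once.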

Key advantages of this approach:

  • High spatial resolution: Can capture detailed images of the Earth's surface.
  • Wide swath coverage: Can cover a large area in a single pass.
  • Versatility: Can be used for various remote sensing applications, such as land use mapping, vegetation monitoring, and mineral exploration.
Warm regards.
Vineesh V
AISHE and UGC Nodal Officer
Assistant Professor of Geography,
Government College Chittur, Palakkad
https://g.page/vineeshvc
