
LIDAR – Principles and Applications

LIDAR, which stands for Light Detection and Ranging, is a remote sensing technology that uses laser light to measure distances and create detailed three-dimensional representations of objects and environments. Here are the fundamental principles of LIDAR remote sensing:


1. Laser Emission: LIDAR systems emit laser pulses towards the target area. These laser pulses are typically in the form of short, intense bursts of light.


2. Propagation of Light: The emitted laser pulses travel through the atmosphere, where they may interact with particles or molecules, experiencing scattering and absorption. The laser light continues until it strikes an object or a surface.


3. Reflection or Scattering: When the laser pulse encounters an object or surface, a portion of the light is reflected back towards the LIDAR sensor. The time it takes for the laser pulse to travel to the target and return is measured.


4. Time-of-Flight Measurement: LIDAR calculates the distance to the target by precisely measuring the round-trip travel time of the laser pulse. Because light travels at a known, constant speed, the range is simply (speed of light × travel time) / 2, the division by two accounting for the out-and-back journey.
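The time-of-flight calculation above can be sketched in a few lines of Python. This is a minimal illustration of the principle, not the processing chain of any real instrument; the example travel time is a made-up value chosen to give a target roughly 1 km away.

```python
C = 299_792_458.0  # speed of light in a vacuum, m/s

def range_from_tof(round_trip_seconds: float) -> float:
    """Distance to the target from the round-trip travel time of one pulse.

    Divide by two because the measured time covers the path
    sensor -> target -> sensor.
    """
    return C * round_trip_seconds / 2.0

# A pulse returning after ~6.671 microseconds implies a target ~1 km away.
print(range_from_tof(6.671e-6))
```

Note that at these speeds, ranging to centimetre accuracy requires timing the pulse to well under a nanosecond, which is why LIDAR electronics emphasise extremely precise clocks.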


5. Multiple Measurements: LIDAR systems typically emit thousands or even millions of laser pulses per second and record the return time and intensity of each pulse. This results in a dense cloud of points, often referred to as a "point cloud."
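To put those pulse rates in perspective, here is a back-of-the-envelope calculation. The pulse rate, flight duration, and coverage figures below are illustrative assumptions, not the specifications of any particular system:

```python
# Illustrative figures only; real systems vary widely.
pulse_rate_hz = 200_000        # hypothetical pulses emitted per second
flight_minutes = 30            # duration of one survey flight
swath_area_km2 = 50.0          # assumed ground area covered in that flight

total_returns = pulse_rate_hz * flight_minutes * 60
density_per_m2 = total_returns / (swath_area_km2 * 1e6)

print(f"{total_returns:,} returns, ~{density_per_m2:.1f} points/m^2")
# → 360,000,000 returns, ~7.2 points/m^2
```

Even this modest scenario produces hundreds of millions of points, which is why point clouds are stored in compact binary formats rather than plain text.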


6. Data Processing: The collected data is processed to generate a detailed three-dimensional representation of the target area. This point cloud can be used to create digital elevation models, maps, or 3D models of objects and terrain.
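One common processing step, turning an irregular point cloud into a regular elevation grid, can be sketched with NumPy. This is a simplified mean-per-cell gridding (real DEM production adds ground filtering, interpolation of empty cells, and outlier rejection); the function name and cell-size choice are illustrative:

```python
import numpy as np

def grid_to_dem(x, y, z, cell_size):
    """Bin LIDAR returns into a regular grid, keeping the mean elevation per cell.

    Cells that receive no returns are left as NaN (no data).
    """
    x, y, z = np.asarray(x, float), np.asarray(y, float), np.asarray(z, float)
    cols = ((x - x.min()) // cell_size).astype(int)
    rows = ((y - y.min()) // cell_size).astype(int)

    shape = (rows.max() + 1, cols.max() + 1)
    sums = np.zeros(shape)
    counts = np.zeros(shape)
    np.add.at(sums, (rows, cols), z)    # accumulate elevations per cell
    np.add.at(counts, (rows, cols), 1)  # count returns per cell

    dem = np.full(shape, np.nan)
    hit = counts > 0
    dem[hit] = sums[hit] / counts[hit]
    return dem
```

For example, three returns at (0, 0), (1, 1), and (10, 10) gridded at a 5 m cell size produce a 3×3 DEM in which the two nearby returns are averaged into one cell and untouched cells remain NaN.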


7. Applications: LIDAR remote sensing is used in various applications, including topographic mapping, forestry management, urban planning, archaeology, autonomous vehicles, and more. Its ability to provide precise elevation and object information makes it invaluable for many industries.


LIDAR technology can be implemented in various ways, such as airborne LIDAR using aircraft or UAVs, terrestrial LIDAR for ground-based scanning, and even spaceborne LIDAR for planetary exploration. It has revolutionized the way we collect detailed geospatial information and has numerous practical applications in science, engineering, and environmental monitoring.



Notable spaceborne LIDAR missions include:

1. ICESat-2 (Ice, Cloud, and land Elevation Satellite-2): Launched in 2018 by NASA, ICESat-2 is designed for Earth science research, specifically to measure changes in ice sheet elevation and sea ice freeboard. It uses a LIDAR instrument called the Advanced Topographic Laser Altimeter System (ATLAS) to collect elevation data.


2. GEDI (Global Ecosystem Dynamics Investigation): Installed on the International Space Station (ISS) in 2018, GEDI is a LIDAR instrument that measures the three-dimensional structure of forests and other ecosystems. It provides valuable data for understanding the Earth's carbon cycle.


3. LRO (Lunar Reconnaissance Orbiter): While primarily designed for lunar exploration, NASA's LRO carries a LIDAR instrument called the Lunar Orbiter Laser Altimeter (LOLA). LOLA measures the surface topography of the Moon with high precision.


4. TanDEM-X: This German satellite mission flies in close formation with TerraSAR-X. It is a radar mission rather than a LIDAR one, but it is worth noting as a complementary ranging technique: operating as a bistatic radar interferometer together with TerraSAR-X, it produced a global digital elevation model (DEM) of exceptional accuracy.


5. ATLAS (Advanced Topographic Laser Altimeter System): ATLAS is the sole instrument aboard ICESat-2, described above. It is a photon-counting laser altimeter that splits each transmitted pulse into six beams and times individual returned photons, and it is used to monitor ice sheet and sea ice elevation changes.




