
Great Bahama Bank

#Landsat #NASA #USGS #Earth


When oceanographer Serge Andréfouet first saw a satellite image of the Great Bahama Bank, he knew the colors and contours were special. He passed the unique image to a colleague, who submitted it to NASA's Earth Observatory (EO) for an Image of the Day in 2002 (top image). Nearly eighteen years later, the image is still much appreciated. In fact, it knocked off more recent satellite imagery to win EO's Tournament Earth 2020.

"There are many nice seagrass and sand patterns worldwide, but none like this anywhere on Earth," said Andréfouet, who is now studying reefs at the Institute for Marine Research & Observation in Indonesia. "I am not surprised it is still a favorite, especially for people who see it for the first time." He said the image has been featured over the years on numerous websites, in books, and even at rave parties.

The varying colors and curves remind us of graceful strokes on a painting, but the features were sculpted by geologic processes and ocean creatures. The Great Bahama Bank was dry land during past ice ages, but it slowly submerged as sea levels rose. Today, the bank is covered by water, though it can be as shallow as two meters (seven feet) deep in places. The bank itself is composed of white carbonate sand and limestone, mainly from the skeletal fragments of corals. The Florida peninsula was built from similar deposits.

Andréfouet's image (top) shows a small section of the bank as it appeared on January 17, 2001. It was acquired by the Enhanced Thematic Mapper Plus (ETM+) on the Landsat 7 satellite, using bands 1-2-3. At the time, the instrument's blue channel (band 1) distinguished shallow-water features better than previous satellite missions could.
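For readers curious how such a view is composed, the sketch below shows one common way to stack the three ETM+ bands mentioned above (band 3 red, band 2 green, band 1 blue) into a true-color image with Python, rasterio, and NumPy. This is not the Earth Observatory's actual processing chain; the band file names are hypothetical placeholders, and the percentile stretch is just one simple choice for bringing out shallow-water detail.

```python
# A rough sketch: stack Landsat 7 ETM+ bands 3, 2, 1 into a true-color image.
import numpy as np
import rasterio
import matplotlib.pyplot as plt

# Hypothetical local file names for the three band GeoTIFFs.
BAND_FILES = {
    "red":   "LE07_bahama_B3.TIF",
    "green": "LE07_bahama_B2.TIF",
    "blue":  "LE07_bahama_B1.TIF",
}

def stretch(band, low=2, high=98):
    """Percentile stretch to the 0-1 range so subtle shallow-water
    tones are not crushed into a narrow band of gray."""
    lo, hi = np.percentile(band, (low, high))
    return np.clip((band - lo) / (hi - lo), 0.0, 1.0)

layers = []
for color in ("red", "green", "blue"):
    with rasterio.open(BAND_FILES[color]) as src:
        layers.append(stretch(src.read(1).astype("float32")))

rgb = np.dstack(layers)  # (rows, cols, 3) array in red-green-blue order

plt.imshow(rgb)
plt.axis("off")
plt.title("Great Bahama Bank, Landsat 7 ETM+ bands 3-2-1")
plt.show()
```

Ordering the stack red-green-blue while reading bands 3, 2, 1 is what turns the separate grayscale channels into the familiar natural-color rendering of the bank.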

The wave-shaped ripples in the images are sand on the seafloor. The curves follow the slopes of underwater dunes, which were probably shaped by a fairly strong current near the sea bottom. Sand and seagrass are present in different quantities and at different depths, which gives the image a range of blues and greens. The area appeared largely the same when Landsat 8 passed over on February 15, 2020.

The shallow bank quickly drops off into a deep, dark region known as the "Tongue of the Ocean." Plunging to about 2,000 meters (6,500 feet) deep, the Tongue of the Ocean is home to more than 160 fish and coral species. It lies adjacent to Andros Island, the largest island in the Bahamas and home to one of the largest fringing reefs in the world. The image above was acquired on April 4, 2020, by the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's Terra satellite.

At the time of the 2001 image, researchers did not have a good understanding of the location and distribution of reef systems across the world. Global maps of coral reefs had not changed much since the 19th Century. So researchers turned to satellites for a better view. Andréfouet's image was collected as part of the NASA-funded Millennium Coral Reef Mapping Project, which aimed to image and map coral reefs worldwide. The project gathered more than 1,700 images with Landsat 7, the first Landsat to take images over coastal waters and the open ocean.

Today, many satellites and research programs continue to map and monitor coral reef systems, and marine scientists have a better idea of where the reefs are and how they are faring. Researchers now use reef images and maps in tandem with sea surface temperature data to identify areas vulnerable to coral bleaching.
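One widely used way to turn sea surface temperature records into a bleaching-risk signal, in the spirit of NOAA Coral Reef Watch's "degree heating weeks," is sketched below. The SST values and the maximum monthly mean used here are made-up illustrative numbers, not measurements from the Bahama Bank, and this is offered only as an example of the general technique, not as any particular program's code.

```python
# A minimal sketch of degree heating weeks (DHW): accumulate how far daily
# SST exceeds the local bleaching threshold over a rolling 12-week window.
import numpy as np

def degree_heating_weeks(sst_daily, mmm, window_days=84):
    """Sum daily hotspots (SST at least 1 degC above the maximum monthly
    mean, MMM) over the last 12 weeks, then convert degree-days to
    degree-weeks by dividing by 7."""
    sst_daily = np.asarray(sst_daily, dtype=float)
    hotspot = np.where(sst_daily >= mmm + 1.0, sst_daily - mmm, 0.0)
    return hotspot[-window_days:].sum() / 7.0

# Illustrative (made-up) series: 70 days at the 29 degC MMM, then a
# two-week spike of water 2 degC above it.
sst = np.full(84, 29.0)
sst[-14:] = 31.0
print(f"DHW = {degree_heating_weeks(sst, mmm=29.0):.1f}")  # 14 * 2 / 7 = 4.0
```

Under that convention, an accumulation of roughly four degree heating weeks is commonly treated as a warning level for significant bleaching, which is why pairing reef maps with temperature records helps flag at-risk areas early.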

NASA Earth Observatory images by Joshua Stevens, using Landsat data from the U.S. Geological Survey and MODIS data from NASA EOSDIS/LANCE and GIBS/Worldview. 2002 imagery courtesy Serge Andréfouet, University of South Florida. Story by Kasha Patel.


#Landsat #NASA #USGS #Earth



Vineesh V
Assistant Professor of Geography,
Directorate of Education,
Government of Kerala.
https://g.page/vineeshvc
