
Taking Temperatures from ISS

When remote sensing scientists observe Earth, they often look for heat signatures. Fires, volcanoes, ice, water, and even sunlit or shaded landscapes emit and reflect heat and light—energy—in ways that make them stand out from their surroundings. NASA scientists recently used a new sensor to read some of those signatures more clearly.

Through nearly a year of testing on the International Space Station (ISS), the experimental Compact Thermal Imager (CTI) collected more than 15 million images of Earth, and the results were compelling. Researchers were impressed by the breadth and quality of the imagery CTI collected in 10 months on the ISS, particularly of fires.

For instance, CTI captured several images of the unusually severe fires in Australia that burned for four months in 2019–20. With its resolution of 80 meters (260 feet) per pixel, CTI was able to detect the shape and location of fire fronts and how far they were from settled areas—information that is critically important to first responders.

For the past two decades, scientists have generally relied upon coarse-resolution (375–1,000 m) thermal data from the satellite-based Moderate Resolution Imaging Spectroradiometer (MODIS) and Visible Infrared Imaging Radiometer Suite (VIIRS) sensors to monitor fire activity from above. During its flight test, CTI made observations of fires with 20 times more detail than VIIRS and 190 times more detail than MODIS.
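The "times more detail" figures above can be read as ratios of ground area covered by a single pixel. A minimal sketch of that arithmetic, using the nominal resolutions quoted in this article (the published 20× and 190× figures presumably reflect slightly different effective pixel sizes, e.g. off-nadir pixel growth, so the nominal-value results here land in the same ballpark rather than matching exactly):

```python
# Rough pixel-area comparison for thermal sensors, using the
# nominal ground sample distances quoted in the article.
cti_m = 80.0      # CTI resolution, meters per pixel
viirs_m = 375.0   # VIIRS fire-detection (I-band) resolution, meters
modis_m = 1000.0  # MODIS nominal thermal-band resolution at nadir, meters

def area_ratio(coarse_m: float, fine_m: float) -> float:
    """Ground area of one coarse pixel relative to one fine pixel."""
    return (coarse_m / fine_m) ** 2

print(f"VIIRS vs CTI: {area_ratio(viirs_m, cti_m):.1f}x")  # ~22x
print(f"MODIS vs CTI: {area_ratio(modis_m, cti_m):.1f}x")  # ~156x
```

With these nominal values, one VIIRS pixel covers roughly 22 CTI pixels and one MODIS pixel roughly 156, illustrating why fire fronts that appear as single detections in the coarser products resolve into distinct shapes in CTI imagery.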

The images above highlight the difference. Both images show CTI's view of large fires burning in the Gondwana Rainforests of New South Wales on November 1, 2019. The right image also includes the VIIRS fire detections (red diamonds) of the same area that day. The data were overlaid on a natural-color image acquired by the Operational Land Imager (OLI) on Landsat 8.

The image below, acquired by the European Space Agency's Sentinel-2 spacecraft on November 1, shows a more detailed view of one of the fire clusters, along with the CTI data.

"CTI's deployment on the space station was primarily a test of how well the hardware would perform in space. It was not initially designed as a science mission," explained Doug Morton, chief of the Biospheric Sciences Laboratory at NASA's Goddard Space Flight Center. "Nonetheless, CTI data proved scientifically useful as we monitored several high-profile fire outbreaks this past summer."

One aspect of CTI's mission of particular interest to Morton was the timing of the images. MODIS and VIIRS fly in sun-synchronous polar orbits and observe a given area at roughly the same local times each day (about 10:30 a.m. and 1:30 p.m.). The ISS orbit is not sun-synchronous, so imagers aboard the station pass over a given location at varying times of day, under varying lighting and viewing angles.

"We ended up getting these amazing images of fires at times of the day when we don't usually get them," said Morton. Fire researchers are eager to have more views of fires around dawn and dusk, which are sometimes missed by MODIS and VIIRS. "It was a reminder of how much critical science we could do if we had a whole fleet of sensors like CTI giving us such detailed measurements multiple times a day."

CTI was designed at NASA's Goddard Space Flight Center and installed on the ISS in 2019 as part of the Robotic Refueling Mission 3. It used an advanced detector called a strained layer superlattice (SLS), an improved version of the detector technology that is part of the Thermal Infrared Sensor (TIRS) of Landsat 8 and 9.

"The new SLS technology operates at a much warmer temperature with greater sensitivity and has a broader spectral response than the TIRS technology, resulting in a smaller and less costly instrument to design and build," said Murzy Jhabvala, principal investigator for CTI. "SLS has proved itself. This technology is now a viable candidate for the future Landsat 10 and a variety of other lunar, planetary, and asteroid missions."

NASA Earth Observatory images by Lauren Dauphin, using Landsat data from the U.S. Geological Survey, VIIRS data from NASA EOSDIS/LANCE and GIBS/Worldview and the Suomi National Polar-orbiting Partnership, topographic data from the Shuttle Radar Topography Mission (SRTM), and modified Copernicus Sentinel data (2018) processed by the European Space Agency. CTI data courtesy of the CTI team at NASA's Goddard Space Flight Center. The sensor was developed with QmagiQ and funded by the Earth Science Technology Office (ESTO). Story by Adam Voiland.

#Landsat #NASA #USGS #Earth


Vineesh V
Assistant Professor of Geography,
Directorate of Education,
Government of Kerala.
https://g.page/vineeshvc
