
Posts

Showing posts from January, 2023

Water Vapor Imagery

Water vapor imagery is useful for weather analysis and forecasting. Water vapor is transparent to visible radiation but absorbs and emits radiation strongly between 6.5 and 6.9 microns. Satellite radiometers measure the radiation emitted by the atmosphere at these wavelengths to detect water vapor. Water vapor satellite images show water vapor concentration roughly between 200 and 500 mb. Low amounts of water vapor appear black and high concentrations milky white; bright white regions correspond to cirrus clouds. A strong contrast in water vapor in the middle latitudes indicates the presence of a jet stream.
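The darkness/brightness pattern described above can be sketched with the Planck function: a dry upper troposphere lets the sensor see warmer, lower layers, which emit more radiance at 6.7 µm and therefore appear dark. A minimal illustration (constants rounded, temperatures hypothetical):

```python
import math

H = 6.626e-34   # Planck constant (J s)
C = 2.998e8     # speed of light (m/s)
K = 1.381e-23   # Boltzmann constant (J/K)

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1."""
    num = 2 * H * C**2 / wavelength_m**5
    return num / (math.exp(H * C / (wavelength_m * K * temp_k)) - 1)

# Warmer emitting layers (drier column) radiate more in the 6.7 um band,
# so the pixel is displayed darker in water vapor imagery.
wl = 6.7e-6
print(planck_radiance(wl, 240), planck_radiance(wl, 260))
```

The monotonic increase of radiance with temperature is what lets the image be interpreted as moisture content of the upper troposphere.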

Digital Image

Electromagnetic energy may be detected either photographically or electronically. The photographic process uses chemical reactions on the surface of light-sensitive film to detect and record energy variations. It is important to distinguish between the terms "image" and "photograph" in remote sensing. An image refers to any pictorial representation, regardless of what wavelengths or remote sensing device has been used to detect and record the electromagnetic energy. A photograph refers specifically to images that have been both detected and recorded on photographic film. The black-and-white photo to the left, of part of the city of Ottawa, Canada, was taken in the visible part of the spectrum. Photos are normally recorded over the wavelength range from 0.3 µm to 0.9 µm - the visible and reflected infrared. Based on these definitions, we can say that all photographs are images, but not all images are photographs. Therefore, unless we are talking specifically about an image recorded photo

Electromagnetic Spectrum

The electromagnetic spectrum ranges from the shorter wavelengths (including gamma and x-rays) to the longer wavelengths (including microwaves and broadcast radio waves). Several regions of the electromagnetic spectrum are useful for remote sensing. For most purposes, the ultraviolet or UV portion of the spectrum has the shortest wavelengths that are practical for remote sensing. This radiation is just beyond the violet portion of the visible wavelengths, hence its name. Some Earth surface materials, primarily rocks and minerals, fluoresce, or emit visible light, when illuminated by UV radiation. The light which our eyes - our "remote sensors" - can detect is part of the visible spectrum. It is important to recognize how small the visible portion is relative to the rest of the spectrum. There is a lot of radiation around us which is "invisible" to our eyes but can be detected by other remote sensing instruments and used to our advantage. The visible w

Ideal Remote Sensing System.

Remote sensing is the science (and to some extent, art) of acquiring information about the Earth's surface without actually being in contact with it. This is done by sensing and recording reflected or emitted energy and processing, analyzing, and applying that information. In much of remote sensing, the process involves an interaction between incident radiation and the targets of interest. This is exemplified by the use of imaging systems, where the following seven elements are involved. Note, however, that remote sensing also involves the sensing of emitted energy and the use of non-imaging sensors. 1. Energy Source or Illumination (A) - the first requirement for remote sensing is to have an energy source which illuminates or provides electromagnetic energy to the target of interest. 2. Radiation and the Atmosphere (B) - as the energy travels from its source to the target, it will come in contact with and interact with the atmosphere it passes through. This interaction may ta

Rayleigh scattering. Mie scattering. Nonselective scattering.

Before radiation used for remote sensing reaches the Earth's surface, it has to travel through some distance of the Earth's atmosphere. Particles and gases in the atmosphere can affect the incoming light and radiation. These effects are caused by the mechanisms of scattering and absorption. Scattering occurs when particles or large gas molecules present in the atmosphere interact with the electromagnetic radiation and cause it to be redirected from its original path. How much scattering takes place depends on several factors, including the wavelength of the radiation, the abundance of particles or gases, and the distance the radiation travels through the atmosphere. There are three types of scattering: Rayleigh scattering, Mie scattering, and nonselective scattering. Rayleigh scattering occurs when particles are very small compared to the wavelength of
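Rayleigh scattering intensity is inversely proportional to the fourth power of wavelength, which is why short (blue) wavelengths scatter far more strongly than long (red) ones. A minimal sketch of that relationship (the wavelength values are illustrative):

```python
def rayleigh_relative(lam_um, ref_um=0.7):
    """Relative Rayleigh scattering intensity, proportional to wavelength^-4,
    normalized so that the reference wavelength (default 0.7 um, red) equals 1."""
    return (ref_um / lam_um) ** 4

# Blue light (~0.4 um) scatters roughly 9.4x more than red light (~0.7 um),
# which is why the clear daytime sky appears blue.
print(rayleigh_relative(0.4))
```

This wavelength dependence is also why haze affects shorter-wavelength bands of a sensor more strongly.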

Application of Remote Sensing. Disaster Management

- Disaster monitoring and early warning: Remote sensing can be used to monitor and forecast natural disasters such as floods, hurricanes, and earthquakes, which can help in providing early warning and evacuation planning.
- Damage assessment: Remote sensing can be used to assess the extent and severity of damage caused by natural disasters, which can help in planning and coordinating response and recovery efforts.
- Search and rescue operations: Remote sensing can be used to locate and track people and vehicles in affected areas, which can help in search and rescue operations.
- Mapping of affected areas: Remote sensing can be used to create detailed maps of affected areas, which can help in navigation, logistics, and emergency response planning.
- Identification of critical infrastructure: Remote sensing can be used to identify and map critical infrastructure such as roads, bridges, and buildings, which can help in assessing the impact of disasters on these structures and in planning repairs an

Application of Remote Sensing. Urban Planning ...

- Urban land use and land cover mapping: Remote sensing can be used to map and monitor land use and land cover changes in urban areas, such as the expansion of housing developments, commercial areas, and transportation infrastructure.
- Identification of areas of urban sprawl: Remote sensing can be used to identify areas of urban sprawl, which can help in planning for sustainable growth and managing land use.
- Building inventory and change detection: Remote sensing can be used to create a detailed inventory of buildings and other structures in urban areas, and to detect changes in the built environment over time.
- Transportation planning: Remote sensing can be used to map and monitor transportation infrastructure, such as roads, bridges, and rail lines, which can help in planning for new transportation projects and managing existing infrastructure.
- Mapping of green spaces: Remote sensing can be used to map and monitor the distribution and health of green spaces in urban areas, such as parks,

Application of Remote Sensing. Agriculture

- Crop mapping and monitoring: Remote sensing can be used to map and monitor crop growth, yield, and health, and to detect changes in crop cover over time.
- Crop yield forecasting: Remote sensing data can be used to estimate crop yields, which can help farmers plan for planting, harvesting, and marketing of crops.
- Identification of crop stress: Remote sensing can be used to identify crop stress caused by factors such as drought, pests, and disease, which can help farmers take action to mitigate the effects of these stressors.
- Irrigation management: Remote sensing can be used to assess the water needs of crops and to optimize irrigation schedules, which can help farmers save water and reduce costs.
- Soil moisture monitoring: Remote sensing can be used to monitor soil moisture levels, which can help farmers to optimize irrigation schedules and improve crop yields.
- Precision agriculture: Remote sensing can be used in precision agriculture to generate high-resolution maps of crop fields, which

Application of Remote Sensing. Vegetation Mapping

- Mapping and monitoring of vegetation cover and changes: Remote sensing can be used to map and monitor vegetation cover, including forests, grasslands, and croplands, and to detect changes in vegetation over time.
- Identification of different types of vegetation: Remote sensing can be used to classify different types of vegetation, such as deciduous, coniferous, and mixed forests, and to differentiate between different types of crops.
- Measurement of vegetation productivity and health: Remote sensing can be used to measure the productivity and health of vegetation, by generating vegetation indices such as NDVI (Normalized Difference Vegetation Index) and LAI (Leaf Area Index).
- Assessment of water resources and watersheds: Remote sensing can be used to assess the health and distribution of vegetation in relation to water resources and watersheds, which can help in managing irrigation systems, water supply and flood control.
- Forest inventory and management: Remote sensing can be used to esti
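NDVI, mentioned above, is computed per pixel as (NIR - Red) / (NIR + Red): healthy vegetation reflects strongly in the near-infrared and absorbs red light, pushing the index toward +1. A minimal sketch with hypothetical reflectance values:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    A small eps avoids division by zero over dark pixels."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Hypothetical reflectances: dense vegetation, sparse vegetation, water.
nir_band = np.array([0.50, 0.30, 0.05])
red_band = np.array([0.08, 0.10, 0.10])
print(ndvi(nir_band, red_band))   # high for vegetation, negative for water
```

In practice the same function is applied band-by-band over full raster arrays rather than three sample pixels.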

Polarization in Remote Sensing

L band radars operate at wavelengths of 15-30 cm and frequencies of 1-2 GHz; they are mostly used for clear-air turbulence studies. S band radars operate at wavelengths of 8-15 cm and frequencies of 2-4 GHz; because of the longer wavelength, S band radars are not easily attenuated. Polarization refers to the orientation of an electromagnetic wave's electric field vector: vertical (up and down), horizontal (left to right), or circular (the tip of the vector rotating left or right). Radar instruments flown for Earth observation include a synthetic aperture radar (SAR) for high-resolution imaging, a radar altimeter to measure ocean topography from the timing and amplitude of the echo, and a wind scatterometer to measure wind speed and direction. Other types of radars have been flown for Earth observation missions: precipitation radars such as the one on the Tropical Rainfall Measuring Mission, and cloud radars like the one used on CloudSat. Examples include RISAT-1 (SAR, ISRO India, 2012), RORSAT (radar, Soviet Union, 1967-1988), Seasat (SAR, altimeter, scatterometer, US, 19
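The band limits quoted above follow directly from c = λf. A quick check of the L band endpoints (wavelengths from the text):

```python
C = 2.998e8   # speed of light in m/s

def freq_ghz(wavelength_cm):
    """Frequency in GHz for a radar wavelength given in centimeters."""
    return C / (wavelength_cm / 100.0) / 1e9

# L band: 30 cm -> ~1 GHz, 15 cm -> ~2 GHz, matching the stated 1-2 GHz range.
print(freq_ghz(30), freq_ghz(15))
```

The same conversion reproduces the S band figures: 15 cm and 8 cm correspond to roughly 2 GHz and 4 GHz.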

Neural networks in Remote Sensing

Neural networks are a type of algorithm used in image classification and other areas of machine learning. They are loosely based on the structure and function of the human brain, and are made up of layers of interconnected nodes, called neurons. In the context of image classification, the input data is the image pixels and the output is the class label: the neurons in the input layer receive the pixel values, the neurons in the hidden layers process them and pass the results forward, and the neurons in the output layer produce the predicted class label. Neural networks are known for their ability to learn from data and improve their performance over time. They are a supervised learning algorithm and require a labeled dataset to train the network on.
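A single forward pass through such a network can be sketched in a few lines of NumPy. The layer sizes, weights, and input values below are made up purely for illustration; a real classifier would learn the weights from labeled training pixels:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    """Hidden-layer activation: zero out negative values."""
    return np.maximum(0, x)

def softmax(x):
    """Convert output-layer scores to class probabilities."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical architecture: 4 input features, 8 hidden neurons, 3 classes.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

def forward(pixel):
    hidden = relu(pixel @ W1 + b1)      # hidden layer processes the input
    return softmax(hidden @ W2 + b2)    # output layer -> class probabilities

x = np.array([0.2, 0.5, 0.1, 0.9])     # one pixel's band values (made up)
probs = forward(x)
print(probs, "predicted class:", probs.argmax())
```

Training adjusts W1, b1, W2, b2 so the predicted probabilities match the labeled classes; only the forward pass is shown here.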

Atmospheric correction

Atmospheric correction in remote sensing refers to the process of removing the effects of the Earth's atmosphere on the signal being detected by a remote sensing instrument. This is necessary because the atmosphere can scatter and absorb light, causing a distortion of the signal that is being detected. There are several methods used for atmospheric correction, including radiative transfer models, atmospheric inversion techniques, and empirical methods. Radiative transfer models use mathematical equations to simulate the interactions between light and the atmosphere, and can be used to correct for atmospheric effects such as scattering and absorption. Atmospheric inversion techniques use atmospheric measurements, such as atmospheric temperature and water vapor, to correct for atmospheric effects. Empirical methods use statistical techniques to correct for atmospheric effects based on observations of the scene. Atmospheric correction is an important step in remote sensing, as it allo
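One of the simplest empirical methods is dark object subtraction: it assumes the darkest pixels in a scene should be near zero, so any residual signal there is attributed to atmospheric path radiance and subtracted from the whole band. A sketch with synthetic digital numbers:

```python
import numpy as np

def dark_object_subtraction(band, percentile=0.1):
    """Empirical atmospheric correction: estimate the haze offset from the
    darkest pixels (a low percentile) and subtract it from the band,
    clipping at zero so no corrected value goes negative."""
    band = np.asarray(band, dtype=float)
    haze = np.percentile(band, percentile)
    return np.clip(band - haze, 0, None)

band = np.array([12, 15, 40, 80, 200], dtype=float)   # synthetic DNs
print(dark_object_subtraction(band))
```

This corrects only the additive scattering component; radiative transfer models are needed to also handle absorption and multiplicative effects.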

Radiometric correction in remote sensing

Radiometric correction in remote sensing refers to the process of adjusting the brightness and contrast of an image to make it consistent with the real-world conditions under which it was acquired. This correction is necessary because the sensors used in remote sensing can be affected by factors such as atmospheric conditions, sun angle, and sensor noise, which can result in variations in the brightness and contrast of the image. The main goal of radiometric correction is to remove these variations and make the image more representative of the real-world conditions. This is done by using mathematical algorithms and models to adjust the brightness and contrast of the image based on various factors such as the sun angle, atmospheric conditions, and sensor noise. There are several different methods used for radiometric correction, including atmospheric correction, sensor correction, and image enhancement. Each method is used to correct a specific type of variation in the image. For exampl
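A typical radiometric workflow converts raw digital numbers to at-sensor radiance using calibration gain and offset, then to top-of-atmosphere reflectance using the sun angle and Earth-Sun distance. The coefficients below are hypothetical, not those of any particular sensor:

```python
import math

def dn_to_radiance(dn, gain, offset):
    """Raw digital number -> at-sensor spectral radiance.
    Gain and offset come from the sensor's calibration metadata;
    the values used below are invented for illustration."""
    return gain * dn + offset

def radiance_to_toa_reflectance(radiance, d_au, esun, sun_elev_deg):
    """Radiance -> top-of-atmosphere reflectance, given Earth-Sun distance
    in AU, band mean solar irradiance ESUN, and sun elevation angle."""
    theta = math.radians(90.0 - sun_elev_deg)   # solar zenith angle
    return math.pi * radiance * d_au ** 2 / (esun * math.cos(theta))

rad = dn_to_radiance(120, gain=0.05, offset=1.0)   # ~7.0 radiance units
print(radiance_to_toa_reflectance(rad, d_au=1.0, esun=1550.0, sun_elev_deg=60.0))
```

Normalizing by sun angle in this way removes the illumination differences between scenes acquired at different times of day or year.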

Geometric correction

Geometric correction in remote sensing refers to the process of removing geometric distortions from images acquired by a sensor. These distortions can be caused by factors such as sensor tilt, altitude, and terrain curvature. The goal of geometric correction is to produce an image that is geometrically accurate, meaning that the features in the image correspond to their true locations on the ground. One commonly used method for geometric correction is called rectification. This involves transforming the image so that it is projected onto a uniform scale, such as a map projection or a digital elevation model. This can be done using a process called orthorectification, which involves using information from the sensor's attitude and position, as well as a digital elevation model, to correct for distortions caused by terrain relief. For example, an image of a mountainous area acquired by a sensor mounted on an aircraft may appear distorted due to the angle of the sensor and the shape o
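First-order (affine) polynomial rectification can be fit from ground control points by least squares, mapping image pixel coordinates to map coordinates. The pixel and map coordinates below are invented for illustration (a 30 m pixel grid with an arbitrary map origin):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping image (col, row) pairs to
    map (x, y) pairs from ground control points: a first-order
    polynomial rectification."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])     # rows of [col, row, 1]
    coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return coeffs                                    # 3x2 matrix

# Hypothetical GCPs: pixel coords -> map coords
pix = [(0, 0), (100, 0), (0, 100), (100, 100)]
map_xy = [(500000, 4000000), (503000, 4000000),
          (500000, 3997000), (503000, 3997000)]
T = fit_affine(pix, map_xy)
print(np.array([50, 50, 1]) @ T)   # center pixel -> map coordinate
```

An affine fit corrects shift, scale, rotation, and shear; correcting terrain-relief displacement additionally requires orthorectification with a digital elevation model, as described above.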

Post classification smoothing in remote sensing

Post-classification smoothing in remote sensing is a technique used to reduce noise and improve the overall accuracy of a classification. It is typically applied after the initial classification has been completed and smooths out inconsistencies or errors in the classified image. This is typically achieved by applying mathematical algorithms that take into account the spatial relationships between pixels in the image, as well as the spectral characteristics of the pixels. In particular, post-classification smoothing reduces the "salt and pepper" noise that can occur in a classified image. This noise can be caused by errors in the classification process or by variations in
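A common smoothing algorithm is the majority (modal) filter, which replaces each pixel's class with the most frequent class in its neighborhood, suppressing isolated misclassified pixels. A minimal sketch:

```python
import numpy as np
from collections import Counter

def majority_filter(classified, size=3):
    """Replace each pixel's class with the most common class in its
    size x size neighborhood; edges are handled by replicating border pixels."""
    pad = size // 2
    padded = np.pad(classified, pad, mode='edge')
    out = np.empty_like(classified)
    rows, cols = classified.shape
    for r in range(rows):
        for c in range(cols):
            window = padded[r:r + size, c:c + size].ravel()
            out[r, c] = Counter(window).most_common(1)[0][0]
    return out

img = np.array([[1, 1, 1],
                [1, 2, 1],
                [1, 1, 1]])   # a lone 'class 2' pixel amid class 1
print(majority_filter(img))   # the isolated pixel is smoothed away
```

Production tools implement the same idea with vectorized windowed operations, but the per-pixel loop makes the logic explicit.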

Fuzzy classification in remote sensing

Fuzzy classification in remote sensing is a method of image classification that uses fuzzy logic to assign multiple class membership values to each pixel in an image. This approach allows for a more nuanced and accurate representation of the features present in the image, as it acknowledges the possibility of overlap and uncertainty in class boundaries. In a fuzzy classification, each pixel is assigned a set of membership values, or "fuzzy membership grades," one for each class, that indicate the degree to which the pixel belongs to that class. These membership values can then be used to create a final, crisp classification of the image, or they can be used to represent the uncertainty of the classification. These membership values ar
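One simple way to derive fuzzy membership grades is from a pixel's inverse distance to each class mean in spectral space, normalized so the grades sum to one. The class names and mean spectra below are hypothetical:

```python
import numpy as np

def fuzzy_memberships(pixel, class_means):
    """Assign fuzzy membership grades from inverse distance to each class
    mean in spectral space; grades are normalized to sum to 1."""
    pixel = np.asarray(pixel, dtype=float)
    dists = np.array([np.linalg.norm(pixel - np.asarray(m, dtype=float))
                      for m in class_means])
    inv = 1.0 / (dists + 1e-9)      # closer class mean -> higher grade
    return inv / inv.sum()

# Hypothetical class means in two spectral bands.
means = {'water': [0.05, 0.02], 'vegetation': [0.08, 0.45], 'soil': [0.25, 0.30]}
grades = fuzzy_memberships([0.09, 0.40], list(means.values()))
print(dict(zip(means, grades)))    # mostly vegetation, some soil
```

Taking the class with the highest grade yields a crisp classification; keeping all grades preserves the uncertainty information.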

Spectral Mixture Analysis. Classification of mixed pixels

Classification of mixed pixels in remote sensing refers to the process of identifying and categorizing pixels in an image that contain multiple materials or land covers. These pixels are known as "mixed pixels" because they contain multiple spectral signatures, making them difficult to classify using traditional classification techniques. Spectral mixture analysis (SMA) is a technique used to classify mixed pixels. It is based on the principle that different materials reflect light differently and have unique spectral signatures in different parts of the electromagnetic spectrum. SMA uses a set of known spectral signatures for different materials, such as vegetation, water, soil, and rock, and compares them to the spectral reflectance of an image. The technique then estimates the proportion of each material present in each pixel by analyzing its spectral reflectance. This information can be used to map the
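Linear spectral mixture analysis models each pixel as a linear combination of endmember spectra and solves for the fractions. A simplified sketch (non-negativity is handled by clipping and renormalizing rather than a fully constrained solver, and the endmember spectra are invented):

```python
import numpy as np

def unmix(pixel, endmembers):
    """Linear spectral unmixing: solve pixel = endmembers @ fractions by
    least squares, then clip negatives and renormalize so fractions sum to 1
    (a simplification of fully constrained SMA)."""
    fractions, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    fractions = np.clip(fractions, 0, None)
    return fractions / fractions.sum()

# Hypothetical endmember spectra (rows = bands; cols = vegetation, soil).
E = np.array([[0.05, 0.20],
              [0.45, 0.30],
              [0.30, 0.40]])
mixed = 0.7 * E[:, 0] + 0.3 * E[:, 1]   # a 70/30 vegetation-soil mixture
print(unmix(mixed, E))                   # recovers ~[0.7, 0.3]
```

Because the synthetic pixel is an exact linear mixture, the fractions are recovered exactly; real pixels include noise, so the estimate is approximate.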

Mixed Pixels in the Image. Remote Sensing.

Mixed pixels in remote sensing refer to pixels in an image that contain more than one land cover or land use type. These pixels can occur due to the presence of multiple land cover types in the same area, or due to the limited spatial resolution of the image. For example, a mixed pixel may contain both vegetation and water, or both urban and natural vegetation. In such cases, it can be difficult to assign a single class label to the pixel, as it contains multiple land cover types. This is a common problem in remote sensing, especially when using moderate- or coarse-resolution satellite imagery, where each pixel covers a large ground area. Mixed pixels can have a significant impact on the accuracy of image classification, as they can lead to misclassification of land cover types and can result in errors in land cover mapping. To overcome this problem, different methods such as spectral mixture analysis, object-based classification, decision tree or random forest classifiers, and hybrid methods can be used to classify mixed pixel

Representative subscene classification

Representative subscene classification refers to the process of selecting a representative subset of an image, or "subscene," and using it to classify the entire image. This approach is used when the image data is too large or complex to classify as a whole, or when it would be too costly to manually label the entire image. The process typically begins with the selection of the representative subset: a small portion of the image that captures the variety present in the full scene. This subset is then manually or automatically labeled with the appropriate class labels. Next, an algorithm, such as a decision tree or a support vector machine, is trained on the labeled subset and then used to classify the entire image. By selecting a repre
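The workflow can be sketched with a nearest-centroid classifier standing in for the decision tree or support vector machine mentioned above: class mean spectra are learned from a small labeled subscene and then applied to the full image. The toy single-band data is invented:

```python
import numpy as np

def train_centroids(pixels, labels):
    """Compute class mean spectra from a small labeled subscene."""
    classes = np.unique(labels)
    return classes, np.array([pixels[labels == c].mean(axis=0) for c in classes])

def classify(pixels, classes, centroids):
    """Nearest-centroid classification of all pixels in the full image."""
    d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

# Hypothetical 1-band data: four labeled subscene pixels train the classifier,
# which then labels the rest of the (toy) image.
sub_pixels = np.array([[0.1], [0.2], [0.8], [0.9]])
sub_labels = np.array([0, 0, 1, 1])
classes, cent = train_centroids(sub_pixels, sub_labels)
full_image = np.array([[0.15], [0.85], [0.05]])
print(classify(full_image, classes, cent))   # -> [0 1 0]
```

The accuracy of this approach hinges on the subscene truly being representative: classes absent from the subset cannot be predicted for the full image.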