
Grey level thresholding.

Level slicing.

Contrast stretching.


Image enhancement


Lillesand and Kiefer (1994) explained that the goal of image enhancement procedures is to improve the visual interpretability of an image by increasing the apparent distinction between the features in the scene. The objective is to create a "new" image from the original in order to increase the amount of information that can be visually interpreted from the data.


Enhancement operations are normally applied to image data after the appropriate restoration procedures have been performed; noise removal, in particular, is an important precursor to most enhancements. The image enhancement techniques considered in this study are as follows:


Grey level thresholding


Grey level thresholding is a simple lookup table that partitions the grey levels in an image into two categories: those below a user-selected threshold and those above it. Thresholding is one of many methods for creating a binary mask for an image. Such masks are used to restrict subsequent processing to a particular region within an image.


This procedure is used to segment an input image into two classes: one for those pixels having values below an analyst-defined grey level and one for those above this value (Lillesand and Kiefer, 1994).
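The thresholding operation described above can be sketched with NumPy; the image array and threshold value here are illustrative, not from the original study:

```python
import numpy as np

# Hypothetical 8-bit grey-level image (DN values 0-255)
image = np.array([[ 30, 120, 200],
                  [ 90, 160,  40],
                  [210,  70, 130]], dtype=np.uint8)

threshold = 128  # analyst-defined grey level

# Binary mask: pixels at or above the threshold become 255 (white),
# pixels below it become 0 (black)
mask = np.where(image >= threshold, 255, 0).astype(np.uint8)

print(mask)
```

The resulting mask can then be used to restrict later processing (e.g. classification or statistics) to one of the two regions.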


Level slicing


Level slicing is an enhancement technique whereby the Digital Numbers (DNs) distributed along the x-axis of an image histogram are divided into a series of analyst-specified intervals, or "slices". All DNs falling within a given interval in the input image are then displayed as a single DN in the output image (Lillesand and Kiefer, 1994).
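A minimal sketch of level slicing with NumPy follows; the slice boundaries and output DNs are hypothetical choices an analyst might make, not values from the cited text:

```python
import numpy as np

# Hypothetical input DNs (8-bit range)
image = np.array([10, 60, 130, 200, 250], dtype=np.uint8)

# Analyst-specified slice boundaries along the DN axis
boundaries = [0, 64, 128, 192, 256]

# Single output DN assigned to each slice
slice_values = np.array([0, 85, 170, 255], dtype=np.uint8)

# np.digitize returns, for each pixel, the index of the slice it falls in
indices = np.digitize(image, boundaries) - 1
sliced = slice_values[indices]

print(sliced)
```

Every pixel inside a slice collapses to one output level, so the result highlights broad DN ranges at the cost of within-slice detail.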


Contrast stretching


Most satellite and airborne sensors are designed to accommodate a wide range of illumination conditions, from poorly lit arctic regions to high-reflectance desert regions. Because of this, the pixel values in the majority of digital scenes occupy a relatively small portion of the possible range of image values. If the pixel values are displayed in their original form, only a small range of grey values will be used, resulting in a low-contrast display on which similar features may be indistinguishable.


A contrast stretch enhancement expands the range of pixel values so that they are displayed over a fuller range of grey values (PCI, 1997).


Image display and recording devices typically operate over a range of 256 grey levels (the maximum number representable in 8-bit computer encoding). For a single 8-bit image, the idea is to expand the narrow range of brightness values typically present in the input image over this wider range of grey values. The result is an output image that is designed to accentuate the contrast between features of interest to the image analyst (Lillesand and Kiefer, 1994).
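A common form of this enhancement is a linear (min-max) stretch, which maps the image's minimum DN to 0 and its maximum to 255. The sketch below uses NumPy on a hypothetical low-contrast image:

```python
import numpy as np

# Hypothetical low-contrast 8-bit image: DNs occupy only 50-80
image = np.array([[50, 60],
                  [70, 80]], dtype=np.uint8)

lo, hi = image.min(), image.max()

# Linear min-max stretch: map [lo, hi] onto the full [0, 255] range
stretched = ((image.astype(float) - lo) / (hi - lo) * 255).round().astype(np.uint8)

print(stretched)
```

In practice the stretch limits are often taken from histogram percentiles (e.g. 2nd and 98th) rather than the absolute minimum and maximum, so that a few outlier pixels do not compress the rest of the range.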

The grey level or grey value indicates the brightness of a pixel. The minimum grey level is 0. The maximum grey level depends on the digitisation depth of the image. For an 8-bit-deep image it is 255. In a binary image a pixel can only take on either the value 0 or the value 255.


