
Digital Image

Electromagnetic energy may be detected either photographically or electronically. The photographic process uses chemical reactions on the surface of light-sensitive film to detect and record energy variations. It is important to distinguish between the terms image and photograph in remote sensing. An image refers to any pictorial representation, regardless of what wavelengths or remote sensing device has been used to detect and record the electromagnetic energy. A photograph refers specifically to an image that has been both detected and recorded on photographic film. The black-and-white photo to the left, of part of the city of Ottawa, Canada, was taken in the visible part of the spectrum. Photos are normally recorded over the wavelength range from 0.3 µm to 0.9 µm (the visible and reflected infrared). Based on these definitions, we can say that all photographs are images, but not all images are photographs. Therefore, unless we are talking specifically about an image recorded photographically, we use the term image.


A photograph can also be represented and displayed in a digital format by subdividing the image into small, equal-sized and equally shaped areas, called picture elements or pixels, and representing the brightness of each area with a numeric value, or digital number. Indeed, that is exactly what has been done to the photo to the left. In fact, using the definitions we have just discussed, this is actually a digital image of the original photograph! The photograph was scanned and subdivided into pixels, with each pixel assigned a digital number representing its relative brightness. The computer displays each digital value as a different brightness level. Sensors that record electromagnetic energy electronically record the energy as an array of numbers in digital format right from the start. These two ways of representing and displaying remote sensing data, pictorially and digitally, are interchangeable, as they convey the same information (although some detail may be lost when converting back and forth).
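To make the idea concrete, here is a minimal sketch (the array values are fabricated for illustration) of a digital image as nothing more than a grid of digital numbers, with 8-bit brightness values where 0 is black and 255 is white:

```python
import numpy as np

# A digital image is simply a grid of digital numbers (DNs).
# This tiny 4x4 "image" of 8-bit brightness values
# (0 = black, 255 = white) is purely illustrative.
image = np.array([
    [ 12,  40,  80, 120],
    [ 40,  90, 150, 200],
    [ 80, 150, 210, 240],
    [120, 200, 240, 255],
], dtype=np.uint8)

print(image.shape)   # 4 rows x 4 columns of pixels
print(image[0, 3])   # DN (relative brightness) of the pixel in row 0, column 3
```

Scanning a film photograph produces exactly this kind of array, while an electronic sensor writes such an array directly at acquisition time.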


In previous sections we described the visible portion of the spectrum and the concept of colours. We see colour because our eyes detect the entire visible range of wavelengths and our brains process the information into separate colours. Can you imagine what the world would look like if we could only see very narrow ranges of wavelengths or colours? That is how many sensors work. The information from a narrow wavelength range is gathered and stored in a channel, also sometimes referred to as a band. We can combine and display channels of information digitally using the three primary colours (blue, green, and red). The data from each channel is represented as one of the primary colours and, depending on the relative brightness (i.e. the digital value) of each pixel in each channel, the primary colours combine in different proportions to represent different colours.
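The channel-combination idea above can be sketched in a few lines. The band names and digital values below are hypothetical, chosen only to show how per-pixel DNs in three channels mix into colours:

```python
import numpy as np

# Hypothetical 2x2 grids of DNs (0-255) for three narrow wavelength
# channels, each assigned to one primary colour for display.
red_band   = np.array([[255,   0], [128,   0]], dtype=np.uint8)
green_band = np.array([[  0, 255], [128,   0]], dtype=np.uint8)
blue_band  = np.array([[  0,   0], [128, 255]], dtype=np.uint8)

# Stack the channels so each pixel carries an (R, G, B) triple.
# Equal DNs across channels -> gray; unequal DNs -> a colour.
composite = np.dstack([red_band, green_band, blue_band])

print(composite.shape)   # 2x2 pixels, 3 primaries per pixel
print(composite[1, 0])   # [128 128 128]: equal proportions, a mid-gray pixel
print(composite[0, 0])   # [255 0 0]: only the red channel is bright here
```

Display software does essentially this stacking before sending the result to the screen; the "colour" of each pixel is just the relative proportion of its three DNs.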



When we use this method to display a single channel or range of wavelengths, we are actually displaying that channel through all three primary colours. Because the brightness level of each pixel is the same for each primary colour, they combine to form a black and white image, showing various shades of gray from black to white. When we display more than one channel each as a different primary colour, then the brightness levels may be different for each channel/primary colour combination and they will combine to form a colour image.
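A short sketch of the single-channel case described above (again with made-up DN values): feeding one band to all three primaries guarantees every pixel's R, G, and B values are identical, which is precisely why the result is a gray-scale image.

```python
import numpy as np

# One channel of illustrative DNs.
band = np.array([[0, 64], [128, 255]], dtype=np.uint8)

# Displaying a single channel "through all three primaries" means
# sending the same DN to red, green, and blue for every pixel, so
# each pixel is a shade of gray from black (0) to white (255).
gray_display = np.dstack([band, band, band])

# Verify that R, G, and B are identical at every pixel.
r, g, b = gray_display[..., 0], gray_display[..., 1], gray_display[..., 2]
print(np.array_equal(r, g) and np.array_equal(g, b))  # True
```

Replacing any one of the three stacked copies with a different channel breaks that equality, and the display becomes a colour composite as described in the previous paragraph.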

