Chapter 7 Introduction To Remote Sensing
Following the study of aerial photography, which extends human observation and recording capabilities, this chapter introduces Remote Sensing. While aerial photography and human eyes primarily respond to light within the visible spectrum, modern remote sensing devices can detect and measure a much wider range of energy reflected, emitted, absorbed, or transmitted by objects on the Earth's surface. Objects at any temperature above absolute zero (0 Kelvin or $-273^\circ C$) emit electromagnetic radiation.
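As a rough illustration of this principle (not part of the NCERT text), Wien's displacement law relates a body's temperature to the wavelength at which it emits most strongly; the sketch below uses rounded, commonly quoted temperatures for the Sun and the Earth.

```python
# Minimal sketch: Wien's displacement law, peak wavelength = b / T.
WIEN_B = 2.898e-3  # Wien's displacement constant, metre-kelvin

def peak_wavelength_um(temperature_k: float) -> float:
    """Wavelength of peak emission (micrometres) for a body at T kelvin."""
    return WIEN_B / temperature_k * 1e6

print(peak_wavelength_um(6000))  # Sun (~6000 K)  -> ~0.48 um, visible light
print(peak_wavelength_um(300))   # Earth (~300 K) -> ~9.7 um, thermal infrared
```

This is why the Sun dominates the visible region used in passive remote sensing, while the Earth itself emits mainly in the thermal infrared.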
The term "remote sensing" was coined in the early 1960s. It is defined as the process of acquiring and measuring information about the properties of objects or phenomena on the Earth's surface using a recording device (sensor) that is not in physical contact with the objects being studied. Essentially, remote sensing involves an object surface, the sensor, and the energy waves that carry information between them (Figure 7.1).
Diagram illustrating the basic concept of remote sensing, involving a source of energy, interaction with an object's surface, and detection of reflected or emitted energy by a sensor.
Glossary terms introduced in the text:
- Absorptance: Ratio of absorbed radiant energy to received energy.
- Band: A specific wavelength interval in the electromagnetic spectrum.
- Digital image: Grid of digital numbers representing intensity values at locations.
- Digital Number (DN): Intensity value of a pixel.
- Digital Image Processing: Numerical manipulation of DNs to extract information.
- Electromagnetic Radiation (EMR): Energy propagating as waves at light speed.
- Electromagnetic Spectrum: Continuum of EMR wavelengths.
- False Colour Composite (FCC): Image using artificial colour assignment to different EMR bands.
- Gray scale: Range of tones from black to white.
- Image: Pictorial record of features, photographic or digital.
- Scene: Ground area covered by an image.
- Sensor: Device detecting EMR for recording/display.
- Reflectance: Ratio of reflected radiant energy to received energy.
- Spectral Band: A specific range of wavelengths (e.g., green band, NIR band).
Stages In Remote Sensing
Remote sensing data acquisition is a multi-stage process involving the collection of information about Earth's surface properties without physical contact (Figure 7.2 outlines these stages).
Diagram showing the step-by-step process involved in acquiring remote sensing data, from energy source to data output.
The basic processes involved are:
- Source of Energy: Providing the energy that interacts with the target.
- Transmission through Atmosphere: Energy travels from the source to the Earth's surface.
- Interaction with Surface: Energy interacts with objects on the ground.
- Propagation back through Atmosphere: Reflected or emitted energy travels back through the atmosphere.
- Detection by Sensor: Sensor collects the returning energy.
- Recording Data: Sensor converts energy into a photographic or digital record.
- Data Processing: Raw data is processed and corrected.
- Information Extraction: Useful information is derived from the processed data.
- Output: Information is presented as maps, tables, etc.
Let's elaborate on some key stages:
a. Source of Energy: The Sun is the most common natural energy source used in remote sensing (passive remote sensing). Energy can also be generated artificially by the sensing system itself (active remote sensing), as in radar systems, which transmit microwave energy and record the backscattered signal.
b. Transmission of Energy (Electromagnetic Radiation - EMR): Energy propagates from the source (Sun or sensor) to the target in the form of electromagnetic radiation (EMR). EMR travels at the speed of light and behaves as waves with varying wavelengths and frequencies. The complete range of EMR is the Electromagnetic Spectrum (Figure 7.3), broadly divided into gamma rays, X-rays, ultraviolet, visible, infrared, microwaves, and radio waves. Remote sensing primarily utilizes the visible, infrared, and microwave regions of the spectrum.
Diagram illustrating the electromagnetic spectrum, showing the range of energy wavelengths from short (gamma rays) to long (radio waves) and highlighting regions used in remote sensing (visible, infrared, microwave).
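Because EMR is characterised interchangeably by wavelength and frequency, linked through the speed of light ($c = \lambda \nu$), a quick conversion helps fix the scale of the regions named above; the example values below are illustrative only.

```python
# Illustrative only: the wave relation c = wavelength x frequency.
C = 3.0e8  # speed of light, m/s (approximate)

def frequency_hz(wavelength_m: float) -> float:
    """Frequency corresponding to a given wavelength of EMR."""
    return C / wavelength_m

print(frequency_hz(0.5e-6))  # green light, 0.5 um  -> ~6.0e14 Hz
print(frequency_hz(1.0e-6))  # near-infrared, 1 um  -> ~3.0e14 Hz
print(frequency_hz(0.05))    # microwave, 5 cm      -> ~6.0e9 Hz
```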
c. Interaction of Energy with the Earth’s Surface: When EMR reaches the Earth's surface, it interacts with objects. The energy may be absorbed by the object, transmitted through it, reflected from its surface, or emitted by the object (if its temperature is above absolute zero). The way an object interacts with EMR depends on its physical and chemical properties (composition, texture, moisture content) and the wavelength of the energy. Different objects have unique "spectral signatures" - how they reflect and absorb energy across different wavelengths (Figure 7.4 shows examples of spectral signatures).
Graph illustrating typical spectral reflectance curves for common Earth surface materials (soil, vegetation, water) across different regions of the electromagnetic spectrum.
For example, healthy vegetation strongly reflects near-infrared energy due to its internal cell structure, while water bodies absorb most EMR in the red and infrared regions, appearing dark. Turbid water, containing suspended particles, reflects more in the blue and green, appearing lighter than clear water (Figure 7.5 shows different responses of water bodies in different bands).
Satellite images of Sambhar Lake, Rajasthan, taken in different spectral bands (Green and Infrared), demonstrating how the lake's appearance varies depending on the wavelength of energy captured by the sensor.
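One widely used way of exploiting this red/near-infrared contrast (a standard technique, though not named in the NCERT text) is the Normalised Difference Vegetation Index, NDVI = (NIR − Red)/(NIR + Red). The sketch below uses invented reflectance values, not data from the figures.

```python
import numpy as np

# Hypothetical band reflectances (fractions) for three surface types.
red = np.array([0.05, 0.25, 0.08])  # vegetation, bare soil, water (assumed)
nir = np.array([0.50, 0.30, 0.03])

ndvi = (nir - red) / (nir + red)
print(ndvi)  # high for vegetation (~0.82), near zero for soil, negative for water
```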
d. Propagation of Reflected/Emitted Energy through Atmosphere: After interacting with the surface, energy travels back through the atmosphere towards the sensor. During this second pass, the energy interacts with atmospheric constituents (gases, water molecules, dust particles). Certain atmospheric gases absorb energy at specific wavelengths (e.g., water vapour and $CO_2$ absorb in the middle infrared region). Particles can scatter energy (e.g., dust particles scatter blue light). Energy that is absorbed or scattered before reaching the sensor cannot be recorded, which affects the information received about the object.
e. Detection of Reflected/Emitted Energy by the Sensor: A sensor is a device on an airborne or spaceborne platform that detects and records the EMR reflected or emitted from the Earth's surface. Remote sensing satellites are typically placed in near-polar sun-synchronous orbits (700-900 km altitude), allowing them to pass over the same area at roughly the same local time every revisit cycle. Weather monitoring and telecommunication satellites are often in geostationary orbits (around 36,000 km altitude), appearing stationary over a specific point on the Earth and providing continuous coverage of a large area (Figure 7.6 and Box 7.1 compare these orbits and satellite types).
Diagram illustrating the orbital paths of Sun-Synchronous satellites (polar orbits, lower altitude) and Geostationary satellites (equatorial orbit, higher altitude).
Orbital Characteristics | Sun-Synchronous Satellites | Geostationary Satellites |
---|---|---|
Altitude | 700 – 900 km | ~ 36,000 km |
Coverage | Covers the entire globe over time as Earth rotates beneath | Covers ~ 1/3rd of the Globe (appears stationary over one point) |
Orbital period | ~ 14 orbits per day | 24 hours (synchronous with Earth's rotation) |
Resolution (Spatial) | Fine (from 182 metres down to less than 1 metre) | Coarse (typically 1 km × 1 km or larger) |
Primary Uses | Earth Resources Applications (mapping, monitoring land/water resources, environment) | Telecommunication and Weather monitoring (continuous view of large areas) |
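The orbital periods implied by the table follow from Kepler's third law. A rough check (circular orbits and standard constants assumed) reproduces both the roughly 14 orbits per day of a sun-synchronous satellite and the 24-hour geostationary period:

```python
import math

MU_EARTH = 3.986e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6    # mean Earth radius, m

def orbital_period_hours(altitude_km: float) -> float:
    """Kepler's third law for a circular orbit at the given altitude."""
    a = R_EARTH + altitude_km * 1e3  # semi-major axis
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 3600

print(orbital_period_hours(800))    # ~1.68 h -> about 14 orbits per day
print(orbital_period_hours(35786))  # ~23.9 h -> matches Earth's rotation
```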
Remote sensing satellites carry sensors designed to detect specific wavelengths of EMR reflected or emitted from the surface.
f. Conversion to Data Product: The energy detected by the sensor is converted into a usable format, typically a digital image. A digital image is composed of picture elements (pixels), each with a numerical value (Digital Number or DN) representing the intensity of the energy recorded for that area. These digital numbers are arranged in rows and columns. Digital data is then transmitted to Earth Receiving Stations for processing. In India, a station is located near Hyderabad.
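A digital image is therefore just an array of DNs arranged in rows and columns. A toy example (values invented) makes the idea concrete:

```python
import numpy as np

# A toy 3x4 "digital image": rows and columns of Digital Numbers (DNs).
# An 8-bit sensor would record DNs between 0 and 255.
image = np.array([
    [12, 15, 200, 210],
    [10, 14, 198, 205],
    [11, 13,  90,  95],
], dtype=np.uint8)

print(image.shape)  # (3, 4): 3 rows (scan lines) x 4 columns (pixels)
print(image[0, 2])  # DN of the pixel in row 0, column 2 -> 200
```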
g. Information Extraction: After initial processing and correction of errors, information is extracted from the data products. This can be done through visual interpretation (manual analysis of images) or digital image processing (using computer algorithms to analyze DN values).
h. Output: The extracted information is converted into useful formats like thematic maps (showing specific themes like land use, geology, vegetation) or tabular data for analysis.
Sensors
A sensor is the core component of a remote sensing system that collects EMR, converts it into a signal, and records it. Sensors can be photographic (recording on film, creating analogue data) or non-photographic (scanning devices recording electronically, creating digital data).
Photographic sensors (cameras) capture an image instantaneously. Non-photographic sensors, or scanners, acquire images bit-by-bit by systematically sweeping across the area. We will focus on scanners used in satellite remote sensing.
Multispectral Scanners
Multispectral Scanners (MSS) are sensors used in satellite remote sensing that collect reflected or emitted energy in multiple specific wavelength bands simultaneously. They build up an image scene by recording energy along a series of scan lines. Scanners typically use a rotating mirror to direct incoming energy into detectors that are sensitive to specific spectral ranges. The total width of the area scanned on the ground is called the swath. Each individual sensing element on the ground that is recorded as a single pixel in the image is related to the sensor's Instantaneous Field of View (IFOV), which determines the spatial resolution.
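Under the small-angle approximation, the ground footprint of one pixel is roughly the platform altitude multiplied by the IFOV. The numbers below are illustrative, not taken from any particular sensor:

```python
# Small-angle approximation: ground pixel size ~ altitude x IFOV (radians).
def ground_pixel_size_m(altitude_km: float, ifov_mrad: float) -> float:
    """Approximate side length, in metres, of the ground area seen by one detector."""
    return altitude_km * 1e3 * ifov_mrad * 1e-3

print(ground_pixel_size_m(800, 0.1))    # 0.1 mrad at 800 km   -> 80 m pixels
print(ground_pixel_size_m(800, 0.025))  # 0.025 mrad at 800 km -> 20 m pixels
```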
Multispectral Scanners are categorized into two main types based on their scanning mechanism:
Whiskbroom Scanners
Also known as across-track scanners, these use a single detector (or a small number of detectors for different bands) and a rotating or oscillating mirror (Figure 7.7). The mirror sweeps back and forth across the satellite's path, collecting energy from a strip of ground perpendicular to the flight direction. As the satellite moves forward, successive strips are scanned, building up the image. The range of the mirror's sweep determines the swath width. This method collects data for one pixel at a time along each scan line.
Diagram showing the scanning mechanism of a whiskbroom scanner, where a mirror sweeps across the scene perpendicular to the satellite's path, directing light to a detector.
Pushbroom Scanners
Also known as along-track scanners, these use a linear array of multiple detectors (Figure 7.8). Each detector in the array is assigned to a specific pixel location across the satellite's path on the ground. As the satellite moves forward, the entire linear array of detectors collects energy simultaneously along a line perpendicular to the flight direction. This method collects a full line of pixels at a time. The number of detectors in the array corresponds to the number of pixels across the swath width at the sensor's spatial resolution. Pushbroom scanners generally have higher spatial resolution and sensitivity than whiskbroom scanners.
Diagram showing the scanning mechanism of a pushbroom scanner, where a linear array of detectors collects data simultaneously along the satellite's path.
Resolving Powers Of The Satellites
Remote sensing satellites are characterized by their capabilities to distinguish features, often referred to as their resolving powers or resolutions. One important aspect is the temporal resolution or revisit time.
Temporal Resolution (Revisit Time): This is the time interval between successive acquisitions of images for the exact same geographical area by a satellite. For sun-synchronous satellites, this interval is predetermined (e.g., 14 orbits cover the globe daily, but full coverage of a specific point might take days depending on swath width). Higher temporal resolution allows for frequent monitoring of dynamic processes and changes over time (Figure 7.9 shows images of the Himalayas and Northern Indian Plain from different times, illustrating visible changes). For example, images acquired before and after a major event like the 2004 tsunami clearly show the extent of the damage (Figure 7.10a and 7.10b illustrate pre- and post-tsunami images of Banda Aceh).
Satellite images of the Himalayas and Northern Indian Plain taken in May and November, demonstrating how images from different time periods can capture changes in vegetation and cultivated areas over seasons.
Satellite image of Banda Aceh, Indonesia, acquired before the 2004 tsunami (June 2004), showing the area's appearance prior to the disaster.
Satellite image of Banda Aceh, Indonesia, acquired after the 2004 tsunami (December 2004), showing the damage and changes caused by the event compared to the pre-tsunami image.
Sensor Resolutions
Remote sensors are characterized by different types of resolution that determine their ability to capture detailed information about the Earth's surface. The key resolutions are spatial, spectral, and radiometric.
Spatial Resolution
Spatial resolution refers to the sensor's ability to distinguish between two closely spaced objects on the ground as separate features. It is often defined by the size of the smallest area on the ground that is represented by a single pixel in the image. For example, a sensor with 10-metre spatial resolution means that each pixel represents a 10 m × 10 m area on the ground. Higher spatial resolution (smaller pixel size) allows for the identification of smaller objects and finer details on the Earth's surface. This is analogous to the resolving power of human eyes, or to using spectacles to see finer details.
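A quick consequence of pixel size: the ground width covered by one image line is the pixel size times the number of pixels per line. The figures below are assumed for illustration:

```python
# Ground coverage of one image line = pixel size x pixels per line.
def scene_width_km(pixel_size_m: float, pixels_per_line: int) -> float:
    """Width, in km, of the ground strip covered by a single image line."""
    return pixel_size_m * pixels_per_line / 1000

print(scene_width_km(10, 6000))  # 10 m pixels, 6000 per line -> 60 km swath
print(scene_width_km(80, 2340))  # 80 m pixels, 2340 per line -> ~187 km swath
```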
Spectral Resolution
Spectral resolution refers to the sensor's ability to detect and record energy in specific, narrow wavelength intervals (bands) within the electromagnetic spectrum. Multispectral sensors collect data simultaneously in multiple discrete bands (e.g., visible blue, green, red, and near-infrared). Hyperspectral sensors collect data in a very large number of very narrow, contiguous bands. The ability to differentiate objects based on how they reflect or emit energy in different spectral bands is crucial for identifying and classifying various Earth surface features (Figure 7.11 shows images in different spectral bands). Different objects have unique spectral signatures. This concept is similar to how white light is dispersed into a spectrum of colours by a prism or water droplets (Box 7.2).
Satellite images of parts of Najafgarh, Delhi, taken in different spectral bands (Green and Infrared), showing how features like water bodies and dry surfaces appear differently depending on the wavelength recorded by the sensor.
Diagrams conceptually illustrating the principle of light dispersion, showing how white light is separated into its constituent colours by a prism or naturally (like a rainbow), demonstrating the concept behind collecting data in different spectral bands.
Radiometric Resolution
Radiometric resolution refers to the sensor's ability to distinguish between subtle differences in the intensity (brightness or radiance) of the energy recorded for each pixel. It determines the number of distinct brightness levels the sensor can record. Higher radiometric resolution means the sensor can detect smaller variations in energy intensity, resulting in a greater range of possible Digital Number (DN) values for each pixel. For example, an 8-bit radiometric resolution allows for $2^8 = 256$ distinct brightness levels (DN values from 0 to 255), while a 6-bit resolution allows for $2^6 = 64$ levels (DN values from 0 to 63). Higher radiometric resolution provides more detailed information about the energy signal and improves the ability to differentiate between similar target surfaces.
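The level count for $n$-bit data is simply $2^n$, as this short loop confirms:

```python
# Distinct brightness levels for a given bit depth: 2 ** bits.
for bits in (6, 7, 8):
    levels = 2 ** bits
    print(f"{bits}-bit: {levels} levels, DN range 0-{levels - 1}")
# 6-bit: 64 levels (0-63); 7-bit: 128 (0-127); 8-bit: 256 (0-255)
```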
Table 7.1 provides examples of spatial, spectral (number of bands), and radiometric resolution for various remote sensing satellite sensors.
Satellite/Sensor | Spatial Resolution (in metres) | Number of Bands (Spectral) | Radiometric Range (Number of Grey Levels) |
---|---|---|---|
Landsat MSS (USA) | 80.0 × 80.0 | 4 | 0 - 64 (approx. 6-bit data)
IRS LISS – I (India) | 72.5 × 72.5 | 4 | 0 - 127 (7-bit data)
IRS LISS – II (India) | 36.25 × 36.25 | 4 | 0 - 127 (7-bit data)
Landsat TM (USA) | 30.00 × 30.00 | 7 | 0 - 255 (8-bit data)
IRS LISS III (India) | 23.00 × 23.00 | 4 (Visible, NIR, SWIR) | 0 - 127 (7-bit data)
SPOT HRV - I (France) | 20.00 × 20.00 (Multispectral) | 3 | 0 - 255 (8-bit data)
SPOT HRV – II (France) | 10.00 × 10.00 (Panchromatic) | 1 | 0 - 255 (8-bit data)
IRS PAN (India) | 5.80 × 5.80 | 1 (Panchromatic) | 0 - 127 (7-bit data)
*Note: Radiometric range is often expressed as a number of bits: 6-bit data gives $2^6 = 64$ grey levels (DN 0 - 63), 7-bit gives 128 levels (0 - 127), and 8-bit gives 256 levels (0 - 255).
Data Products
The output from remote sensors are data products, which can be either photographic or digital. Photographic systems use light-sensitive film to directly record energy variations, producing photographic images (analog data) as discussed in aerial photography (Chapter 6). Scanning devices, on the other hand, acquire data electronically and convert it into digital images (digital data).
It's important to note the distinction between "images" and "photographs". An image is a general term for a pictorial representation, regardless of how it was created or the wavelength regions used. A photograph specifically refers to an image recorded on photographic film using visible light or near-infrared captured by a camera.
Photographic Images
These images are typically acquired in the optical region (visible and near-infrared, 0.3-0.9 μm) of the electromagnetic spectrum using cameras. Different types of photographic film emulsions are used, including black and white, colour, black and white infrared, and colour infrared. Black and white film is commonly used in aerial photography. Photographic images can often be enlarged, and their visual details are generally preserved well during enlargement.
Digital Images
Digital images are composed of a grid of discrete picture elements called pixels. Each pixel represents a specific area on the ground and has a corresponding numerical value (a Digital Number or DN value) representing the intensity of the energy recorded for that area. The range of possible DN values depends on the sensor's radiometric resolution. For example, an 8-bit sensor records DN values from 0 to 255. The size of the pixel (related to spatial resolution) determines the amount of detail represented; smaller pixels capture finer detail. While digital images can be processed and manipulated numerically (Figure 7.12 illustrates pixels and DNs), zooming too far into a digital image will eventually cause the individual pixels to become apparent, leading to a loss of perceived detail.
Illustration showing a digital image, a zoomed-in section revealing individual pixels, and a table showing the digital number (brightness) value for each pixel.
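The pixelation effect described above can be mimicked with a nearest-neighbour zoom: enlarging a digital image only replicates existing DNs and adds no new detail. A minimal sketch with toy values:

```python
import numpy as np

# Nearest-neighbour zoom: np.kron replicates each DN into a 4x4 block,
# so the enlarged image shows blocky pixels rather than finer detail.
small = np.array([[10, 200],
                  [90,  30]], dtype=np.uint8)
zoomed = np.kron(small, np.ones((4, 4), dtype=np.uint8))
print(zoomed.shape)  # (8, 8): each original pixel now covers a 4x4 block
```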
Digital images are analyzed using computer-based digital image processing techniques. Analogue (photographic) data products are interpreted using visual interpretation methods.
Interpretation Of Satellite Imageries
The goal of processing and analyzing remotely sensed data is to extract meaningful information about the features and phenomena on the Earth's surface. This information extraction can be done through two main approaches: visual interpretation (manual analysis) or digital image processing (computer-based analysis). Visual interpretation is a manual process of examining an image to identify features based on their visual characteristics. Digital image processing involves numerical manipulation of the digital numbers in an image using specialized software and hardware.
Since digital image processing requires specific technical resources, visual interpretation methods are often discussed as a primary way to extract information from images, particularly from photographic or analog data products, but also from digital images displayed pictorially.
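As a taste of what digital image processing means in practice, the sketch below applies one of the simplest possible operations, density slicing a single band with a DN threshold; the threshold and values are invented for illustration.

```python
import numpy as np

# Density slicing: classify pixels by comparing their DNs to a threshold.
band = np.array([[ 5,  12, 180],
                 [ 8, 150, 200],
                 [ 4,   7, 160]], dtype=np.uint8)

water_mask = band < 20         # low DNs: strong absorption, e.g. clear water
print(water_mask.astype(int))  # 1 = water, 0 = land
```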
Elements Of Visual Interpretation
Visual interpretation relies on recognizing the characteristics of objects as they appear in an image. These characteristics, or elements of visual interpretation, help us identify features and understand their significance. They can be broadly grouped into image characteristics and terrain characteristics.
Image characteristics are based on how objects appear visually in the image: tone/colour, texture, size, shape, pattern, and shadow. Terrain characteristics relate to the location of the object and its association with surrounding features.
Tone Or Colour
Tone refers to the shade of gray in a black and white image, ranging from black to white. Colour refers to the hue, saturation, and brightness in a colour image. Tone and colour depend on how objects interact with EMR – the amount of energy they reflect and emit across different wavelengths. Smooth, dry surfaces generally reflect more energy and appear in lighter tones or brighter colours than rough, moist surfaces. Different objects also have distinct spectral responses across the spectrum (as shown in spectral signature curves), leading to variations in tone/colour depending on the spectral band(s) used to create the image. For example, clear water absorbs much energy and appears dark/black, while turbid water reflects more in visible bands and appears lighter (Figures 7.13 a and b illustrate turbid vs fresh water appearance). Standard False Colour Composites (FCC) are artificially created images in which colours are assigned to different spectral bands (near-infrared assigned to red, red to green, green to blue), making healthy vegetation appear bright red due to its strong near-infrared reflectance (Table 7.2 provides colour signatures in standard FCC; a small band-stacking sketch follows the table).
S. No. | Earth Surface Feature | Colour (In Standard FCC) |
---|---|---|
1. | Healthy Vegetation and Cultivated Areas | Red to magenta (Evergreen), Brown to red (Deciduous), Light brown with red patches (Scrubs), Bright red (Cropped land) |
Fallow land | Light blue to white | |
2. | Waterbody | Dark blue to black (Clear water), Light blue (Turbid waterbody) |
3. | Built – up area | Dark blue to bluish green (High density), Light blue (Low density) |
4. | Waste lands/Rock outcrops | Light brown (Rock outcrops), Light blue to white (Sandy deserts/River sand/Salt affected) |
Deep ravines | Dark green | |
Shallow ravines | Light green | |
Water logged/Wet lands | Mottled black
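To connect Table 7.2 with the band-assignment rule given above, here is a minimal sketch of how a standard FCC is assembled from three co-registered bands; the arrays and DN values are placeholders, not real imagery.

```python
import numpy as np

# Standard FCC: display NIR as red, red as green, and green as blue.
def standard_fcc(nir: np.ndarray, red: np.ndarray, green: np.ndarray) -> np.ndarray:
    """Stack three bands into an RGB image; vegetation shows bright red."""
    return np.dstack([nir, red, green])

# Toy 2x2 bands: a vegetated pixel (high NIR) beside a water pixel (low DNs).
nir   = np.array([[220, 10], [200, 15]], dtype=np.uint8)
red   = np.array([[ 40, 12], [ 50, 10]], dtype=np.uint8)
green = np.array([[ 60,  8], [ 70, 12]], dtype=np.uint8)

fcc = standard_fcc(nir, red, green)
print(fcc.shape)  # (2, 2, 3): an RGB image ready for display
```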
Texture
Texture refers to the visual roughness or smoothness of an image area, caused by the frequency and arrangement of tonal or colour variations. Areas with many small, frequent tonal changes have a fine texture, while areas with larger, less frequent changes have a coarse texture. Texture helps differentiate features that might have similar overall tone/colour but different spatial arrangements of their components (Figures 7.14 a and b illustrate textures). For example, high-density residential areas might have a fine texture due to closely packed houses, while low-density areas appear coarser. A dense forest might have a fine texture compared to the coarse texture of a scrubland or certain crops.
Image segment displaying a coarse texture, characterized by relatively large, distinct variations in tone or colour.
Image segment displaying a fine texture, characterized by small, frequent variations in tone or colour, appearing smoother from a distance.
Size
The size of an object as it appears in the image, influenced by the image's scale and resolution, is a significant element for identification (Figure 7.15 shows size variations). Recognizing the size of features helps distinguish between different types of objects that might otherwise look similar. For example, knowing the typical size helps differentiate individual houses from industrial buildings, or a large sports stadium from a smaller local ground. Size is also key in identifying the hierarchy of settlements (villages, towns, cities).
Satellite image examples showing how the size difference between large institutional buildings and smaller residential areas can be visually identified in urban images.
Shape
The shape, or the general outline and form of an object, is often one of the most distinctive elements for image interpretation (Figure 7.16 shows shapes of transport lines). Many natural and man-made features have characteristic shapes that aid in their identification. For example, the circular shape of a stadium, the rectilinear pattern of city blocks, the distinctive form of a specific building (like a parliament house), or the winding shape of a river are strong clues. A railway line can be distinguished from a road by its relatively straighter path and gradual curves compared to the sharper bends often seen in roads.
Image illustrating how the characteristic shapes of different transportation features, such as the long straight lines and gradual curves of a railway track compared to the potentially sharper bends of a road, are used for identification.
Shadow
The shadow cast by an object can provide additional information for interpretation, particularly regarding its height and shape (Figure 7.14 shows shadows of mangroves). Shadow length is determined by the sun's angle and the object's height. The shape of the shadow can reveal the shape of the object from a different perspective, aiding identification of tall features like buildings, towers, or trees. However, shadows can also obscure features lying beneath them, making identification difficult in those areas. Shadow is most useful in large-scale aerial photographs where features are clearly visible and distortions from tilt are minimal.
Pattern
Pattern refers to the spatial arrangement and repetition of individual objects or features in an image (Figure 7.17 shows a residential pattern). Many features occur in distinctive, organized patterns that are recognizable. Examples include the regular grid pattern of streets and blocks in a planned city, the uniform spacing of trees in an orchard or plantation, the branching pattern of a drainage network, or the linear arrangement of houses along a road or river in a linear settlement pattern. Identifying these patterns helps classify areas or features.
Image showing the organized and repetitive spatial arrangement of features in a planned residential area, illustrating the interpretation element of pattern.
Association
Association refers to the geographical location of an object relative to other surrounding features and its typical context. Objects are often found in predictable associations. For example, a large educational institution is typically located in or near residential areas and might be associated with a playground. Industrial sites are often found along highways on the periphery of cities. Slum settlements might be associated with locations near drains or railway lines. Identifying these typical associations helps confirm the identity of features or understand their function and relationship within the landscape.
Exercise
Choose The Right Answer From The Four Alternatives Given Below
(Multiple choice questions not included in these notes.)
Answer The Following Questions In About 30 Words
(Short-answer questions not included in these notes.)
Answer The Following Questions In About 125 Words
(Long-answer questions not included in these notes.)
Activity
(Activity not included in these notes.)