The industries of the world are in the midst of a technology-driven transformation. Because technology runs on data, collecting and integrating data remains a complicated, technical task, especially in the geospatial domain. The rapid growth of geospatial technologies in recent years has made available new instruments and capabilities for gathering and managing spatial data.
Remote sensing, the global positioning system (GPS) and geographic information systems (GIS) are important geospatial technologies. While remote sensing and GPS are methods for collecting information about the Earth’s surface, GIS is a mapping tool for organizing and analyzing that information. In this article, we dive deeper into the science (and art) behind geospatial data collection via remote sensing.
The basics of remote sensing and its sources
Remote sensing is the process of obtaining information about objects, areas or phenomena from a distance, typically from aircraft or satellites. It includes the use of satellite or aircraft-based sensor technologies to detect and classify objects on the Earth’s surface and in the atmosphere and oceans.
The age of remote sensing can be said to have started in 1860 with James Wallace Black’s photograph of Boston from a balloon. According to an article published in the Journal of Extension, most remotely sensed data used for mapping and spatial analysis is collected as reflected electromagnetic radiation, which is then processed into a digital image that can be overlaid with other spatial data.
Let’s understand the sources for remote sensing data in detail:
- Satellites
Satellites have been used for capturing geospatial information for over 60 years now. Satellite data is used for an ever-expanding collection of uses, such as weather forecasting, mapping, environmental research, military intelligence and more.
So, how much detail does a satellite actually see? Satellites carry one or more sensors that read the amount of reflected energy reaching them; a weather satellite, for instance, may also carry a special instrument for recording multispectral data. A satellite’s sensor observes one small portion of the Earth at a time, called a pixel. Each pixel represents a roughly square area on the ground, for example 30 meters (about 100 feet) on a side; the pixel size varies depending on the satellite sensor.
According to a presentation published by NASA’s Landsat Education team, a common misconception about satellite images is that they are photographs. However, they are quite different. Satellites use remote sensing to collect information digitally.
The images are composed of thousands of pixels that the satellite scans in rows and columns. The satellite gathers groups of rows into a computer file, and that stored information is then converted into picture format on a computer.
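The row-and-column structure described above can be sketched in a few lines of Python (using NumPy); the digital numbers below are hypothetical, purely to illustrate how a flat stream of scanned values becomes a 2-D image:

```python
import numpy as np

# Hypothetical stream of digital numbers (DNs) as downlinked:
# 12 values scanned as 3 rows of 4 pixels each.
raw_dns = [42, 58, 61, 40,
           44, 90, 95, 43,
           41, 88, 93, 39]

rows, cols = 3, 4

# Reshape the flat stream into its row/column grid -- the "picture format".
image = np.array(raw_dns, dtype=np.uint8).reshape(rows, cols)

print(image.shape)   # (3, 4)
print(image[1, 2])   # 95 -- the DN at row 1, column 2
```

Real satellite files carry many more rows and columns (plus one such grid per spectral band), but the reconstruction idea is the same.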
Different objects absorb and reflect different wavelengths. For example, green vegetation reflects near-infrared light quite well. This is why we can use remote sensing technology to observe our world in new ways, the article points out.
Satellite images often record visible light or other forms of radiation. Visible-light images are useful for determining the locations and sizes of rivers, lakes, ice-covered or snow-covered areas and other surface features.
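The reflectance differences described above are exactly what vegetation indices exploit. A common one is the Normalized Difference Vegetation Index, NDVI = (NIR − Red) / (NIR + Red), which trends toward 1 over healthy vegetation. A minimal sketch with hypothetical reflectance values for a tiny 2×2 scene:

```python
import numpy as np

# Hypothetical per-pixel reflectance for two bands of a 2x2 scene.
red = np.array([[0.10, 0.40],
                [0.12, 0.45]])
nir = np.array([[0.50, 0.42],
                [0.55, 0.44]])

# NDVI ranges from -1 to 1; dense green vegetation trends toward 1,
# while bare soil or water sits near or below 0.
ndvi = (nir - red) / (nir + red)

print(np.round(ndvi, 2))
```

Here the left column (high NIR, low red) reads as vegetated, while the right column (similar NIR and red) does not.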
- Aerial Photography
Aerial photography is one of the earliest forms of remote sensing and is still one of the most widely used and cost-effective. The advent of drones, or unmanned aerial vehicles (UAVs), has made aerial photography easier for commercial and non-commercial purposes.
The first form of remote sensing began in the 1860s, even before the Wright brothers flew their first plane. Geographers photographed the Earth from above using balloons and kites to capture larger areas. With the introduction of airplanes, aerial photography could capture images from much higher. Today, the altitude of aerial photographs ranges from just above the ground to a little more than 60,000 feet. Lower-altitude photographs capture more detail; with more height, fine details are obscured, but a wider area and the relationships between features are shown.
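The altitude-versus-detail trade-off can be quantified through the ground sample distance (GSD), the ground distance covered by one image pixel: GSD = (sensor pixel pitch × altitude) / focal length. A quick sketch, with camera parameters that are hypothetical but typical of a small UAV camera:

```python
# Ground sample distance: the ground footprint of one image pixel.
# GSD = (sensor pixel pitch * altitude above ground) / focal length.

def ground_sample_distance(pixel_pitch_m, altitude_m, focal_length_m):
    return pixel_pitch_m * altitude_m / focal_length_m

# Hypothetical camera: 2.4-micron sensor pixels behind an 8.8 mm lens.
pixel_pitch = 2.4e-6
focal_length = 8.8e-3

# Lower altitude -> smaller GSD -> finer ground detail, as noted above.
for altitude in (50, 120, 500):
    gsd_cm = ground_sample_distance(pixel_pitch, altitude, focal_length) * 100
    print(f"{altitude:>4} m altitude -> {gsd_cm:.1f} cm per pixel")
```

At 120 m this works out to roughly 3 cm per pixel, which is why low-flying drones resolve details that satellites cannot.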
Aerial photography can be conducted at a variety of scales and in a range of formats (e.g., color, black and white and infra-red) and has become popular in vegetation and ocean mapping. Small-scale, radio-controlled (RC) model aircraft and helicopters using 35 mm SLR and video cameras have been used to acquire panchromatic, color, color infrared (CIR) and multispectral aerial photography for a wide range of environmental applications (Green, 2016).
According to experts, this technology was not initially viewed as a serious source of aerial photography. However, developments over the past decades in miniaturized sensors, camera and battery technology, data storage, and small multirotor and fixed-wing aerial platforms, known as unmanned aerial vehicles (UAVs), have reinvented the potential of such small platforms and sensors for the low-cost acquisition of a wide range of aerial data and imagery.
Studies note that with advances in battery technology, navigational controls and payload capacities, many of the smaller UAVs can now carry a number of different sensors to collect photographic data, video footage and multispectral, thermal and hyperspectral imagery, as well as LiDAR. With the aid of low-cost image processing and soft-copy photogrammetric software, photographic stills can easily be mosaicked and three-dimensional models of the terrain and its features constructed. (Source: ScienceDirect)
- LiDAR
LiDAR is a technique for capturing geospatial data that uses laser scanning to create three-dimensional point clouds of geographic features. It is an active remote sensing system which means that the system itself generates energy – in this case, light – to measure things on the ground. LiDAR sensors can be mounted on UAVs, airplanes or satellites.
According to an article, LiDAR is fundamentally a distance technology. From an airplane or helicopter, a LiDAR system sends pulses of light to the ground. Each pulse hits the ground and returns to the sensor, which measures how long the round trip takes. By recording that return time, LiDAR measures distance; in fact, this is how LiDAR got its name: Light Detection and Ranging.
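The time-of-flight idea reduces to one formula: since the pulse travels to the target and back, range = (speed of light × round-trip time) / 2. A minimal sketch:

```python
# LiDAR range from a pulse's round-trip time.
# The pulse travels out and back, so divide the total path by two.

C = 299_792_458.0  # speed of light in a vacuum, m/s

def lidar_range_m(round_trip_time_s):
    return C * round_trip_time_s / 2

# A return time of ~6.67 microseconds corresponds to roughly 1 km of range.
print(f"{lidar_range_m(6.67e-6):.1f} m")
```

Real systems fire hundreds of thousands of such pulses per second and tag each range with the sensor's GPS position and orientation to build the 3-D point cloud.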
LiDAR systems allow scientists and mapping professionals to examine both natural and manmade environments with accuracy, precision and flexibility. LiDAR uses ultraviolet, visible, or near infrared light to image objects. It can capture a wide range of things, including non-metallic objects, trees, rocks, rain, clouds and even single molecules. Its laser beam can map physical features with very high resolutions; for example, an aircraft can map terrain at 30-centimetre (12 in) resolution or better.
There are a wide variety of applications for LiDAR, including agriculture and vegetation mapping, plant species classification, atmosphere, biology and conservation, geology and soil science, law enforcement, military, obstacle detection and road environment recognition, object detection for transportation systems, mining and more.
Data collection via remote sensing and its benefits
The increasing capabilities of computers and communication technology have facilitated the development of remote sensing applications. Here are some of the advantages of using remote sensing technology:
- Systematic collection of data: Remote sensing allows for easy collection of data over a variety of scales and resolutions. Data acquisition can be performed systematically and can be processed very fast using machines and artificial intelligence.
- One image, multiple applications: A single image captured via remote sensing can be analyzed for different applications and purposes. This facilitates research and study in several fields at the same time, since one acquisition can serve many lines of analysis.
- Detection of natural calamities: Remote sensing is capable of detecting natural calamities such as forest fires, volcanic eruptions and floods, along with the areas around them. This is a huge advantage because it helps stakeholders respond immediately and locate the exact areas that need assistance.
- Unobtrusive: Remote sensing is unobtrusive, i.e., it does not disturb the object or area of interest, especially when it passively records the electromagnetic radiation from an area.
- Relatively cheaper: Remote sensing allows for the revision of maps at small to medium scales, making it relatively cheaper and faster than other methods of data collection and mapping. The cost per unit area falls as the area covered grows.
- Large area coverage: It is possible to cover the entire globe and collect a very large amount of data with the help of remote sensing imagery. Not just that, inaccessible areas such as oceans and deep valleys can be easily mapped using remote sensing.
- Unbiased image processing: The data is digital and can be readily processed by machines in an unbiased way. Moreover, remotely sensed imagery is analyzed in the laboratory under controlled, consistent conditions.
- Repetitive coverage: Repetitive coverage allows monitoring of dynamic themes like water, land, agriculture and more.
As with most technologies, the advantages come with some disadvantages too. For instance, remotely sensed data has to be verified against ground truth before use. And that’s exactly what AiDash does. We combine satellite imagery with ground truth to provide intelligent asset management to core industries. To know more about how we use satellite technology to solve vegetation management challenges for power utilities, click here.