
High resolution geographic imagery and its impact on GIS


John W Allan
ERDAS Inc., Telford House, Fulbourn
Cambridge CB1 6DY, United Kingdom
[email protected]

Introduction
The era of 1-meter satellite imagery presents new and exciting opportunities for users of spatial data. With Space Imaging’s IKONOS satellite already in orbit and satellites from EarthWatch Inc., Orbital Imaging Corp. and, of course, ISRO scheduled for launch in the near future, high resolution imagery will add an entirely new level of geographic knowledge and detail to the intelligent maps that we create from imagery.

Geographic imagery is now widely used in GIS applications worldwide. Decisions made using these systems by national, regional and local governments, as well as commercial companies, affect millions of people, so it is critical that the information in the GIS is up to date. In most instances, aerial or satellite imagery is the most up-to-date source of data available, helping to ensure accurate and reliable decisions.

However, with technological advancements come new opportunities and challenges. The challenge now facing the geotechnology industry is twofold – how best to fully exploit high-resolution imagery and how to get access to it in a timely manner.

It is very easy to show high-resolution imagery in new and innovative applications, and many papers being presented at Map India 2001 show this. However, it is also very easy to focus purely on the “artistic” side of the imagery in the application and to lose sight entirely of the commercial issues that will help or hinder the application in becoming commercially successful. This paper will explore these issues and will provide an objective view of the problems that the industry has to overcome before it can achieve true commercial acceptance.

Is high-resolution imagery making a difference?
There is no doubt that the GIS press has been deluged with high-resolution imagery for the last 12 months. Showing an application with an imagery backdrop provides an immediate visual cue for readers. Without the imagery backdrop, the context is lost and the basic map, comprising polygons, lines and points, becomes more difficult for the layman to interpret. It is the context or visual cues that provide the useful information and it is this information that is the inherent value of the imagery.

The higher the resolution of the imagery, the more man-made objects can be identified. The human eye – the best image processor of all – can quickly detect and identify these objects. If the application is therefore one that just requires an operator to identify objects and manually add them into the GIS database, then the imagery is making a positive difference. It is adding a new data source for the GIS Manager to use.

However, if the imagery requires information to be extracted from it in an automated or semi-automated fashion (for example, a land cover classification), it is a different matter. If the same techniques that were developed for earlier, lower resolution satellite imagery (such as maximum likelihood classification) are used on the high-resolution imagery, the results can actually create a negative impact. Whilst lower resolution imagery isn’t affected greatly by artifacts such as shadows, high-resolution data can be. Lower resolution data also “smoothes” out variations across ranges of individual pixels, allowing statistical processing to create effective land cover maps. Higher resolution data doesn’t do this – individual pixels can represent individual objects like manhole covers, puddles and bushes – and contiguous pixels in an image can vary dramatically, creating very mixed or “confused” classification results.

There is also the issue of linear feature extraction. Lines of communication on a lower resolution image (such as roads) can be identified and extracted as a single line. However, on a high-resolution image, a road comprises the road markings, the road itself, the kerb (and its shadow) and the pavement (or sidewalk). A very different method of feature extraction is therefore needed. Figure 1 shows the range and variety of information contained in a high-resolution image and the problems caused by shadows, overhanging trees and parked cars.

It’s not just the spatial resolution that can affect the usage of the imagery. With 11-bit imagery becoming available, the ability of the GIS to work with high spectral content imagery becomes key. 11-bit data means that up to 2048 levels of grey can be stored and viewed. If the software being used to view the imagery assumes it is 8 bit (256 levels), then it will either a) display only the information below the 255 level (creating either a black or very poor image) or b) try to compress the 2048 levels into 256, also reducing the quality of the displayed image considerably. Having 2048 levels allows more information in shadowy areas to be extracted as well as enabling more precise spectral signatures to be defined to aid in feature identification. However, without the correct software, this added “bonus” can easily turn into a problem.
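
As a concrete illustration of the scaling issue, the sketch below (assuming NumPy and synthetic data, and not taken from any particular software package) contrasts a naive clip to 8 bits with a percentile-based linear stretch that maps the 11-bit range into the 256 available display levels.

```python
# Illustrative sketch: preparing an 11-bit band for 8-bit display.
# Assumes a NumPy array of raw digital numbers in the range 0..2047.
import numpy as np

def naive_clip_to_8bit(band):
    """What happens when software assumes 8-bit data: values above 255 are lost."""
    return np.clip(band, 0, 255).astype(np.uint8)

def linear_stretch_to_8bit(band, low_pct=2, high_pct=98):
    """Percentile-based linear stretch that maps the 11-bit range into 0..255."""
    lo, hi = np.percentile(band, (low_pct, high_pct))
    scaled = (band.astype(np.float64) - lo) / max(hi - lo, 1) * 255.0
    return np.clip(scaled, 0, 255).astype(np.uint8)

# Example: a synthetic 11-bit band with most values well above 255
band = np.random.randint(300, 2048, size=(512, 512), dtype=np.uint16)
print(naive_clip_to_8bit(band).max())       # 255 everywhere: detail is destroyed
print(linear_stretch_to_8bit(band).mean())  # a usable 8-bit rendering for display
```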

One other area that needs to be addressed in terms of usage is the actual availability of data to the end user. Application papers tend to show us only the finished results, without giving any indication of the project itself or the problems that may have been encountered in running it. In many instances, availability of data is limited, especially from spaceborne sensors, and users have to look elsewhere for data.

An increasingly common source of image data is therefore existing aerial survey photographs. With the massive improvement in scanning technology and orthophoto production software, these old photo archives can be readily made available to GIS users. No licencing fees are required (as the organization generally owns the photography) and the data can easily be made available internally within the organization. The only downside is the question of how recent the imagery is. Contrast this with high-resolution satellite data. If it is not archived data, then the data has to be acquired, which is dependent upon both the weather and other demands on the satellite. If it is acquired, then it has to be processed and shipped out via tape or CD/DVD (as bandwidth is limited) and, finally, its usage is limited by licencing – single user, multiple user, site usage etc. Pricing is therefore a key issue. The message here is clear: high-resolution satellite data will not replace other sources of data – it will in fact only complement them.

Finally, the issue of digital versus analog is also being addressed in this new digital age. Old airphotos need to be scanned to convert them to a digital format. New digital airborne cameras get around this step, providing high quality airborne imagery at any user-defined resolution. Depending upon the application and the levels of accuracy needed, cameras ranging in price from hundreds to millions of dollars can be used. The drop in price and increased availability of GPS units is also aiding the growth in the use of low cost digital cameras for GIS applications. Attached to remotely controlled aircraft or helicopters, they can provide very high-resolution, targeted aerial surveys for specific applications.

Information (and its extraction) is the key element
As mentioned above, high-resolution imagery from both aerial and space borne sensors provides a challenge to the user community in terms of information extraction. The human eye and brain can identify objects in the image but the computer finds it difficult. If we cannot automate this process, then we will most certainly lose out on some of the major economic benefits of the imagery.

If the human brain can do it, why can’t the computer? Well it actually can if it uses rules or knowledge-based processing, just as the human brain does. The brain can make a decision on an image very quickly by understanding and using context. If we see grassland in the center of an urban development, we can easily decide that it is a park, as opposed to agricultural land. To make this decision we are using knowledge and experience to create expertise, and computer-based expert systems are beginning to emerge that mimic this process.

For many years, expert systems have been used successfully for medical diagnoses and various information technology (IT) applications but only recently have they been applied successfully to GIS applications.

Statistical image processing routines, such as maximum likelihood and ISODATA classifiers, work extremely well at performing pixel-by-pixel analyses of images to identify land-cover types by common spectral signature. Expert-system technology takes the classification concept a giant step further by analyzing and identifying features based on spatial relationships with other features and their context within an image.
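
For readers unfamiliar with the statistical approach, the following minimal sketch shows a per-pixel Gaussian maximum likelihood classifier in NumPy. It is a simplified illustration of the general technique, not the implementation used in any particular product, and the class names and training pixels are assumed to be supplied by the user.

```python
# Minimal per-pixel maximum likelihood classifier (a sketch only).
# Each class is modelled as a multivariate Gaussian fitted to training pixels;
# every image pixel is assigned to the class with the highest log-likelihood.
import numpy as np

def fit_class(training_pixels):
    """training_pixels: (n_samples, n_bands) spectral values for one class."""
    mean = training_pixels.mean(axis=0)
    cov = np.cov(training_pixels, rowvar=False)
    return mean, cov

def classify(image, classes):
    """image: (rows, cols, n_bands); classes: dict of name -> (mean, cov)."""
    rows, cols, bands = image.shape
    pixels = image.reshape(-1, bands).astype(np.float64)
    scores = []
    for mean, cov in classes.values():
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        diff = pixels - mean
        mahalanobis = np.einsum("ij,jk,ik->i", diff, inv, diff)
        scores.append(-0.5 * (logdet + mahalanobis))  # log-likelihood up to a constant
    labels = np.argmax(np.stack(scores, axis=1), axis=1)
    return labels.reshape(rows, cols)
```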

Expert systems contain sets of decision rules that examine spatial relationships and image context. These rules are structured like tree branches with questions, conditions and hypotheses that must be answered or satisfied. Each answer directs the analysis down a different branch to another set of questions.
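
As a toy illustration of that branching structure (the attributes, thresholds and hypotheses below are invented, not drawn from any shipped knowledge base), a rule tree can be represented as nested questions, each answer selecting the next branch until a hypothesis is reached:

```python
# A toy rule tree: each node asks a question of a pixel's attributes and follows the
# matching branch until it reaches a hypothesis (a class label). The attribute names
# and thresholds are hypothetical, chosen only to mirror the park example above.
def classify_pixel(attrs):
    # attrs: dict with keys such as "ndvi", "inside_urban_area", "elevation"
    if attrs["ndvi"] > 0.4:                  # question: is the pixel vegetated?
        if attrs["inside_urban_area"]:       # condition: context within the image
            return "park"                    # hypothesis satisfied
        return "agricultural land"
    if attrs["elevation"] > 2500:
        return "bare rock"
    return "built-up / other"

print(classify_pixel({"ndvi": 0.55, "inside_urban_area": True, "elevation": 900}))
```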

The beauty of an expert system is that, because the rules (also called a knowledge base) are created by true experts, such as foresters or geologists, the system can be used successfully by non-experts.

In terms of satellite images, the knowledge base identifies features by applying questions and hypotheses that examine pixel values, relationships with other features and spatial conditions, such as altitude, slope, aspect and shape. Most importantly, the knowledge base can accept inputs of multiple data types, such as digital elevation models, digital maps, GIS layers and other pre-processed thematic satellite images, to make the necessary assessments.

In forestry, for example, an expert classification might identify one stand of trees as a specific species because they grow only at certain elevations and on southwest-facing slopes of less than 30 degrees. Another region within the image having similar spectral values might be interpreted as grass because it only occurs next to roadways in suburban areas. And another category may be labeled as an orchard because the trees grow in regular patterns.
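
One branch of such a knowledge base could be encoded roughly as follows. The elevation range, slope and aspect thresholds and the class codes are hypothetical, used only to show how the forestry rule above might combine a spectral classification with DEM-derived layers.

```python
# A sketch of one expert-system rule applied to co-registered raster layers.
# The thresholds and layer meanings are hypothetical, not taken from the paper's figures.
import numpy as np

def classify_tree_stand(spectral_class, elevation, slope, aspect):
    """All inputs are co-registered 2D arrays of identical shape.

    spectral_class : integer codes from a per-pixel classifier (1 == "tree")
    elevation      : metres above sea level
    slope          : degrees
    aspect         : degrees clockwise from north (0..360)
    """
    is_tree = spectral_class == 1
    in_elevation_band = (elevation >= 800) & (elevation <= 1500)
    gentle_slope = slope < 30
    southwest_facing = (aspect >= 202.5) & (aspect < 247.5)

    # Hypothesis: pixels satisfying every condition belong to the target species.
    return is_tree & in_elevation_band & gentle_slope & southwest_facing
```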

Because many of these examples rely on information contained in data other than satellite images, it’s easy to understand that expert-system technology is more of a decision-support tool than merely an image classifier. In fact, a satellite image isn’t even necessary. With the help of expert-system technology, the military already has benefited from cross-country mobility knowledge bases that consider soil type, land cover, elevation data and current weather reports to determine optimal routes for a certain type of vehicle to traverse an area. The beauty of the expert system, however, is that whenever new sources of information become available, they can be easily incorporated. For example, even though the mobility analysis can be carried out without imagery, the accuracy of the analysis can be affected by the ground conditions. If a satellite image can be used to extract moisture content (i.e. the “mud” factor), then it can be added to the knowledge base and used as part of a rule.

One other key element of the expert system is the “traceability” of the process. Figure 3 shows that by simply querying the resultant map, the rule that was used to create the output can be displayed and verified.

ERDAS IMAGINE was the first GIS oriented imaging system to be released with a Knowledge Based Classifier and it is being widely used throughout the world to automate many GIS decision-making processes.

Imagery or Information
The successful usage of imagery in a GIS is dependent upon a number of factors:

  • Bandwidth
  • Accuracy
  • Repeatability

Bandwidth
Images by their very nature tend to hog computer resources, taking up many megabytes and even gigabytes of storage and processing. Getting the data to the user is therefore the first problem, as current digital networks cannot cope with this quantity of data. Image compression is one way of getting around this, but it must be noted that the current generation of image compression software, such as MrSID, is “lossy”, meaning that the spectral information of the original image will not be maintained in the decompressed image. This is acceptable where the image is to be used as a backdrop or for printing, but is not acceptable if the image has to be further processed. Many processing functions are reliant upon the spectral information in the image and if this has been changed, then the results cannot be guaranteed.
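
The effect is easy to demonstrate. Since MrSID itself is proprietary, the sketch below uses JPEG (another lossy format) as a stand-in, assuming NumPy, the Pillow imaging library and synthetic data, to show that pixel values after a lossy compress/decompress cycle no longer match the originals.

```python
# Illustrating why lossy compression worries the image-processing side of GIS:
# decompressed pixel values differ from the originals, so any analysis that depends
# on exact spectral values can no longer be guaranteed.
import io
import numpy as np
from PIL import Image

original = np.random.randint(0, 256, size=(256, 256), dtype=np.uint8)

buffer = io.BytesIO()
Image.fromarray(original).save(buffer, format="JPEG", quality=75)  # lossy step
restored = np.asarray(Image.open(io.BytesIO(buffer.getvalue())))

diff = np.abs(original.astype(int) - restored.astype(int))
print("pixels changed:", int((diff > 0).sum()), "max error:", int(diff.max()))
```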

If the imagery is to be used for further processing then the options are relatively limited. The remote sensing industry has long been plagued with data delivery problems, in large part based upon the size of the datasets. Delivery over the Internet is not a solution at present except where dedicated lines such as frame relay are available. Satellite-based distribution, such as DirectPC (similar to DirectTV commercial television services), is also an option but is currently limited by availability and its one-way nature. It can be used to deliver the data, but the request and searching functions need to be carried out over a different connection to the web. Both these solutions are expensive and are therefore not practical for the everyday GIS user. That leaves writing to media (CD-ROM/DVD) and surface delivery, which in many instances takes away the “up to date” quality of the data. A different approach may therefore be to change where the processing takes place: instead of sending the data to the users, the data provider could provide the processing via the web.

ERDAS are currently working on a number of products that will provide processing over the Internet, where users will be able to request processing services from data or service providers. This will enable users to request information, potentially based on the expert systems described above, with only the end results delivered to them via the net. Based on the mobility analysis example, a user could request the moisture content for a certain area and just get the basic information to feed into the knowledge base. This “subscription” based approach to processing is some way off but will definitely revolutionize the imaging industry in the medium term, moving the processing overhead from the user to a fee-based service provider.
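
Purely as an illustration of the idea (the endpoint, parameters and response format below are invented for this sketch and do not describe any actual ERDAS product or service), such an on-demand request might eventually look something like this:

```python
# Hypothetical "on demand" processing request: only the derived information product
# is returned to the user; the raw imagery never leaves the provider.
import requests  # third-party HTTP client

request = {
    "service": "moisture_content",                      # derived product wanted
    "area_of_interest": [77.55, 12.90, 77.65, 13.00],   # lon/lat bounding box
    "output_format": "GeoTIFF",
}

response = requests.post("https://example-provider.invalid/process", json=request)
response.raise_for_status()

with open("moisture_content.tif", "wb") as f:
    f.write(response.content)
```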

Accuracy
Aerial or spaceborne imagery by its very nature is distorted in its raw form, either because of the movement of the sensor, the lensing system or terrain height variations of the area that is being imaged. Happily, there are many software packages available which can correct for these distortions and make the imagery GIS-ready. ERDAS itself provides a range of capabilities, from simple geometric transformation (in products like the Image Analysis Extension for ArcView), through single-frame orthorectification (in IMAGINE Advantage), to full rigorous photogrammetric block triangulation (in IMAGINE OrthoBASE). In some instances, where the imagery is being used to look for relative changes or to identify simple features, geometric transformation is used. For example, ArcView users can bring imagery in via the Image Analysis extension and simply warp it to fit their vector database. The imagery can then be used to update attribute information in the vector database. In this instance, it is the relative accuracy that is important.
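
The simplest of these corrections, a first-order geometric transformation, can be sketched as follows. The ground control points are hypothetical and this shows only the underlying least-squares idea, not the algorithm used in any particular product; rigorous orthorectification additionally models the sensor and the terrain.

```python
# Fit a first-order (affine) transformation from image (col, row) positions to map
# (x, y) coordinates using ground control points and least squares.
import numpy as np

# Hypothetical ground control points: pixel position and the matching map coordinate.
pixel = np.array([[10, 12], [500, 40], [480, 510], [30, 495]], dtype=float)
mapxy = np.array([[530100.0, 4210050.0], [530590.0, 4210020.0],
                  [530570.0, 4209550.0], [530120.0, 4209565.0]])

# Solve mapxy = [col, row, 1] @ coeffs for the six affine coefficients.
design = np.hstack([pixel, np.ones((len(pixel), 1))])
coeffs, *_ = np.linalg.lstsq(design, mapxy, rcond=None)

def pixel_to_map(col, row):
    return np.array([col, row, 1.0]) @ coeffs

print(pixel_to_map(10, 12))  # should land close to the first control point
```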

For other, more precise mapping applications, corrections for the look angle and terrain height variations are critical and in these instances, rigorous photogrammetric processes must be used. In the past, photogrammetry has been viewed as somewhat of a black art by the GIS community, with orthophoto production traditionally being left to specialist service providers. Products such as IMAGINE OrthoBASE are changing this approach, however, with wizard-based user interfaces enabling many GIS users to create their own orthophoto databases.

Overall, the user must choose the correct level of accuracy for their particular application. In general, most data providers now offer highly accurate, orthorectified data as standard, with simple geometric rectification products becoming less common.

Whilst orthophotos can be used to digitize 2D features relatively easily, the ability to get accurate 3D data from imagery has been more difficult. If a very high accuracy DEM is already available, then 3D coordinates can be applied to the 2D features captured from the orthophoto. It is however rare that such a DEM exists.
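
Where such a DEM does exist, the "drape" step is conceptually simple. The sketch below (assuming NumPy, a north-up DEM grid and hypothetical coordinates) attaches a Z value to each 2D vertex by bilinear interpolation of the DEM.

```python
# Attach elevations to 2D feature vertices by bilinear interpolation of a DEM.
# Assumes a north-up grid with a simple origin and cell size; real data would
# normally come through a GIS or remote-sensing library.
import numpy as np

def sample_dem(dem, origin_x, origin_y, cell_size, x, y):
    """dem: 2D array with row 0 at origin_y (top). Returns elevation at (x, y)."""
    col = (x - origin_x) / cell_size
    row = (origin_y - y) / cell_size
    c0, r0 = int(np.floor(col)), int(np.floor(row))
    dc, dr = col - c0, row - r0
    window = dem[r0:r0 + 2, c0:c0 + 2]
    weights = np.array([[(1 - dr) * (1 - dc), (1 - dr) * dc],
                        [dr * (1 - dc),       dr * dc]])
    return float((window * weights).sum())

dem = np.random.rand(100, 100) * 50 + 100                    # synthetic elevations
vertices_2d = [(530120.5, 4209980.2), (530180.0, 4209950.7)]  # hypothetical x, y
vertices_3d = [(x, y, sample_dem(dem, 530100.0, 4210000.0, 1.0, x, y))
               for x, y in vertices_2d]
print(vertices_3d)
```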

The other alternative has been to use stereo imagery and photogrammetric software to digitize 3D features directly. However, this has always been viewed as a specialist capability and hence was uncommon in the GIS world. With the launch of Stereo Analyst, ERDAS is attempting to bring this 3D capability into the GIS world by hiding the complexity of the process beneath a wizard-based user interface. With stereo data becoming more readily available from companies such as Space Imaging (subject to certain restrictions) and with many aerial survey companies already providing stereo air photos, GIS users now have the ability to:

  • Directly create 3D features
  • Build photo realistic buildings using textures from the image
  • Create “real world” flythroughs that can be viewed across the web

This addition of the third dimension into GIS analyses is having an enormous impact upon the proliferation of GIS into non-expert areas. Providing users with the ability to view their landscape just as they would see it, in a natural three-dimensional environment, breaks down many barriers and helps present results to the public in a non-scientific manner.

Repeatability

Many GIS applications do not require just a single “snapshot”, but are focused on maintaining an up to date inventory of land cover or land use. Because imagery is taken at different times of the year and with differing weather conditions, it is important that some form of “normalization” is applied to the imagery or taken into account during processing. The expert systems described above allow for this and can take into account the different spectral responses of land cover at different times of the year, the effect of sun angle differences and also the atmospheric corrections needed for haze, fog and airborne pollutant removal.
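
One common normalization step is the conversion of raw digital numbers to top-of-the-atmosphere reflectance, which removes the effect of sun elevation and Earth-Sun distance between acquisitions. The sketch below uses placeholder calibration values rather than real IKONOS figures, and adds a crude dark-object subtraction as a stand-in for haze correction.

```python
# Convert digital numbers to top-of-atmosphere reflectance so that scenes taken under
# different sun angles become comparable. Gain, offset and ESUN values are sensor- and
# band-specific; the figures used below are placeholders, not real calibration values.
import numpy as np

def dn_to_toa_reflectance(dn, gain, offset, esun, sun_elevation_deg, earth_sun_dist_au):
    radiance = gain * dn.astype(np.float64) + offset           # W / (m^2 sr um)
    sun_zenith = np.deg2rad(90.0 - sun_elevation_deg)
    return (np.pi * radiance * earth_sun_dist_au ** 2) / (esun * np.cos(sun_zenith))

def dark_object_subtract(reflectance, percentile=0.1):
    """Crude haze correction: subtract the darkest observed reflectance."""
    dark = np.percentile(reflectance, percentile)
    return np.clip(reflectance - dark, 0.0, 1.0)

dn = np.random.randint(0, 2048, size=(256, 256))               # synthetic 11-bit band
rho = dn_to_toa_reflectance(dn, gain=0.05, offset=1.0, esun=1930.0,
                            sun_elevation_deg=47.3, earth_sun_dist_au=1.0)
print(dark_object_subtract(rho).mean())
```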

Future trends
The remote sensing and photogrammetric industries are going through a massive change, becoming more closely integrated with a fast growing and competitive GIS industry. What is clear is that imagery and the technology associated with preparing it for GIS and extracting information from it is becoming a key part of GIS systems worldwide. It is important that GIS software changes to take account of this new and extensive user requirement and that the industry as a whole begins to provide services that match the demands of these new users. What we shall see over the next 2-3 years is:

  • A much broader range of imagery becoming available, based on new and existing sources of data
  • More regular revisit capabilities, enabling higher frequency change detection and monitoring applications
  • The growth of specialist services using new digital camera/GPS technology to provide targeted, low cost aerial surveys for specific applications
  • The emergence of new “information” providers, focused on generating specific data from imagery for targeted markets and whose business model will be subscription based
  • More internet based “on demand” processing capabilities from data providers
  • The inclusion of more imaging and photogrammetric processing in standard GIS software packages.

All in all, the new millennium will be an exciting time for anyone concerned with imagery and GIS!