
Land cover mapping: Performance analysis of image-fusion methods


T. Vasantha Kumaran and R. Shyamala
University of Madras, India
Lesley Marino, Phil Howarth and David Wood
University of Waterloo, Canada

Introduction
Remotely sensed data are used to map the extent of land degradation and to generate both enhanced visual colour-composite and classified land-cover images for the five desertified villages of the Thevaram Basin. Image-fusion techniques are used as a means of combining information from sources such as Indian Remote Sensing Satellite (IRS-1C) data, both multispectral (Linear Imaging Self-Scanning sensor, LISS-III) and panchromatic (PAN). Fine-beam RADARSAT-1 data are also used, since the study involves multi-source remotely sensed data.

Varshney (1997: 245) defined image fusion in terms of multisensor data fusion: the acquisition, processing and synergetic combination of information gathered by various knowledge sources and sensors to provide a better understanding of the phenomenon under consideration. Image fusion is also a form of data fusion, which has been defined as the combination of two or more different images to form a new image by using a certain algorithm (Pohl and van Genderen, 1998). According to Pohl and van Genderen (1998) and Varshney (1997), image fusion can be performed at three different processing levels: (a) pixel or data-level fusion; (b) feature-level fusion; and (c) decision or interpretation-level fusion. The study reported here considers pixel or data-level fusion, which is also referred to as data in – data out fusion.

Image fusion is a tool whose main goal is to take advantage of the complementary nature of various types of imagery (Chavez, 1986; Vrabel, 1996). It produces a new, enhanced composite image that has the advantages of both data sets, contains more complete and detailed information (Varshney, 1997), and may enhance interpretation capabilities (Pohl and van Genderen, 1998). Apart from spatial and spectral enhancement, image fusion can be used to detect changes when the combined data are acquired at different times (Smara et al., 1998; Bruzzone et al., 1999; Saraf, 1999). The time factor enters into data fusion because it is very difficult to acquire simultaneous multi-sensor data.

The Study Area
The Thevaram Basin is located in an inter-montane valley within the Kambam Valley of Theni district, Tamil Nadu, southern India. It covers an area of approximately 400 km² between 9° 48′ N and 10° 2′ N latitude and between 77° 13′ E and 77° 27′ E longitude. The basin runs for 30 km in a SW-NE direction, bordered by the Theni River in the north, discontinuous hogback ridges on the east, the Kombai knolls on the south and the Western Ghats on the west. The five villages of the study are Bodi Ammapatti, Maniampatti, Pottipuram, Rasingapuram and Silamalai, located in the northwestern part of the Thevaram Basin. The basin can be divided into three broad physiographic regions: the plains, the uplands and the hills. The plain lies in the centre of the basin, with an elevation of less than 450 m above mean sea level. The upland, which forms the transitional zone between hills and plains, surrounds the plain along the southern and western regions. Eighteen per cent of the basin is occupied by the hills. The Thevaram Basin has a semi-arid environment, with a mean annual temperature of 27.2° C and a mean annual relative humidity of 67 per cent. Wind activity has a significant effect on climate, vegetation and land use. Severe winds have, over the last century, built a stretch of sand dunes, from which sands are drifting and encroaching upon agricultural fields. The winds and the resulting sands thus cause land degradation, which has caused concern among local people and researchers alike.

Fusion Techniques
Image-fusion techniques are divided into two categories:

  1. Visual display transforms, which involve the colour composition of three bands of imagery displayed in red-green-blue (RGB) or other colour transformations such as intensity-hue-saturation (IHS); and
  2. Statistical or numerical transforms (Harris et al., 1990; Pohl and van Genderen, 1998), which are based on channel statistics and include principal component analysis (PCA). Numerical transforms also use arithmetic operations such as image differences and band ratios (a simple ratio is sketched after this list).
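
As a concrete illustration of the arithmetic operations mentioned above, the Python sketch below computes an element-wise band ratio. The array layout and band indices are assumptions for illustration, not values from the study; a near-infrared/red ratio of this kind is commonly used to highlight vegetation.

```python
import numpy as np

def band_ratio(ms, num=3, den=2, eps=1e-6):
    """Element-wise ratio of two bands, guarded against division by zero.

    `ms` is assumed to be a (bands, rows, cols) array; `num` and `den`
    are hypothetical band indices (e.g. near-infrared over red).
    """
    return ms[num].astype(float) / (ms[den].astype(float) + eps)
```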

The most commonly used fusion techniques are band substitution, arithmetic techniques, IHS and PCA. These techniques have been used in numerous studies, including land-use and land-cover mapping of arid or semi-arid environments (for example, Lichtenegger et al., 1991; Smara et al., 1998; Saraf, 1999). IHS and PCA are both criticised for introducing strong spectral distortions in the resulting imagery (Harris et al., 1990; Chavez et al., 1991; Pellemans et al., 1993; Nunez et al., 1999) and for being suited only to producing images for visual interpretation (Steinnocher, 1997).

Applications
Remotely sensed data have been successfully used for a variety of applications in arid and semi-arid regions around the world. These include the detection and monitoring of land-use change in Rajasthan, India (Ram and Kolarkar, 1993; Kumar et al., 1993) and wasteland mapping to identify potential areas for afforestation in Hisar district of Haryana state, India (Shedha et al., 1996). The two most important land-cover types in arid environments are sand, or sand features, and vegetation. In the study conducted by Kumar et al. (1993), Landsat MSS data from 1973 and 1986 were used to determine land-cover change and to monitor desertification in the Thar Desert, India; it was concluded that the MSS data were useful for recognising and mapping different types of dunes and the substantial changes in their boundaries.

Pre-Processing
Image rectification and restoration procedures are often termed preprocessing operations because they normally precede the manipulation and analysis of digital image data to extract specific information. Manipulation and interpretation of digital images with the aid of a computer form the core of digital image processing. Image rectification and restoration are used to correct image data for distortions or degradation introduced by the image-acquisition process. Preprocessing involves the correction of both systematic distortions (for example, scan skew, mirror-scan velocity, panoramic distortion, platform velocity, earth rotation and perspective) and non-systematic distortions (for example, altitude and attitude). Non-systematic errors are corrected by performing both image-to-map geometric rectification and image-to-image registration. Geometric correction is usually a two-step process involving polynomial transformation and image re-sampling, as sketched below.
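
A minimal sketch of this two-step geometric correction, assuming a first-order (affine) polynomial fitted by least squares to hypothetical ground control points (GCPs), followed by nearest-neighbour re-sampling; none of the names or parameter values come from the study.

```python
import numpy as np

def fit_affine(map_xy, img_rc):
    """Step 1: least-squares fit of row/col = a0 + a1*x + a2*y from GCP pairs."""
    x, y = map_xy[:, 0], map_xy[:, 1]
    A = np.column_stack([np.ones_like(x), x, y])
    coeffs_row, *_ = np.linalg.lstsq(A, img_rc[:, 0], rcond=None)
    coeffs_col, *_ = np.linalg.lstsq(A, img_rc[:, 1], rcond=None)
    return coeffs_row, coeffs_col

def rectify(raw, coeffs_row, coeffs_col, out_shape, x0, y0, pixel_size):
    """Step 2: nearest-neighbour re-sampling onto a regular map grid."""
    out = np.zeros(out_shape, dtype=raw.dtype)
    for i in range(out_shape[0]):
        for j in range(out_shape[1]):
            x = x0 + j * pixel_size      # map coordinates of the output pixel
            y = y0 - i * pixel_size
            r = int(round(coeffs_row @ [1.0, x, y]))   # predicted source row
            c = int(round(coeffs_col @ [1.0, x, y]))   # predicted source column
            if 0 <= r < raw.shape[0] and 0 <= c < raw.shape[1]:
                out[i, j] = raw[r, c]
    return out
```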

Image Fusion Methods
For the purposes of this study, a comparison of five image-fusion techniques has been attempted, in terms of their effectiveness for merging both the IRS-1C LISS-III and panchromatic data and the LISS-III and RADARSAT-1 data. The fusion techniques applied, and briefly described below, are band overlay, high-pass filtering (HPF), the intensity-hue-saturation (IHS) transformation, principal component analysis (PCA) and one new technique, IMGFUSE from PCI's EASI/PACE software.

Band Overlay
Band substitution is the simplest image-fusion technique (Franklin and Blodgett, 1993; Pohl and van Genderen, 1993; Vrabel, 1996; Pohl and van Genderen, 1998). It has been used for various applications such as agricultural crop classification, land-use mapping, and vegetation assessment and monitoring (Marino, 2001). The major advantage of this technique is that the radiometric qualities of the data are unchanged, since there is no radiometric enhancement of the data. The technique is most often used when the two sources are highly correlated. Panchromatic sharpening involves substituting the panchromatic band for the multispectral band covering the same spectral region (Jensen, 1996). The generation of colour-composite images is limited to the display of only three bands, corresponding to the colour guns of the display device (red-green-blue). As the panchromatic band has a spectral range covering both the green and red channels (PAN 0.50-0.75 µm; green 0.52-0.59 µm; red 0.62-0.68 µm), it can be used as a substitute for either of those bands.
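
The sketch below illustrates this substitution for an RGB composite, with PAN standing in for the red band. It assumes, purely for illustration, that the multispectral array has already been co-registered and re-sampled to the PAN grid; the band order is a hypothetical choice.

```python
import numpy as np

def band_overlay(ms, pan, green=1, blue=0):
    """RGB composite with the PAN band substituted for the red band.

    `ms` is a (bands, rows, cols) multispectral array assumed to be
    co-registered with and re-sampled to the grid of `pan`.
    """
    return np.stack([pan, ms[green], ms[blue]], axis=0)
```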

High-Pass Filtering Method
The HPF fusion method is a specific application of the arithmetic techniques used to fuse imagery, which involve arithmetic operations such as addition, subtraction, multiplication and ratioing (Vrabel, 1996). HPF applies a spatial enhancement filter to the high-resolution image before the two data sets are merged on a pixel-by-pixel basis, combining spatial and spectral information through a band-addition approach. Chavez et al. (1991) found that, compared with IHS and PCA, the HPF method exhibits less distortion of the spectral characteristics of the data; the distortions were minimal and difficult to detect. This conclusion was based on statistical, visual and graphical analyses of the spectral characteristics of the data.
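
A minimal sketch of HPF fusion in this spirit: the high-pass component of the PAN band (the original minus a local mean) is added to each multispectral band. The kernel size and weight are illustrative choices, not values from the study, and the inputs are assumed co-registered on the PAN grid.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def hpf_fuse(ms, pan, kernel=5, weight=1.0):
    """Add the high-pass component of `pan` to every band of `ms`."""
    pan = pan.astype(float)
    high_pass = pan - uniform_filter(pan, size=kernel)   # spatial detail only
    return np.stack([band.astype(float) + weight * high_pass for band in ms])
```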

Intensity-Hue-Saturation
The IHS transformation is one of the most widely used methods for merging complementary, multi-sensor data sets (Chavez et al., 1991; Pellemans et al., 1993; Vrabel, 1996). The IHS transform provides an effective alternative to the red-green-blue display coordinate system for describing colours. The possible range of digital numbers (DNs) for each colour component is 0 to 255 for 8-bit data. Each pixel is represented by a three-dimensional coordinate position within the colour cube. Pixels having equal components of red, green and blue lie on the grey line, the line from the origin of the cube to the opposite corner (Lillesand and Kiefer, 2000).

The IHS transform is defined by three separate and orthogonal attributes, namely intensity, hue and saturation (Harris et al., 1990). Intensity represents the total energy or brightness in an image and defines the vertical axis of the cylinder. Hue is the dominant or average wavelength of the colour inputs and defines the circumferential angle of the cylinder; it ranges from blue (0/360°) through green, yellow, red and purple, and then back to blue (360/0°). Saturation is the purity of a colour, or the amount of white light in the image, and defines the radius of the cylinder (Harris et al., 1990). Chavez et al. (1991) and Pellemans et al. (1993) cautioned that, of all the methods used to merge multispectral data, the IHS method distorts the spectral characteristics the most and should be used with caution if detailed radiometric analysis is to be performed. Although IRS-1C LISS-III acquires data in four bands, only three bands are used in this study; the fourth is excluded because of its poorer spatial resolution. The IHS transform is more successful in panchromatic sharpening with true-colour composites than when the colour composites include near- or mid-infrared bands.
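
A minimal sketch of IHS substitution using the common "fast IHS" shortcut: with intensity defined as the mean of the three input bands, replacing intensity with a histogram-matched PAN band and inverting the transform reduces to adding the difference to each band. The mean/standard-deviation matching below stands in for the panchromatic stretch; all inputs are assumed co-registered on the PAN grid.

```python
import numpy as np

def ihs_fuse(rgb, pan):
    """Fast IHS pan-sharpening: substitute matched PAN for intensity.

    `rgb` is a (3, rows, cols) array of the three chosen bands.
    """
    rgb = rgb.astype(float)
    intensity = rgb.mean(axis=0)
    # Match PAN to the intensity component (a simple panchromatic stretch).
    pan = pan.astype(float)
    pan_matched = (pan - pan.mean()) / pan.std() * intensity.std() + intensity.mean()
    return rgb + (pan_matched - intensity)   # inverse transform, band by band
```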

Principal Component Analysis
PCA is a commonly used tool for image enhancement and data compression. The original inter-correlated data are mathematically transformed into new, uncorrelated images called components or axes (Chavez and Kwarteng, 1989). The procedure involves a linear transformation in which the original brightness values are re-projected onto a new set of orthogonal axes. PCA is a relevant method for merging remotely sensed imagery because of its ability to reduce the dimensionality of the original data from n bands to two or three transformed principal-component images that contain the majority of the information. For example, PCA can be used to merge several bands of multispectral data (for example, Landsat) with one high-spatial-resolution band (for example, SPOT panchromatic). Image fusion can be done in two ways using PCA. The first method is very similar to the IHS transformation, with the high-resolution band substituted for the first principal component before the inverse transform. The second method involves a forward transformation performed on all image channels from the different sensors combined into one single image file.
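
A minimal sketch of the first, IHS-like method: the multispectral bands are transformed to principal components, the first component is replaced by a histogram-matched PAN band, and the transform is inverted. The array shapes and the matching step are assumptions for illustration.

```python
import numpy as np

def pca_fuse(ms, pan):
    """Component substitution: replace PC1 with the matched PAN band."""
    bands, rows, cols = ms.shape
    X = ms.reshape(bands, -1).astype(float)
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    eigvals, eigvecs = np.linalg.eigh(np.cov(Xc))
    eigvecs = eigvecs[:, np.argsort(eigvals)[::-1]]   # order by variance
    pcs = eigvecs.T @ Xc                              # forward transform
    p = pan.reshape(-1).astype(float)
    pcs[0] = (p - p.mean()) / p.std() * pcs[0].std() + pcs[0].mean()
    fused = eigvecs @ pcs + mean                      # inverse transform
    return fused.reshape(bands, rows, cols)
```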

IMGFUSE
IMGFUSE is a task within the Image Lock Data Fusion module that is used to enhance the spatial resolution of a low-resolution image using a high-resolution image as reference. The IMGFUSE task preserves the radiometric information in each band individually and maintains statistical integrity. It can be run on geo-coded or non-geo-coded images; normally, IMGFUSE is run after non-geo-coded low- and high-resolution images have been "locked" together by running the IMGLOCK task. IMGFUSE can be performed using two images with different spatial resolutions, and here it was run separately for the panchromatic data and the RADARSAT-1 data. KSIZE and MAXGAIN are the two other parameters needed. The KSIZE value determines the size of the linear kernels over which the cross-correlation modelling is performed (PCI, 1999), on a window of ((2 × KSIZE + 1) × 3) pixels centred on a pixel. The MAXGAIN value controls the sensitivity over relatively flat areas (those with similar pixel values, such as water): higher values increase the sensitivity, while lower values reduce noise in such regions. A MAXGAIN value that is too low will decrease the amount of detail in the enhanced image.
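
PCI's algorithm itself is proprietary, so the sketch below is only an illustration of the general idea the parameters suggest: model the low-resolution band locally against the high-resolution reference over a ((2 × KSIZE + 1) × 3) window and cap the local gain at MAXGAIN so that flat areas such as water do not amplify noise. Every name and formula here is an assumption, not PCI's implementation.

```python
import numpy as np

def local_gain_fuse(low, high, ksize=4, maxgain=5.0):
    """Illustrative local-gain fusion; NOT PCI's IMGFUSE algorithm.

    `low` and `high` are assumed already "locked" (co-registered) on the
    same grid; the window is (2*ksize + 1) columns by 3 rows.
    """
    rows, cols = low.shape
    out = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            r0, r1 = max(0, i - 1), min(rows, i + 2)
            c0, c1 = max(0, j - ksize), min(cols, j + ksize + 1)
            lw = low[r0:r1, c0:c1].astype(float)
            hw = high[r0:r1, c0:c1].astype(float)
            hstd = hw.std()
            gain = min(lw.std() / hstd, maxgain) if hstd > 0 else 0.0
            out[i, j] = lw.mean() + gain * (high[i, j] - hw.mean())
    return out
```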

The Best Solution in Image Fusion
The IHS transformation with the panchromatic stretch produced the best enhanced composite image (Figure 1). All the fusion techniques, with the exception of IMGFUSE, generated composite images with more detailed spatial information than the original multispectral data; the IHS image, however, is superior. It provides sharper definition of field boundaries, roads and pathways, and is spectrally similar to the original data. The IMGFUSE image has very little spatial detail, while the PCA image exhibits the most spectral distortion. Based on the comparison of the enhanced composite images generated using the five fusion methods, the best composite image for the whole study area is the one generated using the IHS technique with the panchromatic stretch. The final classified image for the whole study area is shown in Figure 2.

Conclusion
Based on the comparisons of the enhanced composite images generated using band overlay, HPF, IHS, PCA and IMGFUSE, the best composite image is generated using the IHS technique with the panchromatic stretch. The final spatially enhanced composite image has been presented for perusal and for an understanding of the nature of fusion. Comparison of the image-fusion techniques in terms of generating a land-cover map revealed that the PCA technique distorted the spectral characteristics of the transformed data set. Results from the IHS technique with the panchromatic stretch and from the IMGFUSE technique were very similar to the band-overlay results, which were considered accurate because the characteristics of the data are not altered in any way. In deciding which technique to apply to generate a land-cover map of the study area, the Image Lock Data Fusion (IMGFUSE) technique was rejected.

Acknowledgements
The authors gratefully acknowledge the munificent grant from the Shastri Indo-Canadian Institute and the Canadian International Development Agency for the research reported here, under the Partnership Program Phase II, during 1999-2001.

References

  • Bruzzone, L., D.F. Prieto and S.B. Serpico (1999): A neural-statistical approach to multi-temporal and multi-source remote-sensing image classification, IEEE Transactions on Geoscience and Remote Sensing, 37(3): 1350-1359.
  • Chavez, P.S. Jr. (1986): Digital merging of Landsat TM and digitised NHAP data for 1: 24,000-scale image mapping, Photogrammetric Engineering and Remote Sensing, 52(10): 1637-1646.
  • Chavez, P.S. Jr. and A.Y. Kwarteng (1989): Extracting spectral contrast in Landsat Thematic Mapper image data using selective principal component analysis, Photogrammetric Engineering and Remote Sensing, 55(3): 339-348.
  • Chavez, P.S. Jr., S.C. Sides and J.A. Anderson (1991): Comparison of three different methods to merge multi-resolution and multi-spectral data: Landsat TM and SPOT Panchromatic, Photogrammetric Engineering and Remote Sensing, 57(3): 295-303.
  • Harris, J.R., R. Murray and T. Hirose (1990): IHS transform for the integration of radar imagery with other remotely sensed data, Photogrammetric Engineering and Remote Sensing, 56(12): 1631-1641.
  • Jensen, J.R. (1996): Introductory Digital Image Processing, New Jersey: Prentice Hall.
  • Kumar, M., E. Goossens and R. Goossens (1993): Assessment of sand dune change detection in Rajasthan (Thar) Desert, India, International Journal of Remote Sensing, 14(9): 1689-1703.
  • Lichtenegger, J., J.F. Dallemand, P. Reichert, P. Rebillard and M. Buchroithner (1991): Multi-sensor analysis for land use mapping in Tunisia, Earth Observation Quarterly, 33: 1-6.
  • Lillesand, T.M. and R.W. Kiefer (2000): Remote Sensing and Image Interpretation, New York: John Wiley.
  • Marino, L.A. (2001): Examining Image-Fusion Methods for Land Cover Mapping in the Thevaram Basin, Southern India, Master’s Dissertation, Faculty of Environmental Studies, University of Waterloo, Waterloo, Canada (unpublished).
  • Nunez, J., X. Otazu, O. Fors, A. Prades, V. Pala and R. Arbiol (1999): Multiresolution-based image fusion with additive wavelet decomposition, IEEE Transactions on Geoscience and Remote Sensing, 37(3): 1204-1211.
  • PCI (1999): PCI Geomatics Help Gateway, PCI Geomatics, Richmond Hill, Ontario.
  • Pellemans, A.H.J.M., R.W.L. Jordans and R. Allewijn (1993): Merging multispectral and panchromatic SPOT images with respect to radiometric properties of the sensor, Photogrammetric Engineering and Remote Sensing, 59(1): 81-87.
  • Pohl, C. and J.L. van Genderen (1993): Geometric integration of multi-image information, Proceedings of the Second ERS-1 Symposium - Space at the Service of our Environment, Hamburg, Germany, 11-14 October, 1255-1259.
  • Pohl, C. and J.L. van Genderen (1998): Multisensor image fusion in remote sensing: concepts, methods and applications, International Journal of Remote Sensing, 19(5): 823-854.
  • Ram, B. and A.S. Kolarkar (1993): Remote sensing application in monitoring land use changes in arid Rajasthan, International Journal of Remote Sensing, 14(17): 3191-3200.
  • Saraf, A.K. (1999): IRS-1C LISS-III and PAN data fusion: an approach to improve remote sensing based mapping techniques, International Journal of Remote Sensing, 20(10): 1929-1934.
  • Shedha, M.D., M.L. Manchanda, M. Kudrat and K.P. Sharma (1996): Remote sensing and GIS based approach for greening and ameliorating the environment in semi-arid area, Proceedings of the Twenty-Sixth International Symposium on Remote Sensing of Environment/Eighteenth Annual Symposium of the Canadian Remote Sensing Society, Vancouver, British Columbia, Canada, 25-29 March, 472-475.
  • Smara, Y., A. Belhadj-Aissa, B. Sansal, J. Lichtenegger and A. Bouzenoune (1998): Multi-source ERS 1 and optical data for vegetal cover assessment and monitoring in a semi-arid region of Algeria, International Journal of Remote Sensing, 19(18): 3551-3568.
  • Steinnocher, K. (1997): Applications of adaptive filters for multisensoral image fusion, Proceedings of the International Geoscience and Remote Sensing Symposium (IGARSS '97), Singapore, August 1997, 910-912.
  • Varshney, P.K. (1997): Multisensor data fusion, Electronics and Communication Engineering Journal, 9(6): 245-253.
  • Vrabel, J. (1996): Multispectral imagery band sharpening study, Photogrammetric Engineering and Remote Sensing, 62(9): 1075-1083.