L. Zhu and R. Tateishi
Center for Environment Remote Sensing (CEReS), Chiba University
1-33, Yayoi-cho, Inage-ku, Chiba, 263-8522 Japan
Tel: (81)-43-290-3850 Fax: (81)-43-290-3857
E-mail:[email protected]
Abstract
Efficient integration of temporal, spectral and spatial resolution information is important for accurate mapping of agricultural areas. In this study, a new temporal fusion classification (TFC) model is presented for the classification of agricultural vegetation, based on the statistical fusion of single-source multitemporal satellite images. In the proposed model, the temporal dependence of multitemporal images is taken into account by estimating transition probabilities from the change pattern of a vegetation dynamics indicator (VDI). For the integration of multisensor multitemporal satellite data, an extended multisensor temporal fusion classification (MTFC) model is proposed, which accounts for both the temporal attributes and the reliability of multiple data sources. The feasibility of the new method is verified using multitemporal Landsat TM and ERS SAR satellite images, and the experimental results show improved performance compared with the conventional method.
1. Introduction
Effective agricultural mapping and monitoring are required for a variety of applications, ranging from general inventory requirements to ecological studies. Remote sensing has shown great potential in agricultural mapping and monitoring due to its advantages over traditional procedures in terms of cost effectiveness and the timeliness with which information becomes available over large areas (Murthy et al. 1998). Automated interpretation of satellite images for agricultural area mapping is relatively complicated because of the spectral similarity of agricultural crops. Fusion techniques have been adopted for crop discrimination to provide increased interpretation capabilities and more reliable results, since data with different characteristics are combined. An efficient fusion strategy depends on a good understanding of the characteristics of the multisource data and on the selection of an appropriate interpretation algorithm. Advanced analytical or numerical data fusion techniques are therefore imperative for the integration of temporal, spectral and spatial resolution information.
The aim of this study is to incorporate the temporal dependence of multitemporal image data into the fusion algorithm by estimating transition probabilities in a theoretically sound manner from the change pattern of the VDI, and consequently to enhance the interpretation capabilities. A further aim is to integrate multisensor multitemporal satellite data effectively, i.e., to take into account both the temporal attributes and the reliability of multiple data sources.
2. Methodology
2.1 Temporal Data Fusion Based on a Bayesian Formulation
Let us consider two multispectral remote sensing images acquired over the same area at times t1 and t2. Consider a pixel of the image acquired at time t1 and the spatially corresponding pixel of the image acquired at time t2; these pixels are characterized by the m-variate observation feature vectors X1 and X2, respectively. Let wi (i=1, 2, …, n) and Vk (k=1, 2, …, n) be the sets of possible land cover classes at times t1 and t2, respectively. If we classify each couple of pixels independently of any other, on the basis only of its feature vectors X1 and X2, the Bayes rule requires that the couple of classes (wi, Vk) be selected that provides the maximum likelihood L(wi, Vk). According to Swain (1978) and by applying some transformations, the likelihood function used in the decision rule is given by formula (1).
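Under the cascade-classifier formulation of Swain (1978), and assuming class-conditional independence of X1 and X2 given the land cover classes, formula (1) takes the form

L(wi, Vk) = P(X1|wi) P(X2|Vk) P(Vk|wi) P(wi)    (1)

where P(X1|wi) and P(X2|Vk) are the class-conditional probability densities of the feature vectors at times t1 and t2, P(Vk|wi) is the transition probability from class wi at time t1 to class Vk at time t2, and P(wi) is the prior probability of class wi.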
2.2 Determination of Transition Probabilities
From formula (1), we can see that only the transition probabilities P(Vk|wi) represent the temporal dependence of the multitemporal images. Previously, when the temporal aspect was incorporated into fusion models, the transition probabilities were usually determined empirically (Solberg et al. 1994). It is more reasonable to determine the transition probabilities from the change pattern of the VDI, e.g., the normalized difference vegetation index (NDVI) for optical data and the backscattering coefficient for SAR data. The change of the VDI can be used to represent the seasonal differences of crops, or land use changes of terrain categories. For a consecutive pair of images, let us define the change index (CI) using the VDI derived from the multitemporal images as follows:
CI=DN2-DN1 (2)
Here, DN1 and DN2 are the pixel digital numbers of the VDI at times t1 and t2, respectively. Let us refer to the change pattern of the VDI, which is calculated from the training data set and determined by predetermined thresholds, as the estimated change pattern (ECP). Thus, the ECP of the VDI can be defined as:
if x2 ≤ CI ≤ x1, then ECP = 0
if CI > x1, then ECP = 1
if CI < x2, then ECP = -1
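For illustration, the following is a minimal Python sketch of the pixel-wise computation of CI and ECP described above; it is not part of the original paper, and the function name, array names, threshold values and the NumPy-based implementation are assumptions.

```python
import numpy as np

def estimated_change_pattern(vdi_t1, vdi_t2, x1, x2):
    """Compute the change index CI = DN2 - DN1 (formula (2)) and map it
    to the estimated change pattern (ECP) using the predetermined
    thresholds x1 (upper) and x2 (lower)."""
    ci = vdi_t2.astype(np.float64) - vdi_t1.astype(np.float64)
    ecp = np.zeros(ci.shape, dtype=np.int8)   # x2 <= CI <= x1 -> ECP = 0
    ecp[ci > x1] = 1                          # CI > x1        -> ECP = 1
    ecp[ci < x2] = -1                         # CI < x2        -> ECP = -1 (assumed value)
    return ecp

# Hypothetical usage with two co-registered NDVI images (digital numbers)
# and hypothetical threshold values:
# ecp = estimated_change_pattern(ndvi_t1, ndvi_t2, x1=15.0, x2=-15.0)
```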