Shattri Mansor, Wong Tai Hong and Abdul Rashid Mohamed Shariff
Spatial and Numerical Modeling Laboratory
Institute of Advanced Technology, University Putra Malaysia,
43400 Serdang, Selangor, Malaysia
[email protected]
Introduction
Pixel-based approaches to image classification are limited nowadays. They typically have considerable difficulty dealing with the rich information content of Very High Resolution (VHR) imagery or moderate-resolution data such as Landsat TM or SPOT: they produce a characteristic, inconsistent salt-and-pepper classification, and they are far from capable of extracting objects of interest. As a result, the vast majority of operational projects can be realized only with massive human interaction. For this reason, a new type of supervised classification is now being brought to a polygon basis. To make image contents manageable, one or more preferably meaningful image segmentations are required. Additional information, such as form criteria and textural or contextual information of the segments, must then be describable in an appropriate way to derive improved classification results.
Multiresolution Segmentation
The concept behind eCognition is that the important semantic information necessary to interpret an image is represented not in single pixels, but in meaningful image objects and their mutual relationships (Martin Baatz et al., 2001). The eCognition software first performs an automatic processing step, segmentation, on the imagery. This results in a condensing of information and a knowledge-free extraction of image objects. The objects are formed in such a way that an overall homogeneous resolution is kept. The segmentation algorithm relies not only on single pixel values, but also on pixel spatial continuity (texture, topology). The resulting image objects carry not only the values and statistical information of the pixels of which they consist, but also texture, form (spatial features) and topology information in a common attribute table (Ioannis Manakos, 2001), as well as their position within the hierarchical network (Ambiente Humano, 2000). The basic difference, especially when compared to pixel-based procedures, is that object-oriented analysis does not classify single pixels, but rather image objects extracted in a previous image segmentation step.
Supervised Classification
eCognition supports different supervised classification techniques and different methods to train and build up a knowledge base for the classification of image objects. The frame of this knowledge base is the so-called class hierarchy, which contains all classes of a classification scheme. The classes can be grouped hierarchically, allowing class descriptions to be passed down to child classes on the one hand, and meaningful semantic grouping of classes on the other. This simple hierarchical grouping offers an astonishing range for the formulation of image semantics and for different analysis strategies. The user interacts with the procedure and defines training areas based on statistics, texture, form and mutual relations among objects. The classification of an object can then follow the "hard" nearest neighbour method or the "soft" method using fuzzy membership functions. Multilevel segmentation, context classification and hierarchy rules are also available (Ioannis Manakos, 2001). By classifying "neighborhoods" at a large-segment level, and "forest" or "impervious" at a small-segment level within the larger "neighborhood" segments, classes such as turf-and-tree and residential could be identified (Emily Wilson and Dan Civco, 2002). Class descriptions are formulated using a fuzzy nearest neighbour approach or by combinations of fuzzy sets on object features, defined by membership functions. Whereas the former supports an easy click-and-classify approach based on marking typical objects as representative samples, the latter allows the inclusion of concepts and expert knowledge to define classification strategies (Martin Baatz et al., 2001).
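A minimal sketch of the two classification modes described above, operating on object feature vectors rather than pixels. The feature names, sample values and thresholds are hypothetical, not taken from eCognition:

```python
import numpy as np

# "Hard" nearest-neighbour classification: assign each object the
# class of its closest training sample in feature space
samples = np.array([
    [0.8, 0.1],   # e.g. [NIR mean, brightness] of a forest training object
    [0.2, 0.9],   # an urban training object
])
labels = ["forest", "urban"]

def nearest_neighbour(features):
    distances = np.linalg.norm(samples - features, axis=1)
    return labels[int(np.argmin(distances))]

# "Soft" alternative: a fuzzy membership function returning a
# degree of membership in [0, 1] instead of a crisp class label
def fuzzy_membership(value, low=0.3, high=0.7):
    """Linear ramp: 0 below `low`, rising to 1 at `high`."""
    return float(np.clip((value - low) / (high - low), 0.0, 1.0))
```

For example, `nearest_neighbour(np.array([0.75, 0.2]))` returns "forest", while `fuzzy_membership(0.5)` returns a partial membership of 0.5, which a rule set can combine with other fuzzy sets before a final class assignment.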
Methodology
Figure 1: Schematic diagram illustrating the object-oriented image analysis workflow in eCognition.
Basically, the process can be divided into three simple steps. After the image is brought into eCognition, multiresolution segmentation is applied (1st step). Once the segmentation is satisfactory, as shown in level 2 of Figure 2, a few general classes are created and the standard nearest neighbour classifier is applied; segments are then picked out randomly as training samples, and a basic supervised classification is carried out to obtain broad general classes (2nd step), for example urbanization area, vegetation area and water body. Once the classified general classes are acceptable, a further classification step (3rd step) can be carried out to generate the desired classes, for example urban area and clear land from the urbanization main class, or rubber, oil palm, scrub and grass land from the vegetation main class. These child classes can be generated using the software's full range of fuzzy logic functions. Table 1 shows the general differences between conventional pixel-based classification and the new polygon-based approach.
Table 1: Differences Between Pixel Based and Object Oriented Classification
Pixel-Based Classification:
- Atmospheric distortion must be corrected first, requiring gain and offset values, sun elevation angle, ground visibility, etc.
- Classification is based on the spectral mean of each band contained in the image.
- Classification is made in a single pass.
- A mode filter is applied afterwards to reduce salt-and-pepper noise.

Object-Oriented Classification:
- Segmentation is applied directly to the image until the desired object polygons appear.
- Besides the bands available in the image, a DEM band, brightness or vectors can be used as classification parameters.
- Classification is applied step by step, and can be refined repeatedly from a minimum of two classes to more.
- No filter is needed because the image already consists of meaningful polygons after the segmentation step.
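The three-step, parent-then-child workflow described in the methodology can be sketched as a two-stage rule set over object features. The class names follow the text; the feature names and thresholds are hypothetical:

```python
# Step 2: assign broad general classes from object-level features
def classify_general(obj):
    if obj["ndvi"] > 0.4:      # vegetation index high -> vegetation
        return "vegetation"
    if obj["ndwi"] > 0.2:      # water index high -> water body
        return "water body"
    return "urbanization"

# Step 3: refine child classes within each parent class
def classify_detail(obj, parent):
    if parent == "vegetation":
        return "oil palm" if obj["texture"] > 0.5 else "grass land"
    if parent == "urbanization":
        return "urban area" if obj["brightness"] > 0.6 else "clear land"
    return parent              # e.g. water body stays as-is

# One illustrative image object with its attribute values
obj = {"ndvi": 0.7, "ndwi": 0.0, "texture": 0.8, "brightness": 0.3}
parent = classify_general(obj)        # "vegetation"
child = classify_detail(obj, parent)  # "oil palm"
```

In eCognition the step-3 rules would be fuzzy membership functions rather than crisp thresholds, but the hierarchy of a general classification refined into child classes is the same.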
Results and discussion
Figure 2: Hierarchical network of image objects derived from image segmentation at level 1 (scale parameter 5), level 2 (scale parameter 15) and level 3 (scale parameter 30)
In this paper, two different areas were chosen to test the new technique. The first is an urbanized area, shown in part (a) of Figure 3. The other is a highland vegetation area surrounded by dense typical forest, shown in part (b) of Figure 3. The results of pixel-based and polygon-based classification were compared, as shown in parts (c), (d), (e) and (f) of Figure 3.
Figure 3: Landsat TM (bands 4, 5, 3) test areas: (a) urbanization area; (b) highland vegetation area. Comparison between pixel-based and polygon-based classification: (c) pixel-based, urbanization area; (d) pixel-based, vegetation area; (e) polygon-based, urbanization area; (f) polygon-based, vegetation area
Parts (c) and (d) of Figure 3 show the results of pixel-based maximum likelihood supervised classification. The classified images exhibit salt-and-pepper noise, with many small clumps (< 10 pixels). In contrast, the classified images derived from polygon-based classification are closer to human visual interpretation.
The pixel-based classification, which relies only on the spectral means of the digital numbers, has no way to completely differentiate cloud from urban and clear land [see Figure 3, part (d)]. Looking carefully at the highland vegetation area in Figure 3, part (f), however, it can be clearly seen that cloud is no longer misclassified, either as urban area or as bare or clear land. In eCognition, classification is not based on the spectral values alone; it can accept other data sources regardless of their properties (8-bit, 16-bit or 32-bit). In this case, cloud was clearly classified by using the DEM band (16-bit) as the main parameter: in general, no urban area or clear land is found above a certain height, especially in hilly areas surrounded by dense typical forest. Accordingly, a rule was set so that the urbanization area can only occur lower than 440 m above sea level, as shown in Figure 4.
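The DEM rule described above can be sketched as a simple reclassification over object attributes. The 440 m threshold is from the text; the elevation values and initial class labels are illustrative:

```python
import numpy as np

# Mean elevation (m) of four image objects, and their purely
# spectral class assignments before the DEM rule is applied
dem = np.array([120.0, 430.0, 650.0, 900.0])
spectral_class = ["urban", "urban", "urban", "forest"]

# Spectrally "urban" objects at or above 440 m cannot be urban
# in this hilly, forested terrain, so they are reassigned to cloud
refined = [
    "cloud" if (cls == "urban" and h >= 440.0) else cls
    for cls, h in zip(spectral_class, dem)
]
# refined: ['urban', 'urban', 'cloud', 'forest']
```

This is the polygon-based advantage in miniature: an ancillary 16-bit DEM band participates in the decision alongside the 8-bit spectral bands, which a purely spectral per-pixel classifier cannot do.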
Figure 4: Setting the DEM parameter to determine the urbanization area
Post Classification Analysis
In order to make a direct comparison of accuracy between the pixel-based and polygon-based classification results, the accuracy assessment was carried out in the same environment. The program automatically picked out 300 random sample points, plus 15 ground truth points, for the accuracy assessment. The statistical results are shown in Table 2. The overall accuracy is higher for the polygon-based classification result.
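The overall accuracy reported below is simply the fraction of sample points whose classified label matches the reference label. A minimal sketch with illustrative labels (not the actual 300-point sample):

```python
from collections import Counter

# Paired reference (ground truth) and classified labels at sample points
reference  = ["forest", "forest", "urban", "water", "urban", "forest"]
classified = ["forest", "urban",  "urban", "water", "urban", "forest"]

# Overall accuracy: share of points where the two labels agree
matches = sum(r == c for r, c in zip(reference, classified))
overall_accuracy = 100.0 * matches / len(reference)
# 5 of 6 points agree -> 83.333... %

# Tallying (reference, classified) pairs gives the confusion matrix;
# off-diagonal entries such as ("forest", "urban") are errors
confusion = Counter(zip(reference, classified))
```

Per-class accuracies like those in Table 2 come from the same confusion matrix, read along each reference class's row.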
Table 2: Accuracy assessment
Accuracy statistics      Pixel-based (%)   Polygon-based (%)
Forest                   81.507            94.972
Water body               94.118            100.000
Urban area               88.889            86.111
Bare or clear land       72.727            83.333
Orchard                  75.000            84.000
Rubber and scrub         84.906            82.353
Cloud                    83.333            100.000
Shadow                   33.333            100.000
Overall accuracy         81.667            90.667
Conclusion
In this paper, an object-oriented analysis technique has been introduced for classification, and the results are satisfactory for land cover mapping. The proposed technique was successfully tested on a Landsat TM image. The results presented in this paper show the efficiency and higher accuracy of polygon-based classification. This technique is recommended for testing on VHR data such as IKONOS imagery or aerial photos, especially in town areas where more detailed classes can be generated.
References
- Ambiente Humano, 2000. "eCognition and Change Detection: Integrating Aerial Photos and Satellite Images." eCognition Application Notes, Vol. 1, No. 2, September 2000.
- Emily Wilson and Dan Civco, 2002. "Research on Improved Land Use Information Derived from Landsat and IKONOS." Laboratory for Earth Resources Information Systems, Department of Natural Resources Management and Engineering, The University of Connecticut, USA.
- Ioannis Manakos, 2001. "eCognition and Precision Farming." eCognition Application Notes, Vol. 2, No. 2, April 2001.
- Martin Baatz, Markus Heynen, Peter Hofmann, Iris Lingenfelder, Matthias Mimler, Arno Schäpe, Michaela Weber and Gregor Willhauck, 2001. "eCognition User Guide 2.0: Object Oriented Image Analysis." Definiens Imaging GmbH, Trappentreustrasse 1, 80339 München, Germany.