
Automated cartographic line tracking


Pakorn Apaphant, Ph.D.
Researcher, Remote Sensing Division
National Research Council of Thailand
196 Paholyotin Rd., Jatujak, Bangkok 10900
Tel. (66)-2-940-6997 Fax. (66)-2-579-5618
E-mail:[email protected]

Keywords: Line tracking, Line extraction, Skeletonization

Abstract
The line is often regarded as one of the most valuable features in a map, yet the cartographic line tracking tools found in most commercial software are typically semi-automated. This research studied and proposed an automatic method consisting of a line thinning step and a line following step. The algorithm has been examined using several scanned contour maps, and an experimental result is included herein.

Introduction
The simplest and perhaps the most useful feature in a map is the line. This feature carries essential structural information that is useful for further analysis. Although the line extraction process requires a basic understanding of computer vision, many researchers in cartography have studied such techniques and applied them to their own discipline. Some researchers are interested in detecting lines directly from an original image (Apaphant and Bethel, 2000), while others concentrate on tracking this feature from a line image. Besides extracting important information from an image, one of their common goals is to reduce memory space. Many line tracking tools are now available in commercial GIS software; however, they are semi-automatic processes, which can be slow and expensive, and some tools even create artifacts. Hence, automating the process can bring considerable benefits to users of GIS systems.

To track lines in a line image, two basic steps are required: skeletonization and line following. Some research has emphasized development of only one of these steps. Skeletonization is the process that reduces thick lines into single-pixel-wide lines. All 256 possible surrounding conditions of the pixel of interest were employed for line-thinning decisions (Landy and Cohen, 1985); nevertheless, that method produced poor results in some specific cases. The medial axis method detected skeletons based on the maximum distance from all edges of the original line (Peuquet, 1981); this method could not guarantee that the original medial-axis information is always preserved. Line following is the process that identifies the series of coordinates in each individual thinned line. It is categorized into two approaches (Rosenfeld and Kak, 1982). The scan-line based method searches for an object by scanning on either a row-by-row or a column-by-column basis (Peuquet, 1981). The object-based method typically first detects the endpoints of lines (Greenlee, 1987; Moore, 1992); each object is then followed from its beginning node through its ending node. Motivated by these problems, an automated line tracking method is proposed in this paper. It improves and assembles the two basic steps into one suite, without human intervention.

Skeletonization
Thinning, also called skeletonization, is the process that reduces thick lines into lines of single-pixel thickness. It reduces the data to be stored while bringing out the structure of the pattern. The remaining pixels of each object form a line-string along the medial line of the original object. Since there are many possible applications of thinning, it is difficult to design criteria that satisfy all of them. In cartography and GIS applications, two criteria are often required. Firstly, the removal must not change the continuity of the original objects; that is, the removal operator must not create any discontinuities or holes. Secondly, the shape of an object must be preserved. However, due to flawed decisions during thinning, some small holes may be created. This problem is addressed by a procedure called gap filling, which searches for pixels that appear to be gaps and replaces them with object pixels. For the sake of completeness, the thinning algorithm is therefore divided into two steps: thinning and gap filling.

Figure 1: North border segments (N)
The proposed algorithm is based on the peeling approach: the thickness of a line is reduced by one unit at a time on each border of an object until only its skeleton remains. We first define a border point of an object as an object point with at least one non-object point among its eight neighbors. We divide the border points into four types, i.e., north, south, west, and east borders. For instance, the object points marked with the character “N” in Figure 1 are the north border points. Another term employed in this thinning algorithm is the obese point. A point p is a candidate for an obese point if originally there is only one region adjacent to p under the four-neighbor definition; if p is removed and that region is still preserved, then p is called an obese point. The thinning process iteratively deletes only the border segments that are obese points, on the north, south, east, and west borders consecutively. For example, in the first iteration, all north border points categorized as obese points are marked for deletion and then removed simultaneously for the entire image. The thinned image and the original one are then compared pixel by pixel over the entire image. If they are exactly the same, the iteration stops and the skeleton is the result of this pass; otherwise the original image is updated, and the updated image is thinned along the south border and compared again. If thinning is still required, the same procedure is performed in the east and west directions consecutively. After the updating, if the skeleton image has still not been obtained, the next iteration continues with the same procedure. The iteration stops when there are no further deletions. Note that changing the thinning order may cause slight differences in the final result. Figure 2 illustrates the thinning procedure: Figure 2B is the output from thinning the north obese segments of the original image (Figure 2A), Figure 2C is the result after the first iteration, and Figure 2D shows the final skeletons.
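To make the peeling step concrete, a minimal Python/NumPy sketch is given below. It assumes a binary image with 1 for object pixels and 0 for background, surrounded by a one-pixel background margin; the obese-point test is approximated by a crossing-number (connectivity-preserving) check, so the code illustrates the idea rather than reproducing the exact operator used in this research.

import numpy as np

def neighbours(img, r, c):
    # 8-neighbours of (r, c), clockwise starting from the north
    return [img[r-1, c], img[r-1, c+1], img[r, c+1], img[r+1, c+1],
            img[r+1, c], img[r+1, c-1], img[r, c-1], img[r-1, c-1]]

def crossings(nb):
    # number of 0 -> 1 changes when walking the 8-neighbourhood cyclically
    return sum(nb[i] == 0 and nb[(i + 1) % 8] == 1 for i in range(8))

def peel(img):
    # one pass: peel the north, south, east and west borders in turn
    for dr, dc in [(-1, 0), (1, 0), (0, 1), (0, -1)]:   # background side defining each border
        marked = []
        for r in range(1, img.shape[0] - 1):
            for c in range(1, img.shape[1] - 1):
                if img[r, c] == 1 and img[r + dr, c + dc] == 0:   # border point
                    nb = neighbours(img, r, c)
                    # connectivity-preserving stand-in for the obese-point test:
                    # a single run of object neighbours, and neither an end point
                    # nor a fully interior pixel
                    if crossings(nb) == 1 and 1 < sum(nb) < 7:
                        marked.append((r, c))
        for r, c in marked:          # simultaneous deletion for this border direction
            img[r, c] = 0
    return img

def thin(binary):
    # repeat peeling passes until nothing changes
    img = binary.copy()
    while True:
        before = img.copy()
        peel(img)
        if np.array_equal(before, img):
            return img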

The point removal process may create small gaps in some places. To preserve the continuity of the lines, these gaps must be filled: such pixels, if found, are converted to object points. A problem that often occurs in other algorithms is bridging between adjacent objects; it is handled in this study by not allowing gaps between parallel lines to be filled. Note that the proposed algorithm removes object pixels systematically. Using this approach, the number of iterations needed to thin an object is approximately one half of the object's thickness measured in pixel units. The resulting figure should be a line drawing retaining the continuity and shape information of the original. These thinned lines lie along the medial regions of the original objects; they therefore conform closely to the original, only thinner (Figure 3).
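A correspondingly small sketch of the gap-filling pass follows, under the same image conventions as above. The rule used here, filling a background pixel only when exactly two dangling line ends meet at it, is a hypothetical stand-in for the parallel-line safeguard described in this section.

def object_neighbours(img, r, c):
    # number of object pixels in the 8-neighbourhood of (r, c);
    # assumes the one-pixel background margin mentioned earlier
    return int(img[r-1:r+2, c-1:c+2].sum()) - int(img[r, c])

def fill_gaps(img):
    # fill a background pixel only when exactly two dangling line ends
    # (pixels with a single object neighbour) meet at it, so that
    # neighbouring parallel lines are never bridged
    out = img.copy()
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            if img[r, c] != 0:
                continue
            ends = sum(1
                       for rr in range(r - 1, r + 2)
                       for cc in range(c - 1, c + 2)
                       if (rr, cc) != (r, c)
                       and img[rr, cc] == 1
                       and object_neighbours(img, rr, cc) == 1)
            if ends == 2:
                out[r, c] = 1
    return out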

Figure 2: Thinning procedure. (A) Original image; (B) after thinning the north obese segments; (C) after the first iteration; (D) final skeletons.

Line Following
A line can be expressed as an equation if its geometry is simple enough to be defined by a simple mathematical equation. Unfortunately, most lines in cartographic applications have complex geometry. In addition, database queries in geographic information systems are mostly framed in terms of the coordinates of figures on the map. A curved cartographic line is therefore often expressed as a string of consecutive head-to-tail vectors. These vectors are normally retrieved using a line following technique, which identifies the series of coordinates in each individual line. This process would be complicated if the ending nodes of each line-string were unknown; therefore node marking is a required preprocessing step for line following.

Figure 3: Skeletons overlaid on the original scanned portion
One way to build up a vector by coordinates is to begin with the endpoints and then find the coordinates of the individual line between these two points. Hence, after the lines are thinned, node detection is performed on every object pixel. The decision rule for whether or not a pixel is a node is based on characteristics of the 256 possible combinations of the eight connected neighboring pixels. Firstly, the number of object pixels among the eight neighbors, denoted N(x), is introduced. The number of transitions between object and background pixels when the eight neighbors are traversed cyclically, denoted T(x), is also employed in this rule. There are three types of node pixel: the isolated node, the junction node, and the terminated node. Generally, isolated nodes are black spots in an image. Only terminated nodes and junction nodes may be the ending points of a line-string, and a junction node may be the ending point of several line-strings. If the pixel of interest is denoted by x, the following rules are applied (a small code sketch of these rules follows the list).

1. If N(x) = 0, x is an isolated node.
2. If N(x) < 3 and T(x) = 2, x is a terminated node.
3. If N(x) > 2 and T(x) = 2, x is a junction node.
4. If T(x) > 5, x is a junction node.
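The node-marking rules translate almost directly into code. The sketch below takes the thinned binary image from the previous step; it counts T(x) as transitions in both directions around the neighborhood, since otherwise rule 4 could never be satisfied, and the function name classify_nodes is introduced here for illustration only.

def classify_nodes(skel):
    # label object pixels of the thinned image using the N(x)/T(x) rules
    nodes = {}
    for r in range(1, skel.shape[0] - 1):
        for c in range(1, skel.shape[1] - 1):
            if skel[r, c] != 1:
                continue
            nb = [skel[r-1, c], skel[r-1, c+1], skel[r, c+1], skel[r+1, c+1],
                  skel[r+1, c], skel[r+1, c-1], skel[r, c-1], skel[r-1, c-1]]
            n_x = int(sum(nb))                                       # N(x)
            t_x = sum(nb[i] != nb[(i + 1) % 8] for i in range(8))    # T(x), both directions
            if n_x == 0:
                nodes[(r, c)] = "isolated"                           # rule 1
            elif n_x < 3 and t_x == 2:
                nodes[(r, c)] = "terminated"                         # rule 2
            elif (n_x > 2 and t_x == 2) or t_x > 5:
                nodes[(r, c)] = "junction"                           # rules 3 and 4
    return nodes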

Once the coordinates of all nodes are obtained, line following can begin. From the node list, the first node is retrieved and the line-string attached to this node is tracked until its other end is reached. The process is repeated for the entire node list. An ambiguity can occur when a line intersection is met; in this research, the decision of which direction to follow is made automatically.
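A sketch of the following step, under the same assumptions, is given below. At a junction, the first unvisited neighbor in a fixed clockwise scan is taken, which merely stands in for the automatic direction decision described above; only strings that start at terminated nodes are traced in this simplified version.

# 8-neighbour offsets in clockwise order, shared with the node-marking sketch
OFFSETS = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]

def follow_line(skel, nodes, start, visited):
    # trace one line-string from `start` until another node (or a dead end) is reached;
    # `nodes` is the dictionary produced by classify_nodes(), and `visited` collects
    # pixels already assigned to some line-string
    line = [start]
    visited.add(start)
    r, c = start
    while True:
        nxt = None
        for dr, dc in OFFSETS:
            cand = (r + dr, c + dc)
            if skel[cand] == 1 and cand not in visited:
                nxt = cand          # first unvisited object neighbour wins: this is the
                break               # automatic choice made when an intersection is met
        if nxt is None:
            return line             # dead end: the string is complete
        line.append(nxt)
        visited.add(nxt)
        if nxt in nodes and nodes[nxt] != "isolated":
            return line             # the other end node has been reached
        r, c = nxt

def track_all(skel, nodes):
    # follow every line-string that starts at a terminated node
    visited, lines = set(), []
    for start, kind in nodes.items():
        if kind == "terminated" and start not in visited:
            lines.append(follow_line(skel, nodes, start, visited))
    return lines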

Figure 4: A portion of the scanned contour map
Experiments
A portion of a scanned contour map sheet at a scale of 1:50,000 was used in this experiment (Figure 4). It covers an area in the northern part of Thailand. The chosen part comprises several types of terrain, including rivers, mountains, and ridges. The minimum elevation is 480 meters and the maximum elevation is 941 meters, and the contour lines are very dense. The original line thickness falls between 5 and 11 pixels. For the sake of simplicity, all contour annotations and other text appearing in the original scanned image were removed before the tracking algorithm was performed. Some features in the original image are poorly defined, and there are some breaks in lines in the original image. The grayscale scanned image was thresholded to a black-and-white image to differentiate the object pixels from the background pixels.
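The thresholding step might look like the following sketch; the cut-off value of 128 is purely illustrative and is not the value used in this experiment.

import numpy as np

def binarize(gray, cutoff=128):
    # dark line work becomes object (1), light paper becomes background (0);
    # the cut-off of 128 is an assumed example value
    return (np.asarray(gray) < cutoff).astype(np.uint8)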

The output image after thinning and gap filling was acceptable in certain regions, although some breaks in lines still appeared in the output. In general, the thinning operators worked very well on the contour map: all lines became single-pixel-wide features. The node marking algorithm was then employed to find all types of nodes in the thinned image, and each thinned line was followed, line by line, from its starting node until its ending node was reached. Note that no action was taken on the isolated nodes.

In order to use this contour data for constructing a digital elevation model (DEM), the elevation attribute for these contour lines was recorded, and the individual contour lines were tagged with an elevation attribute. These XYZ contour line points could be used in several ways to build a DEM; taken as random spot heights, they can be processed into a TIN. A constant-shade rendering is shown in Figure 5.
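As an illustration of the TIN route, the tagged XYZ points can be passed to a Delaunay-based linear interpolator to rasterize a DEM. The interface below (contours_to_dem with a 30-meter grid step) is an assumed example using SciPy, not part of the original processing chain.

import numpy as np
from scipy.interpolate import LinearNDInterpolator   # TIN-style linear interpolation

def contours_to_dem(points, grid_step=30.0):
    # points: (n, 3) array of X, Y, Z taken from the followed contour lines and their
    # elevation tags; grid_step is an illustrative 30 m DEM spacing
    points = np.asarray(points, dtype=float)
    xy, z = points[:, :2], points[:, 2]
    tin = LinearNDInterpolator(xy, z)                 # Delaunay triangulation underneath
    xs = np.arange(xy[:, 0].min(), xy[:, 0].max(), grid_step)
    ys = np.arange(xy[:, 1].min(), xy[:, 1].max(), grid_step)
    gx, gy = np.meshgrid(xs, ys)
    return gx, gy, tin(gx, gy)                        # NaN outside the convex hull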

Figure 5: A shaded rendering of the terrain model
Conclusions
This paper has described an automatic line tracking technique for contour imagery. It integrates two crucial steps into one suite. The thinning operators preserve the shape of the objects while creating no artifacts or holes. Finally, the algorithm generates the series of coordinates of each thinned line between two detected endpoints. Using this approach, the memory space required to store the data is reduced while most distinct patterns are retained. The experimental results suggest that this approach offers an alternative method for automatic line tracking.

References

  • Apaphant, P. and Bethel, J., 2000. In: International Archives of Photogrammetry and Remote Sensing, Amsterdam, The Netherlands, Vol. XXXIII, Part B3, pp. 267-273.
  • Greenlee, D. D., 1987. Raster to vector processing for scanned linework. Photogrammetric Engineering and Remote Sensing, Vol. 53, No. 10, pp. 1383-1387.
  • Landy, M. S. and Cohen, Y., 1985. Vectorgraph coding: Efficient coding of line drawing. Computer Graphics and Image Processing, Vol. 30, pp. 331-344.
  • Moore, L. R., 1971. Software for cartographic raster to vector conversion. In: International Archives of Photogrammetry and Remote Sensing, Washington, D.C., Vol. XIV, No. 5, pp. 335-345.
  • Peuquet, D. J., 1981. An examination of techniques for reformatting digital cartographic data / Part 1: The raster to vector process. Cartographica, Vol. 18, No. 1, pp. 34-38.
  • Rosenfeld, A. and Kak, A. C., 1982. Digital Picture Processing (2nd edition), Academic Press, Vol. 2, pp. 55-163.