The Recognition of the Point Symbols in the Scanned Topographic Maps
| Published in | IEEE Transactions on Image Processing, Vol. 26, No. 6, pp. 2751-2766 |
|---|---|
| Main Authors | , , , , , |
| Format | Journal Article |
| Language | English |
| Published | United States: IEEE, 01.06.2017 |
| Subjects | |
| Online Access | Get full text |
| Summary | It is difficult to separate point symbols accurately from scanned topographic maps, which makes their recognition challenging. In this paper, based on the framework of the generalized Hough transform (GHT), we propose a new algorithm, named shear line segment GHT (SLS-GHT), that recognizes point symbols directly in scanned topographic maps. SLS-GHT combines the line segment GHT (LS-GHT) with the shear transformation. On the one hand, LS-GHT is proposed to represent the features of point symbols more completely: its R-table has two levels of indices, the first being the color information of the point symbols and the second the slope of the line segment connecting a pair of skeleton points. On the other hand, the shear transformation is introduced to enrich the directional features of the point symbols, indirectly compensating for the directional limitation of LS-GHT. In this way, point symbols are detected by LS-GHT in a series of sheared maps, and the final optimal coordinates of the setpoints are obtained from the series of recognition results. SLS-GHT detects point symbols directly in the scanned topographic maps, in contrast to the traditional pattern of extraction before recognition. Moreover, several experiments demonstrate that the proposed method achieves better recognition in complex scenes than existing methods. (An illustrative code sketch of the shear-then-detect idea follows this record.) |
| ISSN | 1057-7149; 1941-0042 |
| DOI | 10.1109/TIP.2016.2613409 |
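
The summary describes the core SLS-GHT loop: shear the map with several factors, run the line-segment GHT detector on each sheared copy, and keep the best result mapped back to the original coordinate frame. The following is a minimal sketch of that shear-then-detect loop only, not the authors' implementation: OpenCV template matching stands in for LS-GHT voting, and the shear factors and the `shear_matrix` / `detect_symbol` helpers are illustrative assumptions rather than values or functions from the paper.

```python
import cv2
import numpy as np

def shear_matrix(k, height):
    """2x3 affine matrix for a horizontal shear x' = x + k*y + tx, y' = y.
    tx shifts the result right so negative shear factors stay inside the image."""
    tx = max(0.0, -k * height)
    return np.float32([[1.0, k, tx],
                       [0.0, 1.0, 0.0]]), tx

def detect_symbol(image, template):
    """Placeholder detector: best normalized template-match score and location.
    In SLS-GHT this role is played by LS-GHT voting, not template matching."""
    scores = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, best_loc = cv2.minMaxLoc(scores)
    return best_score, best_loc

def sls_detect(map_img, template, shear_factors=(-0.2, -0.1, 0.0, 0.1, 0.2)):
    """Detect one symbol in a series of sheared maps and return the best hit,
    mapped back to the coordinate frame of the original (unsheared) map."""
    h, w = map_img.shape[:2]
    best = None
    for k in shear_factors:
        M, tx = shear_matrix(k, h)
        out_w = w + int(abs(k) * h)          # widen the canvas so content is not clipped
        sheared = cv2.warpAffine(map_img, M, (out_w, h))
        score, (x, y) = detect_symbol(sheared, template)
        # Invert the shear for the detected location: x = x' - k*y' - tx, y = y'.
        x0, y0 = x - k * y - tx, float(y)
        if best is None or score > best[0]:
            best = (score, (x0, y0), k)
    return best  # (score, location in the original map, shear factor used)

# Example usage with hypothetical file names:
# map_img = cv2.imread("map.png", cv2.IMREAD_GRAYSCALE)
# template = cv2.imread("symbol.png", cv2.IMREAD_GRAYSCALE)
# score, (x, y), k = sls_detect(map_img, template)
```

Because a horizontal shear is affine and invertible, a detection in any sheared copy can be mapped back exactly to the original map, which is how hits obtained under different shear factors can be compared in a common frame before the best one is kept.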