Geodesic Invariant Feature: A Local Descriptor in Depth

Bibliographic Details
Published in: IEEE Transactions on Image Processing, Vol. 24, No. 1, pp. 236-248
Main Authors: Yazhou Liu, Pongsak Lasang, Mel Siegel, Quansen Sun
Format: Journal Article
Language: English
Published: United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2015

Summary: Unlike photometric images, depth images resolve the distance ambiguity of the scene, but their properties, such as weak texture, high noise, and low resolution, may limit the representational power of well-developed descriptors that were carefully designed for photometric images. In this paper, a novel depth descriptor, the geodesic invariant feature (GIF), is presented for representing the parts of articulated objects in depth images. GIF is a multilevel feature representation framework built on the nature of depth images. At the low level, the geodesic gradient is introduced to obtain invariance to articulated motion, such as scale and rotation variation. At the mid level, superpixel clustering is applied to reduce depth-image redundancy, resulting in faster processing and better robustness to noise. At the high level, a deep network is used to exploit the nonlinearity of the data, which further improves classification accuracy. The proposed descriptor encodes local structures in depth data effectively and efficiently. Comparisons with state-of-the-art methods show the superiority of the proposed method.
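The paper's full definition of the geodesic gradient is not included in this record, so the following is only a minimal sketch of the underlying idea: distances measured geodesically along the surface implied by a depth image change little under articulated motion, whereas straight-line image-plane distances do not. The sketch runs Dijkstra's algorithm on the 4-connected pixel grid with depth-aware edge weights; all names and parameters here (geodesic_distance_map, depth_scale, the synthetic ramp example) are illustrative assumptions, not part of the paper's method.

```python
import heapq
import numpy as np

def geodesic_distance_map(depth, seed, depth_scale=1.0):
    """Approximate geodesic distances over the surface implied by a depth image.

    Dijkstra on the 4-connected pixel grid; the cost of stepping between
    neighbouring pixels combines the unit image-plane step with the depth
    difference (scaled by depth_scale). seed is a (row, col) tuple, e.g. an
    extremity or the centroid of the object of interest.
    """
    h, w = depth.shape
    dist = np.full((h, w), np.inf)
    dist[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist[r, c]:
            continue  # stale heap entry
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                # Edge weight: Euclidean step length on the 2.5D surface.
                dz = depth_scale * (depth[nr, nc] - depth[r, c])
                step = float(np.hypot(1.0, dz))
                nd = d + step
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return dist

if __name__ == "__main__":
    # Synthetic ramp-shaped depth patch with the seed at its centre.
    depth = np.tile(np.linspace(1.0, 2.0, 64), (64, 1)).astype(np.float32)
    gmap = geodesic_distance_map(depth, seed=(32, 32))
    gy, gx = np.gradient(gmap)  # direction field anchored to the seed point
```

Taking the gradient of the resulting distance map gives a direction field anchored to the seed rather than to the camera frame, which is the kind of pose-insensitive cue a descriptor for articulated parts could build on.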
ISSN: 1057-7149, 1941-0042
DOI: 10.1109/TIP.2014.2378019