FGPNet: A weakly supervised fine-grained 3D point clouds classification network

Bibliographic Details
Published in: Pattern Recognition, Vol. 139, p. 109509
Main Authors: Shao, Huihui; Bai, Jing; Wu, Rusong; Jiang, Jinzhe; Liang, Hongbo
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.07.2023

Summary:
•Given the necessity and research value of the problem, we are the first to specifically study fine-grained classification on the 3D point cloud representation, providing a new perspective for 3D shape classification.
•Through in-depth analysis of the characteristics of the target objects (3D point clouds) and the key to fine-grained classification tasks, we design a feature extraction module that effectively learns discriminative features.
•To highlight discriminative local regions and capture the spatial differences between 3D point clouds from different sub-categories, we propose a module that captures spatial structure features and aggregates local features.

3D point cloud classification has been a hot research topic and has made great progress in recent years. However, due to the similar data distributions and subtle differences among the various sub-categories within a meta-category, 3D point cloud classification at a fine-grained level remains very challenging, especially without annotations of part locations or attributes. In this paper, we propose a novel weakly supervised network for fine-grained 3D point cloud classification, namely FGPNet. Unlike previous supervised fine-grained classification methods that rely on class labels together with other manual annotations, FGPNet develops a unified framework that addresses both local geometric details and global spatial structures using only class labels as supervision. Specifically, FGPNet first employs a context-aware discriminative feature extraction (CDFE) module, which hierarchically extracts contextually contrasted information across different receptive fields and further captures discriminative local details from point clouds. Subsequently, a SimAM-Capsule Aggregation (SCA) module is introduced to highlight significant local features and capture their spatial relationships. Quantitative and qualitative experimental results on a fine-grained dataset comprising the three categories Airplane, Chair, and Car demonstrate that FGPNet outperforms state-of-the-art methods on fine-grained 3D point cloud classification tasks.
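The SCA module combines SimAM, a parameter-free energy-based attention mechanism, with capsule-style aggregation. As a rough, hypothetical sketch (not the authors' implementation), the snippet below applies SimAM-style energy weighting to per-point features; the (B, C, N) tensor layout, the function name, and the lambda default are assumptions made for illustration.

```python
import torch

def simam_point_attention(x: torch.Tensor, lam: float = 1e-4) -> torch.Tensor:
    """Parameter-free SimAM-style attention over per-point features.

    x: (B, C, N) tensor of point-wise features. Each of the N points in a
    channel is treated as a 'neuron' whose importance grows with its
    deviation from the channel mean (energy-based weighting).
    """
    n = x.shape[-1] - 1
    d = (x - x.mean(dim=-1, keepdim=True)).pow(2)   # squared deviation per point
    v = d.sum(dim=-1, keepdim=True) / n              # per-channel variance estimate
    e_inv = d / (4 * (v + lam)) + 0.5                # inverse of the minimal energy
    return x * torch.sigmoid(e_inv)                  # re-weight salient points
```

In FGPNet, the re-weighted local features would then be aggregated by a capsule layer to encode their spatial relationships; that routing step is omitted from this sketch.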
ISSN: 0031-3203; 1873-5142
DOI: 10.1016/j.patcog.2023.109509