Joint Pedestrian and Body Part Detection via Semantic Relationship Learning


Bibliographic Details
Published in: Applied Sciences, Vol. 9, No. 4, p. 752
Main Authors: Gu, Junhua; Lan, Chuanxin; Chen, Wenbai; Han, Hu
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 21.02.2019

More Information
Summary: While remarkable progress has been made in pedestrian detection in recent years, robust pedestrian detection in the wild, e.g., under surveillance scenarios with occlusions, remains a challenging problem. In this paper, we present a novel approach for joint pedestrian and body part detection via semantic relationship learning under unconstrained scenarios. Specifically, we propose a Body Part Indexed Feature (BPIF) representation to encode the semantic relationship between individual body parts (i.e., head, head-shoulder, upper body, and whole body) and highlight per-body-part features, providing robustness against partial occlusions of the whole body. We also propose an Adaptive Joint Non-Maximum Suppression (AJ-NMS) to replace the original NMS algorithm widely used in object detection, leading to higher precision and recall for detecting overlapping pedestrians. Experimental results on the public-domain CUHK-SYSU Person Search Dataset show that the proposed approach outperforms the state-of-the-art methods for joint pedestrian and body part detection in the wild.
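The abstract contrasts AJ-NMS with the standard non-maximum suppression used in object detection but gives no algorithmic details. For context, here is a minimal sketch of the standard greedy NMS being replaced (plain Python; function names are illustrative, and this is not the authors' AJ-NMS):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: keep the highest-scoring box, suppress all boxes that
    overlap it by more than iou_threshold, then repeat with the remainder.
    Returns the indices of the kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_threshold]
    return keep
```

The fixed overlap threshold is what makes crowded scenes hard: two genuinely distinct but heavily overlapping pedestrians can be merged into a single detection. Adaptive variants such as the paper's AJ-NMS presumably adjust this suppression criterion per detection; the exact mechanism is described in the full text.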
ISSN: 2076-3417
DOI: 10.3390/app9040752