CapeX: Category-Agnostic Pose Estimation from Textual Point Explanation
Format | Journal Article
Language | English
Published | 01.06.2024
Summary: Conventional 2D pose estimation models are constrained by design to specific object categories, which limits their applicability to predefined objects. Category-agnostic pose estimation (CAPE) emerged to overcome this limitation: it aims to localize keypoints for diverse object categories with a unified model that can generalize from minimal annotated support images. Recent CAPE works produce object poses based on arbitrary keypoint definitions annotated on a user-provided support image. Our work departs from these conventional CAPE methods by replacing the support image with a text-based approach. Specifically, we use a pose-graph whose nodes represent keypoints, each described with text. This representation takes advantage of both the abstraction of text descriptions and the structure imposed by the graph. Our approach effectively breaks symmetry, preserves structure, and improves occlusion handling. We validate it on the MP-100 benchmark, a comprehensive dataset spanning over 100 categories and 18,000 images. Under a 1-shot setting, our solution achieves a notable performance boost of 1.07%, establishing a new state of the art for CAPE. Additionally, we enrich the dataset with text description annotations, further enhancing its utility for future research.
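The abstract's central idea is a pose-graph whose nodes are keypoints described with free text rather than annotated on a support image. A minimal sketch of how such a structure might be represented is below; all names and the schema are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Keypoint:
    """A pose-graph node: a keypoint identified by a textual description.
    In the paper's setting, the description would be fed to a text encoder;
    the field names here are hypothetical."""
    name: str          # short label, e.g. "left front paw"
    description: str   # free-text explanation of where the keypoint lies

@dataclass
class PoseGraph:
    nodes: list = field(default_factory=list)   # list[Keypoint]
    edges: list = field(default_factory=list)   # skeleton links as (i, j) node indices

    def add_keypoint(self, name, description):
        """Append a text-described keypoint and return its node index."""
        self.nodes.append(Keypoint(name, description))
        return len(self.nodes) - 1

    def connect(self, i, j):
        """Add an undirected skeleton edge between two keypoints."""
        self.edges.append((i, j))

# Build a tiny quadruped skeleton fragment as a usage example.
g = PoseGraph()
head = g.add_keypoint("head", "top of the animal's head")
neck = g.add_keypoint("neck", "base of the neck, between the shoulders")
paw = g.add_keypoint("left front paw", "tip of the left front leg")
g.connect(head, neck)
g.connect(neck, paw)
```

Text descriptions let the same graph transfer across categories (any object with a "head" and "neck" can reuse the nodes), while the edges encode the structure the abstract credits with breaking symmetry and improving occlusion handling.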
DOI: 10.48550/arxiv.2406.00384