LiftPose3D, a deep learning-based approach for transforming two-dimensional to three-dimensional poses in laboratory animals


Bibliographic Details
Published in: Nature methods, Vol. 18, no. 8, pp. 975-981
Main Authors: Gosztolai, Adam; Günel, Semih; Lobato-Ríos, Victor; Abrate, Marco Pietro; Morales, Daniel; Rhodin, Helge; Fua, Pascal; Ramdya, Pavan
Format: Journal Article
Language: English
Published: New York: Nature Publishing Group, 01.08.2021

Summary: Markerless three-dimensional (3D) pose estimation has become an indispensable tool for kinematic studies of laboratory animals. Most current methods recover 3D poses by multi-view triangulation of deep network-based two-dimensional (2D) pose estimates. However, triangulation requires multiple synchronized cameras and elaborate calibration protocols that hinder its widespread adoption in laboratory studies. Here we describe LiftPose3D, a deep network-based method that overcomes these barriers by reconstructing 3D poses from a single 2D camera view. We illustrate LiftPose3D's versatility by applying it to multiple experimental systems using flies, mice, rats and macaques, and in circumstances where 3D triangulation is impractical or impossible. Our framework achieves accurate lifting for stereotypical and nonstereotypical behaviors from different camera angles. Thus, LiftPose3D permits high-quality 3D pose estimation in the absence of complex camera arrays and tedious calibration procedures and despite occluded body parts in freely behaving animals. LiftPose3D infers three-dimensional poses from two-dimensional data or from limited three-dimensional data. The approach is illustrated for videos of behaving Drosophila, mice, rats and macaques.
ISSN: 1548-7091, 1548-7105
DOI: 10.1038/s41592-021-01226-z