LiftPose3D, a deep learning-based approach for transforming 2D to 3D pose in laboratory animals

Published: Sept. 20, 2020, 11:01 p.m.

Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.09.18.292680v1?rss=1

Authors: Gosztolai, A., Gunel, S., Abrate, M. P., Morales, D., Rios, V. L., Rhodin, H., Fua, P., Ramdya, P.

Abstract: Markerless 3D pose estimation has become an indispensable tool for kinematic studies of laboratory animals. Most current methods recover 3D pose by multi-view triangulation of deep network-based 2D pose estimates. However, triangulation requires multiple, synchronised cameras viewing each keypoint and elaborate calibration protocols that hinder its widespread adoption in laboratory studies. Here, we describe LiftPose3D, a deep network-based method that overcomes these barriers by reconstructing 3D poses from a single 2D camera view. We illustrate LiftPose3D's versatility by applying it to multiple experimental systems using flies, mice, and macaque monkeys, and in circumstances where 3D triangulation is impractical or impossible. Thus, LiftPose3D permits high-quality 3D pose estimation in the absence of complex camera arrays and tedious calibration procedures, and despite occluded keypoints in freely behaving animals.
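The core idea behind such "lifting" methods is a regression network that maps 2D keypoint coordinates from a single camera view to (typically root-relative) 3D keypoint coordinates, trained on paired 2D-3D poses obtained once, for example by triangulation on a calibrated reference setup. The sketch below illustrates that general idea with a small fully connected network in PyTorch; the keypoint count, layer sizes, and training details are assumptions chosen for illustration and are not LiftPose3D's actual architecture or training protocol (see the linked paper for those).

```python
# Minimal sketch of a 2D-to-3D "lifter" (illustrative only, not the authors' model):
# a fully connected network mapping flattened 2D keypoints to 3D keypoints.
import torch
import torch.nn as nn

N_KEYPOINTS = 38  # assumed keypoint count, for illustration only

class Lifter(nn.Module):
    def __init__(self, n_kp: int = N_KEYPOINTS, hidden: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * n_kp, hidden), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(hidden, 3 * n_kp),  # predict x, y, z for every keypoint
        )

    def forward(self, pose_2d: torch.Tensor) -> torch.Tensor:
        # pose_2d: (batch, 2 * n_kp) -> (batch, 3 * n_kp)
        return self.net(pose_2d)

# One training step on placeholder data: supervise with 3D poses collected once
# (e.g. via a calibrated multi-camera rig), then deploy with a single camera.
model = Lifter()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

pose_2d = torch.randn(16, 2 * N_KEYPOINTS)  # placeholder batch of 2D poses
pose_3d = torch.randn(16, 3 * N_KEYPOINTS)  # placeholder ground-truth 3D poses

optimizer.zero_grad()
loss = loss_fn(model(pose_2d), pose_3d)
loss.backward()
optimizer.step()
```

Once a lifter of this kind is trained, new recordings need only a single camera and no calibration at inference time, which is the practical advantage the abstract emphasizes.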