DeepEthogram: a machine learning pipeline for supervised behavior classification from raw pixels

Published: Sept. 25, 2020, 6:01 p.m.

Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.09.24.312504v1?rss=1

Authors: Bohnslav, J. P., Wimalasena, N. K., Clausing, K. J., Yarmolinsky, D., Cruz, T., Chiappe, E., Orefice, L. L., Woolf, C. J., Harvey, C. D.

Abstract: Researchers commonly acquire videos of animal behavior and quantify the prevalence of behaviors of interest to study nervous system function, the effects of gene mutations, and the efficacy of pharmacological therapies. This analysis is typically performed manually and is therefore immensely time-consuming, often limited to a small number of behaviors, and variable across researchers. Here, we created DeepEthogram: software that takes the raw pixel values of videos as input and uses machine learning to output an ethogram, the set of user-defined behaviors of interest present in each frame of a video. We used convolutional neural network models that compute motion in a video, extract features from motion and single frames, and classify these features into behaviors. These models classified behaviors with greater than 90% accuracy on single frames in videos of flies and mice, matching expert-level human performance. The models accurately predicted even extremely rare behaviors, required little training data, and generalized to new videos and subjects. DeepEthogram runs rapidly on common scientific computer hardware and has a graphical user interface that does not require programming by the end user. We anticipate DeepEthogram will enable the rapid, automated, and reproducible assignment of behavior labels to every frame of a video, thus accelerating studies that quantify behaviors of interest.

Code is available at: https://github.com/jbohnslav/deepethogram

Copyright belongs to the original authors. Visit the link for more info.
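The abstract describes a pipeline in which one network stream processes single frames (appearance), another processes computed motion, and the resulting features are classified into per-frame, user-defined behaviors. The sketch below illustrates that general two-stream, multi-label idea in PyTorch; it is a hypothetical, minimal example, not the DeepEthogram implementation (see the GitHub repository for the actual code), and all class names, backbones, and hyperparameters are illustrative assumptions.

```python
# Conceptual sketch (not the actual DeepEthogram code): a two-stream
# per-frame behavior classifier. One CNN ingests a single RGB frame
# (appearance), another ingests a stack of motion channels (e.g., optical
# flow), and their features are fused into per-frame, multi-label
# behavior predictions. All names and sizes here are assumptions.
import torch
import torch.nn as nn
import torchvision.models as models


class TwoStreamEthogramClassifier(nn.Module):
    def __init__(self, num_behaviors: int, flow_channels: int = 20):
        super().__init__()
        # Appearance stream: standard ResNet on a single RGB frame.
        self.spatial = models.resnet18(weights=None)
        spatial_dim = self.spatial.fc.in_features
        self.spatial.fc = nn.Identity()

        # Motion stream: same backbone, but the first conv accepts a
        # stack of motion channels (e.g., 10 flow fields x 2 components).
        self.motion = models.resnet18(weights=None)
        self.motion.conv1 = nn.Conv2d(
            flow_channels, 64, kernel_size=7, stride=2, padding=3, bias=False
        )
        motion_dim = self.motion.fc.in_features
        self.motion.fc = nn.Identity()

        # Fusion head: one logit per user-defined behavior (multi-label).
        self.head = nn.Linear(spatial_dim + motion_dim, num_behaviors)

    def forward(self, frame: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([self.spatial(frame), self.motion(flow)], dim=1)
        return self.head(feats)  # raw logits; apply sigmoid for probabilities


# Example usage: classify one frame into 5 user-defined behaviors.
model = TwoStreamEthogramClassifier(num_behaviors=5)
frame = torch.randn(1, 3, 224, 224)   # single RGB frame
flow = torch.randn(1, 20, 224, 224)   # stacked motion channels
probs = torch.sigmoid(model(frame, flow))
ethogram_row = (probs > 0.5).int()    # binary behavior labels for this frame
```

Applying such a classifier to every frame of a video yields a frames-by-behaviors binary matrix, which is the ethogram output the paper describes.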