# QuaterNet

**Repository Path**: facebookresearch/QuaterNet

## Basic Information

- **Project Name**: QuaterNet
- **Description**: Proposes neural networks that generate animations of virtual characters performing different actions.
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2023-07-23
- **Last Updated**: 2024-10-21

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# QuaterNet: A Quaternion-based Recurrent Model for Human Motion

This is the implementation of the approach described in the paper:

> Dario Pavllo, David Grangier, and Michael Auli. [QuaterNet: A Quaternion-based Recurrent Model for Human Motion](https://arxiv.org/abs/1805.06485). In *British Machine Vision Conference (BMVC)*, 2018.

We provide the code for reproducing our results (*short-term prediction*) and for generating/rendering locomotion animations (*long-term generation*), as well as pre-trained models.

### Abstract

Deep learning for predicting or generating 3D human pose sequences is an active research area. Previous work regresses either joint rotations or joint positions. The former strategy is prone to error accumulation along the kinematic chain, as well as discontinuities when using Euler angle or exponential map parameterizations. The latter requires re-projection onto skeleton constraints to avoid bone stretching and invalid configurations. This work addresses both limitations. Our recurrent network, *QuaterNet*, represents rotations with quaternions, and our loss function performs forward kinematics on a skeleton to penalize absolute position errors instead of angle errors. On short-term predictions, *QuaterNet* improves the state of the art quantitatively. For long-term generation, our approach is qualitatively judged as realistic as recent neural strategies from the graphics literature.
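To make the forward-kinematics loss idea concrete, here is a minimal NumPy sketch of quaternion-based forward kinematics on a simple chain: per-joint rotations (as unit quaternions) are accumulated from the root, and bone offsets are rotated into world space, so a loss can be placed on joint *positions* rather than angles. This is an illustrative sketch only, not the repository's implementation; the names `qmul`, `qrot`, and `forward_kinematics` are hypothetical.

```python
import numpy as np

def qmul(q, r):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def qrot(q, v):
    """Rotate a 3-vector v by a unit quaternion q: q * (0, v) * q_conjugate."""
    qv = np.concatenate([[0.0], v])
    qc = q * np.array([1.0, -1.0, -1.0, -1.0])  # conjugate of a unit quaternion
    return qmul(qmul(q, qv), qc)[1:]

def forward_kinematics(rotations, offsets):
    """Accumulate local rotations down a single chain; return world joint positions."""
    positions = [np.zeros(3)]
    world_q = np.array([1.0, 0.0, 0.0, 0.0])  # identity rotation at the root
    for q, offset in zip(rotations, offsets):
        world_q = qmul(world_q, q)                        # compose with parent rotation
        positions.append(positions[-1] + qrot(world_q, offset))
    return np.stack(positions)

# Example: two bones of length 1 along x; the root joint is rotated 90° about z.
half = np.pi / 4
rotations = [np.array([np.cos(half), 0.0, 0.0, np.sin(half)]),  # 90° about z
             np.array([1.0, 0.0, 0.0, 0.0])]                    # identity
offsets = [np.array([1.0, 0.0, 0.0])] * 2
print(forward_kinematics(rotations, offsets))  # joints at ~(0,1,0) and ~(0,2,0)
```

A position loss would then compare these world positions against ground truth (e.g. a mean per-joint distance), which is what lets the network train on rotations while being penalized in position space.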
## Dependencies

- Python 3+ distribution
- PyTorch >= 0.4.0
- NumPy and SciPy

Optional:

- Matplotlib, if you want to render and display interactive animations. Additionally, you need *ffmpeg* to export MP4 videos and *imagemagick* to export GIFs.
- A decent GPU, if you want to train the models in reasonable time. If you only plan on testing the pretrained models, the CPU is fine.

The scripts automatically detect whether a GPU is available. If you want to force CPU training/generation, set the `CUDA_VISIBLE_DEVICES` environment variable to an empty value. Within bash, you can limit its scope to a single command by prefixing it, e.g. `CUDA_VISIBLE_DEVICES= python …`.
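For example, the environment variable can be cleared either per command or for the whole session (a generic bash sketch; substitute whichever script you actually run for the `python -c` check):

```shell
# Force CPU for a single command by clearing CUDA_VISIBLE_DEVICES;
# PyTorch then sees no CUDA devices and falls back to the CPU.
CUDA_VISIBLE_DEVICES= python -c "import os; print(repr(os.environ.get('CUDA_VISIBLE_DEVICES')))"

# Or clear it for the rest of the shell session:
export CUDA_VISIBLE_DEVICES=
```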