Code and models used in "MUSS: Multilingual Unsupervised Sentence Simplification by Mining Paraphrases".
The Compositional Perturbation Autoencoder (CPA) is a deep generative framework for learning the effects of perturbations at the single-cell level. CPA performs out-of-distribution (OOD) predictions for unseen drug combinations, learns interpretable embeddings, estimates dose-response curves, and provides uncertainty estimates.
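A minimal sketch of the compositional idea, assuming a PyTorch setup: a basal cell embedding is shifted by dose-scaled perturbation embeddings and decoded back to expression space. All class and variable names here are hypothetical illustrations, not the repo's API.

```python
import torch
import torch.nn as nn

class CompositionalLatent(nn.Module):
    def __init__(self, n_genes: int, n_drugs: int, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Linear(n_genes, latent_dim)      # basal-state encoder
        self.drug_emb = nn.Embedding(n_drugs, latent_dim)  # one embedding per drug
        self.decoder = nn.Linear(latent_dim, n_genes)      # reconstructs expression

    def forward(self, expr, drug_idx, log_dose):
        z_basal = self.encoder(expr)
        # Scale each drug embedding by a simple dose response (sigmoid of log-dose).
        shift = torch.sigmoid(log_dose).unsqueeze(-1) * self.drug_emb(drug_idx)
        return self.decoder(z_basal + shift)

model = CompositionalLatent(n_genes=2000, n_drugs=10)
expr = torch.randn(4, 2000)  # batch of expression profiles
out = model(expr, torch.tensor([0, 1, 2, 3]), torch.zeros(4))
print(out.shape)  # torch.Size([4, 2000])
```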
Code to accompany "Human-Level Performance in No-Press Diplomacy via Equilibrium Search", published at ICLR 2021.
Repo for the arXiv preprint "Gradient-based Adversarial Attacks against Text Transformers".
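The core trick in this line of work is to search over a relaxed distribution of token sequences so gradients can flow through discrete text. Below is a toy sketch of that Gumbel-softmax relaxation; the objective is a stand-in, not the paper's loss or code.

```python
import torch
import torch.nn.functional as F

vocab_size, seq_len, emb_dim = 1000, 8, 16
embedding = torch.randn(vocab_size, emb_dim)  # frozen victim-model token embeddings
# Parameterize the adversarial text as per-position logits over the vocabulary.
logits = torch.zeros(seq_len, vocab_size, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)

for _ in range(10):
    # Gumbel-softmax yields differentiable "soft tokens".
    soft_tokens = F.gumbel_softmax(logits, tau=1.0, hard=False)
    inputs = soft_tokens @ embedding  # expected input embeddings
    loss = -inputs.norm()             # toy stand-in for an adversarial loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```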
Arduino library for reading capacitive measurements from the TI FDC1004.
PyTorch code for training Vision Transformers with DINO, a self-supervised learning method.
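For orientation, a minimal sketch of the DINO-style objective: the student is trained to match a centered, sharpened teacher distribution, with a stop-gradient on the teacher. This is an illustration under simplified assumptions (the center is an EMA statistic in practice), not the repo's implementation.

```python
import torch
import torch.nn.functional as F

def dino_loss(student_out, teacher_out, center, tau_s=0.1, tau_t=0.04):
    """Cross-entropy between a centered, sharpened teacher and the student."""
    t = F.softmax((teacher_out - center) / tau_t, dim=-1).detach()  # stop-gradient
    log_s = F.log_softmax(student_out / tau_s, dim=-1)
    return -(t * log_s).sum(dim=-1).mean()

# Toy usage: projection outputs for two augmented views of the same images.
student_out = torch.randn(8, 256, requires_grad=True)
teacher_out = torch.randn(8, 256)
center = teacher_out.mean(dim=0, keepdim=True)  # in practice an EMA statistic
loss = dino_loss(student_out, teacher_out, center)
loss.backward()
```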
LabGraph is a Python framework for rapidly prototyping experimental systems for real-time streaming applications. It is particularly well-suited to real-time neuroscience, physiology and psychology experiments.
ReConsider is a re-ranking model that re-ranks the top-K (passage, answer-span) predictions of an open-domain QA model such as DPR (Karpukhin et al., 2020).
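In generic terms, re-ranking takes candidate (passage, span) pairs and sorts them by a learned score. The sketch below uses a hypothetical toy scorer in place of ReConsider's actual model.

```python
from typing import Callable, List, Tuple

def rerank(candidates: List[Tuple[str, str]],
           score_fn: Callable[[str, str, str], float],
           question: str) -> List[Tuple[str, str]]:
    """Sort (passage, span) pairs by descending score under score_fn."""
    return sorted(candidates, key=lambda c: score_fn(question, c[0], c[1]), reverse=True)

# Toy scorer: token overlap between the question and the candidate span.
def toy_score(question: str, passage: str, span: str) -> float:
    q = set(question.lower().split())
    return len(q & set(span.lower().split())) / (len(span.split()) or 1)

cands = [("passage one ...", "Paris"), ("passage two ...", "the city of Paris")]
print(rerank(cands, toy_score, "What is the capital of France?"))
```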
Code for the CVPR 2021 oral paper "Exploring Data-Efficient 3D Scene Understanding with Contrastive Scene Contexts".
A method for estimating 3D hand gestures in body language from 3D body motion input.
We create D3D-HOI, a dataset of monocular videos with ground-truth annotations of 3D object pose and part motion during human-object interactions.
Trains Transformer model variants. The training data is not shuffled between batches, so batch order stays fixed.
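In PyTorch terms, keeping batch order fixed usually just means constructing the DataLoader with shuffle=False; a minimal illustration, not this repo's training script:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.arange(10).float())
# shuffle=False keeps batches in dataset order across the epoch.
loader = DataLoader(dataset, batch_size=4, shuffle=False)
for (batch,) in loader:
    print(batch)  # batches arrive in order: [0..3], [4..7], [8..9]
```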
A repository for benchmarking neural vocoders by their quality and speed.
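Vocoder speed is commonly reported as the real-time factor (RTF): seconds of compute per second of generated audio. A small sketch of how one might measure it, with a toy stand-in vocoder and assumed hop length and sample rate:

```python
import time
import torch

def real_time_factor(vocoder, mel, sample_rate=22050, hop_length=256):
    """Seconds of compute per second of audio; < 1.0 means faster than real time."""
    start = time.perf_counter()
    with torch.no_grad():
        audio = vocoder(mel)
    elapsed = time.perf_counter() - start
    audio_seconds = mel.shape[-1] * hop_length / sample_rate
    return elapsed / audio_seconds

# Toy "vocoder": upsamples an 80-bin mel spectrogram to a waveform-length signal.
toy_vocoder = lambda mel: torch.repeat_interleave(mel.mean(dim=1), 256, dim=-1)
mel = torch.randn(1, 80, 100)  # (batch, mel_bins, frames)
print(f"RTF: {real_time_factor(toy_vocoder, mel):.4f}")
```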
On the model-based stochastic value gradient for continuous reinforcement learning
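As background, a stochastic value gradient differentiates a model-based objective like r(s, a) + γV(f(s, a)) with respect to policy parameters, backpropagating through a learned dynamics model. The sketch below illustrates that gradient flow with toy linear models and a deterministic policy head for simplicity; it is not the paper's algorithm.

```python
import torch
import torch.nn as nn

state_dim, action_dim = 4, 2
policy = nn.Linear(state_dim, action_dim)                 # toy policy head
dynamics = nn.Linear(state_dim + action_dim, state_dim)   # learned model f(s, a)
value = nn.Linear(state_dim, 1)                           # learned value V(s)
reward = nn.Linear(state_dim + action_dim, 1)             # learned reward r(s, a)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

s = torch.randn(16, state_dim)
a = policy(s)
sa = torch.cat([s, a], dim=-1)
s_next = dynamics(sa)
# Value gradient: maximize r(s, a) + gamma * V(f(s, a)) through the model.
objective = (reward(sa) + 0.99 * value(s_next)).mean()
opt.zero_grad()
(-objective).backward()  # gradient ascent on the objective
opt.step()
```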
Code for the paper "UnNatural Language Inference", to appear at ACL 2021 (long paper).