Ben Poole

I'm a research scientist at Google Brain, where I work on deep generative models and understanding neural networks.

I did my PhD at Stanford University advised by Surya Ganguli in the Neural Dynamics and Computation lab. My thesis was on computational tools to develop a better understanding of both biological and artificial neural networks. I did my undergrad at Carnegie Mellon University, where I was advised by Tai Sing Lee. I've worked at DeepMind, Google Research, Intel Research Pittsburgh, and the NYU Center for Neural Science.

Email  /  Twitter  /  CV  /  Scholar  /  GitHub  /  LinkedIn


I'm interested in computational neuroscience, machine learning, computer vision, optimization, cycling, and deep learning. My current focus is on improving algorithms for unsupervised and semi-supervised learning using generative models.

On variational bounds of mutual information
Ben Poole, Sherjil Ozair, Aäron van den Oord, Alex Alemi, George Tucker
ICML 2019 
arXiv / colab / video / slides / poster

Old, new, and improved estimators of mutual information with neural nets.
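One of the bounds the paper studies, InfoNCE, is simple enough to sketch in a few lines of NumPy (an illustrative sketch in my own notation, not the released code; the function name is mine):

```python
import numpy as np

def infonce_lower_bound(scores):
    """InfoNCE lower bound on mutual information.

    scores: [n, n] matrix of critic values f(x_i, y_j), where the
    diagonal holds scores for the n jointly sampled (paired) examples.
    """
    n = scores.shape[0]
    # Row-wise log-softmax, evaluated at the paired (diagonal) entries.
    row_max = scores.max(axis=1, keepdims=True)  # for numerical stability
    log_norm = row_max.squeeze(1) + np.log(np.exp(scores - row_max).sum(axis=1))
    return float(np.mean(np.diag(scores) - log_norm) + np.log(n))
```

Note the estimate can never exceed log n, so tight estimates of large mutual information require large batch sizes, one of the tradeoffs the paper analyzes.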

Discrete Flows: Invertible Generative Models of Discrete Data
Dustin Tran, Keyon Vafa, Kumar Krishna Agrawal, Laurent Dinh, Ben Poole
ICLR Deep GenStruct Workshop 2019  

Fast sampling generative models for discrete data.

Preventing posterior collapse with delta-VAEs
Ali Razavi, Aäron van den Oord, Ben Poole, Oriol Vinyals
ICLR 2019  
OpenReview / arXiv / poster

Avoid posterior collapse by lower bounding the rate.

Neuronal Dynamics Regulating Brain and Behavioral State Transitions
Aaron Andalman, Vanessa Burns, Matthew Lovett-Barron, Michael Broxton, Ben Poole, Samuel Yang, Logan Grosenick, Talia Lerner, Ritchie Chen, Tyler Benster, Philippe Mourrain, Marc Levoy, Kanaka Rajan, Karl Deisseroth
Cell 2019

Fixing a Broken ELBO
Alex Alemi, Ben Poole, Ian Fischer, Joshua V. Dillon, Rif A. Saurous, Kevin Murphy
ICML, 2018  

Understanding tradeoffs in VAE models through the lens of information theory.

Continuous relaxation training of discrete latent-variable image models
Casper Kaae Sønderby*, Ben Poole*, Andriy Mnih
NIPS Bayesian Deep Learning Workshop, 2017  

Continuous relaxation training of discrete latent-variable models can flexibly capture both continuous and discrete aspects of natural data.

Identification of Cellular-Activity Dynamics Across Large Tissue Volumes in the Mammalian Brain
Logan Grosenick*, Michael Broxton*, Christina Kim*, Conor Liston*,
Ben Poole, Samuel Yang, Aaron Andalman, Edward Scharff, Noy Cohen, Ofer Yizhar, Charu Ramakrishnan, Surya Ganguli, Patrick Suppes, Marc Levoy, Karl Deisseroth
*equal contribution

Large-scale cellular-level imaging in the mammalian brain using lightfield microscopy. 1×1×0.5 mm³ volume at 100 Hz.

Continual Learning through Synaptic Intelligence
Friedemann Zenke*, Ben Poole*, Surya Ganguli
*equal contribution
ICML, 2017  
arXiv / code

Learns to solve tasks sequentially without forgetting by learning which weights are important.
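The core idea, an online per-parameter importance measure, can be sketched in NumPy (a minimal sketch in my own notation under my reading of the method, not the released code; function names are hypothetical):

```python
import numpy as np

def path_integral_importance(grads, updates):
    """omega_k: running sum of -gradient * parameter-update over training.

    grads, updates: lists of per-step, parameter-shaped arrays collected
    while training on the first task.
    """
    return np.sum(-np.asarray(grads) * np.asarray(updates), axis=0)

def si_penalty(w, w_star, omega, total_change, c=0.1, xi=1e-3):
    """Quadratic surrogate loss anchoring important weights near their
    values w_star at the end of the previous task.

    omega is normalized by the total squared parameter change over the
    task (xi avoids division by zero), then scaled by strength c.
    """
    big_omega = omega / (total_change ** 2 + xi)
    return c * float(np.sum(big_omega * (w - w_star) ** 2))
```

The penalty is added to the loss of each subsequent task, so weights that contributed little to earlier tasks remain free to change.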

On the expressive power of deep neural networks
Maithra Raghu, Ben Poole, Jon Kleinberg, Surya Ganguli, Jascha Sohl-Dickstein
ICML, 2017  

Random neural networks show exponential growth in activation patterns and more.

Time-warped PCA: simultaneous alignment and dimensionality reduction of neural data
Ben Poole, Alex Williams, Niru Maheswaranathan, Byron Yu, Gopal Santhanam, Stephen Ryu, Stephen Baccus, Krishna Shenoy, Surya Ganguli
Computational Systems Neuroscience (COSYNE), 2017  
abstract / poster / code

Extends dimensionality reduction techniques to account for trial-to-trial variability in timing.

Categorical Reparameterization with Gumbel-Softmax
Eric Jang, Shane Gu, Ben Poole
ICLR, 2017  
arXiv / blog post

Efficient gradient estimator for categorical variables.
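The sampling step at the heart of the estimator is a few lines of NumPy (an illustrative sketch, not the authors' released code; the function name and defaults are mine):

```python
import numpy as np

def gumbel_softmax_sample(logits, temperature=0.5, rng=None):
    """Draw a relaxed (continuous) one-hot sample from Categorical(logits).

    As temperature -> 0 samples approach exact one-hot vectors; at higher
    temperatures they are smoother and gradients flow through the sample.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(low=1e-12, high=1.0, size=np.shape(logits))
    gumbel = -np.log(-np.log(u))   # Gumbel(0, 1) noise via inverse CDF
    y = (np.asarray(logits) + gumbel) / temperature
    y = y - y.max()                # numerical stability before exp
    e = np.exp(y)
    return e / e.sum()
```

Because the output is a differentiable function of the logits, the reparameterization trick applies where exact categorical sampling would block gradients.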

Unrolled Generative Adversarial Networks
Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein
ICLR, 2017  
arXiv / code

Stabilize GANs by defining the generator objective with respect to an unrolled optimization of the discriminator.

Adversarially learned inference
Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, Aaron Courville
ICLR, 2017  
arXiv / project page

Jointly learn a generative model and an inference network through an adversarial process.

Exponential expressivity in deep neural networks through transient chaos

Ben Poole, Subhaneil Lahiri, Maithra Raghu, Jascha Sohl-Dickstein, Surya Ganguli
Neural Information Processing Systems (NIPS), 2016  
arXiv / code / poster

Random neural networks have curvature that grows exponentially with depth.

Improved generator objectives for GANs
Ben Poole, Alex Alemi, Jascha Sohl-Dickstein, Anelia Angelova
NIPS Workshop on Adversarial Training, 2016  
arXiv / poster

A new take on the Generative Adversarial Network training procedure.

Direction Selectivity in Drosophila Emerges from Preferred-Direction Enhancement and Null-Direction Suppression
Jonathan Leong*, Jennifer Esch*, Ben Poole*, Surya Ganguli, Thomas Clandinin
* equal contribution
The Journal of Neuroscience, 2016

Fruit flies detect motion using an algorithm very similar to that of humans.

The Fast Bilateral Solver
Jonathan T. Barron, Ben Poole
European Conference on Computer Vision (ECCV), 2016   (oral presentation)
arXiv / supplement / code / bibtex

Fast and accurate edge-aware smoothing. Differentiable for all your deep learning needs.

Fast large-scale optimization by unifying stochastic gradient and quasi-Newton methods
Jascha Sohl-Dickstein, Ben Poole, Surya Ganguli
International Conference on Machine Learning (ICML), 2014
arXiv / code

Speed up quasi-Newton methods by maintaining a low-dimensional approximation of the Hessian for each minibatch.

Analyzing noise in autoencoders and deep networks
Ben Poole, Jascha Sohl-Dickstein, Surya Ganguli
NIPS Workshop on Deep Learning, 2013

Derives analytic regularizers for different forms of noise injection, and shows how alternative types of additive noise can improve over dropout.

Brain Regions Engaged by Part- and Whole-task Performance in a Video Game: A Model-based Test of the Decomposition Hypothesis
John Anderson, Daniel Bothell, Jon Fincham, Abraham Anderson, Ben Poole, Yulin Qin
Journal of Cognitive Neuroscience, 2011

Complex tasks, like the Space Fortress video game, can be decomposed into a set of independent reusable components.


Robust non-rigid alignment of volumetric calcium imaging data
Ben Poole, Logan Grosenick, Michael Broxton, Karl Deisseroth, Surya Ganguli
Computational Systems Neuroscience (COSYNE), 2015

Correct for translations and rotations of noisy volumetric data without a clean reference volume.

Connecting scene statistics to probabilistic population codes and tuning properties of V1 neurons

Ben Poole, Ian Lenz, Grace Lindsay, Jason Samonds, Tai Sing Lee
Society for Neuroscience (SFN), 2010 (oral presentation)
