Emmanuel Bengio

Senior ML Scientist at Recursion, working on GFlowNets and drug discovery; from Mila, Montreal.

I got my PhD and CS Master's at McGill, as part of Mila, under the supervision of Joelle Pineau and Doina Precup.
My Master's thesis was on conditional computation, and my PhD thesis on generalization and optimization in deep RL.
I have a Computer Science B.Sc. from the DIRO at Université de Montréal.

I'm mainly interested in Machine Learning, especially Deep Learning, Reinforcement Learning, and the mix of both. I'm also into compiler design and implementation, and programming as a topic in general. Lately, I've been working at the intersection of ML and drug design using our all-new GFlowNet framework.

Here are my curriculum vitae and my Google Scholar page. Reach me on Gmail; my username is bengioe.

Research [Google Scholar]

Flow Network based Generative Models for Non-Iterative Diverse Candidate Generation,
[blog post], [code]
Emmanuel Bengio, Moksh Jain, Maksym Korablyov, Doina Precup, Yoshua Bengio, NeurIPS 2021

Correcting Momentum in Temporal Difference Learning, [blog post], [virtual poster], [code]
Emmanuel Bengio, Joelle Pineau, Doina Precup, NeurIPS 2020 Deep RL Workshop

Interference and Generalization in Temporal Difference Learning, [video summary], [virtual poster]
Emmanuel Bengio, Joelle Pineau, Doina Precup, ICML 2020

Attack and Defense in Cellular Decision-Making: Lessons from Machine Learning
Thomas J. Rademaker, Emmanuel Bengio, and Paul François, Phys. Rev. X 9

Disentangling the independently controllable factors of variation by interacting with the world
Valentin Thomas, Emmanuel Bengio, William Fedus, Jules Pondard, Philippe Beaudoin, Hugo Larochelle, Joelle Pineau, Doina Precup, Yoshua Bengio

World Knowledge for Reading Comprehension: Rare Entity Prediction with Hierarchical LSTMs Using External Descriptions
Teng Long, Emmanuel Bengio, Ryan Lowe, Jackie Chi Kit Cheung, Doina Precup, EMNLP 2017

A Closer Look at Memorization in Deep Networks
Devansh Arpit, Stanisław Jastrzębski, Nicolas Ballas, David Krueger, Emmanuel Bengio, Maxinder S. Kanwal, Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, Simon Lacoste-Julien, ICML 2017

Independently Controllable Features
Emmanuel Bengio, Valentin Thomas, Joelle Pineau, Doina Precup, Yoshua Bengio, RLDM 2017
Longer version:
Independently Controllable Factors
Valentin Thomas, Jules Pondard, Emmanuel Bengio, Marc Sarfati, Philippe Beaudoin, Marie-Jean Meurs, Joelle Pineau, Doina Precup, Yoshua Bengio

Deep Nets Don't Learn via Memorization
David Krueger, Nicolas Ballas, Stanislaw Jastrzebski, Devansh Arpit, Maxinder S. Kanwal, Tegan Maharaj, Emmanuel Bengio, Asja Fischer, Aaron Courville. Workshop Track, ICLR 2017

On Reinforcement Learning for Deep Neural Architectures: conditional computation with stochastic computation policies
Emmanuel Bengio - Master's thesis, 2016

Conditional Computation in Neural Networks for faster models
Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup. Workshop Track, ICLR 2016
github code, [YouTube] ICML Abstraction in RL talk

Conditional computation in neural networks using a decision-theoretic approach
Pierre-Luc Bacon, Emmanuel Bengio, Joelle Pineau, Doina Precup. 2nd Multidisciplinary Conference on Reinforcement Learning and Decision Making (RLDM 2015)
github code

Combining modality specific deep neural networks for emotion recognition in video
Samira Ebrahimi Kahou et al., Proceedings of the 15th ACM International Conference on Multimodal Interaction, ACM 2013.


Blog

On weight initialization of deep networks (Dec 2016)
On happiness, meaning, PhD, and a bit of metaphysics (Sept 2020)
Thoughts on meditation (Nov 2020)
Staleness-Corrected Momentum SGD (Dec 2020)
Generative Flow Networks (June 2021)
My list of paper writing and research tips (living doc)

Code & old stuff

github implementations, dependency-minimal implementations of various algorithms.
theano_tools, boilerplate theano code and tools I use for my experiments.
condnet, policy-driven sparse neural net, using explicit sparse dot products for speed.
Continuous component autoencoders, an exploration of activation functions for neural nets (2014).
Ekanz, an attempt at a Python-like JIT compiler with some ML (2014).
Rogue, a Scheme-to-x64 compiler written in Scheme (2014).
PyVariable, a C++ class that wraps PyObject (2.x) (2011).
Pirate Story Online, the first big game (attempt) I ever made (age 13-15, ca. 2006).
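
The condnet idea above — a small learned policy picks which hidden units to compute, so only the selected dot products are evaluated — can be sketched minimally as follows. This is a hypothetical illustration only: all names and the hard-threshold gate are assumptions, and the actual condnet trains stochastic gating policies rather than thresholding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes and weights (hypothetical, not from the actual repo).
x = rng.standard_normal(16)               # input activations
W = rng.standard_normal((16, 32))         # dense layer weights
W_policy = rng.standard_normal((16, 32))  # small gating "policy"

# The policy scores each hidden unit; units above threshold get computed.
scores = 1.0 / (1.0 + np.exp(-(x @ W_policy)))  # sigmoid scores
mask = scores > 0.5
active = np.flatnonzero(mask)

# Only the selected columns' dot products are evaluated: the sparse part.
h = np.zeros(32)
h[active] = np.maximum(x @ W[:, active], 0.0)   # ReLU on computed units
```

The speed-up comes from `W[:, active]`: inactive units never touch the matrix multiply, so the cost scales with the number of active units rather than the full layer width.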

Theano, a short practical guide, a presentation I made to introduce Theano (2015).

Things worth reading

This is water (the late David Foster Wallace)
How to be perfectly unhappy (the Oatmeal)

Camus's Le Mythe de Sisyphe
Mark Fisher's Capitalist Realism
Isaac Asimov's Foundation


Things I have composed and recorded, on SoundCloud.

A selection of things I listen to (in no particular order):
Stimpy Lockjaw, Exivious, Floral, King Gizzard & The Lizard Wizard, The Sword, Plini, Weedpecker, Elder, Stone From The Sky, Mastodon, Uriah Heep, Gentle Giant, Dream Theater, Sloche, Maneige, Led Zeppelin, Pollen (Qc), Liquid Tension Experiment, Smooth McGroove, Don Ross, Rush, Ewan Dobson, Harmonium, Black Sabbath.