I'm a PhD candidate in the MIT Department of Brain and Cognitive Sciences, supervised by Josh Tenenbaum. My research has focused on structured Bayesian modelling for concept learning and perception, both to build better computational models of human cognition and to make machine intelligence more interpretable and robust. My technical interests include probabilistic programming (for specifying knowledge), deep learning (for making inference tractable), and meta-evolutionary algorithms (for searching massive structured spaces).
That said, I am increasingly worried about the implications of machine intelligence and the ways it's likely to be used by the society it comes of age in. I think it's well justified to be deeply concerned about issues like data privacy, propaganda, autonomous weapons, and mass unemployment, and particularly about how these risks interact against a growing backdrop of authoritarian politics, economic inequality, and ecological crises. If you're thinking about how to increase the probability that AI ends up being a net positive for the world, I'd really like to hear from you.
SketchAdapt: paper (in submission, ICML 2019)
with Max Nye, Armando Solar-Lezama
Wake-Sleep-Remember: paper (in submission, ICLR 2019)
Tutorial on Bayesian Inference In Generative Models: talk (CBMM, 2018)
with Maddie Cusimano
Writing on AI: [thinking machines]