Aaron Mishkin

Detailed About

Interests

My research interests are in machine learning and continuous optimization. I am particularly interested in designing robust, tuning-free algorithms for stochastic optimization, with applications to fitting neural networks. Such algorithms should be fast and practical, while still possessing rigorous convergence guarantees. This balancing act between theory and practice makes optimization for machine learning a very fun space.

I am also interested in optimization-based methods for scalable Bayesian inference, such as variational inference. Gaussian processes were my first introduction to machine learning, and I still find this model class fascinating.

Education

I received my bachelor’s degree in computer science from UBC in 2018. During my bachelor’s, I worked with David Poole and Giuseppe Carenini on preference elicitation. We built Web ValueCharts, a visualization system for multi-criteria decision making [code].

The last six months of my undergraduate degree were spent with Emtiyaz Khan at the RIKEN Center for Advanced Intelligence Project (AIP), where I worked on low-rank approaches to Gaussian variational inference in Bayesian neural networks [arXiv].

I completed my master's at UBC in 2020, where I was fortunate to be supervised by Mark Schmidt. My master's research was on first-order optimization under interpolation conditions. I am very proud of the work we did on (mostly) parameter-free stochastic optimization using the Armijo line-search [arXiv].
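For the curious, here is a minimal sketch of what a stochastic gradient step with an Armijo backtracking line-search can look like. This is an illustration under simplifying assumptions, not the exact method from the paper: the names (armijo_sgd_step, loss_fn, grad_fn) and the constants (eta_max, c, beta) are mine, and the paper includes additional safeguards.

    import numpy as np

    def armijo_sgd_step(w, loss_fn, grad_fn, eta_max=1.0, c=0.1, beta=0.5):
        """One SGD step where the step-size is chosen by backtracking
        until the Armijo condition holds on the current mini-batch.
        Illustrative sketch only; names and constants are assumptions."""
        g = grad_fn(w)   # mini-batch gradient at w
        f = loss_fn(w)   # mini-batch loss at w
        eta = eta_max
        # Backtrack until sufficient decrease holds:
        #   f(w - eta * g) <= f(w) - c * eta * ||g||^2
        while loss_fn(w - eta * g) > f - c * eta * np.dot(g, g):
            eta *= beta
        return w - eta * g

Because the step-size is chosen fresh at every iteration, there is no global learning rate to tune, which is the sense in which such methods are (mostly) parameter-free.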

Service

I review for NeurIPS, ICML, ICLR, AISTATS, TMLR, and OPT-ML. I also review for JMLR when requested. You can look at my ORCID and OpenReview profiles for additional information. At Stanford, I volunteer for the Student-Applicant Support Program (SASP) and the CS Undergraduate Mentorship Program, and I am a mentor for SERIO.

Outside of Work

I can often be found climbing, hiking, and spending time outside. I'm on Mountain Project and sometimes I post trip reports to Peak Bagger.