MLSys Seminar: Towards Transparent Foundations -- Building Accessible Infrastructure for Training Large-Scale Language Models, Laurel Orr

MLSys Seminar

Title: Towards Transparent Foundations -- Building Accessible Infrastructure for Training Large-Scale Language Models
Speaker: Laurel Orr
Date: Thursday, September 23, 2021
Time: 1:30pm to 2:30pm
Event link: https://www.youtube.com/watch?v=g-OjU4uzWqE

Abstract: 
“Foundation models” (large-scale self-supervised models that can be adapted to a wide range of downstream tasks) are changing how machine learning systems are constructed and deployed. Due to their extreme resource demands, training these models and developing a science behind them have remained difficult. In this talk, I'll introduce and describe the journey behind Mistral, an infrastructure for accessible, easy-to-use foundation model training. I'll describe some of the hurdles we encountered with stable, reproducible training and how we see Mistral as a crucial step toward facilitating open foundation model research.

Bio: 
Laurel Orr is currently a postdoc at Stanford working with Chris Ré in the Hazy Research Lab. In August 2019, she graduated with a PhD from the Paul G. Allen School of Computer Science and Engineering at the University of Washington in Seattle, where she was part of the Database Group, advised by Dan Suciu and Magdalena Balazinska. Her research interests lie broadly at the intersection of machine learning and data management. She focuses on managing the end-to-end lifecycle of self-supervised embedding pipelines, including how to better train, maintain, monitor, and patch embedding models and their downstream uses.
