About me
I am a final-year PhD student in the Department of Computer Science at Stanford University, advised by Prof. Matei Zaharia. I also work closely with Amar Phanishayee. I am affiliated with Stanford DAWN and supported by a National Science Foundation Graduate Research Fellowship.

My research interests are in distributed systems and cloud computing -- in particular, the systems problems that arise when training and deploying machine learning models at scale.

I graduated from MIT in 2015 with an SB in Computer Science and Mathematics and an MEng in EECS. My CV is here.
Publications
Heterogeneity-Aware Cluster Scheduling Policies for Deep Learning Workloads
Deepak Narayanan, Keshav Santhanam, Fiodar Kazhamiaka, Amar Phanishayee, Matei Zaharia.
OSDI 2020.

Analysis and Exploitation of Dynamic Pricing in the Public Cloud for ML Training
Deepak Narayanan, Keshav Santhanam, Fiodar Kazhamiaka, Amar Phanishayee, Matei Zaharia.
DISPA 2020.

Offload Annotations: Bringing Heterogeneous Computing to Existing Libraries and Workloads
Gina Yuan, Shoumik Palkar, Deepak Narayanan, Matei Zaharia.
USENIX ATC 2020.

Willump: A Statistically-Aware End-to-end Optimizer for Machine Learning Inference
Peter Kraft, Daniel Kang, Deepak Narayanan, Shoumik Palkar, Peter Bailis, Matei Zaharia.
MLSys 2020.

MLPerf Training Benchmark
Peter Mattson, Christine Cheng, Cody Coleman, Greg Diamos, Paulius Micikevicius, David Patterson, Hanlin Tang, Gu-Yeon Wei, Peter Bailis, Victor Bittorf, David Brooks, Dehao Chen, Debojyoti Dutta, Udit Gupta, Kim Hazelwood, Andrew Hock, Xinyuan Huang, Bill Jia, Daniel Kang, David Kanter, Naveen Kumar, Jeffery Liao, Deepak Narayanan, Tayo Oguntebi, Gennady Pekhimenko, Lillian Pentecost, Vijay Janapa Reddi, Taylor Robie, Tom St. John, Carole-Jean Wu, Lingjie Xu, Cliff Young, Matei Zaharia.
MLSys 2020.

PipeDream: Generalized Pipeline Parallelism for DNN Training
Deepak Narayanan, Aaron Harlap, Amar Phanishayee, Vivek Seshadri, Nikhil R. Devanur, Gregory R. Ganger, Phillip B. Gibbons, Matei Zaharia.
SOSP 2019.

Analysis of DAWNBench, a Time-to-Accuracy Machine Learning Performance Benchmark
Cody Coleman*, Daniel Kang*, Deepak Narayanan*, Luigi Nardi, Tian Zhao, Jian Zhang, Peter Bailis, Kunle Olukotun, Chris Ré, Matei Zaharia.
SIGOPS Operating Systems Review, July 2019.

Accelerating Deep Learning Workloads through Efficient Multi-Model Execution
Deepak Narayanan, Keshav Santhanam, Amar Phanishayee, Matei Zaharia.
NeurIPS Systems for ML Workshop 2018.

Analysis of the Time-To-Accuracy Metric and Entries in the DAWNBench Deep Learning Benchmark
Cody Coleman*, Daniel Kang*, Deepak Narayanan*, Luigi Nardi, Tian Zhao, Jian Zhang, Peter Bailis, Kunle Olukotun, Chris Ré, Matei Zaharia.
NeurIPS Systems for ML Workshop 2018.

Evaluating End-to-End Optimization for Data Analytics Applications in Weld
Shoumik Palkar, James Thomas, Deepak Narayanan, Pratiksha Thaker, Parimarjan Negi, Rahul Palamuttam, Anil Shanbhag, Holger Pirk, Malte Schwarzkopf, Saman Amarasinghe, Samuel Madden, Matei Zaharia.
VLDB 2018.

DAWNBench: An End-to-End Deep Learning Benchmark and Competition
Cody Coleman, Deepak Narayanan, Daniel Kang, Tian Zhao, Jian Zhang, Luigi Nardi, Peter Bailis, Kunle Olukotun, Christopher Ré, Matei Zaharia.
NeurIPS Systems for ML Workshop 2017.

MacroBase: Prioritizing Attention in Fast Data
Peter Bailis, Edward Gan, Samuel Madden, Deepak Narayanan, Kexin Rong, Sahaana Suri.
SIGMOD 2017.

Weld: A Common Runtime for High Performance Data Analytics
Shoumik Palkar, James J. Thomas, Anil Shanbhag, Deepak Narayanan, Holger Pirk, Malte Schwarzkopf, Saman Amarasinghe, Matei Zaharia.
CIDR 2017.
Preprints
Heterogeneity-Aware Cluster Scheduling Policies for Deep Learning Workloads
Deepak Narayanan, Keshav Santhanam, Fiodar Kazhamiaka, Amar Phanishayee, Matei Zaharia.
arXiv:2008.09213.

Memory-Efficient Pipeline-Parallel DNN Training
Deepak Narayanan, Amar Phanishayee, Kaiyu Shi, Xie Chen, Matei Zaharia.
arXiv:2006.09503.

MLPerf Training Benchmark
Peter Mattson, Christine Cheng, Cody Coleman, Greg Diamos, Paulius Micikevicius, David Patterson, Hanlin Tang, Gu-Yeon Wei, Peter Bailis, Victor Bittorf, David Brooks, Dehao Chen, Debojyoti Dutta, Udit Gupta, Kim Hazelwood, Andrew Hock, Xinyuan Huang, Bill Jia, Daniel Kang, David Kanter, Naveen Kumar, Jeffery Liao, Deepak Narayanan, Tayo Oguntebi, Gennady Pekhimenko, Lillian Pentecost, Vijay Janapa Reddi, Taylor Robie, Tom St. John, Carole-Jean Wu, Lingjie Xu, Cliff Young, Matei Zaharia.
arXiv:1910.01500.

Willump: A Statistically-Aware End-to-end Optimizer for Machine Learning Inference
Peter Kraft, Daniel Kang, Deepak Narayanan, Shoumik Palkar, Peter Bailis, Matei Zaharia.
arXiv:1906.01974.

PipeDream: Fast and Efficient Pipeline Parallel DNN Training
Aaron Harlap, Deepak Narayanan, Amar Phanishayee, Vivek Seshadri, Nikhil R. Devanur, Gregory R. Ganger, Phillip B. Gibbons.
arXiv:1806.03377.

Analysis of DAWNBench, a Time-to-Accuracy Machine Learning Performance Benchmark
Cody Coleman*, Daniel Kang*, Deepak Narayanan*, Luigi Nardi, Tian Zhao, Jian Zhang, Peter Bailis, Kunle Olukotun, Chris Ré, Matei Zaharia.
arXiv:1806.01427.

Weld: Rethinking the Interface Between Data-Intensive Libraries
Shoumik Palkar, James Thomas, Deepak Narayanan, Anil Shanbhag, Holger Pirk, Malte Schwarzkopf, Saman Amarasinghe, Samuel Madden, Matei Zaharia.
arXiv:1709.06416.
Teaching
At Stanford, I have TAed Design and Analysis of Algorithms (CS 161), Principles of Data-Intensive Systems (CS 245), and Parallel Computing (CS 149).

At MIT, I TAed Introduction to Algorithms (6.006) and Design and Analysis of Algorithms (6.046). Before that, I was a Lab Assistant for Elements of Software Construction (6.005) and Introduction to EECS I (6.01).
Contact me