I am a fourth-year PhD student at Stanford University advised by Professor Matei Zaharia. I am a member of the FutureData Systems research group and the Stanford DAWN group. My research focuses on building systems and infrastructure to accelerate machine learning workloads. I earned a BS in Computer Science from the University of Illinois at Urbana-Champaign (UIUC) in 2017 and an MS in Computer Science from Stanford in 2019 (dual concentration in Systems and Artificial Intelligence). At UIUC I worked with Professor Indranil Gupta in the Distributed Protocols Research Group.
- Moving Beyond Downstream Task Accuracy for Information Retrieval Benchmarking
Keshav Santhanam*, Jon Saad-Falcon*, Martin Franz, Omar Khattab, Avirup Sil, Radu Florian, Md Arafat Sultan, Salim Roukos, Matei Zaharia, and Christopher Potts. To appear in Findings of ACL 2023.
- PLAID: An Efficient Engine for Late Interaction Retrieval
Keshav Santhanam*, Omar Khattab*, Christopher Potts, and Matei Zaharia. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management (CIKM 22).
- ColBERTv2: Effective and Efficient Retrieval via Lightweight Late Interaction.
Keshav Santhanam*, Omar Khattab*, Christopher Potts, and Matei Zaharia. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 22), July 2022.
- Heterogeneity-Aware Cluster Scheduling Policies for Deep Learning Workloads.
Deepak Narayanan, Keshav Santhanam, Fiodar Kazhamiaka, Amar Phanishayee, and Matei Zaharia. In 14th USENIX Symposium on Operating Systems Design and Implementation (OSDI 20), Banff, Alberta, Nov. 2020. USENIX Association.
- ROLA: A New Distributed Transaction Protocol and Its Formal Analysis.
Si Liu, Peter Csaba Ölveczky, Keshav Santhanam, Qi Wang, Indranil Gupta, and José Meseguer. In International Conference on Fundamental Approaches to Software Engineering (FASE 2018), Springer, pp. 77-93.
- DistIR: An Intermediate Representation for Optimizing Distributed Neural Networks.
Keshav Santhanam, Siddharth Krishna, Ryota Tomioka, Andrew Fitzgibbon, and Tim Harris. In 1st Workshop on Machine Learning and Systems (EuroMLSys) at EuroSys 2021. (Talk)
- Analysis and Exploitation of Dynamic Pricing in the Public Cloud for ML Training.
Deepak Narayanan, Keshav Santhanam, Fiodar Kazhamiaka, Amar Phanishayee, and Matei Zaharia. In 1st Workshop on Distributed Infrastructure, Systems, Programming and AI (DISPA) at VLDB 2020.
- Efficient Scheduling of DNN Training on Multitenant Clusters.
Deepak Narayanan, Keshav Santhanam, Amar Phanishayee, and Matei Zaharia. Workshop on MLOps Systems at MLSys 2020.
- Accelerating Deep Learning Workloads through Efficient Multi-Model Execution.
Deepak Narayanan, Keshav Santhanam, Amar Phanishayee, and Matei Zaharia. Workshop on Systems for ML and Open Source Software at NeurIPS 2018.
- Cheaply Evaluating Inference Efficiency Metrics for Autoregressive Transformer APIs
Deepak Narayanan, Keshav Santhanam, Peter Henderson, Rishi Bommasani, Tony Lee, Percy Liang.
- UDAPDR: Unsupervised Domain Adaptation via LLM Prompting and Distillation of Rerankers
Jon Saad-Falcon, Omar Khattab, Keshav Santhanam, Radu Florian, Martin Franz, Salim Roukos, Avirup Sil, Md Arafat Sultan, and Christopher Potts.
- Demonstrate-Search-Predict: Composing retrieval and language models for knowledge-intensive NLP
Omar Khattab, Keshav Santhanam, Xiang Lisa Li, David Hall, Percy Liang, Christopher Potts, and Matei Zaharia.
- Holistic Evaluation of Language Models
Percy Liang, et al. arXiv:2211.09110.
- DistIR: An Intermediate Representation and Simulator for Efficient Neural Network Distribution
Keshav Santhanam, Siddharth Krishna, Ryota Tomioka, Tim Harris, and Matei Zaharia. arXiv:2111.05426.
- On the Opportunities and Risks of Foundation Models.
Rishi Bommasani, et al. arXiv:2108.07258.
Deepak Narayanan, Trevor Gale, Keshav Santhanam, Omar Khattab, Tianyi Zhang, and Matei Zaharia.