Thanks for stopping by!

I'm a fourth-year PhD student in Computer Science at Stanford University, working on machine learning and natural language processing. I am fortunate to be advised by Percy Liang and Jure Leskovec. Previously, I received my B.S. from Yale University, where I worked with Dragomir Radev (LILY lab) and John Lafferty.

My primary research interests are in machine learning and natural language processing. In particular, I aspire to develop systems that can robustly reason about language and knowledge, and generalize to new, complex tasks as well as or even better than humans do. Topics that I focus on include:

  • Knowledge: methods to model and integrate world/domain knowledge for tasks such as question answering. This includes fusing heterogeneous knowledge sources, e.g. text, images, language models, and knowledge bases (QA-GNN, DRAGON, UnifiedSKG, RA-CM3).
  • Reasoning: methods to learn correct and explainable reasoning steps to use knowledge and solve tasks (DrRepair, LEGO).
  • Self-supervised learning and adaptation: methods to learn generalizable language and knowledge representations from raw unlabeled data (LinkBERT, BIFI, LMCritic, WILDS).
  • Biomedical applications: machine learning for overcoming the knowledge and reasoning bottlenecks in biomedical tasks, e.g. drug discovery, personalized medicine, and clinical trials (CaML).

I also have broad interests in topics around language: text summarization (GraphMDS, ScisummNet, CL-Scisumm), semantic parsing (Spider, SyntaxSQLNet, SParC), multilingual NLP (mPOS), and modeling programs/mathematics (TopicEq, DrRepair).


Publications

2023
  • Retrieval-Augmented Multimodal Language Modeling
    Michihiro Yasunaga, Armen Aghajanyan, Weijia Shi, Rich James, Jure Leskovec, Percy Liang, Mike Lewis, Luke Zettlemoyer, Wen-tau Yih
    ICML 2023.   [paper] [blog] [slides]
  • REPLUG: Retrieval-Augmented Black-Box Language Models
    Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer, Wen-tau Yih
    arXiv.   [paper]
  • HEIM: Holistic Evaluation of Text-To-Image Models
    Tony Lee*, Michihiro Yasunaga*, Chenlin Meng*, Yifan Mai, Joon Sung Park, Agrim Gupta, Yunzhi Zhang, Deepak Narayanan, Hannah Benita Teufel, Marco Bellagente, Minguk Kang, Taesung Park, Jure Leskovec, Jun-Yan Zhu, Li Fei-Fei, Jiajun Wu, Stefano Ermon, Percy Liang
    arXiv 2023.   [paper] [website]
  • Med-Flamingo: a Multimodal Medical Few-shot Learner
    Michael Moor, Qian Huang, Shirley Wu, Michihiro Yasunaga, Cyril Zakka, Yash Dalmia, Eduardo Pontes Reis, Pranav Rajpurkar, Jure Leskovec
    arXiv 2023.   [paper] [code] [model]
  • Beyond Positive Scaling: How Negation Impacts Scaling Trends of Language Models
    Yuhui Zhang*, Michihiro Yasunaga*, Zhengping Zhou*, Jeff Z. HaoChen*, James Zou, Percy Liang, Serena Yeung
    ACL Findings 2023.   [paper] [code]
  • Is ChatGPT a General-Purpose Natural Language Processing Task Solver?
    Chengwei Qin, Aston Zhang, Zhuosheng Zhang, Jiaao Chen, Michihiro Yasunaga, Diyi Yang
    arXiv.   [paper]
  • Holistic Evaluation of Language Models
    Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, ... (50 authors). Michihiro Yasunaga: Lead author of Knowledge section.
    TMLR 2023.   [paper] [project page]
  • Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models
    with the BIG-bench team (442 authors)
    TMLR 2023.   [paper] [project page]
  • VQA-GNN: Reasoning with Multimodal Knowledge for Visual Question Answering
    Yanan Wang, Michihiro Yasunaga, Hongyu Ren, Shinya Wada and Jure Leskovec.
    ICCV 2023.   [paper]
  • Med-EASi: Finely Annotated Dataset and Models for Controllable Simplification of Medical Texts
    Chandrayee Basu, Rosni Vasu, Michihiro Yasunaga, Qian Yang
    AAAI 2023.   [paper]
  • Zero-shot causal learning
    Hamed Nilforoshan, Michael Moor, Yusuf Roohani, Yining Chen, Anja Šurina, Michihiro Yasunaga, Sara Oblak, Jure Leskovec
    arXiv.   [paper]

2022
  • DRAGON: Deep Bidirectional Language-Knowledge Graph Pretraining
    Michihiro Yasunaga, Antoine Bosselut, Hongyu Ren, Xikun Zhang, Christopher D. Manning, Percy Liang* and Jure Leskovec*
    NeurIPS 2022.  
    AAAI 2023 Deep Learning on Graphs Workshop (Best Paper Award).  
    [paper] [model & code & data] [codalab] [slides] [blog]
  • LinkBERT: Pretraining Language Models with Document Links
    Michihiro Yasunaga, Jure Leskovec* and Percy Liang*
    ACL 2022.   [paper] [model & code & data] [HuggingFace] [codalab] [slides] [Stanford AI blog]
  • UnifiedSKG: Unifying and Multi-Tasking Structured Knowledge Grounding with Text-to-Text Language Models
    Tianbao Xie*, Chen Wu*, Peng Shi, Ruiqi Zhong, Torsten Scholak, Michihiro Yasunaga, ..., Rui Zhang, Noah A. Smith, Luke Zettlemoyer and Tao Yu.
    EMNLP 2022.   [paper] [project page] [code & data]
  • GreaseLM: Graph Reasoning Enhanced Language Models for Question Answering
    Xikun Zhang, Antoine Bosselut, Michihiro Yasunaga, Hongyu Ren, Percy Liang, Christopher D Manning and Jure Leskovec.
    ICLR 2022.   [paper] [slides] [code]
  • Extending the WILDS Benchmark for Unsupervised Adaptation
    Shiori Sagawa*, Pang Wei Koh*, Tony Lee*, Irena Gao*, Sang Michael Xie, Kendrick Shen, Ananya Kumar, Weihua Hu, Michihiro Yasunaga, Henrik Marklund, Sara Beery, Etienne David, Ian Stavness, Wei Guo, Jure Leskovec, Kate Saenko, Tatsunori Hashimoto, Sergey Levine, Chelsea Finn and Percy Liang
    ICLR 2022.   [paper] [project page] [code]

2021
  • LM-Critic: Language Models for Unsupervised Grammatical Error Correction
    Michihiro Yasunaga, Jure Leskovec and Percy Liang.
    EMNLP 2021.   [paper] [project page] [code] [codalab]
  • QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering
    Michihiro Yasunaga, Hongyu Ren, Antoine Bosselut, Percy Liang and Jure Leskovec.
    NAACL 2021.   [paper] [project page] [code] [codalab] [Stanford AI blog] [slides] [video by Antoine]
  • On the Opportunities and Risks of Foundation Models
    Rishi Bommasani, ..., Percy Liang (116 authors). Michihiro Yasunaga: Lead author of Healthcare & Biomedicine section.
    arXiv 2021.   [paper] [project page]
  • Break-It-Fix-It: Unsupervised Learning for Program Repair
    Michihiro Yasunaga and Percy Liang.
    ICML 2021.   [paper] [code & data] [codalab] [Stanford AI blog]
  • WILDS: A benchmark of in-the-wild distribution shifts
    Pang Wei Koh*, Shiori Sagawa*, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Sara Beery, Jure Leskovec, Anshul Kundaje, Emma Pierson, Sergey Levine, Chelsea Finn, and Percy Liang
    ICML 2021.   [paper] [project page] [code] [Stanford AI blog]
  • LEGO: Latent Execution-Guided Reasoning for Multi-Hop Question Answering on Knowledge Graphs
    Hongyu Ren, Hanjun Dai, Bo Dai, Xinyun Chen, Michihiro Yasunaga, Haitian Sun, Dale Schuurmans, Jure Leskovec and Denny Zhou.
    ICML 2021.   [paper] [code]


2019
  • A Neural Topic-Attention Model for Medical Term Abbreviation Disambiguation
    Irene Li, Michihiro Yasunaga, Muhammed Yavuz Nuzumlalı, Cesar Caraballo, Shiwani Mahajan, Harlan Krumholz and Dragomir Radev
    NeurIPS 2019, Machine Learning for Health Workshop.   [paper] [bib] [code]
  • CoSQL: A Conversational Text-to-SQL Challenge Towards Cross-Domain Natural Language Interfaces to Databases
    with Tao Yu, Rui Zhang, Caiming Xiong, Richard Socher, Dragomir Radev, and others.
    EMNLP 2019.   [paper] [bib] [slides] [dataset & leaderboard]
  • SParC: Cross-Domain Semantic Parsing in Context
    with Tao Yu, Rui Zhang, Caiming Xiong, Richard Socher, Dragomir Radev, and others.
    ACL 2019.   [paper] [bib] [dataset & leaderboard]
  • TopicEq: A Joint Topic and Mathematical Equation Model for Scientific Texts
    Michihiro Yasunaga and John Lafferty
    AAAI 2019.   [paper] [bib] [dataset (170MB)]
  • ScisummNet: A Large Annotated Corpus and Content-Impact Models for Scientific Paper Summarization with Citation Networks
    Michihiro Yasunaga, Jungo Kasai, Rui Zhang, Alexander Fabbri, Irene Li, Dan Friedman and Dragomir Radev
    AAAI 2019.   [paper] [bib] [dataset]
  • Overview and Results of CL-SciSumm Shared Task 2019
    Muthu Kumar Chandrasekaran, Michihiro Yasunaga, Dragomir Radev, Dayne Freitag and Min-Yen Kan
    SIGIR 2019, BIRNDL Workshop.   [paper] [bib] [project page]

2018
  • SyntaxSQLNet: Syntax Tree Networks for Complex and Cross-Domain Text-to-SQL Task
    Tao Yu, Michihiro Yasunaga, Kai Yang, Rui Zhang, Dongxu Wang, Zifan Li and Dragomir Radev
    EMNLP 2018.   [paper] [bib] [code]
  • Spider: A Large-Scale Human-Labeled Dataset for Complex and Cross-Domain Semantic Parsing and Text-to-SQL Task
    Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang and Dragomir Radev
    EMNLP 2018.   [paper] [bib] [blog] [dataset & leaderboard]
  • Neural Coreference Resolution with Deep Biaffine Attention by Joint Mention Detection and Mention Clustering
    Rui Zhang, Cicero Nogueira dos Santos, Michihiro Yasunaga, Bing Xiang and Dragomir Radev
    ACL 2018.   [paper] [bib]
  • Robust Multilingual Part-of-Speech Tagging via Adversarial Training
    Michihiro Yasunaga, Jungo Kasai and Dragomir Radev
    NAACL 2018.   [paper] [bib] [slides] [code]

2017
  • Graph-based Neural Multi-Document Summarization
    Michihiro Yasunaga, Rui Zhang, Kshitijh Meelu, Ayush Pareek, Krishnan Srinivasan and Dragomir Radev
    CoNLL 2017.   [paper] [bib]

Other Projects

  • Named Entity Recognition for Academic Advising
    Developed systems to recognize academic named entities and link them to a university database. Part of the Sapphire Project with the University of Michigan and IBM Research.
  • Biomedical NLP
    Developed NLP systems to analyze electronic health records (EHRs), in collaboration with the Yale School of Medicine.