Thanks for stopping by!

I'm a fourth-year PhD student in Computer Science at Stanford University, working on machine learning and natural language processing. I am fortunate to be advised by Percy Liang and Jure Leskovec. Previously, I received my B.S. from Yale University, where I worked with Dragomir Radev (LILY lab) and John Lafferty.

My primary research interests are in machine learning and natural language processing. In particular, I aspire to develop systems that can robustly reason about language and knowledge, and generalize to new, complex tasks as well as or even better than humans do. Topics that I focus on include:

  • Knowledge: methods to model and integrate world/domain knowledge for tasks such as question answering. This includes fusing heterogeneous knowledge sources, e.g. text, images, language models, and knowledge bases (QA-GNN, DRAGON, UnifiedSKG).
  • Reasoning: methods to learn correct and explainable reasoning steps to use knowledge and solve tasks (DrRepair, LEGO).
  • Self-supervised learning and adaptation: methods to learn generalizable language and knowledge representations from raw unlabeled data (LinkBERT, BIFI, LM-Critic, WILDS).
  • Biomedical applications: machine learning for overcoming the knowledge and reasoning bottlenecks in biomedical tasks and discovery, e.g. diagnosis, clinical trials, and drug repurposing.

I also have broad interests in topics around language: text summarization (GraphMDS, ScisummNet, CL-Scisumm), semantic parsing (Spider, SyntaxSQLNet, SParC), multilingual NLP (mPOS), and modeling programs/mathematics (TopicEq, DrRepair).


Publications

2022

  • DRAGON: Deep Bidirectional Language-Knowledge Graph Pretraining
    Michihiro Yasunaga, Antoine Bosselut, Hongyu Ren, Xikun Zhang, Christopher D. Manning, Percy Liang* and Jure Leskovec*
    NeurIPS 2022.   [paper] [model & code & data] [codalab] [slides]
  • LinkBERT: Pretraining Language Models with Document Links
    Michihiro Yasunaga, Jure Leskovec* and Percy Liang*
    ACL 2022.   [paper] [model & code & data] [HuggingFace] [codalab] [slides] [Stanford AI blog]
  • Retrieval-Augmented Multimodal Language Modeling
    Michihiro Yasunaga, Armen Aghajanyan, Weijia Shi, Rich James, Jure Leskovec, Percy Liang, Mike Lewis, Luke Zettlemoyer and Wen-tau Yih
    arXiv 2022.   [paper]
  • VQA-GNN: Reasoning with Multimodal Semantic Graph for Visual Question Answering
    Yanan Wang, Michihiro Yasunaga, Hongyu Ren, Shinya Wada and Jure Leskovec
    arXiv 2022.   [paper]
  • UnifiedSKG: Unifying and Multi-Tasking Structured Knowledge Grounding with Text-to-Text Language Models
    Tianbao Xie*, Chen Wu*, Peng Shi, Ruiqi Zhong, Torsten Scholak, Michihiro Yasunaga, ..., Rui Zhang, Noah A. Smith, Luke Zettlemoyer and Tao Yu
    EMNLP 2022.   [paper] [project page] [code & data]
  • GreaseLM: Graph Reasoning Enhanced Language Models for Question Answering
    Xikun Zhang, Antoine Bosselut, Michihiro Yasunaga, Hongyu Ren, Percy Liang, Christopher D. Manning and Jure Leskovec
    ICLR 2022.   [paper] [slides] [code]
  • Extending the WILDS Benchmark for Unsupervised Adaptation
    Shiori Sagawa*, Pang Wei Koh*, Tony Lee*, Irena Gao*, Sang Michael Xie, Kendrick Shen, Ananya Kumar, Weihua Hu, Michihiro Yasunaga, Henrik Marklund, Sara Beery, Etienne David, Ian Stavness, Wei Guo, Jure Leskovec, Kate Saenko, Tatsunori Hashimoto, Sergey Levine, Chelsea Finn and Percy Liang
    ICLR 2022.   [paper] [project page] [code]
  • Holistic Evaluation of Language Models
    Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, ... (50 authors). Michihiro Yasunaga: Lead author of Knowledge section.
    arXiv 2022.   [paper] [project page]
  • Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models
    with the BIG-bench team (442 authors)
    arXiv 2022.   [paper] [project page]

2021

  • LM-Critic: Language Models for Unsupervised Grammatical Error Correction
    Michihiro Yasunaga, Jure Leskovec and Percy Liang
    EMNLP 2021.   [paper] [project page] [code] [codalab]
  • QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering
    Michihiro Yasunaga, Hongyu Ren, Antoine Bosselut, Percy Liang and Jure Leskovec
    NAACL 2021.   [paper] [project page] [code] [codalab] [Stanford AI blog] [slides] [video by Antoine]
  • On the Opportunities and Risks of Foundation Models
    Rishi Bommasani, ..., Percy Liang (116 authors). Michihiro Yasunaga: Lead author of Healthcare & Biomedicine section.
    arXiv 2021.   [paper] [project page]
  • Break-It-Fix-It: Unsupervised Learning for Program Repair
    Michihiro Yasunaga and Percy Liang
    ICML 2021.   [paper] [code & data] [codalab] [Stanford AI blog]
  • WILDS: A benchmark of in-the-wild distribution shifts
    Pang Wei Koh*, Shiori Sagawa*, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Sara Beery, Jure Leskovec, Anshul Kundaje, Emma Pierson, Sergey Levine, Chelsea Finn, and Percy Liang
    ICML 2021.   [paper] [project page] [code] [Stanford AI blog]
  • LEGO: Latent Execution-Guided Reasoning for Multi-Hop Question Answering on Knowledge Graphs
    Hongyu Ren, Hanjun Dai, Bo Dai, Xinyun Chen, Michihiro Yasunaga, Haitian Sun, Dale Schuurmans, Jure Leskovec and Denny Zhou
    ICML 2021.   [paper] [code]

2019

  • A Neural Topic-Attention Model for Medical Term Abbreviation Disambiguation
    Irene Li, Michihiro Yasunaga, Muhammed Yavuz Nuzumlalı, Cesar Caraballo, Shiwani Mahajan, Harlan Krumholz and Dragomir Radev
    NeurIPS 2019, Machine Learning for Health Workshop.   [paper] [bib] [code]
  • CoSQL: A Conversational Text-to-SQL Challenge Towards Cross-Domain Natural Language Interfaces to Databases
    with Tao Yu, Rui Zhang, Caiming Xiong, Richard Socher, Dragomir Radev and many authors.
    EMNLP 2019.   [paper] [bib] [slides] [dataset & leaderboard]
  • SParC: Cross-Domain Semantic Parsing in Context
    with Tao Yu, Rui Zhang, Caiming Xiong, Richard Socher, Dragomir Radev and many authors.
    ACL 2019.   [paper] [bib] [dataset & leaderboard]
  • TopicEq: A Joint Topic and Mathematical Equation Model for Scientific Texts
    Michihiro Yasunaga and John Lafferty
    AAAI 2019.   [paper] [bib] [dataset (170MB)]
  • ScisummNet: A Large Annotated Corpus and Content-Impact Models for Scientific Paper Summarization with Citation Networks
    Michihiro Yasunaga, Jungo Kasai, Rui Zhang, Alexander Fabbri, Irene Li, Dan Friedman and Dragomir Radev
    AAAI 2019.   [paper] [bib] [dataset]
  • Overview and Results of CL-SciSumm Shared Task 2019
    Muthu Kumar Chandrasekaran, Michihiro Yasunaga, Dragomir Radev, Dayne Freitag and Min-Yen Kan
    SIGIR 2019, BIRNDL Workshop.   [paper] [bib] [project page]

2018

  • SyntaxSQLNet: Syntax Tree Networks for Complex and Cross-Domain Text-to-SQL Task
    Tao Yu, Michihiro Yasunaga, Kai Yang, Rui Zhang, Dongxu Wang, Zifan Li and Dragomir Radev
    EMNLP 2018.   [paper] [bib] [code]
  • Spider: A Large-Scale Human-Labeled Dataset for Complex and Cross-Domain Semantic Parsing and Text-to-SQL Task
    Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang and Dragomir Radev
    EMNLP 2018.   [paper] [bib] [blog] [dataset & leaderboard]
  • Neural Coreference Resolution with Deep Biaffine Attention by Joint Mention Detection and Mention Clustering
    Rui Zhang, Cicero Nogueira dos Santos, Michihiro Yasunaga, Bing Xiang and Dragomir Radev
    ACL 2018.   [paper] [bib]
  • Robust Multilingual Part-of-Speech Tagging via Adversarial Training
    Michihiro Yasunaga, Jungo Kasai and Dragomir Radev
    NAACL 2018.   [paper] [bib] [slides] [code]

2017

  • Graph-based Neural Multi-Document Summarization
    Michihiro Yasunaga, Rui Zhang, Kshitijh Meelu, Ayush Pareek, Krishnan Srinivasan and Dragomir Radev
    CoNLL 2017.   [paper] [bib]

Other Projects

  • Named Entity Recognition for Academic Advising
    Developed systems to recognize academic named entities and link them to a university database. Part of the Sapphire Project with the University of Michigan and IBM Research.
  • Biomedical NLP
    Developed NLP systems to analyze electronic health records (EHRs). Collaboration with the Yale School of Medicine.