Kevin Clark

Hi! I recently joined Google Brain as a research scientist. Before that, I completed my PhD at Stanford in the Natural Language Processing Group, advised by Chris Manning. My current research focuses on applying self-supervised, semi-supervised, and multi-task learning to NLP.


Email: kevclark(at)cs.stanford.edu
[(often out-of-date) CV] [github]



Publications

  • Pre-Training Transformers as Energy-Based Cloze Models.
    Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning.
    Empirical Methods in Natural Language Processing (EMNLP), 2020.
    [paper] [bib] [code]
  • Emergent linguistic structure in artificial neural networks trained by self-supervision.
    Christopher D. Manning, Kevin Clark, John Hewitt, Urvashi Khandelwal, and Omer Levy.
    Proceedings of the National Academy of Sciences, 2020.
    [paper] [bib]
  • ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators.
    Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning.
    International Conference on Learning Representations (ICLR), 2020.
    [paper] [bib] [code]
  • What Does BERT Look At? An Analysis of BERT's Attention.
    Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning.
    BlackBoxNLP@ACL, 2019. Best Paper Award.
    [paper] [bib] [code]
  • BAM! Born-Again Multi-Task Networks for Natural Language Understanding.
    Kevin Clark, Minh-Thang Luong, Urvashi Khandelwal, Christopher D. Manning, and Quoc V. Le.
    Association for Computational Linguistics (ACL), 2019.
    [paper] [bib] [code]
  • Semi-Supervised Sequence Modeling with Cross-View Training.
    Kevin Clark, Minh-Thang Luong, Christopher D. Manning, and Quoc V. Le.
    Empirical Methods in Natural Language Processing (EMNLP), 2018.
    [paper] [bib] [code]
  • Deep Reinforcement Learning for Mention-Ranking Coreference Models.
    Kevin Clark and Christopher D. Manning.
    Empirical Methods in Natural Language Processing (EMNLP), 2016.
    [paper] [bib] [code]
  • Inducing Domain-Specific Sentiment Lexicons from Unlabeled Corpora.
    William L. Hamilton, Kevin Clark, Jure Leskovec, and Dan Jurafsky.
    Empirical Methods in Natural Language Processing (EMNLP), 2016.
    [paper] [bib] [project website (code + data)]
  • Improving Coreference Resolution by Learning Entity-Level Distributed Representations.
    Kevin Clark and Christopher D. Manning.
    Association for Computational Linguistics (ACL), 2016.
    [paper] [bib] [code]
  • Large-scale Analysis of Counseling Conversations: An Application of Natural Language Processing to Mental Health.
    Tim Althoff*, Kevin Clark*, and Jure Leskovec.
    Transactions of the Association for Computational Linguistics (TACL), 2016.
    [paper] [bib] [project website] [dataset]
        *equal contribution
  • Entity-Centric Coreference Resolution with Model Stacking.
    Kevin Clark and Christopher D. Manning.
    Association for Computational Linguistics (ACL), 2015.
    [paper] [bib] [code]
  • RevMiner: An Extractive Interface for Navigating Reviews on a Smartphone.
    Jeff Huang, Oren Etzioni, Luke Zettlemoyer, Kevin Clark, and Christian Lee.
    User Interface Software and Technology (UIST), 2012.
    [paper] [bib]