My research focuses on understanding and improving Deep Learning techniques for Natural Language Generation (NLG). In particular, I work on improving the controllability, interpretability and coherence of neural NLG in open-ended settings such as story generation and chitchat dialogue.
For the 2019-2020 academic year, I am co-leading a Stanford NLP team competing in the Alexa Prize.
Do Massively Pretrained Language Models Make Better Storytellers?
Abigail See, Aneesh Pappu, Rohun Saxena, Akhila Yerukola, Christopher D. Manning
Computational Natural Language Learning (CoNLL). 2019.
[code | poster]
What makes a good conversation? How controllable attributes affect human judgments
Abigail See, Stephen Roller, Douwe Kiela, Jason Weston
North American Chapter of the Association for Computational Linguistics (NAACL). 2019.
[blog post | code | slides]
Get To The Point: Summarization with Pointer-Generator Networks
Abigail See, Peter J. Liu, Christopher D. Manning
Association for Computational Linguistics (ACL). 2017.
[blog post | code | attention visualization code | poster (PDF | Keynote) | slides]
Compression of Neural Machine Translation Models via Pruning
Abigail See, Minh-Thang Luong, Christopher D. Manning
Computational Natural Language Learning (CoNLL). 2016.
[poster | spotlight slides]
The Cost of Principles: Analyzing Power in Compatibility Weighted Voting Games
Abigail See, Yoram Bachrach, Pushmeet Kohli
Autonomous Agents and Multi-Agent Systems (AAMAS). 2014.
I am the co-instructor and Head TA of CS224n: Natural Language Processing with Deep Learning. In 2019 I gave four lectures (which are on YouTube), and in 2018 I designed the starter code for the SQuAD class project.
As an instructor at SAILORS 2017 (now known as AI4ALL), I guided eight high school students in building a Naive Bayes classifier for tweets in a disaster relief setting [teaching materials]. In 2016 I also gave tutorials on Graph Search [slides] and Nearest Neighbors [slides | code]. The latter tutorial was also given at Girls Teaching Girls To Code.
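For readers curious what such an exercise looks like, here is a minimal sketch of a multinomial Naive Bayes text classifier of the kind students might build. The class name, the toy tweets, and the labels are all illustrative, not the actual SAILORS teaching materials.

```python
import math
from collections import Counter

class NaiveBayesClassifier:
    """Multinomial Naive Bayes with Laplace (add-alpha) smoothing."""

    def __init__(self, alpha=1.0):
        self.alpha = alpha  # smoothing constant

    def fit(self, docs, labels):
        self.labels = set(labels)
        self.priors = Counter(labels)          # class frequencies
        self.word_counts = {c: Counter() for c in self.labels}
        self.vocab = set()
        for doc, label in zip(docs, labels):
            for word in doc.lower().split():
                self.word_counts[label][word] += 1
                self.vocab.add(word)

    def predict(self, doc):
        n_docs = sum(self.priors.values())
        scores = {}
        for c in self.labels:
            # log P(c) + sum over words of log P(word | c)
            score = math.log(self.priors[c] / n_docs)
            total = sum(self.word_counts[c].values())
            denom = total + self.alpha * len(self.vocab)
            for word in doc.lower().split():
                count = self.word_counts[c][word] + self.alpha
                score += math.log(count / denom)
            scores[c] = score
        return max(scores, key=scores.get)

# Illustrative toy data (not real disaster-relief tweets)
tweets = [
    "need rescue flood",
    "water rising help",
    "nice sunny day",
    "enjoying coffee",
]
labels = ["urgent", "urgent", "other", "other"]

clf = NaiveBayesClassifier()
clf.fit(tweets, labels)
print(clf.predict("flood water help"))   # classified as "urgent"
```

The smoothing constant `alpha` keeps unseen words from zeroing out a class's probability, which is the usual first lesson when students test the classifier on new tweets.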
By building community and facilitating discussion, I aim to make AI more accessible and easier to understand.
- During 2017-2018, I was the organizer of AI Salon, a regular forum within the Stanford AI Lab to discuss high-level ideas in AI. In particular, I moderated a debate between Yann LeCun and Chris Manning on deep learning, structure and innate priors.
- During 2017-2018, I was also the organizer of AI Women, a regular casual meetup event to build community within the Stanford AI Lab.
- I am a contributing editor of Skynet Today, which is dedicated to providing accurate and accessible coverage of AI news. For example, we have published an overview of Neural Machine Translation.
- After attending ACL 2017, I wrote a summary of current Deep Learning research trends.
- I have spoken to Melinda Gates about the importance of women in AI, both on a personal level and to society at large.
- I have also spoken to Speevr about the basics of NLP and Deep Learning.
I'm originally from Cambridge in the UK, though I've also lived in Singapore. In 2014 I graduated with an MMath from Cambridge University's Mathematical Tripos (to read about the many peculiarities of the Tripos, see here). While at Cambridge my interests were in Pure Mathematics, particularly Combinatorics, Logic and Operational Research. For my Part III essay, I wrote about Smoothed Analysis with Applications in Machine Learning.
During my undergraduate degree I became interested in Computer Science while interning twice at Microsoft Research Cambridge. In 2012 I worked with the Programming Principles and Tools group on the T2 project, and in 2013 I worked on co-operative Game Theory. Since beginning my PhD, I have interned at Google Brain in Mountain View and Facebook AI Research in New York City.
In my spare time I enjoy social dance, watching and discussing films, and writing.
Here is my CV (usually outdated).