My research focuses on understanding and improving neural text generation techniques for open-ended tasks, such as story generation and chitchat dialogue. In particular, I'm interested in improving the controllability, interpretability and coherence of neural text generation, and applying it to real-world scenarios, such as our award-winning Alexa Prize chatbot.
Teaching & Outreach
I was the co-instructor and Head TA of CS224n: Natural Language Processing with Deep Learning in 2018 and 2019. Alongside giving lectures on RNNs, Language Models, Machine Translation and Natural Language Generation, I led a redesign of the assignments, including the starter code for the SQuAD class project.
I've also developed AI teaching materials for high schoolers, including a two-week project to build a Naive Bayes tweet classifier, a tutorial on Graph Search, and an interactive coding tutorial on Nearest Neighbors. I've delivered these materials at AI4ALL and Girls Teaching Girls To Code.
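To give a flavor of the Naive Bayes tweet classifier project mentioned above, here is a minimal sketch of that kind of classifier. The toy tweets, labels, and function names are illustrative assumptions, not the actual course materials.

```python
from collections import Counter
import math

def train(tweets, labels):
    """Count word frequencies per class, plus how many tweets each class has."""
    word_counts = {label: Counter() for label in set(labels)}
    class_counts = Counter(labels)
    for tweet, label in zip(tweets, labels):
        word_counts[label].update(tweet.lower().split())
    return word_counts, class_counts

def predict(tweet, word_counts, class_counts):
    """Pick the class maximizing log P(class) + sum of log P(word | class),
    using add-one (Laplace) smoothing for unseen words."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total_tweets = sum(class_counts.values())
    best_label, best_score = None, float("-inf")
    for label, counts in word_counts.items():
        score = math.log(class_counts[label] / total_tweets)  # class prior
        denom = sum(counts.values()) + len(vocab)             # smoothed denominator
        for word in tweet.lower().split():
            score += math.log((counts[word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Tiny illustrative dataset (positive vs. negative tweets)
tweets = ["i love this movie", "what a great day", "i hate waiting", "this is awful"]
labels = ["pos", "pos", "neg", "neg"]
wc, cc = train(tweets, labels)
print(predict("i love this day", wc, cc))  # → pos
```

The log-space scoring avoids numerical underflow from multiplying many small probabilities, which is the standard trick taught alongside this algorithm.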
I've been the organizer of AI Salon, a forum for the Stanford AI Lab to discuss high-level ideas in AI, often with expert guests. In particular, I moderated a debate between Yann LeCun and Chris Manning about structure in deep learning. I've also been the organizer of AI Women, an inclusivity group to build community in the Stanford AI Lab.
I've been featured in some interviews: with Speevr about NLP basics, with Skynet Today about the Alexa Prize, with DeepLearning.AI about my AI journey, and with Melinda Gates about the importance of women in AI.
* = equal contribution
Neural Generation Meets Real People: Towards Emotionally Engaging Mixed-Initiative Conversations
Ashwin Paranjape*, Abigail See*, Kathleen Kenealy, Haojun Li, Amelia Hardy, Peng Qi, Kaushik Ram Sadagopan, Nguyet Minh Phu, Dilara Soylu, Christopher D. Manning
3rd Proceedings of Alexa Prize (Alexa Prize 2019). 2020.
[video | project website]
Do Massively Pretrained Language Models Make Better Storytellers?
Abigail See, Aneesh Pappu*, Rohun Saxena*, Akhila Yerukola*, Christopher D. Manning
Computational Natural Language Learning (CoNLL). 2019.
[code | poster]
What makes a good conversation? How controllable attributes affect human judgments
Abigail See, Stephen Roller, Douwe Kiela, Jason Weston
North American Chapter of the Association for Computational Linguistics (NAACL). 2019.
[blog post | code | slides]
Get To The Point: Summarization with Pointer-Generator Networks
Abigail See, Peter J. Liu, Christopher D. Manning
Association for Computational Linguistics (ACL). 2017.
[blog post | code | attention visualization code | poster (PDF | Keynote) | slides]
Compression of Neural Machine Translation Models via Pruning
Abigail See*, Minh-Thang Luong*, Christopher D. Manning
Computational Natural Language Learning (CoNLL). 2016.
[poster | spotlight slides]
The Cost of Principles: Analyzing Power in Compatibility Weighted Voting Games
Abigail See, Yoram Bachrach, Pushmeet Kohli
Autonomous Agents and Multi-Agent Systems (AAMAS). 2014.
I primarily grew up in Cambridge, UK, though I've also lived in Singapore. From 2010 to 2014 I studied Pure Mathematics in the Mathematical Tripos at the University of Cambridge. For my Part III essay, I wrote about Smoothed Analysis with Applications in Machine Learning.
During my undergrad degree I became interested in Computer Science while interning at Microsoft Research Cambridge, working on termination analysis. During my PhD, I've interned at Google Brain in Mountain View and Facebook AI Research in New York City.
In my free time, I enjoy creating music and comedy, and I've been a member of the Stanford Viennese Ball Opening Committee.
Here is my CV (last updated March 2021).