Welcome! I'm a fourth-year PhD student in Computer Science at Stanford University, advised by Professor Chris Manning in the Natural Language Processing group.

My research focuses on understanding and improving Deep Learning techniques for Natural Language Generation (NLG). In particular, I work on improving the controllability, interpretability and coherence of neural NLG in open-ended settings.

I have a blog, where I write about my own and others' research.

Recent News

  • June-September 2018 — During my internship at Facebook AI Research in New York, I worked with Jason Weston and Douwe Kiela on controllable dialogue agents.
  • May 2018 — I won a Lieberman Fellowship from Stanford University.
  • April 2018 — I spoke to Speevr about the basics of NLP and Deep Learning.
  • February 2018 — I moderated a debate between Yann LeCun and Chris Manning on deep learning, structure and innate priors.
  • January-March 2018 — I was a head TA for CS224n: Natural Language Processing with Deep Learning. I gave two lectures: one on RNN Language Models and one on Machine Translation. I also designed the starter code for the SQuAD class project.
  • August 2017 — I attended ACL 2017 in Vancouver. Read my thoughts on the conference here.
  • July 2017 — At SAILORS 2017, I taught eight high-schoolers to build a Naive Bayes classifier for tweets in a disaster relief setting (materials here); a toy sketch of the idea appears after this list.
  • May 2017 — I received an NVIDIA Graduate Fellowship. Thank you NVIDIA!
  • April 2017 — Our paper on summarization has been accepted to ACL — check out the blog post! I started this project during my Google Brain internship, then continued it at Stanford.
  • November 2016 — I spoke to Melinda Gates about the importance of women in AI, both on a personal level, and to society at large.
  • August 2016 — I attended ACL and presented my poster at CoNLL.
  • July 2016 — I gave two tutorials — one on graph search algorithms and one on nearest neighbor movie recommendations — at SAILORS, Stanford AI's outreach program for high school girls. Materials here.
  • June-September 2016 — During my internship at Google Brain, I worked with Peter J. Liu on abstractive text summarization.
  • June 2016 — Our paper has been accepted to CoNLL. See you in Berlin!
  • April 2016 — I was a mentor for the AI track of Girls Teaching Girls To Code.
  • September 2015 — I began the PhD program at Stanford University.
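
For the curious, here is a minimal sketch of the kind of classifier the SAILORS students built. This is my own illustration, not the actual course materials: the example tweets, labels and the scikit-learn pipeline are all assumptions for demonstration purposes.

```python
# Toy Naive Bayes tweet classifier for disaster relief triage.
# Illustrative only: the tweets and labels below are invented, and the
# real SAILORS exercise may have been structured differently.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_tweets = [
    "flood waters rising fast near the bridge, need rescue",
    "power is out across the whole east side",
    "beautiful sunset over the city tonight",
    "road to the hospital is blocked by debris",
    "great coffee at the new cafe downtown",
]
train_labels = ["urgent", "urgent", "not_urgent", "urgent", "not_urgent"]

# Bag-of-words counts fed into multinomial Naive Bayes: the classic
# (and very teachable) baseline for short-text classification.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_tweets, train_labels)

print(model.predict(["bridge collapsed, people trapped"]))  # ['urgent']
```

Naive Bayes suits a classroom setting like this because it trains instantly on a tiny labeled set and its per-word probabilities are easy for students to inspect.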

Publications

2017

Get To The Point: Summarization with Pointer-Generator Networks
Abigail See, Peter J. Liu, Christopher D. Manning
Association for Computational Linguistics (ACL). 2017.
[blog post | poster (PDF | Keynote) | slides]

2016

Compression of Neural Machine Translation Models via Pruning
Abigail See, Minh-Thang Luong, Christopher D. Manning
Computational Natural Language Learning (CoNLL). 2016.
[poster | spotlight slides]

2014

The Cost of Principles: Analyzing Power in Compatibility Weighted Voting Games
Abigail See, Yoram Bachrach, Pushmeet Kohli
Autonomous Agents and Multi-Agent Systems (AAMAS). 2014.

2013

Ramsey vs. Lexicographic Termination Proving
Byron Cook, Abigail See, Florian Zuleger
Tools and Algorithms for the Construction and Analysis of Systems (TACAS). 2013.
[slides]

Bio

I'm originally from Cambridge in the UK, though I've also lived in Singapore. In 2014 I graduated with an MMath from Cambridge University's Mathematical Tripos (to read about the many peculiarities of the Tripos, see here). While at Cambridge my interests were in Pure Mathematics — particularly Combinatorics, Logic and Operational Research.

During my undergraduate degree I became interested in Computer Science while interning twice at Microsoft Research Cambridge. In 2012 I worked with the Programming Principles and Tools group on the T2 project, and in 2013 I worked on co-operative Game Theory.

In my spare time I enjoy social dance, watching and discussing films, and writing.

CV

Here is my CV.