Drew Arad Hudson

dorarad [at] cs.stanford.edu   [scholar] [twitter] [github] [linkedin]

I am a 4th-year PhD student in Computer Science at Stanford University. I am fortunate to be advised by Prof. Christopher Manning and am a member of the NLP group. My research focuses on reasoning, compositionality, and representation learning at the intersection of vision and language.

I explore structural principles and inductive biases that make neural networks more interpretable, robust, and data-efficient, and allow them to generalize effectively and systematically from only a few samples. I believe in the importance of multi-disciplinary work, both within the AI field and across domains, and draw high-level inspiration from the feats of the human mind, including its structural properties as well as its cognitive capabilities.

I believe that compositionality is a key ingredient that, if incorporated successfully into neural models, may help bridge the gap between machine intelligence and natural intelligence. I explore ways to achieve compositionality both in terms of computation and representation.

Towards the former, I introduced, together with my advisor, models such as MAC and the Neural State Machine, which perform transparent step-by-step reasoning, as well as the GQA dataset for real-world visual question answering.
Towards the latter, I have more recently begun to explore ways to learn compositional scene representations, and, along with my research collaborator from Facebook AI Research, presented the Generative Adversarial Transformers for fast, data-efficient, and high-resolution image synthesis. I am actively researching this subject further and hope to present new findings in this exciting direction in the near future!

Papers

Generative Adversarial Transformers
Drew A. Hudson, C. Lawrence Zitnick. Special thanks to Christopher D. Manning.
In submission [Abstract] [Paper] [Code]
We introduce the Generative Adversarial Transformer model, a linearly efficient bipartite transformer, and combine it with the GAN framework for high-resolution scene generation.
SLM: Learning a Discourse Language Representation with Sentence Unshuffling
Haejun Lee, Drew A. Hudson, Kangwook Lee, Christopher D. Manning
We introduce a hierarchical transformer that is aware of semantics at both the word and sentence levels, allowing it to acquire a better understanding of global properties and discourse relations.
Learning by Abstraction: The Neural State Machine
Drew A. Hudson, Christopher D. Manning
Spotlight presentation, top 3%
We introduce a neuro-symbolic model that represents semantic knowledge in the form of a scene graph to support iterative reasoning for the task of compositional visual question answering.
GQA: A New Dataset for Real-World Visual Reasoning and Compositional Question Answering
Drew A. Hudson, Christopher D. Manning
Oral Presentation, top 5%
We introduce GQA, a large-scale dataset for real-world visual reasoning and compositional question answering, which focuses on bias reduction and on fully grounding each object and entity in a provided scene graph.
Compositional Attention Networks for Machine Reasoning
Drew A. Hudson, Christopher D. Manning
We present the MAC network, a fully differentiable neural network for compositional reasoning, which achieved state-of-the-art 98.9% accuracy on the CLEVR dataset.
Tighter Bounds for Makespan Minimization on Unrelated Machines
Dor Arad, Yael Mordechai, Hadas Shachnai
We obtain tight bounds for the problem of scheduling n jobs to minimize the makespan on m unrelated machines.


Selected Talks

  • Generative Adversarial Transformers. Stanford, April 2021
  • From Machine Learning to Machine Reasoning. Evolution AI, London, October 2020
  • Compositional Generative Networks for Scene Representation. Stanford, August 2020
  • Learning by Abstraction: The Neural State Machine. Microsoft Redmond, September 2019
  • Compositional & Relational Visual Reasoning. ICLR Representation Learning on Graphs and Manifolds Workshop, May 2019
  • Minimizing Rosenthal Potential in Multicast Games. Technion, March 2014
  • Exact and Approximate Bandwidth. Technion, Feb 2014

Activities, Associations & Community

Internships, Work and Awards
  • I received the Google Anita Borg Scholarship (2013, EMEA) for leading women in Computer Science.
  • I am an alumna of the Chais Scholars Program for Excellence, which gave me a wonderful opportunity to explore research for the first time in the early stages of my academic experience and to connect with an amazing group of student peers.
  • I interned at Facebook AI Research, Menlo Park in Summer 2019.
  • I worked at Google in 2012–2013, where I developed and refined NLP models to improve search quality and created tools and infrastructure for more robust and scalable data processing.
  • Recipient of the Stanford SoE fellowship for 1st year graduate students.
  • Valedictorian of the class of 2014 at the Technion – Israel Institute of Technology (GPA: 97.4/100, ranked 1st/224).
  • Finalist of the 2020 Facebook Fellowship Awards and the Open Phil AI Fellowship.
  • Technion President's list of honors (top 3%): Fall 2009/10 – Fall 2013/14.
  • Received the Excellent CS Students Program (SAMBA) award for academic excellence, 2013.
Workshops and Conferences
  • Together with Prof. Tatsunori Hashimoto and friends from Berkeley and CMU, I am currently working on the 1st CtrlNLG workshop for Controlled Language Generation. The proposal is in submission, so hopefully more details soon!
  • Co-organizer of the ViGIL workshop (2019 and NAACL 2021) for multimodal grounding and interaction, the ALVR workshop at NAACL 2021 for connections between vision and language, and the VQA workshop at CVPR 2019 and 2020.
  • Organized the GQA challenge at CVPR 2019 for compositional reasoning over real-world images, which attracted more than 50 participating groups.
  • Reviewer for NeurIPS (2019, 2020), ICML (2020, 2021) and ICLR (2021).
Teaching and Mentorship
  • In recent years I have mentored student teams in the CS224N class (Win 2019, Win 2021) on Deep Learning and NLP, at Stanford ACM, and in the independent study class.
  • Teaching Assistant for CS229: Machine Learning (Spr 2020) and CS230: Deep Learning (Win 2021, Spr 2021), responsible in particular for the class projects.
  • Organizer of the Job Talk Practice Session series at Stanford CS.
  • Participated in a mentoring program in which I tutored freshmen and sophomores in STEM classes, 2012–2014.
Hobbies and Fun Facts
  • I began studying towards a B.Sc. degree in Computer Science as a full-time student when I was 14 years old.
  • I studied piano for 8 years in the Dunie Weizman Conservatory of Music.
  • In my free time I like learning new languages (currently studying French and Japanese!), listening to music, and pencil drawing.