Shirley Wu

Shirley is a second-year Ph.D. student in Stanford CS, advised by Prof. Jure Leskovec and Prof. James Zou. Previously, she obtained her B.S. degree in the School of Data Science at the University of Science and Technology of China (USTC), advised by Prof. Xiangnan He.

Her current research goal is to understand and further improve the "magic" of foundation and multimodal models, focusing on their generalization, adaptation, and interpretability, which underpin their applicability from a practitioner's perspective.

   GitHub     Scholar     Twitter     Linkedin     Email: shir{last_name}@cs.stanford.edu

What's New

[Sep 2023] Our paper on diffusion models for GNN explanation was accepted to NeurIPS.
[Jun 2023] It was a pleasure to give a talk about our Discover and Cure paper for the UP lab.
[Apr 2023] Discover and Cure: Concept-aware Mitigation of Spurious Correlation was accepted to ICML 2023!
[Feb 2023] It was a pleasure to give a talk (YouTube) about Discovering Invariant Rationales for GNNs (ICLR 2022) for DEFirst - MILA x Vector.

Research Topics

Excited to explore more!
  • Discover and Cure: Concept-aware Mitigation of Spurious Correlation (DISC)

    ICML 2023.
    Shirley Wu, Mert Yuksekgonul, Linjun Zhang, James Zou.

    Task: Image classification.
    What: An algorithm which adaptively mitigates spurious correlations during model training.
    Benefits: Less spurious bias, better generalization, and unambiguous interpretations.
    How: Using concept images generated by Stable Diffusion, DISC computes, in each iteration, a metric called concept sensitivity that indicates each concept's spuriousness. Guided by this metric, DISC creates a balanced dataset (where spurious correlations are removed) to update the model (see the sketch below).
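
    A minimal sketch of one such iteration, assuming a PyTorch classifier; the sensitivity proxy and the `concept_bank` structure are illustrative stand-ins, not the paper's actual code:

    ```python
    import torch
    import torch.nn.functional as F

    def concept_sensitivity(model, concept_imgs, labels):
        # Toy spuriousness proxy: loss on Stable-Diffusion-generated concept
        # images; the paper defines its own concept-sensitivity metric.
        with torch.no_grad():
            return F.cross_entropy(model(concept_imgs), labels).item()

    def disc_iteration(model, opt, x, y, concept_bank):
        # concept_bank (assumed): {concept_name: (generated_images, labels)}
        # 1) Score each concept's spuriousness on the current model.
        scores = {c: concept_sensitivity(model, imgs, lbls)
                  for c, (imgs, lbls) in concept_bank.items()}
        # 2) Rebalance the batch with images from the most spurious concept.
        aug_x, aug_y = concept_bank[max(scores, key=scores.get)]
        x_bal, y_bal = torch.cat([x, aug_x]), torch.cat([y, aug_y])
        # 3) Update the model on the balanced data.
        opt.zero_grad()
        F.cross_entropy(model(x_bal), y_bal).backward()
        opt.step()
    ```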

  • Discovering Invariant Rationales for Graph Neural Networks

    ICLR 2022.
    Ying-Xin Wu, Xiang Wang, An Zhang, Xiangnan He, Tat-Seng Chua.

    Task: Graph classification.
    What: An invariant learning algorithm for GNNs.
    Motivation: GNNs often fail to generalize to out-of-distribution (OOD) datasets and to provide interpretations.
    Insight: We construct interventional distributions as "multiple eyes" to discover the features that keep the label invariant (i.e., causal features).
    Benefits: Intrinsically interpretable GNNs that are robust and generalizable to OOD datasets.

  • Let Invariant Rationale Discovery Inspire Graph Contrastive Learning

    ICML 2022.
    Sihang Li, Xiang Wang, An Zhang, Ying-Xin Wu, Xiangnan He, and Tat-Seng Chua.

    Task: Graph classification.
    What: A graph contrastive learning (GCL) method with model interpretations.
    How: We generate rationale-aware graphs for contrastive learning to achieve better transferability.

  • Knowledge-Aware Meta-learning for Low-Resource Text Classification

    EMNLP 2021 (Oral, short paper).
    Huaxiu Yao, Ying-Xin Wu, Maruan Al-Shedivat, Eric P. Xing.

    Task: Text classification.
    What: A meta-learning algorithm for low-resource text classification.
    How: We extract sentence-specific subgraphs from a knowledge graph for training.
    Benefits: Better generalization between meta-training and meta-testing tasks.

  • Med-Flamingo: a Multimodal Medical Few-shot Learner

    Preprint.
    Michael Moor*, Qian Huang*, Shirley Wu, Michihiro Yasunaga, Cyril Zakka,
    Yash Dalmia, Eduardo Pontes Reis, Pranav Rajpurkar, Jure Leskovec.

    Task: Visual question answering, rationale generation, etc.
    What: A new multimodal few-shot learner specialized for the medical domain.
    How: Based on OpenFlamingo-9B, we continue pre-training on paired and interleaved medical image-text data from publications and textbooks.
    Benefits: Few-shot generative visual question answering abilities in the medical domain.

  • Deconfounding to Explanation Evaluation in Graph Neural Networks

    Preprint.
    Ying-Xin Wu, Xiang Wang, An Zhang, Xia Hu, Fuli Feng, Xiangnan He, Tat-Seng Chua.

    Task: Explanation evaluation.
    What: A new paradigm to evaluate GNN explanations.
    Motivation: Explanation evaluation fundamentally guides the direction of GNN explainability research.
    Insight: Removal-based evaluation hardly reflects the true importance of explanations.
    Benefits: More faithful ranking of different explanations and explanatory methods.

  • Towards Multi-Grained Explainability for Graph Neural Networks

    NeurIPS 2021.
    Xiang Wang, Ying-Xin Wu, An Zhang, Xiangnan He, Tat-Seng Chua.

    Task: Explanation generation for GNNs.
    What: ReFine, a two-step explainer.
    How: It generates multi-grained explanations via pre-training and fine-tuning.
    Benefits: Obtains both global explanations (for a group of instances) and local explanations (for a single instance).

  • Reinforced Causal Explainer for Graph Neural Networks

    TPAMI. May 2022.
    Xiang Wang, Ying-Xin Wu, An Zhang, Fuli Feng, Xiangnan He, and Tat-Seng Chua.

    Task: Explanation generation for GNNs.
    What: Reinforced Causal Explainer (RC-Explainer).
    How: It constructs an explanatory subgraph by successively adding edges with a policy network (sketched below).
    Benefits: Faithful and concise explanations.
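
    A toy sketch of that greedy construction at inference time, assuming a trained policy with the interface `policy(chosen_edges, candidate_edge) -> score`; the actual method trains this policy with reinforcement learning, which is omitted here:

    ```python
    def explain_graph(policy, edges, budget):
        # Greedily add the edge the policy scores highest given the edges
        # chosen so far, until the explanation budget is reached.
        chosen, remaining = [], list(edges)
        for _ in range(budget):
            best = max(remaining, key=lambda e: policy(chosen, e))
            remaining.remove(best)
            chosen.append(best)
        return chosen  # edges of the explanatory subgraph
    ```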

  • Efficient Automatic Machine Learning via Design Graphs

    NeurIPS 2022 GLFrontiers Workshop.
    Shirley Wu, Jiaxuan You, Jure Leskovec, Rex Ying.

    What: An efficient AutoML method, FALCON, that searches for the optimal model design for graph and image datasets.
    How: We build a design graph over the design space of architecture and hyper-parameter choices, and search for the best node on the design graph (see the sketch below).
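
    A minimal sketch of the search idea as a greedy local walk, assuming a networkx graph whose nodes are complete designs and a user-supplied `evaluate` function (e.g., a short proxy training run); this is an illustration, not FALCON's implementation:

    ```python
    import networkx as nx

    def greedy_design_search(design_graph: nx.Graph, evaluate, start, max_steps=20):
        # Walk the design graph: nodes are designs, edges connect designs
        # that differ in one architecture or hyper-parameter choice.
        current, best_score = start, evaluate(start)
        for _ in range(max_steps):
            neighbors = list(design_graph.neighbors(current))
            if not neighbors:
                break
            scored = {n: evaluate(n) for n in neighbors}
            nxt = max(scored, key=scored.get)
            if scored[nxt] <= best_score:
                break  # local optimum on the design graph
            current, best_score = nxt, scored[nxt]
        return current, best_score
    ```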

Services

Reviewer: ICML'22/'23, NeurIPS'22/'23, NeurIPS'22 GLFrontiers Workshop.

Research Thoughts

Some personal thoughts on current trends in AI research:

Intelligence goes for general knowledge: An agent is intelligent if it is knowledgeable across multiple tasks, i.e., exhibits general intelligence.

Effectiveness under scale becomes the real effectiveness: An algorithm's effectiveness shrinks as the dataset size approaches infinity, where performance gains come mainly from data quality. Testing algorithms efficiently at large scale will become necessary.

Adaptation becomes polarized: large adaptation relies on collecting more data, while small adaptation focuses on user-based context. For tasks with distinct data sources, e.g., pathology analysis, adaptation requires collecting more domain-specific data. For small adaptation, e.g., personalized queries, direct fine-tuning is infeasible because (1) data from a single user is limited, and (2) in an online setting it is impossible to even train an adaptation layer, so small adaptation may rely on in-context learning.

Miscellaneous

I play the drum set on weekends, sometimes.

I learn Chinese calligraphy from Mr. Congming Zhang.

I enjoy extreme sports like bungee jumping, and I am always seeking opportunities to try new ones.

I skydived in Hawaii this July!! Pic 1, Pic 2.

A link to my BF: Zhanghan Wang.