Shirley Wu

Shirley (Ying-Xin) is a first-year Ph.D. student in Stanford CS and the Stanford Artificial Intelligence Laboratory (SAIL). Previously, she obtained her B.S. degree from the School of Data Science at the University of Science and Technology of China (USTC), advised by Prof. Xiangnan He.

Her goal is to build reliable models with a focus on their generalization (to unseen data or domain shifts), robustness (to data noise and bias), and explainability (to facilitate human understanding), which together underpin their trustworthiness in critical applications.

   GitHub     Scholar     Twitter     Linkedin     Email: shir{last_name}


[Feb 2023] Pleased to give a talk (YouTube) on "Discovering Invariant Rationales for GNNs" at DEFirst - MILA x Vector.
[Oct 2022] FALCON is accepted by NeurIPS 2022 GLFrontiers Workshop!
[June 2022] Named an Outstanding Graduate of USTC & Anhui Province (< 3%)!!
[May 2022] One paper is accepted by TPAMI and another is accepted by ICML!


Excited to explore more!
  • Discovering Invariant Rationales for Graph Neural Networks

    ICLR 2022.
    Ying-Xin Wu, Xiang Wang, An Zhang, Xiangnan He, Tat-Seng Chua.

    What: An invariant learning algorithm for GNNs.
    Motivation: GNNs often fail to generalize to out-of-distribution (OOD) datasets and rarely provide interpretations.
    Insight: We construct interventional distributions as "multiple eyes" to discover the features that keep the label invariant (i.e., causal features).
    Benefits: Intrinsically interpretable GNNs that are robust and generalizable to OOD datasets.

  • Let Invariant Rationale Discovery Inspire Graph Contrastive Learning

    ICML 2022.
    Sihang Li, Xiang Wang, An Zhang, Ying-Xin Wu, Xiangnan He, Tat-Seng Chua.

    What: A graph contrastive learning (GCL) method with model interpretations.
    How: We generate rationale-aware graphs for contrastive learning to achieve better transferability.

  • Deconfounding to Explanation Evaluation in Graph Neural Networks

    Ying-Xin Wu, Xiang Wang, An Zhang, Xia Hu, Fuli Feng, Xiangnan He, Tat-Seng Chua

    What: A new paradigm to evaluate GNN explanations.
    Motivation: Explanation evaluation fundamentally guides the direction of GNN explainability research.
    Insight: Removal-based evaluation hardly reflects the true importance of explanations.
    Benefits: More faithful ranking of different explanations and explanatory methods.

  • Towards Multi-Grained Explainability for Graph Neural Networks

    NeurIPS 2021.
    Xiang Wang, Ying-Xin Wu, An Zhang, Xiangnan He, Tat-Seng Chua.

    What: ReFine, a two-step explainer.
    How: It generates multi-grained explanations via pre-training and fine-tuning.
    Benefits: Obtain both global (for a group) and local explanations (for an instance).

  • Reinforced Causal Explainer for Graph Neural Networks

    TPAMI. May 2022.
    Xiang Wang, Ying-Xin Wu, An Zhang, Fuli Feng, Xiangnan He & Tat-Seng Chua.

    What: Reinforced Causal Explainer (RC-Explainer).
    How: It constructs an explanatory subgraph by successively adding edges with a policy network.
    Benefits: Faithful and concise explanations.

  • Efficient Automatic Machine Learning via Design Graphs

    NeurIPS 2022 GLFrontiers Workshop.
    Shirley Wu, Jiaxuan You, Jure Leskovec, Rex Ying

    What: An efficient AutoML method, FALCON, that searches for the optimal model design for graph and image datasets.
    How: We build a design graph over the design space of architecture and hyper-parameter choices, and search for the best node on the design graph.

  • Knowledge-Aware Meta-learning for Low-Resource Text Classification

    EMNLP (Oral) 2021. Short Paper.
    Huaxiu Yao, Ying-Xin Wu, Maruan Al-Shedivat, Eric P. Xing.

    What: A meta-learning algorithm for low-resource text classification.
    How: We extract sentence-specific subgraphs from a knowledge graph to boost representation learning.
    Benefits: Better generalization between meta-training and meta-testing tasks.


Reviewer: ICML'22, NeurIPS'22, NeurIPS'22 Workshop GLFrontiers.


Sometimes I write some casual thoughts here. Feel free to email for any discussions!

Explanations can make deep models more trustworthy. (x)
Explanations can make deep models more transparent; only unbiased models can make themselves more trustworthy. (√)

Explanations based on concepts are less ambiguous than visualizations: they require a more precise specification of salient features (e.g., texture, color, or shape for images), whereas a visualization can be interpreted in multiple ways.


I study Chinese calligraphy under Mr. Congming Zhang.

I enjoy extreme sports like rock climbing and bungee jumping. I've never tried skydiving, but I am always seeking opportunities.

A link to my BF: Hank Wang.