Shirley Wu
Shirley (Ying-Xin) is a first-year Ph.D. student in Computer Science at Stanford University and a member of the Stanford Artificial Intelligence Laboratory (SAIL). Previously, she obtained her B.S. degree from the School of Data Science at the University of Science and Technology of China (USTC), where she was advised by Prof. Xiangnan He.
Her goal is to build reliable models, with a focus on their generalization (to unseen data and domain shifts), robustness (to data noise and bias), and explainability (to facilitate human understanding), which together underpin their trustworthiness in critical applications.
GitHub Scholar Twitter Linkedin Email: shir{last_name}@cs.stanford.edu
What's New
[Feb 2023] Pleased to give a talk (YouTube) on "Discovering Invariant Rationales for GNNs" at DEFirst - MILA x Vector.
[Oct 2022] FALCON is accepted by NeurIPS 2022 GLFrontiers Workshop!
[June 2022] Named an Outstanding Graduate of USTC & Anhui Province (< 3%)!
[May 2022] One paper is accepted by TPAMI and another is accepted by ICML!
Research Topics
Discovering Invariant Rationales for Graph Neural Networks
ICLR 2022.
Ying-Xin Wu, Xiang Wang, An Zhang, Xiangnan He, Tat-Seng Chua.
What: An invariant learning algorithm for GNNs.
Motivation: GNNs often fail to generalize to out-of-distribution (OOD) datasets or to provide interpretations.
Insight: We construct interventional distributions as "multiple eyes" to discover the features that make the label invariant (i.e., causal features).
Benefits: Intrinsically interpretable GNNs that are robust and generalizable to OOD datasets.
Let Invariant Rationale Discovery Inspire Graph Contrastive Learning
ICML 2022.
Sihang Li, Xiang Wang, An Zhang, Ying-Xin Wu, Xiangnan He, Tat-Seng Chua.
What: A graph contrastive learning (GCL) method with model interpretations.
How: We generate rationale-aware graphs for contrastive learning to achieve better transferability.
Deconfounding to Explanation Evaluation in Graph Neural Networks
Preprint.
Ying-Xin Wu, Xiang Wang, An Zhang, Xia Hu, Fuli Feng, Xiangnan He, Tat-Seng Chua.
What: A new paradigm for evaluating GNN explanations.
Motivation: Explanation evaluation fundamentally guides the direction of GNN explainability research.
Insight: Removal-based evaluation hardly reflects the true importance of explanations.
Benefits: More faithful ranking of different explanations and explanatory methods.
Towards Multi-Grained Explainability for Graph Neural Networks
NeurIPS 2021.
Xiang Wang, Ying-Xin Wu, An Zhang, Xiangnan He, Tat-Seng Chua.
What: ReFine, a two-step explainer.
How: It generates multi-grained explanations via pre-training and fine-tuning.
Benefits: Obtains both global explanations (for a group of instances) and local explanations (for a single instance).
Reinforced Causal Explainer for Graph Neural Networks
TPAMI. May 2022.
Xiang Wang, Ying-Xin Wu, An Zhang, Fuli Feng, Xiangnan He, Tat-Seng Chua.
What: Reinforced Causal Explainer (RC-Explainer).
How: It constructs an explanatory subgraph by successively adding edges with a policy network.
Benefits: Faithful and concise explanations.
Efficient Automatic Machine Learning via Design Graphs
NeurIPS 2022 GLFrontiers Workshop.
Shirley Wu, Jiaxuan You, Jure Leskovec, Rex Ying.
What: An efficient AutoML method, FALCON, that searches for the optimal model design on graph and image datasets.
How: We build a design graph over the design space of architecture and hyper-parameter choices, and search for the best node on the design graph.
Knowledge-Aware Meta-learning for Low-Resource Text Classification
EMNLP 2021 (Oral). Short Paper.
Huaxiu Yao, Ying-Xin Wu, Maruan Al-Shedivat, Eric P. Xing.
What: A meta-learning algorithm for low-resource text classification.
How: We extract sentence-specific graphs from a knowledge graph to boost representation learning.
Benefits: Better generalization between meta-training and meta-testing tasks.
Personal Experiences

Stanford University
2022.9 - Present
I am currently rotating with Prof. Chelsea Finn (since Apr 2023).
I had great rotation experiences working with Prof. Jure Leskovec (Jan 2023 - Mar 2023) and with Prof. James Zou (Sep 2022 - Dec 2022).

Univ. of Sci & Tech of China
2018.9 - 2022.7
Advisor: Prof. Xiangnan He
I was lucky to be advised by Xiangnan, who strongly encouraged me to pursue professional excellence.

National University of Singapore
2020.3 - 2021.12
Advisor: Dr. Xiang Wang & Prof. Tat-Seng Chua
This experience strengthened my problem-solving and communication skills and broadened my research horizons.

Stanford University
2021.3 - 2021.8
Advisor: Dr. Huaxiu Yao
I collaborated with Huaxiu on meta-learning and knowledge graphs.
Services
Research Thoughts
Explanations can make deep models more trustworthy. (x)
Explanations can make deep models more transparent; unbiased models can make themselves more trustworthy. (√)
Concept-based explanations are less ambiguous, since they require more accurate specification of salient features (e.g., texture, color, or shape for images) than visualizations, which can be interpreted in multiple ways.
Miscellaneous
I learn Chinese calligraphy under Mr. Congming Zhang.
I enjoy extreme sports like rock climbing and bungee jumping. I've never tried skydiving, but I am always seeking the opportunity.
A link to my BF: Hank Wang.