Tianlang Chen


I’m a second-year PhD student in Computer Science at Stanford, advised by Prof. Jure Leskovec. During the first-year rotation program, I had the pleasure of working with Prof. James Zou and Prof. Stefano Ermon. Previously, I received my Bachelor’s degree in Computer Science from Peking University, where I worked with Prof. Liwei Wang and Prof. Di He. I also completed a research internship at Berkeley AI Research, hosted by Prof. Aditi Krishnapriyan.

My research focuses on large language models and their foundations, broadly covering reasoning, controllability, and efficient computation. I also develop principled, structure-aware neural architectures for relational and geometric deep learning that enable effective generalization by respecting the underlying data structure.

Research Topics

We study the foundations of LLMs, developing principled methods that improve models' reasoning capabilities.

Relational Deep Learning (RDL) studies predictive modeling over relational databases by encoding entities and relations as graphs, enabling graph neural networks (GNNs) to leverage relational structure for improved performance.

We design neural architectures for geometric data that explicitly encode symmetry and physical constraints, improving generalization and sample efficiency.

Selected Publications & Preprints

(*: Equal Contribution)

  1. RFG: Test-Time Scaling for Diffusion Large Language Model Reasoning with Reward-Free Guidance
    Tianlang Chen*, Minkai Xu*, Jure Leskovec, Stefano Ermon
    Preprint, 2025
  2. Fractional Reasoning via Latent Steering Vectors Improves Inference-Time Compute
    Sheng Liu*, Tianlang Chen*, Pan Lu, Haotian Ye, Yizheng Chen, Lei Xing, James Zou
    Preprint, 2025
  3. RelGNN: Composite Message Passing for Relational Deep Learning
    Tianlang Chen, Charilaos Kanatsoulis, Jure Leskovec
    International Conference on Machine Learning (ICML), 2025
  4. GeoMFormer: A General Architecture for Geometric Molecular Representation Learning
    Tianlang Chen*, Shengjie Luo*, Di He, Shuxin Zheng, Tie-Yan Liu, Liwei Wang
    International Conference on Machine Learning (ICML), 2024
  5. Enabling Efficient Equivariant Operations in the Fourier Basis via Gaunt Tensor Products
    Shengjie Luo*, Tianlang Chen*†, Aditi S Krishnapriyan
    International Conference on Learning Representations (ICLR), 2024
    Spotlight (Top 5%)
  6. One Transformer Can Understand Both 2D and 3D Molecular Data
    Shengjie Luo, Tianlang Chen*, Yixian Xu, Shuxin Zheng, Tie-Yan Liu, Liwei Wang, Di He
    International Conference on Learning Representations (ICLR), 2023

Selected Awards

  • Stanford School of Engineering Fellowship
  • National Scholarship
  • Merit Student Pacesetter Award