Tianlang Chen
I’m a second-year PhD student in Computer Science at Stanford, advised by Prof. Jure Leskovec. During the first-year rotation program, I had the pleasure of working with Prof. James Zou and Prof. Stefano Ermon. Previously, I received my Bachelor’s degree in Computer Science from Peking University, where I worked with Prof. Liwei Wang and Prof. Di He. I also completed a research internship at Berkeley AI Research, hosted by Prof. Aditi Krishnapriyan.
My research focuses on large language models and their foundations, broadly covering reasoning, controllability, and efficient computation. I also develop principled, structure-aware neural architectures for relational and geometric deep learning that enable effective generalization by respecting the underlying data structure.
Research Topics
We study the foundations of LLMs, developing principled methods that improve models' reasoning capabilities.
Highlights:
RFG: Test-Time Scaling for Diffusion Large Language Model Reasoning with Reward-Free Guidance
We introduce a general framework to guide and scale reasoning in diffusion LLMs without requiring external reward models.
Fractional Reasoning via Latent Steering Vectors Improves Inference-Time Compute
We propose a latent space steering approach that enables continuous control over reasoning intensity, allowing models to adapt reasoning depth to input difficulty.
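For intuition, here is a minimal sketch of the general latent-steering idea, not the paper's actual implementation: a precomputed steering vector is added to a model's hidden activations with a continuous coefficient that dials the behavior up or down. All names and shapes below are hypothetical.

```python
import torch

def steer(h: torch.Tensor, v: torch.Tensor, alpha: float) -> torch.Tensor:
    # Shift hidden activations along a steering direction v.
    # alpha is a continuous coefficient: 0 leaves the model unchanged;
    # larger alpha applies the target behavior (here, deeper reasoning)
    # more strongly, allowing per-input control of reasoning intensity.
    return h + alpha * v

# Toy example with random tensors standing in for real activations.
hidden = torch.randn(1, 16, 768)   # (batch, seq_len, hidden_dim)
direction = torch.randn(768)       # hypothetical precomputed steering vector
steered = steer(hidden, direction, alpha=0.5)
```

In practice such a shift would be applied inside the forward pass (e.g., via a hook on a chosen transformer layer); the sketch only illustrates why the control knob is continuous rather than binary.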
Relational Deep Learning (RDL) studies predictive modeling over relational databases by encoding entities and relations as graphs, enabling graph neural networks (GNNs) to leverage relational structure for improved performance.
Highlights:
RelGNN: Composite Message Passing for Relational Deep Learning
ICML 2025
We introduce the first GNN architecture tailored for learning on relational databases, featuring a novel structure-aware message-passing design that improves both efficiency and predictive performance.
We design neural architectures for geometric data that explicitly encode symmetry and physical constraints, improving generalization and sample efficiency.
Highlights:
Enabling Efficient Equivariant Operations in the Fourier Basis via Gaunt Tensor Products
ICLR 2024 Spotlight (Top 5%)
We introduce a novel formulation of equivariant tensor products that significantly reduces computational complexity for E(3)-equivariant neural networks.
GeoMFormer: A General Architecture for Geometric Molecular Representation Learning
ICML 2024
We propose a unified geometric learning framework that jointly learns invariant and equivariant representations through a novel attention mechanism between geometric streams.
One Transformer Can Understand Both 2D and 3D Molecular Data
ICLR 2023
We develop a unified architecture capable of modeling molecular data in both 2D and 3D formats.
Selected Publications & Preprints
(*: Equal Contribution)
Selected Awards
- Stanford School of Engineering Fellowship
- National Scholarship
- Merit Student Pacesetter Award