Anjiang Wei
I'm Anjiang Wei (魏安江 in Chinese), a PhD student in Computer Science at Stanford University, working under the guidance of Alex Aiken.
My research focuses on Large Language Models (LLMs), including areas such as LLM for Code (program understanding and performance optimization), LLM Evaluation and Benchmarking, Reasoning, and LLM Agents. I have a background in high-performance computing, software engineering, and compilers.
During my undergraduate studies at Peking University, where I was a part of the Turing class, I collaborated with Darko Marinov, Tao Xie, Lingming Zhang, and Yun (Eric) Liang.
If you are a Stanford student interested in my research directions, feel free to email me. I'm happy to chat about potential collaboration opportunities.
Email / CV / Scholar / Twitter / Github
Research
AMOS: Enabling Automatic Mapping for Tensor Computations On Spatial Accelerators with Hardware Abstraction
Size Zheng, Renze Chen, Anjiang Wei, Yicheng Jin, Qin Han, Liqiang Lu, Bingyang Wu, Xiuhong Li, Shengen Yan, Yun Liang
International Symposium on Computer Architecture
ISCA 2022, New York City, NY, June 2022
code / slides
A Large-Scale Longitudinal Study of Flaky Tests
Wing Lam, Stefan Winter, Anjiang Wei, Tao Xie, Darko Marinov, Jonathan Bell
ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages, and Applications
OOPSLA 2020, Virtual Conference, Nov. 2020
slides / video