Min Wu, PhD


I am a Postdoctoral Scholar working with Prof. Clark Barrett in the Department of Computer Science at Stanford University. I am also affiliated with the Stanford Center for AI Safety and the Center for Automated Reasoning.

Previously, I completed my PhD (DPhil) in Computer Science under the supervision of Prof. Marta Kwiatkowska at the University of Oxford.

My research focuses on safe and trustworthy AI, at the intersection of AI and formal methods. The long-term vision of my work is to develop AI systems, particularly those deployed in high-stakes applications, that are verifiably reliable and transparent.

Email: minwu[at]stanford.edu
Office: CoDa W312

Research Highlights

Formal Explainable AI to Promote Trustworthiness
  1. NeurIPS
    VeriX: Towards Verified Explainability of Deep Neural Networks
    Min Wu, Haoze Wu, and Clark Barrett
    In Proceedings of the 37th Conference on Neural Information Processing Systems (NeurIPS). Keynote at Stanford Center for AI Safety 2023 Annual Meeting, 2023
  2. Under Review
    Better Verified Explanations with Applications to Incorrectness and Out-of-Distribution Detection
    Min Wu, Xiaofu Li, Haoze Wu, and Clark Barrett
    arXiv preprint arXiv:2409.03060, 2024
Robustness Guarantees to Ensure AI Safety
  1. CVPR
    Robustness Guarantees for Deep Neural Networks on Videos
    Min Wu and Marta Kwiatkowska
    In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Oral Presentation, 2020
  2. AISTATS
    Convex Bounds on the Softmax Function with Applications to Robustness Verification
    Dennis Wei, Haoze Wu, Min Wu, Pin-Yu Chen, Clark Barrett, and Eitan Farchi
    In International Conference on Artificial Intelligence and Statistics (AISTATS), 2023
  3. EMNLP
    Assessing Robustness of Text Classification through Maximal Safe Radius Computation
    Emanuele La Malfa, Min Wu, Luca Laurenti, Benjie Wang, Anthony Hartshorn, and Marta Kwiatkowska
    In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP): Findings, 2020
  4. IJCAI
    Global Robustness Evaluation of Deep Neural Networks with Provable Guarantees for the Hamming Distance
    Wenjie Ruan, Min Wu, Youcheng Sun, Xiaowei Huang, Daniel Kroening, and Marta Kwiatkowska
    In Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI), 2019
Deep Neural Network Verification
  1. Theor. Comput. Sci.
    A Game-Based Approximate Verification of Deep Neural Networks with Provable Guarantees
    Min Wu, Matthew Wicker, Wenjie Ruan, Xiaowei Huang, and Marta Kwiatkowska
    Theoretical Computer Science. Invited Paper, 2020
  2. CAV
    Safety Verification of Deep Neural Networks
    Xiaowei Huang, Marta Kwiatkowska, Sen Wang, and Min Wu*
    In Proceedings of the 29th International Conference on Computer Aided Verification (CAV). Keynote Paper, 2017
  3. CAV
    Marabou 2.0: A Versatile Formal Analyzer of Neural Networks
    Haoze Wu, Omri Isac, Aleksandar Zeljić, Teruhiro Tagomori, Matthew Daggitt, Wen Kokke, Idan Refaeli, Guy Amir, Kyle Julian, Shahaf Bassan, Pei Huang, Ori Lahav, Min Wu, Min Zhang, Ekaterina Komendantskaya, Guy Katz, and Clark Barrett
    In Proceedings of the 36th International Conference on Computer Aided Verification (CAV), 2024
  4. AAAI
    Towards Efficient Verification of Quantized Neural Networks
    Pei Huang, Haoze Wu, Yuting Yang, Ieva Daukantas, Min Wu, Yedi Zhang, and Clark Barrett
    In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2024

Teaching Highlights

Stanford University