About Me

I am a third-year Ph.D. student at Stanford CS, advised by Professors James Zou and Stefano Ermon. Before that, I received my B.S. in Mathematics and Computer Science from Yuanpei College, Peking University, where I was advised by Professors Liwei Wang and Di He. I focus on advancing generative AI (both transformer- and diffusion-based models) through post-training and inference-time algorithm design. Feel free to reach out if you are interested in my work or my talks.

News

  • (Dec. 2025) We released DDRL, a data-regularized RL algorithm for diffusion models! It has been successfully used in NVIDIA Cosmos-Predict2.5. Check out our post!
  • (Dec. 2025) Data Attribution for RL was accepted to NeurIPS 2025 (Oral). See you in San Diego!
  • (Nov. 2025) We released NVIDIA Cosmos-Predict2.5, a Cosmos World Foundation Model specialized for video generation. I was responsible for designing the RL algorithm and performing the large-scale post-training (1K+ GPUs) for the release.
  • (Oct. 2025) Diffusion Inference-Time Acceleration was accepted to ICCV 2025. See you in Hawaii!

Selected Publications

  • (In submission) Data-regularized Reinforcement Learning for Diffusion Models at Scale
    Haotian Ye, Kaiwen Zheng, Jiashu Xu, Puheng Li, Huayu Chen, Jiaqi Han, Sheng Liu, Qinsheng Zhang, Hanzi Mao, Zekun Hao, Prithvijit Chattopadhyay, Dinghao Yang, Liang Feng, Maosheng Liao, Junjie Bai, Ming-Yu Liu, James Zou, Stefano Ermon
    [Paper] [Website] [Twitter]
  • (In submission) Can Language Models Discover Scaling Laws?
    Haowei Lin*, Haotian Ye*, Quzhe Huang, Wenzheng Feng, Yujun Li, Xiangyu Wang, Hubert Lim, Zhengrui Li, Jianzhu Ma, Yitao Liang, James Zou
    [Paper] [Blog]
  • (In submission) InfoTok: Adaptive Discrete Video Tokenizer via Information-Theoretic Compression
    Haotian Ye*, Qiyuan He*, Jiaqi Han, Puheng Li, Jiaojiao Fan, Zekun Hao, Fitsum Reda, Yogesh Balaji, Huayu Chen, Sheng Liu, Angela Yao, James Zou, Stefano Ermon, Haoxiang Wang, Ming-Yu Liu
  • (NeurIPS 2025, Oral) A Snapshot of Influence: A Local Data Attribution Framework for Online Reinforcement Learning
    Yuzheng Hu, Fan Wu, Haotian Ye, David Forsyth, James Zou, Nan Jiang, Jiaqi W. Ma, Han Zhao
    [Paper]
  • (ICLR 2025, Spotlight) Reducing Hallucinations in Vision-Language Models via Latent Space Steering
    Sheng Liu, Haotian Ye, Lei Xing, James Zou
    [Paper]
  • (AISTATS 2025) Efficient and Asymptotically Unbiased Constrained Decoding for Large Language Models
    Haotian Ye, Himanshu Jain, Chong You, Ananda Theertha Suresh, Haowei Lin, James Zou, Felix Yu
    [Paper]
  • (NeurIPS 2024, Spotlight) TFG: Unified Training-Free Guidance for Diffusion Models
    Haotian Ye*, Haowei Lin*, Jiaqi Han*, Minkai Xu, Sheng Liu, Yitao Liang, Jianzhu Ma, James Zou, Stefano Ermon
    [Paper]
  • (Nature Machine Intelligence) A computational framework for neural network-based variational Monte Carlo with Forward Laplacian
    Ruichen Li*, Haotian Ye*, Du Jiang, Xuelan Wen, Chuwei Wang, Zhe Li, Xiang Li, Di He, Ji Chen, Weiluo Ren, Liwei Wang
    [Paper]
  • (NeurIPS 2023, Oral) Towards Revealing the Mystery behind Chain of Thought: a Theoretical Perspective
    Guhao Feng*, Bohang Zhang*, Yuntian Gu*, Haotian Ye*, Di He, Liwei Wang
    [Paper] [Video] [Slides]
  • (ICML 2023, Oral) On the Power of Pre-training for Generalization in RL: Provable Benefits and Hardness
    Haotian Ye*, Xiaoyu Chen*, Liwei Wang, Simon Shaolei Du
    [Paper]
  • (AISTATS 2023) Freeze then Train: Towards Provable Representation Learning under Spurious Correlations and Feature Noise
    Haotian Ye, James Zou, Linjun Zhang
    [Paper] [Code] [Video] [Slides]
  • (ICLR 2023) Discovering Latent Knowledge in Language Models Without Supervision
    Collin Burns*, Haotian Ye*, Dan Klein, Jacob Steinhardt
    [Paper]
  • (J. Chem. Phys. Aug 2023) DeePMD-kit v2: A software package for Deep Potential models
    Jinzhe Zeng, Duo Zhang, …, Haotian Ye, …, Weinan E, Roberto Car, Linfeng Zhang, Han Wang
    [Paper]
  • (NeurIPS 2021) Towards a Theoretical Framework of Out-of-Distribution Generalization
    Haotian Ye*, Chuanlong Xie, Tianle Cai, Ruichen Li, Zhenguo Li, Liwei Wang
    [Paper] [Code] [Video] [Slides]

Selected Awards

  • Weiming Scholar of Peking University (1%), 2023
  • Person of the Year of Peking University (10 people/year), 2021
  • May 4th Scholarship (1%, Rank 1), 2021
  • Leo KoGuan Scholarship (1%), 2020
  • National Scholarship (1%, Rank 2), 2019
  • Merit Student Pacesetter (2%), 2019
  • Chinese Mathematical Olympiad (First Prize, Rank 7 in China), 2017