
No.26-02 Simulating Real Users with State Alignment

Mar 24, 2026

User simulation offers a scalable path toward more human-centered AI systems. However, current approaches largely rely on response imitation, capturing surface-level language patterns rather than the deeper user states (beliefs, emotions, and goals) that drive human behavior. In this talk, I will present HumanLM, a framework that aligns user simulators with these deeper user states through reinforcement learning, outperforming supervised fine-tuning and RL baselines.

Project page: https://humanlm.stanford.edu/

Speaker Bio

Evelyn Choi is a master’s student in Computer Science at Stanford University, where she received her B.S. in Computer Science. She works in Prof. James Zou’s lab, and her research interests include machine learning, NLP, and human-centered AI.
