User simulation offers a scalable path toward more human-centered AI systems. However, current approaches largely rely on response imitation, capturing surface-level language patterns rather than the deeper user states, such as beliefs, emotions, and goals, that drive human behavior. In this talk, I will present HumanLM, a framework that aligns user simulators on deeper user states through reinforcement learning, outperforming supervised fine-tuning and RL baselines.
Project page: https://humanlm.stanford.edu/
Speaker Bio
Evelyn Choi is a master’s student in Computer Science at Stanford University, where she received her B.S. in Computer Science. She works in Prof. James Zou’s lab, and her research interests include machine learning, NLP, and human-centered AI.
More Details
- When: Tue 24 March 2026, 1–2 pm (Brisbane time)
- Speaker: Evelyn Hejin Choi (Stanford)
- Host: Ruihong Qiu
- Coordinator: Zijian Wang
- Zoom: https://uqz.zoom.us/j/84122868001 [Recording]