“Language models are systems that can predict upcoming words.” This classical definition of NLP models underpins the view of LLMs as responsive text-completion models. However, being merely “responsive” is not enough for LLMs to serve as “compliant AI assistants”. This talk will address the gap between current LLMs and compliant AI assistants that limits the broad real-world application of LLMs. The speaker will introduce a systematic research path toward developing trustworthy LLMs as compliant AI assistants. The talk will mainly cover the speaker’s research on controllable generation, context faithfulness, and safety of LLMs. The research methodology spans advanced techniques in training, data synthesis, prompt engineering, decoding, and meta-generation that together build trustworthy LLM-based AI systems.
Speaker Bio
Yiwei Wang is currently an Assistant Professor in the Computer Science Department at the University of California, Merced, where he leads the UC Merced NLP Lab. He previously worked as a Postdoc in the UCLA NLP Lab and as an Applied Scientist at Amazon (Seattle). He received his Ph.D. in Computer Science from the National University of Singapore in 2023, where he was fortunate to be advised by Prof. Bryan Hooi. His current research focuses on natural language processing, with a particular interest in building trustworthy AI assistants that provide responsible services to humans in real-world applications. Yiwei Wang is actively recruiting strong, motivated students as Ph.D. students or research interns. Please feel free to email him at yiweiwang2@ucmerced.edu if you are interested.
More Details
- When: Fri 28 Feb 2025, 10-11 am (Brisbane time)
- Speaker: Prof Yiwei Wang (UC Merced)
- Host: Dr Yujun Cai
- Venue: 78-421
- Zoom: https://uqz.zoom.us/j/81297131414 [Recording will only be available internally by request.]