HCI + AI Researcher
I work on Human-AI Interaction. I am broadly interested in designing interfaces that support the disambiguation process in human communication with Large Language Models (LLMs) and others.
  • 1 We express ourselves through more than long written paragraphs. What are some non-language expressions and interaction techniques that can effectively align participants (human and/or LLM) in collaborative tasks?
  • 2 It is rare for long-form LLM outputs to align with our preferences on the first try. However, the misalignments are obscure and hard to detect. What are some good metrics and methods that can support evaluation? How can users interactively convey their preferences to LLMs?

I am a CS PhD student at The University of Texas at Austin, advised by Amy Pavel. I obtained my Bachelor's degree in Computer Science and Philosophy from the University of Notre Dame, where I worked closely with Toby Jia-jun Li.
Featured Project
Luminate: Structured Generation and Exploration of Design Space with Large Language Models for Human-AI Co-Creation

Sangho Suh*, Meng Chen*, Bryan Min, Toby Jia-Jun Li, Haijun Xia
News