HCI + AI Researcher
- Execution: Humans and LLMs now collaborate on complex tasks where user agency and personal goals define success (e.g., creativity, accessibility). How can we design interactions in which humans and LLMs complement each other's strengths rather than replace one another?
- Evaluation: Long-form LLM outputs are rarely perfect on the first try. How can users express preferences and evaluate results when misalignments are subtle and errors are hard to detect?
I'm actively seeking research internship positions for Summer 2026,
especially in Human-LLM (agent) Interaction.
You can check out my publications or CV for more details.
I'm always down to chat about potential opportunities or collaborations! Just send me an email.
Featured Projects
Surfacing Variations to Calibrate Perceived Reliability of MLLM-Generated Image Descriptions
Meng Chen, Akhil Iyer, Amy Pavel
ASSETS 2025
Luminate: Structured Generation and Exploration of Design Space with Large Language Models for Human-AI Co-Creation
Sangho Suh*, Meng Chen*, Bryan Min, Toby Jia-Jun Li, Haijun Xia
CHI 2024
News
- Oct 26 - 29, 2025: To present Surfacing Variations to Calibrate Perceived Reliability of MLLM-Generated Image Descriptions at ASSETS 2025 @ Denver, CO, USA.
- Sep 28 - Oct 1, 2025: Attended UIST 2025 @ Busan, Korea and presented a poster for TaskArtisan.
- Jan 2, 2025: Our paper Lotus: Creating Short Videos From Long Videos With Abstractive and Extractive Summarization was accepted to IUI 2025!