Projects
Research Project
Synthetic Diversity in Academic Research with LLMs
Diversity of viewpoint is not only welcome but beneficial across online communities and other contexts, particularly in interdisciplinary research spaces such as HCI. This project explores the potential of Large Language Models (LLMs) to offer researchers diverse perspectives, enabling them to view problems from different angles, anticipate potential controversies, uncover blind spots, and improve paper framing. The inherent limits of academic and professional circles often restrict researchers to a narrow set of perspectives shaped by their background and community. Traditional review processes, while aiming to introduce broader viewpoints, can be time-consuming and unreliable. LLMs, known for their ideation capabilities, are used here to generate diverse viewpoints while adhering to good feedback practices and guidelines specific to each research domain (see the sketch below). The goal is a more reliable and efficient way for researchers to enrich their work with insights from a wider array of perspectives.
Ongoing
Advised by Gary Hsieh, Chirag Shah, David McDonald
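To illustrate the idea, here is a minimal, hypothetical sketch of persona-based prompting with the OpenAI Python client; the persona list, model name, and prompt wording are illustrative assumptions, not the project's actual design.

```python
# Hypothetical sketch: eliciting diverse viewpoints on a paper abstract by
# prompting an LLM from several disciplinary personas. Persona list and
# prompt wording are illustrative, not the project's actual design.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PERSONAS = [
    "a privacy and ethics researcher",
    "a quantitative social scientist",
    "a practitioner who builds production ML systems",
]

def diverse_feedback(abstract: str) -> dict[str, str]:
    """Collect one critique per persona, following basic feedback guidelines."""
    feedback = {}
    for persona in PERSONAS:
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model choice
            messages=[
                {
                    "role": "system",
                    "content": (
                        f"You are {persona} reviewing an HCI paper. Give "
                        "constructive, specific feedback: name a potential "
                        "controversy or blind spot, and suggest a concrete "
                        "framing improvement."
                    ),
                },
                {"role": "user", "content": abstract},
            ],
        )
        feedback[persona] = response.choices[0].message.content
    return feedback
```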
Research Project
LLMs for Supporting Technical Interviews
This research project leverages the capabilities of Large Language Models (LLMs) to provide qualitative feedback in the context of technical interviews. While traditional developer support tools usually focus on the technical aspects of programming, we aim to develop a soft-skill-focused assistive tool that helps programmers effectively communicate their thinking process and articulate their intent. Collecting annotated data from seasoned interviewers, we employ few-shot learning techniques so that LLMs produce specific, contextual, and actionable feedback that goes beyond generic advice (sketched below). The ultimate goal is an AI-driven coaching system that mimics expert-level guidance, enhancing candidates' holistic performance in the competitive tech job market.
Ongoing
Advised by David McDonald, Gary Hsieh, Colin Clement
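As a rough illustration of the few-shot setup, the hypothetical sketch below prompts an LLM with expert-annotated transcript/feedback pairs before presenting the new case; the examples, model name, and wording are placeholders, not the project's data.

```python
# Hypothetical sketch: few-shot prompting an LLM to give interview feedback.
# Example transcripts and feedback are placeholders, not the project's data.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Annotated examples from expert interviewers (placeholder content).
FEW_SHOT_EXAMPLES = [
    {
        "transcript": "Candidate: I'll just use a hash map here... done.",
        "feedback": (
            "Before coding, state the time/space trade-off you are making "
            "and confirm the interviewer agrees with the approach."
        ),
    },
    {
        "transcript": "Candidate: Hmm, I'm not sure... (long silence)",
        "feedback": (
            "Narrate your thinking out loud, even when stuck; ask a "
            "clarifying question instead of going silent."
        ),
    },
]

def build_messages(transcript: str) -> list[dict]:
    """Assemble a few-shot prompt: instruction, examples, then the new case."""
    messages = [{
        "role": "system",
        "content": (
            "You are an expert technical-interview coach. Give specific, "
            "contextual, actionable feedback on the candidate's "
            "communication, not generic advice."
        ),
    }]
    for ex in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": ex["transcript"]})
        messages.append({"role": "assistant", "content": ex["feedback"]})
    messages.append({"role": "user", "content": transcript})
    return messages

def coach(transcript: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        messages=build_messages(transcript),
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(coach("Candidate: I'll sort the array first, then... actually wait."))
```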
Research Project
Measuring Affective and Cognitive Trust in AI
As a first step toward designing AI systems that build appropriate trust through affective and cognitive routes, we seek to develop a valid and generalizable set of scales for this two-dimensional construct of trust. Through a survey spanning 32 scenarios across 5 dimensions and an exploratory factor analysis of the collected data, we established a 27-item scale and demonstrated its validity (an EFA sketch follows below). We then conducted a second survey study, using the scale to explore a conversational agent's capability to build affective trust when the user is seeking emotional support.
In Submission
Advised by Gary Hsieh, Chirag Shah
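The hypothetical sketch below shows what the exploratory factor analysis step might look like with the factor_analyzer package; the data is random placeholder input and the two-factor solution simply mirrors the affective/cognitive distinction, so nothing here reproduces the published scale.

```python
# Hypothetical sketch of the exploratory factor analysis (EFA) step using
# the factor_analyzer package; the input is random placeholder data.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo

rng = np.random.default_rng(0)
# Placeholder: 300 respondents rating 27 candidate items on a 7-point scale.
items = pd.DataFrame(
    rng.integers(1, 8, size=(300, 27)),
    columns=[f"item_{i}" for i in range(1, 28)],
)

# Check sampling adequacy before factoring (KMO > 0.6 is a common threshold).
_, kmo_overall = calculate_kmo(items)
print(f"KMO = {kmo_overall:.2f}")

# Fit a two-factor EFA with oblique rotation, since affective and
# cognitive trust are expected to correlate.
fa = FactorAnalyzer(n_factors=2, rotation="oblimin")
fa.fit(items)

# Inspect loadings; items loading < 0.4 on both factors are candidates to drop.
loadings = pd.DataFrame(fa.loadings_, index=items.columns,
                        columns=["affective", "cognitive"])
print(loadings.round(2))
```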
Research Project
Chatbot-assisted Collaboration
Designing a chatbot that helps strangers become familiar with each other and studying its effects on collaboration performance.
Feb 2022 - Jan 2023
Collaborated with Donghoon Shin, Soomin Kim
Advised by Gary Hsieh, Joonhwan Lee
UXR Summer Internship Project at TruEra
How data scientists diagnose ML models
With support from designers and ML engineers at the company, I conducted internal research and an interview study with data scientists to understand how practitioners approach performance debugging of ML models in the field. Based on the interview findings, I challenged and validated the team's internal assumptions, identified the needs and difficulties people face when debugging ML models, and proposed ways to redesign and automate features of the product.
Jul 2022 - Sep 2022
Advised by Mantas Lilis, Joshua Noble, Justin Lawyer
Course Project
Value Sensitive Design for Recommender Systems
We adopted a value sensitive design approach to explore the research question: how can designers of Recommender Systems adopt a Value-Sensitive approach? We conducted conceptual and technical investigations, analyzing how everyday recommender systems uphold or violate stakeholders' values. We proposed design recommendations concerning algorithmic awareness, profiling transparency, and user control.
Winter 2022
Collaborated with Mrudali Birla, Sourojit Ghosh, Lubna Razaq
Advised by Batya Friedman