Pair AI

Designed an AI agent & mobile application to foster self-discovery and enable users to form genuine connections.

Role

Product Design Intern

Tools/Skills

Figma, Motion Graphics

Team

2 Product Designers, 1 PM & 2 Developers

Duration

8 Weeks, Summer 2025

OVERVIEW

Rethinking digital connections
Gen Z has more ways than ever to meet people online, yet many feel their connections lack depth. Most social and dating apps focus on appearances and quick interactions, leaving little room for users to express who they truly are. At Pair AI, we set out to rethink how people form digital connections.

OBJECTIVES

Our overarching goal
This was a highly collaborative process, as I navigated the AI space alongside another designer, a PM, and two developers. Our goal was to rapidly create an MVP for this concept, with my role entailing the following…

MOODBOARDING

Understanding the space
To better understand how to approach this problem space, I studied precedents across conversational UIs, motion language for voice assistants, and visual systems that translate emotions into imagery.

LO-FI PROTOTYPING

Rapid prototyping
Through rapid iteration, we created initial prototypes showing how a user might navigate a conversation and receive a personalized collage based on the interaction.

USER FLOW

Decoding conversation patterns
A major challenge I faced during ideation was my unfamiliarity with voice interfaces. To bridge that gap, I mapped out a diagram of the interaction between the user and the AI, which helped me better understand the nuances of speech and the specific moments I needed to design for.

EXPERIMENTATION

Experimenting with motion
Designing the voice interface required extensive motion experimentation. I iterated through numerous animations in After Effects and tested them with users to assess intuitiveness. Because this work predated today's widespread AI interface conventions, building the system independently was a steep but rewarding learning curve.

VOICE ITERATIONS

Iterating through prototypes
Beyond the motion visuals, I also had to consider how text appeared on screen and what types of feedback users needed throughout the conversation. This, too, required extensive iteration and testing.

COLLAGES

Creating the collages
After each conversation, the AI generates a personalized personality collage. To make each collage adapt to the user's input, I collaborated closely with engineers to define how text and elements would populate the template, developing a coordinate-based visual system they could easily implement, as shown below.
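
As a rough illustration only, a coordinate-based template spec of this kind might look something like the sketch below. The names and fields are hypothetical, not the actual handoff document: each element is placed with normalized coordinates so engineers can render the same layout at any screen size and fill it from conversation-derived content.

// Hypothetical sketch of a coordinate-based collage template (TypeScript).
// All identifiers below are illustrative assumptions, not the production spec.

type CollageElement = {
  id: string;
  kind: "text" | "image";
  // Normalized coordinates (0–1) relative to the template canvas,
  // so the same layout scales to any device.
  x: number;
  y: number;
  width: number;
  height: number;
  rotation?: number; // degrees, optional
};

type CollageTemplate = {
  name: string;
  canvas: { width: number; height: number }; // reference size in px
  elements: CollageElement[];
};

// Content the AI derives from a conversation, keyed by element id (illustrative).
type ConversationInsights = Record<string, string>;

// Pair each positioned element with its derived content to produce a render list.
function populateTemplate(template: CollageTemplate, insights: ConversationInsights) {
  return template.elements.map((el) => ({
    ...el,
    content: insights[el.id] ?? "",
  }));
}

The point of a structure like this is that design owns the coordinates and engineering only needs a simple fill step, which kept the collage system easy to implement and iterate on.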

FINAL DELIVERABLES

1. Phases of Speech
These are a few of the motion graphics designed for each phase of speech, with additional variations created to account for edge cases.
2. An Intuitive AI-Human Voice Interface
The full AI voice interface combined visualized speech and responsive feedback to create a more intuitive and accessible experience. Users could also personalize aspects of the interaction to better suit their preferences.
3. Personality Collages
Another contribution was the personality collages generated from conversations. I created several dynamic templates that adapted to each user, along with an onboarding flow to guide them through the process.
4. Web Tutorial
Lastly, I built a full website with tutorials and product specifications, showcasing the voice interface and attracting a substantial group of early alpha testers.

OUTCOME

Testing & results
Although the product hasn't moved beyond early testing with alpha users, our work produced promising results and showed clear potential for future development.

REFLECTION

Working in a remote startup environment

While the process was incredibly rewarding, working at such an early-stage startup reminded me that designers often have to wear many hats and learn new skills on the fly. Stepping into unfamiliar territory also strengthened my communication skills, especially when collaborating closely with engineers.