Role & Team
UX designer & product manager on a 3-person team.
2-day design hackathon, competing against 16 other teams.
Key Skills
Using AI as a design medium.
Applying behavioral science to UX design.
My Impact
Since our idea centered around biases, my knowledge of behavioral principles drove many of our design decisions. I also helped the team work efficiently & collaboratively by keeping the team aligned, choosing the right design tool for each phase, and setting up critiques with other teams.
TLDR
Our objective was to produce a convincing product idea & pitch in 2 days, responding to the prompt: “improve any user group’s workflow using AI”.
We chose to focus on human, social, and institutional biases in design research because we believed it to be a critical topic to address.
By prioritizing team alignment, applying my behavioral psychology knowledge, and maintaining a robust & collaborative design process, we delivered a successful pitch and were named one of the finalists.
The Challenge
Prompt: "design an AI-powered tool that improves the workflow of any group".
Research
Identifying designer pain points & researching AI capabilities.
We conducted primary & secondary research with designers in a variety of roles, casting a wide net at first so as not to miss any opportunities.
Pain Point #1
Across industries, designers struggle when collaborating with non-designers.
Whether it's balancing client vs. user needs, backing up every decision with data, or dealing with low design maturity, designers struggle to communicate & collaborate effectively with non-designers.
Pain Point #2
Time & budget constraints often make it difficult to perform a robust research process.
In companies that don't prioritize design research, designers lack the time and money to recruit participants, prototype effectively, and test thoroughly.
Pain Point #3
When faced with strict timelines, fixed-mindset clients, or plain cognitive biases, it can be difficult to ideate.
It's difficult to quickly land on a successful idea when there are blockers: clients refusing to budge, technical constraints, or becoming too personally attached to one's own idea.
Pain Point #4
There isn't sufficient time & energy to keep up with all the new design tools.
As design & technology continuously evolve, it's difficult for designers to stay up to date with trends, terms, and updates.
To better understand the capabilities of AI, we researched its strengths & weaknesses relative to those of human designers and conducted a competitive analysis on existing AI-powered design work tools.
Ideation
Matching designer pain points & AI capabilities.
Rather than follow a traditional user-centered approach, we employed matchmaking to find the most low-effort, high-impact ideas. We generated as many ideas as we could that matched existing AI capabilities with designer pain points, then ranked them by technical, financial, risk, and desire factors.
We chose this analysis method to help us make decisions as quickly as possible while still considering a wide range of ideas. It also kept decision-making as unbiased as possible by limiting the effects of personal attachment to one's own ideas.
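As an illustration only (we did this on sticky notes, not in code), the ranking step can be sketched as a simple weighted score. The four factor names come from our process; the weights, scores, and idea names below are hypothetical:

```python
# Hypothetical sketch of the idea-ranking step: each idea gets a 1-5
# score per factor, and a weighted sum determines the ranking.
# Factor names are from our process; weights & ideas are made up.

FACTORS = {"technical": 0.3, "financial": 0.2, "risk": 0.25, "desire": 0.25}

ideas = {
    "bias checker":     {"technical": 4, "financial": 3, "risk": 4, "desire": 5},
    "auto-recruiter":   {"technical": 2, "financial": 2, "risk": 2, "desire": 4},
    "trend summarizer": {"technical": 3, "financial": 4, "risk": 3, "desire": 2},
}

def score(factor_scores: dict) -> float:
    """Weighted sum of per-factor scores (higher is better)."""
    return sum(FACTORS[f] * s for f, s in factor_scores.items())

ranked = sorted(ideas, key=lambda name: score(ideas[name]), reverse=True)
print(ranked)  # "bias checker" ranks first
```

A single scoring rule applied uniformly is what limits the pull of personal attachment: every idea is judged on the same factors.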
Ideation
Choosing a low risk & flexible solution.
After seeking out critique on our top ranked ideas, we plotted them based on AI power required vs. value generated to identify which idea was most feasible.
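The feasibility plot amounts to placing each idea on two axes and looking for the low-power, high-value quadrant. A minimal sketch of that filtering logic, with hypothetical scores on a 1-5 scale:

```python
# Hypothetical sketch of the feasibility plot: each idea sits on two
# axes (AI power required vs. value generated); we look for ideas in
# the low-power / high-value quadrant. Numbers are illustrative.

ideas = {
    "bias checker":     {"ai_power": 2, "value": 5},
    "auto-recruiter":   {"ai_power": 4, "value": 3},
    "trend summarizer": {"ai_power": 3, "value": 2},
}

MIDPOINT = 3  # axis midpoint on the 1-5 scale

def quadrant(idea: dict) -> str:
    """Classify an idea into one of the four plot quadrants."""
    power = "low power" if idea["ai_power"] <= MIDPOINT else "high power"
    value = "high value" if idea["value"] > MIDPOINT else "low value"
    return f"{power} / {value}"

feasible = [n for n, i in ideas.items() if quadrant(i) == "low power / high value"]
print(feasible)  # ["bias checker"]
```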
Chosen Problem Space
Using AI to identify biases & assumptions in research files during the user research process.
We chose this space because it could generate a lot of value with relatively low AI power. We also wanted to focus on biases because they're critical for designers to address, and developing robust ways to check for bias only grows more important as AI becomes ingrained in our workflows.
Fleshing out the idea
Understanding the holistic experience.
To better understand where in a designer & business's workflows our solution would fit, we mapped out the user journey and ideated what the partnership between our tool & designers would look like over time.
Product vs. Service
Rather than be a one-time bias checker, we wanted our tool to partner with designers throughout their entire design & business processes.
Not only would this ensure that the designer grows over time, it would also be more profitable as a service: the plugin would provide little value from one-time usage, but could be a major selling point for designers & companies truly committed to more ethical design.
A glimpse of our ideation process.
Fleshing out the idea
Interactions & relationship between ConsiderMe & users.
Having decided that our product's use pattern would be a long-term relationship, it was critical for us to design around the personal nature of that relationship.
Product-User Relationship
To build trust, we used a supportive and non-hostile tone to help facilitate growth & willingness in the user.
Biases are an area where it’s hard to separate work from personal identity, making it easy to become emotional and defensive.
To address this, we chose a supportive, non-hostile tone that facilitates growth and helps users feel safe being called out. That safety is key for people to be willing to use ConsiderMe and risk exposing their own biases.
Product-User Relationship
Since all AI models fail, we embraced imperfection & growth in both our users & our AI model.
We knew that all AI models make mistakes, so rather than pretending ours didn’t, we framed it as an imperfect model that collaborates with the user so that each improves the other, building a mutually beneficial relationship.
Solution
ConsiderMe: an AI-powered Figma plugin to help identify assumptions & biases in user research.
A Figma/FigJam plugin available at the individual, team, or company level, ConsiderMe uses input project context, team demographics, and information parsed from research files to generate considerations & potential biases that a design team should watch out for.
Adjusting to real-world constraints & needs.
User-determined data input process for flexibility, privacy, and retaining user autonomy.
To save time, the plugin parses the user’s prior research and pulls relevant data, which can then be edited as needed. All input data is optional & editable, creating space for human checks. Preserving user flexibility & autonomy was something we kept in mind for every decision.
Actionable steps for growth in the user.
A generated list of potential biases, scientific evidence, and actionable steps (along with the data they came from).
To avoid provoking defensiveness and to frame our service as a two-way partnership, we used “consideration” rather than “bias”, presenting results as suggestions open to change (via a feedback system) rather than accusing the user of being biased.
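The shape of one generated “consideration” can be sketched as a small data structure; the field names below are assumptions based on the description above (biases, evidence, actionable steps, source data, and feedback status), not ConsiderMe's actual schema:

```python
from dataclasses import dataclass


@dataclass
class Consideration:
    """One generated "consideration" (we deliberately avoided the word
    "bias"). Field names are hypothetical, based on our description."""
    title: str                   # e.g. "Sampling skew"
    evidence: str                # scientific evidence behind the flag
    actionable_steps: list[str]  # concrete de-biasing actions
    source_files: list[str]      # which research files it was drawn from
    status: str = "open"         # user feedback: open / accepted / revised / rejected


c = Consideration(
    title="Sampling skew",
    evidence="Participants skew toward one demographic group.",
    actionable_steps=["Recruit from underrepresented groups"],
    source_files=["interviews.fig"],
)
```

Modeling the result as a suggestion with a mutable `status` (rather than a verdict) mirrors the two-way feedback framing.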
Two-way feedback & improvement.
Feedback mechanisms for the user through their de-biasing journey and for the AI model to help improve itself.
Since de-biasing requires consistent, long-term effort, ConsiderMe tracks progress & suggests feedback alongside the user.
Rather than denying that every AI system has flaws, we embraced it by integrating feedback mechanisms that let users propose other insights, revise current ones, or point out system inaccuracies.
Demo Day!
7 presentations & becoming a finalist!
After presenting one after another to 6 judges, we were ranked as one of the finalists. The best part was getting to see what all the other teams had come up with!
Iteration
Iterating after each round to improve our idea & presentation.
After each of the 6 rounds, we analyzed & incorporated each judge’s questions & feedback into our presentation.
This allowed for a robust final presentation in which we directly addressed most of the potential questions.