The product design interview process

Update: As of February 2022 Gem’s design interview process no longer includes a design exercise or design critique. We believe expedited and superficial test environments are no longer the best ways to get signal on how someone thinks through problems or collaborates. We instead rely on a values conversation and past collaborations.

What does a modern interview process look like for digital designers? Design leaders building their design team, and individual designers looking to join one, can get incredible value from knowing what a typical interview process looks like today.

Knowing what to avoid (like bias in the process or toxic culture signals) and what to invest a lot into (like interviewing for thinking rather than solely outputs), can help make the interview process more rewarding and effective.

There aren't many readily-available resources out there to help new design leaders and designers get a peek into the hiring process. When I first built the design team at Gem back in 2020, I had to connect with other design leaders and hiring managers at companies like Dropbox, Airbnb, Slack, Outreach, Stripe, and more to understand their hiring process.

Today our design interview process takes shape over six stages, spanning anywhere from two to six weeks depending on candidate availability.

  1. Application review
  2. Recruiter phone interview
  3. Optional coffee chat with the hiring manager
  4. Hiring manager phone screen
  5. Past work review
  6. Onsite product demo, past work presentation, design exercise, design critique, and often a lunch break with some of the team

1. Application review

The hiring manager and recruiter will evaluate applications for the job once it's open. Up until this point, the job has existed as a series of conversations and documents internally with the finance team, product and engineering leaders, and more.

Collectively the team will have created a set of needs in the business and mapped them to the job requisition. These needs are hopefully apparent in the job post where candidates submit their applications and resume.

The hiring manager and recruiter will look through resumes and portfolios to see if signals indicate the candidate may match the needs of the business. If there seems to be a good fit, the recruiter or hiring manager will reach out to the candidate to set up a phone call.

2. Recruiter phone interview

Once the recruiter identifies a potential candidate, they will initiate a phone call to chat with that candidate.

The initial recruiter interview is casual in nature, creating space for everyone involved to get any critical questions out of the way. This 30-minute video or phone call with the recruiter ensures the candidate aligns with the role as needed. It's also a great chance to build relationships and start the process of getting on each other's radar.

The candidate and recruiter will typically discuss location (remote or location-dependent roles), career ambitions, and interviewing timelines.

3. Optional coffee chat with the hiring manager

In some cases, the hiring manager will initiate the first call with a candidate, typically when the role is senior or some connection already exists between the team and the candidate.

This optional, 30-minute "coffee" chat is an informal part of the interview process and is not considered an interview in the strict sense. Instead, the coffee chat is a chance for the hiring manager and candidate to connect more casually with each other on a personal level.

What we look for during a coffee chat:

  • Company experience: does the candidate have relevant experience in similar companies of our size and shape? Do they have a known, established design process? Have they worked on a cross-functional team? Are they interested in doing so?
  • Types of product experience: has the candidate worked on business tools or complex platforms like ours? Are they familiar with products similar in shape and size to what we're building? If not, do they want to be?
  • Candidate ambitions: how does the candidate want to learn and grow in their career? Will this opportunity give them a chance to do that or not? If not, it might not be worth continuing the process.
  • Visible design skills related to craft and impact: does the candidate clearly articulate their design abilities? Do they have a clear portfolio of work?

4. Hiring manager phone screen

If the hiring manager's coffee chat goes well, the candidate enters the first part of the formal interview process: the phone screen.

In this 45-minute conversation, candidates will discuss experience, projects, and processes with the hiring manager or another design team member (depending on scheduling). This interview requires no screen sharing or slides; the candidate should focus entirely on higher-level reflection and process. The conversation can be casual but must still be "on the books." The interviewer will ask four or five questions related to what the team is looking for in the role.

What we look for in the phone screen:

  • Motivations: what is the candidate looking for in their next job? Do their incentives align with our company and design team?
  • Overall design experience and level: does the candidate show strong skills in one or more areas of our design interviewing rubric? How do they work with other designers? Do their skills meet our quality bar?
  • Passion for types of work: does the candidate demonstrate an ability to work independently? Can they tackle small tasks as well as complete goal-driven projects? Will they thrive in a startup environment where they will wear multiple hats on the design team?
  • Prioritization: how does the candidate weigh trade-offs when it comes to their work? How do they ensure their work is essential for both the business and customers? Do they have a knack for work that will scale design's impact in the industry? Are they able to create momentum through their work?

5. Past work review

If the candidate has made it this far, it's time to dig deeper into their experiences and abilities. A past work review with a hiring team member can spotlight the actual work the candidate has done and how it maps to the job.

Candidates should expect to show one project from their portfolio of work, walking through a high-level overview of their process and outcomes.

The hour-long past work review can take shape as a website walkthrough (or Dribbble/Behance) or other presentation, including a live product demo if applicable. However, preparing a presentation is not encouraged as we want to discuss the work organically, not through prepared notes.

What we look for in the past work review:

  • Design quality and candidate behaviors: evidence that what the candidate shares matches aspects of our internal design rubric.
  • The material output of the design work as it measures against visual design, interaction design, and product thinking expectations.
  • The behaviors of the designer as they speak to their experiences: how they talk about past collaboration, communication, and aligning with business values.
  • Attention to detail and top-tier design craft, visual design, and interaction design best practices.

6. Onsite

Finally, we invite candidates to join us for a three-to-four-hour virtual "onsite" interview. The onsite stage is made up of four to five individual steps, starting with a product demo.

6.1 Product demo

A 30-minute overview of our business and product. Before we start the day of interviews, we want to help the candidate become more comfortable and familiar with things like our business model, product vocabulary, and value propositions. For many candidates, the product demo is their first honest look at the actual product, outside of marketing materials and conversations.

6.2 Past work presentation

After the product demo, the candidate goes straight into a 60-minute review of portfolio work. A panel of cross-functional peers at the company will attend the presentation, including an engineer or engineering manager, a product manager, and two to three design team members.

The past work presentation, or portfolio review, is time for the candidate to present two or three projects they feel best demonstrate their work. Presentations should highlight the candidate's core competencies as well as their strengths and weaknesses.

The candidate should prepare a formal presentation for this stage of the process and plan to present for 45 minutes, leaving 15 minutes for questions to and from the interview panel.

6.3 Design exercise

After the past work presentation, the candidate will move on to a 60-minute exercise.

I’ve written more in-depth details about design exercises here.

In the design exercise part of our onsite interview, the candidate will work through a made-up design problem toward a possible solution. The goal of the design exercise is not to solve the problem presented; the candidate should simply demonstrate how they might work toward a solution in collaboration with a partner (the interviewer).

Candidates are encouraged to use a virtual whiteboard or sheet of paper as they work through the problem. The interviewer's role in the design exercise is not to solve the problem for the candidate, nor is it to serve as a moderator or strict evaluator. Instead, the interviewer is there to help the candidate stay on track and make progress in the allotted time. The interviewer can ask questions or chime in with their ideas and feedback as the exercise progresses. The interviewer is a collaborator for this exercise, not just an interviewer.

What we look for from the design exercise:

  • The candidate's ambition and process for approaching and tackling an ambiguous problem: do they dive right into solving it or hesitate? Are they optimistic as they approach the problem? Do they have energy for the process of solving it? Or do they start paralyzed with indecision or doubt?
  • Collaboration: does the candidate engage the interviewer as a partner in ideation and feedback? Do they engage the interviewer with questions, prompts, and ideas? Do they actively solicit feedback at each step of the process? Do they ask for clarity when they need it, or do they wait for coaching to progress?
  • Design process (if you need help here, refer to this guide by Discover Design): does the candidate start by identifying the problem that needs to be solved, or do they jump immediately to solutions? Do they speak to or document open questions? Are they transparent about their intentions as they move through decisions? Do they try to innovate and push boundaries or go toward the most straightforward solution?
  • Time management and prioritization: is the candidate able to manage their time and prioritize their work? Do they make progress in the exercise, or do they dilly-dally in any specific step?

6.4 Lunch or other breaks

At Gem, we always like to include a mid-day break in the formal part of our interview process. A good break is typically lunch or a light snack, anything where the candidate can sit down with a few company members outside the interview panel.

These casual conversations over lunch allow the candidate to know more about the company and team while also giving them a break from doing most of the talking. Ensure every candidate has a lunch break or multiple breaks scattered across the day if they participate in an onsite interview.

6.5 Design critique

The last part of the onsite interview is a design critique.

Critique is a 30-minute product discussion, where the candidate is presented with a mobile or desktop application (always the same app) to review with an interviewer. The interviewer is typically a member of the design team, though a member of the product management or engineering team could also fill in.

The critique is another type of casual conversation wherein the candidate and interviewer talk about the design of the presented app.

Interviewers should engage the candidate by asking what they like and don't like about the product. Let them know it's an open discussion with no right or wrong answers.

What we look for from the design critique:

  • Levels of thought: how does the candidate think about the product at a glance? What do they speak to: user experience, visual design, brand presence, usability, marketing? Are they passionate about one element of design more than others?
  • Does the candidate speak about how the product might benefit others? Or do they talk exclusively in terms of their preferences? Do they say things like: "I could see how this would be helpful for someone…" or "This might be designed for people who..."?
  • Does the candidate speak about visual design in terms of consistency, hierarchy, weight, balance, contrast, legibility, accessibility?
  • Does the candidate understand what problems the product is trying to solve?
  • Does the candidate understand how product decisions might impact a business?
  • Is the candidate aware of their intrinsic biases and assumptions during the critique process?

Time with teammates is as important as time with users

Designers need to invest just as much time in getting to know their teammates and cross-functional peers as they do getting to know their customers.

To be productive on a team, you need strong relationships with those you work with every day. After all, the point of being on a team is to work together. You work together to fill in one another's knowledge and skill gaps and develop solid products and individuals. It's hard to work together when you are doing so based on assumptions of how others like to do their job, their expectations for the work, and how you communicate.

You don't need to become best friends with your teammates to work effectively with them, but you need to know:

  • What motivates your teammates?
  • How do they measure their success?
  • How does your job make theirs easier (or more challenging)?
  • How do they like to work together?
  • What is their day-to-day "journey"?

Today the design industry regularly praises the importance of working with users in mind and the value of focusing on the user experience of what you design.

Focusing on users and their experience is undoubtedly vital to the success of what you design. Still, less talked about and often equally important as the end-user is the experience teammates have working with you.

Why don't designers invest in their team more often?

Humans are complicated. We each come into situations with life-long perspectives of how things work, what we're responsible for, how we contribute, and what we expect from others. No two people approach the same situation with the same perspective.

So when it comes to working with other people, the most significant barrier to overcome is the false belief that others see our work and contributions the same way we do.

Everything you learn in school, in previous jobs, or from other relationships in your life informs how you collaborate with your teams at work. Therefore, you assume that you already know how to work well with others and don't need to invest any more time than the occasional one-on-one to do it.

You fail to apply the same research approaches you take with customers to your collaborators. Even if you realize the value peers play in your job, you might ignore the work required to understand their perspective of how things should get done.

As a result of your assumptions on how work gets done, you don't want to expose yourself to uncomfortable situations around how you work. You shy away from constructive feedback about how you do or do not communicate. You don't want to feel guilty for not prioritizing more time with your peers than you spend on customers.

Each of these concerns is valid and highlights why it's so important to spend just as much time talking with your teammates as customers: you don't know what you don't know. That's true for both things you design and the processes you use to create them. How can you do your best work if you are operating on limited assumptions? You can't, so you need to spend time on research and explorations.

It's not just the manager's job to build strong working relationships. It's everyone's.

Managers often serve teams by helping to unblock obstacles, provide guidance and support, and ensure the quality of the team's output can fulfill customer and business needs.

Often your manager is responsible for bridging gaps between cross-functional groups and working with other managers to ensure you and your team are supported and doing quality work. There are times when managers need to step in and help with interpersonal or team-wide problems too.

Unfortunately, the more time a manager spends resolving conflicts and negotiating things like prioritization or goals, the more you and your team's foundations and maturity erode. Think of manager intervention like a parent resolving a problem on the playground: everyone might walk away content, but the children themselves won't learn how to resolve conflicts.

As a growing designer, you need to find ways to manage cross-functional and stakeholder relationships yourself. Scheduling quarterly check-ins or one-on-ones to discuss prioritization, processes, and any roadblocks is an excellent place to start. Conducting research "studies" with other teammates in and outside of your team is a good exercise as well, where you can get your teammates to share feedback on the work and design decisions directly as if they were a user of the final product themselves.

Designers are often experts at talking with customers, evaluating their feedback, and letting that feedback inform design decisions and product direction. But unless you take that same approach to understand how your work lands with your teammates, odds are you aren't doing the best job you could be.

Ensure you prioritize your work in ways that fulfill not only customer demands but also team ones. Find ways to dig into your teammates' modes of operating, their perspective of your work, how they like to work, and how they measure success.

Three ways to more effectively present your designs

Designers can avoid wasting time and deliver their work more effectively by:

  • Focusing on the audience's needs
  • Speaking to specific details of the work
  • Spending more time listening than presenting

Easier said than done. There's considerable knowledge in designing something, and a designer develops awareness of the customer, business, constraints, potential solutions, and tradeoffs through diligent exploration and experimentation. The work is often time-consuming and exhausting as a result.

It can be tempting to share all the knowledge related to a design when presenting to an audience. The problem is you can only say so much in a meeting or Slack thread. Nobody will look through every document, conversation thread, and past exploration to make an informed decision about a design. It's up to the designer to present the work in a way that conveys the most important information for the audience, the customer, and themselves.

Learning to better present designs will make your life easier by getting critical feedback or support quickly and succinctly. Your team's and your customers' lives will be easier, too, as a result.

1. Focus on the audience's needs

Before presenting work, ask yourself: what does the audience need to know? What will they do with the information I give them?

Without focusing on the audience's needs, design presentations can be a disastrous scenario. The designer wants to educate the audience on every bit of context relevant to their work, so they present as much as they can cram into the allotted time. The lengthy and complicated presentation or messaging causes the audience to lose interest and attention. The designer is left with a mess of notes to make sense of as feedback is unfocused and scattered around both the designer's process and outcomes.

Not only does focusing your presentation on all the work itself lose audience concentration, but it also takes up much of the audience's valuable time. There's a better way to present, and it's as simple as focusing on the bare minimum information the audience needs to know.

Start by asking yourself what the audience will need to do with the designs you're sharing. Will they need to decide what to adjust for the project to meet its deadline? Is their job to provide approval or feedback? What type of feedback will they need to provide specifically? Are you providing a status update or convincing the team of a direction?

Rather than presenting every possible detail of the work, emphasizing what the audience needs to know and what they will need to do as a result of that information will tell you what to focus on when you present.

If there were only one takeaway you'd want your audience to get from your presentation, what would it be? If there were only one thing the audience could give you to help improve your work, what would it be? Focus on that.

2. Speak to specifics, not generalizations

Start any presentation or conversation related to your work by being explicitly clear about what's being shown and what specific parts of the work the audience should pay attention to.

When designers only rely on the visuals of what they're sharing to communicate a point, it can confuse the audience.

Design is a visual medium, so it's intuitive to show work and let it speak for itself. But when it comes to presenting work to an internal team, you have to help the audience focus on the right things using the right lens or perspective.

Showing a great-looking design or animated prototype and hoping it conveys meaning effectively only leads to different people noticing different details. This broad approach of looking at the work can be constructive for uncovering concerns early in a project, but it's unlikely to get you, as a designer, focused, constructive feedback for the next steps.

Instead of showing work and speaking broadly about it, tell people exactly what they're looking at on the screen and where to focus their attention for analysis. It may sound counter-intuitive when the audience can look at the work for themselves, but immediately speaking about what's on-screen adds clarity and helps everyone focus on the same elements.

To focus on the critical parts of the work, add clarity by explicitly calling attention to the design parts that matter most for the conversation. Sideline anything else by reminding people of what you're showing and why it matters for the meeting or chat thread.

An example of how speaking to the visuals of your presentation might sound: “Here is a concept of our new reporting page. You're seeing six charts in a grid pattern on the screen, each diagram representing data in a concise way. I need feedback on how I can help customers discover the option to modify each chart type without exposing an additional control overlaid on each grid element.”

3. Listen more than you speak

Ensure that when you present design work, you make more time to actively listen and engage the audience than you spend talking yourself.

Presenting work is more about the audience than the presenter — or even the work itself.

Often the point of sharing design work is to get feedback, set expectations, or get approval in some form. If you present work in a way where the audience doesn't have a chance to ask questions, clarify their interpretation, or connect the design to other priorities, you defeat the point of sharing in the first place.

Additionally, when designers present work, they often do so with a foundation of assumptions about what the audience already knows and what they care about related to the design. If you aren't giving the audience a chance to clarify their knowledge and needs, you're missing out on an opportunity to ensure the conversation is productive for everyone involved.

Ensuring at least 75% of the time you're presenting designs is set aside for conversation, led by questions or ideas from the audience, means you're much more likely to have a constructive discussion.

A 30-minute presentation might have an agenda with 10 minutes to show the work and provide context to guide the conversation, and 20 minutes for the audience to ask questions, get clarification, or share feedback. What happens if the audience doesn't have questions or feedback in those 20 minutes? Congratulations: you just saved your team 20 minutes each and have done a good enough job to continue with your designs.

Presenting work can be stressful and intimidating, particularly for work in the latter stages of design. If you present work with a clear focus on what the audience will need to do with the shared information, with a shared focus on specific aspects of the work, and by allocating more time for listening and conversation than talking, the work and the team benefit.

Designing the best possible solution

Illustration by Connie Noble for this article.

Most problems don't have one single solution.

Most problems have many possible solutions. Our job as digital creators is to identify the trade-offs and constraints that will enable us to select the best possible solution for any given problem.

The job we have is not to identify the "one true solution," as there often isn't going to be one. Pursuing an imaginary "perfect" solution can lead to wasted time and effort. We must buck our instinct to pursue perfection and instead seek the solution that is best for the specific occasion, audience, and contexts in front of us, given the information we have available.

To design the best possible solution for any given problem is not to seek perfection in our work but to create in small, iterative steps that uncover more information for improving the design over time. Such a process is good design.

Yet, as a designer or digital creator, how often do you find yourself paralyzed with indecision? Afraid of selecting a lesser solution out of fear of how it might fail or wanting to impress those that rely on your expertise, you dilly-dally and procrastinate until the project clock starts to run low.

At some point, you have to decide on a design solution; you're just holding off until the decision point is "out of your control." At that point it's easier to say "I ran out of time!" than it is to say "I was afraid of making a poor decision."

Back when I was an individual contributor at a large, globally known tech company, I believed my role was to select the single, absolute, best solution to any given problem. I would spend inordinate amounts of time researching existing solutions, auditing competitors, or tackling menial tasks, all beneath the guise of productivity but really out of fear and anxiety. Working alongside some of the best minds in software design led me to believe my job was to be the best at my role too.

I procrastinated because I was afraid of making a bad judgment call, missing a crucial characteristic of our existing design system, or overlooking an edge case that would break the entire solution.

As the time available to develop a design solution ran out, I'd inevitably have to make a call and cross my fingers. In almost every single case I can recall, my fears turned real. I had overlooked a critical edge case. I did miss an existing component or design pattern used elsewhere in the product—something the team could have easily reused. I had slowed the entire initiative down out of fear I wouldn't come up with the ideal solution to the challenge at hand, only to fall short anyway.

Over time I've learned there will always be something in the problem space we won't know about until the design gets out into the real world.

“It is not necessary to be perfect; we can make thousands of mistakes during our voyage. What is required is that we commit ourselves to a course and remain alert to the actualities of each moment, so we can guide our adjustments.”

Our job as creators isn't to design the perfect solution to the problem at hand, and trying to convince ourselves as much will only lead to more frustration and wasted time.

Instead, we should focus on putting in the work to understand the implications of our design solutions, knowing there will be things we miss, overlook, or fail to get right. As long as we are willing to progress toward some designed solution, we can learn where the design has fallen short once it's out and in the world. Then our job shifts to be adaptable and responsive.

Too many of us stress and procrastinate over trying to find "the one true solution" to a design problem, but the reality is we don't know what we don't know. The only way to learn is to pick the best solution you can with the information you have, then learn from getting it out into the world.

Pick a solution and learn from it.

How to measure design impact

Illustration by Martyna Wieliczko for this article.

How do you know what design's impact is on the product you're building? Even if you could measure design, how do you identify what parts of the product experience are worth measuring in the first place?

These are the types of questions digital designers inevitably end up asking themselves. They're also the types of questions business leaders look to design for clarity around. How do we know if our design team is doing the best, most impactful work for our customers? How do we measure the effectiveness of a design change on revenue? Where should the design organization be spending most of their focus?

It turns out there are many straightforward methods and strategies for measuring design impact. Two areas I recently combined while exploring the design impact at Gem—where we're building the source of truth for top-of-funnel recruiting—are top tasks and PURE (Pragmatic Usability Ratings by Experts). Here's how I did it.

Note there are many important ways of measuring product usability and design impact. This is just one small part of a larger effort and is in no way meant to be a complete, comprehensive way of measuring design impact.

1. Identify and map stages of user jobs

To start: I needed to understand the essential work our product seeks to solve for customers. My go-to for figuring out user needs related to workflows in a product is what's known as "top tasks."

The Nielsen Norman Group defines top tasks as "a list of 10 or fewer activities that users should be able to achieve using a design. If people can't do these things, the design has failed." Therefore, the first step for identifying top tasks is to outline all significant tasks. To do this, I started by outlining the various "jobs" our customers seek to accomplish. (Similar to Clayton Christensen's Jobs to Be Done framework.)

Identifying the main jobs customers use our product for was easier said than done. As someone new to the recruiting space, I had to leverage internal stakeholders and external resources like blog posts and conference talks to identify the primary "stages" of recruiting.

I learned that the recruiting process—or "lifecycle"—tends to have seven typical stages:

  • Preparing
  • Sourcing
  • Screening
  • Engaging
  • Managing the interview process
  • Hiring
  • Onboarding

Once I had a grasp of the stages, I documented each step, listing out the various jobs and tasks within each and enlisting the help of internal experts at the company to audit and contribute to the list. I formatted the text document as follows:

Stage

Job within this stage

  • Task for accomplishing the job
  • Task 2 for accomplishing the job
  • etc.

An example stage, job, and task breakdown within the document ended up looking something like this:

Stage 4: Engaging

Managing passive candidates

  • Continue messaging candidates with strong resumes
  • Set reminders for candidates that aren't actively looking right now
  • Connect team members with passive candidates to schedule "coffee chats"
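The stage → job → task hierarchy above could be modeled as a simple nested data structure. This is a hypothetical sketch (the actual document lived as plain text, not code), using the "Engaging" example as sample data:

```python
# Hypothetical sketch: the stage -> job -> task hierarchy as a nested
# dictionary, populated with the "Engaging" example from the article.
recruiting_map = {
    "Engaging": {
        "Managing passive candidates": [
            "Continue messaging candidates with strong resumes",
            "Set reminders for candidates that aren't actively looking right now",
            'Connect team members with passive candidates to schedule "coffee chats"',
        ],
    },
}

def count_tasks(stages: dict) -> int:
    """Total number of tasks across all stages and jobs."""
    return sum(len(tasks) for jobs in stages.values() for tasks in jobs.values())
```

Keeping the hierarchy explicit like this makes later steps, such as attaching a rating to every job, a matter of walking the structure.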

2. Survey internal experts on perceived job pain

Once I had a relatively comprehensive list of all the stages, jobs, and tasks a typical recruiting team faces, I then created a visual "map" in Figma of that same data.

Creating a graphic format of these stages, jobs, and tasks would be essential for helping internal and external stakeholders better visually parse and interpret the information. Quickly scanning and understanding each stage and task was critical for the next step of the process: surveying internal experts for their feedback and emotional rating of each job.

To do this next stage of the process, I selected groups of three to four internal experts at random to participate in a 30-minute workshop, with a total of 10 participants. The workshop format consisted of two 15-minute sections.

The first 15 minutes were quiet, heads-down time to provide feedback on the visual map. Were there any stages, jobs, or tasks that felt out of place? Were any tasks missing? The goals of this time were fixing typos, ensuring I was using industry language, and checking that everything generally made sense.

During the next 15 minutes, each participant accessed a private, just-for-them version of the map in Figma that was immutable except for a series of small circles with a question mark (?) in them that sat above each job. I asked participants to use the circles to select an emotional response for each job based on what they know or have experienced around it. Answers could fall into one of four emotion classifications.

  • Exciting
  • Satisfying
  • Annoying
  • Upsetting

Even if a participant wasn't an expert in the job they were evaluating, I asked each of them to leave an emotional response regardless. The more raw data I had to work with here, the better the outcome would be for identifying top tasks.

3. Survey customers on job pain

Once internal experts at Gem had given feedback and rated each stage's jobs on perceived emotion, I reached out to external customers for their ratings.

This step of the process took the form of a Google Form. For each job I had identified, I created a question that participating customers could rate. Because this meant customers would be evaluating more than 30 unique "jobs," I also offered customers the chance to win one of two $100 Amazon e-gift cards for their time. Offering an incentive significantly increased participation and meant I could get a broad audience to participate in the research.

Once customer ratings were in, I filtered responses based on 1. the time the customer had worked in the recruiting space, and 2. their job role. Certain job roles (like hiring managers) weren't likely to have the broad swath of insights needed for the whole end-to-end recruiting process, so I removed those responses from the final tally. By contrast, I prioritized responses from talent executives and full-cycle recruiters.

4. Index all pain ratings

With internal expert ratings and external customer ratings available, I transferred each stage and job to a Google Sheet and placed each emotional score alongside it. I paired each emotional response with a numeric value, with Exciting being the lowest (1) and Upsetting the highest (4).

I did this because I wanted to surface the most "painful" parts of the recruiting process. With the data in the Google Sheet, I could quickly create an index of which parts of the recruiting lifecycle customers and internal experts viewed as the worst. I then added the totals for each column and row and calculated the average, aggregate, max, and min for each job and for the entire index. From there, a simple set of conditional color formatting showed me the most critical areas for our team to focus on.
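The spreadsheet math above is simple enough to sketch in code. This is a minimal, hypothetical version of the scoring (the sample ratings are made up for illustration; the real work happened in a Google Sheet):

```python
# Hypothetical sketch of the pain index: emotional responses map to a
# numeric value, 1 (Exciting) through 4 (Upsetting), and each job gets
# an aggregate, average, max, and min score.
EMOTION_SCORE = {"Exciting": 1, "Satisfying": 2, "Annoying": 3, "Upsetting": 4}

def pain_stats(ratings: list[str]) -> dict:
    """Summarize the emotional ratings for a single job."""
    scores = [EMOTION_SCORE[r] for r in ratings]
    return {
        "aggregate": sum(scores),
        "average": sum(scores) / len(scores),
        "max": max(scores),
        "min": min(scores),
    }

# Example: mixed responses for one job (illustrative data only).
stats = pain_stats(["Annoying", "Upsetting", "Satisfying", "Annoying"])
```

A job with a high aggregate and average is a candidate "most painful" job, which is exactly what the conditional formatting in the sheet surfaced visually.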

5. Identify task flows for the most painful jobs

At this point, the team had everything they'd need to evaluate task flows for the most painful jobs (the "top tasks" identified by most painful in the pain index).

I took a look at the most painful jobs—those with the highest scores in our index—then looked at the individual tasks within each job area. Those tasks became the most critical areas for our team to monitor and evaluate. The intention here is that if we can improve the efficiency and ease with which customers can accomplish these specific tasks, we'll be helping to design a product that improves customer's lives.

Here is where the team is today: we know the top tasks as identified through internal and external workshops and surveys; now we need to break down those tasks into the individual steps users have to take to achieve them.

To break down and evaluate each step, the team looks at each set of tasks behind a job. These task steps become workflows that we break down in a Google Sheet. The result looks something like this:

"1. Go to the Prospects tab. 2. Click to filter on rejected candidates. 3. Select a candidate who was recently rejected. 4. Click to send them feedback. 5. Enter a feedback message. 6. Send the feedback. 7. Confirm the feedback was sent successfully."

6. Conduct PURE against each task flow

In the final stage of measuring design's impact, we take these task workflows as identified for the most painful jobs our customers face and ask three internal experts to follow the workflow and rate every step on a scale of 1-3 (good to bad) for three variables:

  • Ease of use
  • Speed
  • Efficiency

This type of testing is what's known as PURE: Pragmatic Usability Ratings by Experts. The "experts" are internal Gem employees who may or may not be experts on the product but are experts when it comes to understanding our customers and their pains.

The ratings these internal experts provide are calibrated and added together for each job area. The job areas are then given a total score, with lower scores being better.

For example, we may end up with a usability score of 38 for the customer job of "Giving feedback to job candidates." Compared to a score of, say, 22 for a different job we identified, we can add these scores together to get a complete usability score of 60 for the product.
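The PURE scoring described above can be sketched as a short calculation. This is a hypothetical illustration with made-up ratings, assuming each step gets one 1–3 rating per variable (ease of use, speed, efficiency) and a job's score is the sum across its steps:

```python
# Hypothetical sketch of PURE scoring: each workflow step is rated
# 1 (good) to 3 (bad) on ease of use, speed, and efficiency. A job's
# score sums its step ratings; the product score sums all jobs.
# Lower scores are better.
Step = tuple[int, int, int]  # (ease_of_use, speed, efficiency)

def job_score(step_ratings: list[Step]) -> int:
    """Total PURE score for one job's workflow."""
    return sum(sum(step) for step in step_ratings)

# Illustrative ratings only, not real data.
giving_feedback = [(2, 2, 3), (1, 1, 2), (3, 2, 2)]  # three rated steps
scheduling_chats = [(1, 1, 1), (2, 1, 1)]            # two rated steps

product_score = job_score(giving_feedback) + job_score(scheduling_chats)
```

Re-running this calculation each quarter with fresh ratings is what lets the team watch the product-wide score trend downward as usability improves.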

As the team focuses on improving the usability of these workflows within our product, we then re-evaluate all workflows, jobs, and usability scores every quarter. If the score lowers—say, it drops to 52—then we know our team's work on the design and usability side of things is having an impact.

These PURE ratings are not the end-all-be-all of the work we do as a design team, of course. We also track progress toward new feature goals, level up the foundations of the product holistically, evaluate how we collaborate and communicate with other business units, and look at many other areas of ownership for our team.

Using top tasks and PURE analysis means our team can set some level of measurement for monitoring the usability of our design work. With a set area and method for measuring our work, we can align our efforts and how we think about what matters most. This process, in turn, ensures consistent design efforts toward a shared objective.