Lyft

Product Designer, 2018 - 2020. Led design for autonomous vehicle data tooling, including product strategy, defining metrics and objectives, producing interaction and visual designs, and prototyping.

Lyft data curation platform

Before autonomous vehicles can operate, the machine learning models they use to make sense of the world must first be trained on carefully curated data.

Engineers equip test vehicles with sensor hardware, including radar and lidar sensors, an array of cameras, inertial measurement units (IMUs), and GPS. Professional drivers manually take the cars along a pre-defined route to capture data from each sensor. Once that data is collected, software engineers process it into a form the annotation tools can work with. A team of human curators then annotates the information by hand, producing the labeled ground truth from which the models can learn. It is an often costly and inefficient pipeline.
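
As a rough mental model of that pipeline, here is a minimal sketch in TypeScript. The type and field names are my own invention for illustration, not Lyft's internal schema.

```typescript
// A coarse sketch of the data flow described above. The type and field
// names are hypothetical, not Lyft's internal schema.

type SensorSource = "lidar" | "radar" | "camera" | "imu" | "gps";

interface SensorCapture {
  source: SensorSource;
  timestampNs: number;  // capture time, synchronized across sensors
  payload: ArrayBuffer; // raw sensor frame
}

interface Label {
  objectId: string;
  kind: string;          // e.g. "vehicle", "pedestrian"
  boundingBox: number[]; // 3D cuboid parameters
}

// The output of human curation: a scene's raw captures plus the hand-drawn
// labels that perception models are trained against.
interface AnnotatedScene {
  captures: SensorCapture[];
  labels: Label[];
  annotatedBy: string; // the human curator responsible
}
```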

Beginning in 2018, I worked with a small team—one researcher, one product manager, and a dozen software engineers—to build a platform for new, optimized data tools. Our objective was to radically improve the data curation process to save time and money without sacrificing the quality of the annotated data.

The product manager on the team and I came up with the idea: if we could build an in-house set of software tools with exceptional user experience, we could save Lyft millions of dollars annually.

As with any design process, I began with research: qualitative surveys of our users, lab studies in our Seattle office, and competitor analysis. We sought to understand what data our models needed to perform successfully and how curators annotate that data. We spent a week watching data curators work through data synthesis, noting the pains they encountered with the existing (then industry-standard) data tools.

We found that the existing tools on the market fell short in both how they were built and how they were designed. We knew there was an opportunity to improve both the front-end experience and the back-end modeling of complex data sets.

As I started the project, I knew it would be challenging: there was only one notable lidar-annotation tool on the market, so I had few sources of inspiration for a problem as ambiguous as "How do you make software for data curation?" Instead, I looked to industries whose tools provide easy-to-use interfaces for complex data: filmmaking, music production, programming IDEs, 3D software, and video games such as Fortnite, SimCity, and Cities: Skylines.

After initial brainstorming and sketching, we began to develop a keen understanding of how our tools would need to work and how we might build a scalable platform for managing and delegating tasks across a team of hundreds of human curators.

Because each task, or annotation scene, is often unique, we needed to give users capabilities like customizing the tool's layout for multi-view editing of a scene and precise control of camera angles in 3D. We learned through lab studies that letting curators adjust the software's layout, save presets, and control their own workflow, to match their diverse visual and cognitive abilities and functional specialties, would enable faster and more efficient work.
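
To make the idea of saveable layouts concrete, here is a minimal sketch of what a layout preset could look like as data. The structure and names are hypothetical, assumed for illustration rather than drawn from the actual product.

```typescript
// Hypothetical shape of a saveable workspace layout, for illustration only.

type ViewportKind = "perspective3d" | "topDown" | "cameraFeed" | "timeline";

interface Viewport {
  kind: ViewportKind;
  // Normalized position and size within the workspace (0 to 1).
  x: number;
  y: number;
  width: number;
  height: number;
  // Optional camera state for 3D viewports.
  camera?: { position: [number, number, number]; target: [number, number, number] };
}

interface LayoutPreset {
  name: string;
  viewports: Viewport[];
}

// A curator might keep one preset for lidar-heavy scenes and another for
// scenes that lean on camera imagery, switching between them per task.
const lidarFocused: LayoutPreset = {
  name: "Lidar focus",
  viewports: [
    {
      kind: "perspective3d",
      x: 0, y: 0, width: 0.7, height: 1,
      camera: { position: [0, -30, 20], target: [0, 0, 0] },
    },
    { kind: "topDown", x: 0.7, y: 0, width: 0.3, height: 0.5 },
    { kind: "cameraFeed", x: 0.7, y: 0.5, width: 0.3, height: 0.5 },
  ],
};
```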

Additionally, the team and I developed a strong intuition from our in-person research that focusing on a keyboard-driven workflow would enable rapid completion of even the most daunting curation tasks. We wanted users to be able to do their jobs without regularly moving between keyboard and mouse, something we observed them doing often.
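
A keyboard-first workflow boils down to mapping every frequent action to a key. Here is a simplified sketch of that pattern; the commands and bindings are invented for illustration, not our actual shortcut set.

```typescript
// A simplified sketch of a keyboard-first command map. The commands and
// bindings here are hypothetical, not the product's actual shortcuts.

type Command =
  | "nextObject"
  | "prevObject"
  | "nextFrame"
  | "prevFrame"
  | "cycleLabel"
  | "confirmAnnotation";

const keymap: Record<string, Command> = {
  Tab: "nextObject",
  "Shift+Tab": "prevObject",
  d: "nextFrame",
  a: "prevFrame",
  c: "cycleLabel",
  Enter: "confirmAnnotation",
};

// Resolve a keydown event to a command; returns undefined for unbound keys.
function commandFor(event: KeyboardEvent): Command | undefined {
  const key = event.shiftKey && event.key === "Tab" ? "Shift+Tab" : event.key;
  return keymap[key];
}
```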

As a result, we developed a proprietary workflow that let curators annotate thousands of scenes efficiently and at scale. We have since patented four unique concepts from this work, including US Patent 10909392 and US Patent 11151788.

As part of my design process, I needed a straightforward way to see the curation flow across use cases and how all of our controls worked together. To help the team understand the proposed designs and interactions, I built a mid-fidelity, end-to-end prototype of our lidar annotation flow in Framer. Our curators provided rapid feedback on the prototype, and we could iterate on changes on the spot.

Overall, this process helped cement key concepts in our engineering and product partners' minds, propelling the project forward faster than if we had worked from static mockups or low-fidelity prototypes.

In the end, this prototype helped establish how we might build our data curation tools and served as the foundation for all web-based tools across Lyft's self-driving car organization. It let the team construct data tools faster and more efficiently than they had over the previous two years.

The team built out a suite of tools from the designs I provided, and we found the new software dramatically reduced curation time: the average time to curate a set of data dropped from six hours to just 45 minutes, roughly an eightfold improvement. These changes, in turn, would save the business millions of dollars every year.

Autonomous vehicle system visualizations

To ensure the safety of our vehicles and the quality of our vision and prediction models, we relied on a proprietary tool called CarViz to show engineers what our cars were sensing and what they were considering doing. When I joined the team, a functional MVP of CarViz already existed, but the product was a kludge of information and decisions from across multiple teams of engineers.

I began my work by simplifying the interface, focusing on fundamental principles (defined in partnership with members of the team) such as:

  • Mirror reality with shapes and colors where possible. Roads should look like roads, cars like cars, and traffic lights like traffic lights.
  • Leverage machine learning to indicate what's essential at any moment and visualize it for drivers and engineers.
  • Use color and vibrancy in a scene to communicate the most critical bits of information, such as a potential road hazard or a vehicle suddenly changing lanes (sketched below).
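
As a rough illustration of that last principle, the sketch below maps a model-estimated criticality score to color and opacity so hazards visually dominate the scene. The thresholds, palette, and names are hypothetical, not CarViz's actual values.

```typescript
// Hypothetical mapping from a model-estimated criticality score to visual
// treatment. Thresholds and colors are illustrative, not CarViz's palette.

interface TrackedObject {
  id: string;
  kind: "vehicle" | "pedestrian" | "cyclist";
  criticality: number; // 0 (background) to 1 (immediate hazard), from the model
}

function styleFor(object: TrackedObject): { color: string; opacity: number } {
  if (object.criticality > 0.8) {
    return { color: "#ff3b30", opacity: 1.0 }; // urgent: vivid red, fully opaque
  }
  if (object.criticality > 0.4) {
    return { color: "#ff9500", opacity: 0.9 }; // notable: amber
  }
  return { color: "#8e8e93", opacity: 0.5 }; // background: desaturated gray
}
```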

Through these principles, the engineering team created a more consistent and reliable way of visualizing what the software was thinking and doing. In turn, these improvements led to a safer, better-performing system for operating autonomous vehicles.

While at Lyft, I learned a lot about designing in 3D space and leveraging physical space, time, and boundaries. One of my favorite innovations was a system for warning vehicle operators about emerging, potentially dangerous situations in traffic, such as a rapidly approaching motorcyclist just offscreen. The idea came directly from watching videos of people playing Fortnite: whenever a teammate was offscreen, the game would "anchor" a visual indicator of that player's general location to the side of the screen.
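
The underlying technique is straightforward to sketch: project the direction from the screen center toward the offscreen object, then clamp it to the viewport edge so the indicator sits on the side nearest the threat. The function below is an illustrative sketch of that math, not the production implementation.

```typescript
// Compute where an offscreen object's indicator should sit along the
// viewport edge. Names and details are illustrative only.

interface Point { x: number; y: number }

function edgeAnchor(
  target: Point,          // object's projected screen position (may be offscreen)
  viewport: { width: number; height: number },
  margin = 24,            // keep the indicator slightly inside the edge
): Point {
  const center = { x: viewport.width / 2, y: viewport.height / 2 };
  const dx = target.x - center.x;
  const dy = target.y - center.y;

  // Scale the direction vector so it just touches the nearest viewport edge.
  const halfW = center.x - margin;
  const halfH = center.y - margin;
  const scale = Math.min(
    halfW / Math.max(Math.abs(dx), 1e-6),
    halfH / Math.max(Math.abs(dy), 1e-6),
  );

  return { x: center.x + dx * scale, y: center.y + dy * scale };
}
```

In practice, such an indicator could also encode urgency, for example by reusing a criticality-to-color mapping like the one sketched earlier.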

Designing Lyft for web

After I had spent nearly a year working on software for the self-driving team at Lyft, a small group of engineers began forming to work on a new, modern version of Lyft for the web.

At the time, Lyft was predominantly a native mobile app. The web experience was relatively lackluster and hadn't been updated in nearly six years.

Our team decided it was time to build a minimum viable version of an updated web experience, including a responsive version of the Lyft experience, to test the market's response. After a handful of meetings to evaluate the current state of the product, the team and I decided to build out an initial web experience in just three months.

We began the project by looking at existing metrics for Lyft's outdated web experience. Our initial assumption was that we could easily 10x Lyft account creation and rides booked simply by updating the web experience with more modern technologies. We could also expand our reach by creating a flexible tool that could be integrated with third parties such as Yelp, Facebook Messenger, OpenTable, and more.

I also audited the existing market of web-based ride-sharing apps and similar booking experiences across travel, hotels, and restaurants to understand what comparable experiences were like and what users might expect from such an interface.

After initial research, I began wireframing what a modern, flexible web experience might look like for Lyft. I worked closely with the engineers on the team to iterate on a prototype, which we presented to stakeholders to share our vision of a more modern web interface for Lyft.

Using the prototype, I guided the team toward starting with an updated "fare estimator" that anyone could use to get an estimate for a Lyft ride without having to create an account. We would then add account functionality and the ability to book a ride shortly after the estimator shipped.
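
In spirit, the estimator was just an unauthenticated request for price ranges between two points. The sketch below shows one way such a call could look; the endpoint, parameters, and response shape are hypothetical, not Lyft's actual API.

```typescript
// Hypothetical sketch of an unauthenticated fare-estimate call. The
// endpoint, parameters, and response shape are invented for illustration.

interface FareEstimate {
  rideType: string;
  minCostCents: number;
  maxCostCents: number;
  etaSeconds: number;
}

async function estimateFare(
  pickup: { lat: number; lng: number },
  dropoff: { lat: number; lng: number },
): Promise<FareEstimate[]> {
  const params = new URLSearchParams({
    start_lat: String(pickup.lat),
    start_lng: String(pickup.lng),
    end_lat: String(dropoff.lat),
    end_lng: String(dropoff.lng),
  });
  // No account or auth token required; the whole point was to let anyone
  // get an estimate before signing up.
  const response = await fetch(`/api/fare-estimate?${params}`);
  if (!response.ok) throw new Error(`Estimate failed: ${response.status}`);
  return response.json();
}
```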

In mid-November, just two months after the project began, we successfully launched our initial test. Not only did we see customers use the updated web tool frequently, but we also had the chance to ship the first modern Lyft web interface using Lyft's new product language. And qualitative research indicated it was a much better experience than the outdated tool, helping pave the way for future Lyft web tools.

Today, Lyft supports a full booking experience on the web thanks to the work of the small team I helped drive with design.
