HONEYWELL

Warehouse asset management system

Overview

B2B software for IT/Warehouse managers to monitor and optimize their hardware devices

User Problem
How might we improve visibility into device performance, usage, and maintenance to help B2B customers get the most out of their Honeywell hardware in distribution centers?

Business Goal
Launch a new data product that integrates with Honeywell hardware (computers, printers, and scanners) to help customers find actionable insights to optimize their operations, and close a B2B deal with Macy’s

Solution
A new software system called Operational Intelligence with dashboards, alerts, device details, and location/user management

Skills
Prototyping, interaction design, icon design, information architecture, design workshop planning, product management, agile methodology

My role
Product/Interaction Designer, Design Program Manager

Platform
Desktop

Duration
6 months

Collaborators
Yamilet Esquivel & Mauricio Guerrero (Design), Nancy Wojack (Research)

Tools
Sketch, Invision, Abstract, JIRA

User Problem

Our key Honeywell customer had used hardware devices such as handheld scanners, tablets, and barcode printers in their distribution centers for decades.

Before we developed this product, a warehouse asset management tool, customers had no insight into how their devices were used, where they were located within a particular site, or when they would fail.

Customers needed the right data to decide how to better allocate their Honeywell devices and when to fix or replace them.

User Research

Site visits & contextual inquiry

Nancy, our user researcher, conducted site visits with eight different customers, meeting IT managers in their corporate offices and warehouse managers in their distribution centers. She observed their ongoing business activities and interviewed roughly 20 people.


Journey map

From the interviews and site visits, I created a journey map. It highlighted the three key areas in the experience that needed improvement, which would become the focus of our designs:

Personas

Warehouse manager

Responsibilities:

  • Optimizing processes and operations

  • Managing employees

Goals:

  • Increase output and efficiency of employees on their team

IT Manager

Responsibilities:

  • Choosing equipment and software for the company

  • Maintaining/repairing equipment & software

  • Providing tech support to employees & workers

Goals:

  • Buy the right equipment in the right quantity

  • Keep equipment up and running

  • Minimize costs from lost, broken, or poorly performing equipment

Ideation

Organizing a design workshop

I created an agenda for a three-day design jam in Seattle, where we hosted 12 representatives from our cross-functional teams.

Attendees included engineering managers, developers, hardware designers, product managers, product marketers, and UX designers/researchers. They traveled from Atlanta, Charlotte, Pittsburgh, Mexico, and India. The workshop I organized included crazy-eights time-boxed sketching, dot voting, storyboarding, and an impact/feasibility framework.

Here’s the agenda:

Crazy eights and dot voting

Each member of the team generated one concept per square every two minutes (eight concepts in 16 minutes). Then we reviewed all concepts, giving each person three votes, with the exception of our PM/Product Marketer, who got six votes. Here are my sketches:

Sketches of the ideas I generated. Some of my more popular ideas included dashboard metrics and alerting.

Ideation framework to narrow down concepts

We prioritized all the concepts based on votes, grouped duplicate or similar ideas, and mapped them by impact and feasibility.

The ideas in the upper right are high impact and highly feasible. The top ideas were a notification system about relevant maintenance issues and a calendar or to-do list of maintenance tasks.
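As a rough sketch, the vote-and-quadrant prioritization we used can be expressed in a few lines of code (the concept names, vote counts, and thresholds below are hypothetical, for illustration only):

```python
# Hypothetical workshop data: each concept with its dot votes and estimated
# impact/feasibility on a 1-5 scale (illustrative values, not actual session notes).
concepts = [
    {"name": "maintenance notifications", "votes": 9, "impact": 5, "feasibility": 4},
    {"name": "maintenance to-do list",    "votes": 7, "impact": 4, "feasibility": 5},
    {"name": "3D warehouse map",          "votes": 2, "impact": 3, "feasibility": 1},
]

# Rank by votes, then flag the high-impact / high-feasibility quadrant.
for c in sorted(concepts, key=lambda c: c["votes"], reverse=True):
    c["top_quadrant"] = c["impact"] >= 4 and c["feasibility"] >= 4
    print(c["name"], c["votes"], c["top_quadrant"])
```

In practice this happened on a whiteboard with sticky notes; the code simply makes the sorting and quadrant logic explicit.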

Storyboarding the device management process

Here’s my storyboard for the winning concept of our first Minimum Viable Offer (MVO) – a set of dashboards with alerts and a daily email digest.

In this storyboard, a DC Manager gets a daily report email alerting her that 52 batteries are dead and won’t last through the shift. The batteries are dying so quickly because they’re too old, so she orders new ones, and her team’s productivity increases.

User flows for preventing maintenance problems

I then created a user flow to diagram how the alerts and notifications would help achieve our experience outcome (preventing maintenance problems with less effort and downtime).

UX & Content

Initial concepts

To wrap up the design jam, I sketched a few screens that tied back to each stage of the user flow, using components from the Honeywell design system for the mockups wherever possible. The sketches helped guide discussion and alignment during sprint planning.

Dashboard: The home screen for Operational Intelligence with clickable charts and alerts that help IT/DC Managers understand problems and trends with their business.

Device management: Users click a chart to arrive at this clickable table, where they can drill into specific devices driving trends or alerts.

Device detail: device specific details and data points

Device details

Yamilet created the information architecture (IA) for the navigation, while I focused on defining the IA below for each unique device type (mobile computers, printers, scanners, and batteries) using research from Nancy’s site visits.

I used the IA to define relevant column headers for data tables, categories for key data points, and their location above the fold vs. below the fold on device detail pages.

I then validated my assumptions with hardware experts and our key customer.

Information Architecture

Feedback & collaboration

Peer collaboration

Yamilet and I split the design work. I worked on dashboards and data visualization, table filters, device details, and in-app alerts, while she worked on navigation and email alerts. Each of us created components and saved them in a shared Sketch library. I got feedback from my design peers, PM, and engineers on a weekly basis.

Co-developing and iterating with customers

I also presented the designs to our key customer’s leadership and IT team for feedback every two weeks using a prototype before locking in final designs. These co-development sessions helped validate decisions. I made several changes and additions to the designs based on their input and insights gleaned from our ongoing discussions. For example:

  • V1: At first, I used a critical red or warning yellow icon to represent the most severe issue a device had. But Macy’s wanted to see how many issues a device had, and which types, without having to open a new window.

  • V2: I then created icons for each type of issue, showing all possible issues in black and highlighting actual issues in red or yellow. During a design critique, my team flagged that the sheer number of icons caused cognitive overload.

  • V3: Finally, I landed on V3, which showed only a colored icon specific to the type of issue.

Contributing to the design system

Alerting patterns

I surveyed several users and customers about which alerts were most important to include in Operational Intelligence. The survey results also determined the order and color coding scheme of alerts.

I also created patterns for alerting to go on the dashboard and defined the three levels of severity:

  1. Critical: active issue affecting the business (error icon, red color)

  2. Warning: impending issue (warning icon, yellow color)

  3. Informational: no immediate impact or action required (info icon, blue color or no color)

For example, a device that is offline for a long time is critical (red), because it’s worth thousands of dollars and could be lost or stolen. Software updates are slightly less severe, so they are treated as warnings (yellow).
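The three severity levels can be summarized as a small lookup table. The sketch below is illustrative only; the names and example rules are hypothetical, not the shipped Operational Intelligence code:

```python
# Illustrative sketch of the three-level alert severity scheme.
SEVERITY = {
    "critical": {"icon": "error", "color": "red",
                 "meaning": "active issue affecting the business"},
    "warning": {"icon": "warning", "color": "yellow",
                "meaning": "impending issue"},
    "informational": {"icon": "info", "color": "blue",
                      "meaning": "no immediate impact or action required"},
}

def classify_alert(alert_type: str) -> str:
    """Map an example alert type to a severity level (hypothetical rules)."""
    rules = {
        "device_offline_long": "critical",   # costly device may be lost or stolen
        "software_update_due": "warning",    # less severe, but action needed soon
        "daily_report_ready": "informational",
    }
    return rules.get(alert_type, "informational")

print(classify_alert("device_offline_long"))  # critical
```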

Alert banners for critical, warning, and info states

Page alert banners

Alerting tooltips

KPI cards for critical, warning, and info states

KPI card banners

Insight cards and data visualization color palette

Honeywell had just launched data visualization components in the design system, but none had been used in a shipped design yet. I piloted the components and identified two gaps:

  1. There was no pattern or way to show insights or trends from the charts

  2. The data viz color palette included shades of yellow and red for data that had no relation to severity. Using red and yellow in charts and graphs conflicted with other parts of the UI, where those colors meant “warning” or “critical”.

To address this, I added trendlines to the charts and a new insight component showing the key takeaway below each chart. I also swapped out the color palette and wrote documentation for developers and future designers on how and when to use the new colors.

Charts before redesign

Charts after redesign: new color palette, trendlines and insights

New component: Expandable tables

Warehouse and IT managers needed to give their employees and teammates the right access to view device data and troubleshoot problems. But the design system didn’t have a component that let users scan high-level details for many items at a glance, drill down into specifics, and make edits, all without switching contexts.

To solve this, I explored a few interaction patterns. I ruled out several options (e.g., a list component that linked to a modal) because I didn’t want to interrupt the task. Instead, I settled on an expandable table, which let users see all the data on a single screen.

I then contributed my expandable table pattern to the Honeywell design system for other designers to use.

An example of the expandable table component that I created. In this case, I used it to display employee information.

Icon design

Our design system already included many standard icons, but the product required a few new custom icons. I created the icons below, which are being added into the design system for others to use.

Three icons designed to show device location

Final UI

Mockups: Asset Summaries and filters

Mockups: Device details

Feedback & development

Peer feedback

Before sharing screens with customers, I presented them to fellow Honeywell designers and design managers from other teams. I conducted six reviews over the course of the project using Honeywell’s Design Quality Review process.

Cross-functional feedback

I also reviewed my work with cross-functional members of my team including the product manager, product marketing manager, engineering manager, hardware designers, hardware engineers, and the developers assigned to each design. During the ongoing reviews, we worked together to align on what was feasible and the level of effort to build. Getting stakeholder alignment through these reviews was important because we uncovered several technical limitations in the designs. For example, I had to remove filtering functionality from expandable tables because it was too complex for engineering to build in time for the first release.

Agile

I collaborated with 18 developers in Mexico City, Atlanta, Charlotte, and Hyderabad. We ran six two-week sprints and met every Monday and Wednesday to discuss progress on user stories, issues, and defects.

QA & usability testing

I tested, filed defects, and audited the user experience before launch, and found a total of 24 areas for improvement, which I filed as JIRA tickets and triaged with the rest of the team.

Results

System Usability Scale score of 83

We measured the success of the release with a usability test using the System Usability Scale (SUS) method. The final score of 83 was well over our target of 75, so we were thrilled!
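For readers unfamiliar with the method, a SUS score is computed from ten 1–5 Likert-scale questionnaire items. Here is a minimal sketch of the standard scoring formula (the sample responses are hypothetical, not our actual study data):

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered questions (index 0, 2, ...) are positively worded and
    contribute (response - 1); even-numbered questions are negatively worded
    and contribute (5 - response). The sum is scaled by 2.5 to yield 0-100.
    """
    assert len(responses) == 10
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5

# Hypothetical participant giving the most favorable answer to every item.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

A score is computed per participant and then averaged across the study; our average of 83 put the product comfortably in the “good to excellent” range relative to the commonly cited benchmark of 68.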

Learnings

One of the best parts about this project was regular access to our key customer to understand their wants and needs and get their ongoing feedback. I plan to push for this level of continuous customer/user involvement in my future projects.

At the beginning of the project, it was tough to stay aligned with my team because we had never worked together before and roles and responsibilities were unclear.

I was further challenged by the fact that almost my whole team was remote and in different time zones, which meant a lot of time on the phone and screen sharing, often at odd hours. Implementing a JIRA labeling process and regular check-ins made a difference in improving our communication and increasing accountability.

The core team: design, research, PM, and engineering leads (not pictured: 20+ devs)
