Autotask PSA is the premier cloud-based software suite for running IT businesses end-to-end. We provide solutions for CRM, Service Desk, Projects, Time Tracking, Billing, File Sharing, Endpoint Management, and Business Intelligence (just to name a few).
We released Operational Dashboards to our customers to great success. It was one of the most widely used new features we’d released to date.
With the feature in users’ hands, we were able to start gathering feedback on ways to improve the experience. We’ll be discussing some of this feedback and the testing we employed to help inform the product design.
(All work Copyright Autotask Corporation)
Research/Validation, Visual Design, Interaction Design
>95% of PSA users are using Operational Dashboards.
Customers consistently report increased accountability, productivity, and sales within their organization.
One of the design-related points of feedback we received quite often after release concerned widget size. We released with widgets of a fixed height, plus the ability to customize widget width within our specified sizes (we compared them to “t-shirt” sizes).
Users could select a width of 1–4 segments, but widget height was always fixed. What we were hearing from our customers was that, especially for viewing tabular data, they wanted to be able to see more items, and more vertical space in a widget would help them accomplish this.
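To make the constraint concrete, here is a toy sketch of that sizing model, not the product’s actual layout engine: widths come in four “t-shirt” sizes (1–4 segments), height is fixed, and widgets flow left to right, wrapping when a widget no longer fits in the current row. All names and values here are illustrative assumptions.

```typescript
// Hypothetical model of the as-released widget grid: variable width,
// fixed height, simple left-to-right flow with row wrapping.
type WidgetWidth = 1 | 2 | 3 | 4;

interface Widget {
  id: string;
  width: WidgetWidth; // width in grid segments
}

interface Placement {
  id: string;
  row: number;
  col: number; // starting segment, 0-based
}

const GRID_SEGMENTS = 4; // assumed segments per row

// Place widgets left-to-right, wrapping to a new row whenever the next
// widget doesn't fit in the remaining segments of the current row.
function layout(widgets: Widget[]): Placement[] {
  const placements: Placement[] = [];
  let row = 0;
  let col = 0;
  for (const w of widgets) {
    if (col + w.width > GRID_SEGMENTS) {
      row += 1;
      col = 0;
    }
    placements.push({ id: w.id, row, col });
    col += w.width;
  }
  return placements;
}
```

Because every row is the same height, this scheme is very predictable: a 2-wide widget followed by a 3-wide one simply pushes the latter to the next row. Allowing variable heights is what breaks that predictability, as described below.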
It may not sound like a difficult change to make, but this is enterprise software. While the dashboard feature was written in a modern framework (more than capable of supporting a solution), the PSA product itself is used by a myriad of different types of organizations. If you combine unique organizational processes with their employees’ roles and responsibilities, there are countless permutations of user expectations to carefully consider and accommodate when making any change.
Here are some of the questions/concerns we had as designers, that we wanted to address before deciding on a plan of action:
- How would drag-and-drop composition be affected?
- Would users welcome or be frustrated by a “smart” layout system which attempts to use screen space as efficiently as possible?
- Could we make accurate assumptions about read order?
- Is this something that users think they want (because it sounds nice) but wouldn’t actually use IRL?
Testing Design Theories
In an attempt to answer these fundamental questions, I designed a couple of paper-based activities that would help us gain more insight into how people expected dashboard widgets to respond to their interactions.
- A worksheet to help determine read-order
Participants were shown a sample dashboard full of widgets. Each widget had an empty space for a label. They were instructed to label the widgets with numbers indicating their own perceived order. There were no right or wrong answers, but we were looking to see what would happen when we disrupted the traditional grid layout in favor of a masonry-style grid.
- A tabletop exercise with a real-life scenario
Traditionally, in many drag-and-drop layout environments, moving one item can cause other items on the screen to react to that movement. This tabletop exercise presented each participant with a dashboard of widget cut-outs. I provided a scenario of moving one widget to a new location between two existing widgets, and tasked each participant with actually performing the action using our paper simulation. This was intended to show us how people expected the Dashboard to react when they made one intentional movement.
Why were these activities important?
Users were accustomed to arranging widgets via drag & drop. Widgets reacted quite predictably when height was uniform. However, we wanted to determine what the predictable behavior would be if widgets were no longer the same height and users were manipulating more irregular shapes.
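The uniform-height case is predictable because a drag-and-drop move reduces to a simple list reorder: the dragged widget is removed from its old position and reinserted at the new one, and everything else shifts along the read order. A minimal sketch of that behavior (function and names are illustrative, not the product’s code):

```typescript
// With uniform widget heights, moving a widget is just a reorder of the
// underlying list; the grid then reflows in read order.
function moveWidget<T>(items: T[], from: number, to: number): T[] {
  const next = items.slice();        // don't mutate the caller's array
  const [moved] = next.splice(from, 1); // lift the dragged widget out
  next.splice(to, 0, moved);            // drop it into its new slot
  return next;
}
```

Once heights vary, this mental model breaks down: reinserting a tall widget between two short ones can ripple through the whole grid in ways users may not anticipate, which is exactly what the tabletop exercise probed.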
After running the validation activities with 12 participants, the results were…
Largely inconclusive. Really.
Our group was split almost evenly between differing read-orders, while expectations of layout behavior were generally split evenly as well (with some outliers). We also had a split between participants who wanted their dashboard to position widgets based on how they fit best, and people who felt it was extremely important to specify the order themselves (even if it meant large gaps in the grid layout and more content wrapping out of view).
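The two camps map to two very different placement strategies. As a toy illustration (assuming single-segment-wide widgets of varying heights in a column grid; this is a sketch, not any engine we shipped), a “best-fit” layout drops each widget into the currently shortest column to minimize gaps, while a strict read-order layout fills columns in sequence and simply accepts whatever gaps result:

```typescript
// Compare the two layout philosophies participants split over:
// best-fit packing vs. strictly preserving the user's order.
function columnHeights(
  heights: number[],
  columns: number,
  bestFit: boolean
): number[] {
  const cols: number[] = new Array(columns).fill(0);
  heights.forEach((h, i) => {
    const target = bestFit
      ? cols.indexOf(Math.min(...cols)) // shortest column wins
      : i % columns;                    // next column in read order
    cols[target] += h;
  });
  return cols;
}
```

For example, with heights `[3, 1, 3, 1]` in two columns, best-fit balances both columns to a height of 4, while strict read order stacks the two tall widgets together for a ragged height of 6 with a large gap — a compact way to see why neither preference is obviously “right.”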
TL;DR – it was all over the map.
While we love proving our theories right, and are thankful when we prove them wrong (avoiding a very expensive train-wreck, and finding a better solution in the process), the reaction to inconclusive data just feels so anti-climactic.
In reality, we avoided some potentially very expensive mistakes by collecting our inconclusive data. We decided that it definitely wasn’t the right time to build any of the costly features/enhancements we’d been mulling over. At the very least, we needed another cycle or two of testing to see if anything conclusive would shake out. We know what we don’t know, and that’s still knowledge!
This is how the system handled moving widgets around.
Participants numbered the widgets on this work sheet with their perceived read-order.
Working with individual paper widgets, participants were asked to shift the widgets in the way they would expect them to move, when one widget’s position was changed.
When working with technology day in and day out, it’s often easy to forget that we can use very primitive “tools” in creative ways to measure the needs and behaviors of humans.
We weren’t able to put live code in customers’ hands, but as it turns out, we didn’t need to. We were able to obtain the insight needed to inform the design of Dashboards and build something our customers continue to find delightful and valuable.