Within the product team at Carbon, we are always exploring different ways to measure the impact we have on our customers’ lives. Through new user interviews and regular feature usability testing, we’ve become increasingly confident that we’re solving the right problems for the people using our products.

Although the insights we gain from speaking with our customers are instrumental in the progression of our offering, we introduced the System Usability Scale (SUS) to give us a high-level benchmark for monitoring how satisfied people are whilst using our product. It takes the form of a 10-question multiple-choice survey that asks people to score the usability of Carbon, resulting in a single metric (a sketch of the standard scoring follows the list below). Since introducing it we’ve noticed a significant shift in two key areas:

  1. Our customers feel more valued
  2. The wider team understand how the product is being received
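For anyone unfamiliar with how that single metric is derived, here is a minimal sketch of the standard SUS scoring; the function name and example answers are purely illustrative. Each of the ten statements is answered on a 1–5 agreement scale: odd-numbered (positively worded) items contribute their score minus 1, even-numbered (negatively worded) items contribute 5 minus their score, and the total is multiplied by 2.5 to give a value between 0 and 100.

```python
# Minimal sketch of standard SUS scoring; not Carbon-specific code.

def sus_score(responses):
    """Convert one participant's ten answers (each 1-5) into a 0-100 SUS score."""
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("Expected 10 responses, each between 1 and 5")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # indices 0, 2, ... are the odd-numbered items
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

# One (made-up) participant's answers to questions 1-10
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```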

Increased user engagement

We regularly run usability tests with our core user base (every sprint, in fact), to make sure we are headed in the right direction. This regularity means that we have the chance to iterate on features whilst simultaneously gaining validation on the correct route to take within a short period of time.

In an effort to complement this in-person testing with additional measures, we previously experimented with routinely reaching out to our user base for feedback, using incentives to encourage responses. In our particular case we found the responses to be fairly vague, typically because the person was answering outside the context of use. With the SUS survey we have instead pivoted to asking participants for structured feedback directly after a usability test.

Having people complete this survey immediately after interacting with the product, as opposed to receiving an ad hoc request, has dramatically improved response rates. It also keeps the scenario and mindset as consistent as possible across participants, both of which are extremely beneficial when using the metric to track how responses fluctuate over time.

An additional side effect of this approach is that many participants have expressed that they feel much more valued as customers when asked to provide this feedback in person, particularly when issues they have raised have been addressed in subsequent updates to the product.

Shared ownership

At the end of each sprint our product team share a recap of research findings, including the change in SUS score and how product updates might have impacted it. This really helps to highlight correlations between features delivered and the end-user response. As a result, the wider tech and commercial teams have come to reference the SUS score as a consistent benchmark, which has been a positive step towards a shared appreciation for delivering the best experience possible for our customers.
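For illustration, a recap of this kind only needs the mean SUS score per sprint and the change from the previous sprint; the sketch below uses made-up sprint labels and scores, not our real data.

```python
# Illustrative only: aggregating per-sprint SUS results for an end-of-sprint recap.
from statistics import mean

sus_by_sprint = {
    "Sprint 41": [72.5, 80.0, 67.5, 77.5],
    "Sprint 42": [82.5, 75.0, 85.0, 80.0],
}

previous = None
for sprint, scores in sus_by_sprint.items():
    avg = mean(scores)
    delta = "" if previous is None else f" ({avg - previous:+.1f} vs last sprint)"
    print(f"{sprint}: mean SUS {avg:.1f}{delta}")
    previous = avg
```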

Although the SUS score doesn’t give the complete picture of why people are feeling a certain way, it certainly has its place as a method to help guide our research when delving deeper to uncover the why.

The SUS score is just one method of measurement within our toolkit, but used alongside our core research practices it has proven a worthy metric to track as we aim not only to build the right thing, but to build the thing right.