During my time with the public sector earlier this year, I spent 8 months improving the internal tools and processes used by a large number of operational teams. I was also lucky enough to work under the same roof as the very users of these tools. Grabbing a morning coffee whilst picking up on a handful of pain points became something of a daily ritual.
Within my first couple of weeks, it became clear just from speaking with the teams that we had a user base keen to help us make a difference.
Setting the scene
Using the existing software as a starting point, we were able to begin building up user insight to benchmark system performance and usability right away. To provide real benefit, we needed to learn how each team used the platform within their daily workflow.
To start with, weekly interviews and contextual enquiries allowed us to feed product updates and much-needed insight back to the delivery teams, whilst mapping out user journeys. However, to ensure a regular, consistent feedback loop was in place, we began championing a monthly Model Office: a contextual behavioural study in which groups of users complete their daily tasks within a moderated environment, similar to their own workspace. In this case we predominantly worked with data entry specialists, using prototyped versions of the software.
These two-day Model Office sessions were best suited to testing significant feature updates, informed by the outcomes of all other research conducted that month. Traditional usability testing took place at more regular intervals, typically fortnightly, due to the large number of teams the platform supported.
Members of the delivery team would be invited along to observe each session and gain first-hand insight, which also enabled us to demonstrate the value of investing in user research throughout product development.
Adapting to changing conditions
To get the most out of these studies, we knew it was key to visit the teams in their natural environment. Everything from their building and surroundings, right down to their equipment and workspace had to be accurate.
We were also very aware of one important factor…
Taking users away from their day to day work would impact business performance.
While it might have seemed preferable, from an operations point of view, to only conduct research during the quieter months, it was vital for us to understand how the software held up during busy periods, when users rely on their tools the most.
To minimise interruption, we set aside time each week to shadow team members as they carried out data entry using the live system, observing their behaviours and reactions to system feedback. We limited the number of observers during this time, typically to one per user, in an effort not to overwhelm or distract participants.
The relationships we’d built with team managers helped us address this potential hurdle and continue with our research. Through previous contextual studies and co-design workshops, we’d already demonstrated how we could not only address issues with their workflow, but also encourage the wider business to be involved in the design process.
By being completely transparent with both our users and business stakeholders, all those involved understood that continuous questioning and testing of our products would benefit everyone.
After running Model Offices across multiple locations, the studies attracted the attention of key members of the organisation. We saw this as an opportunity to demonstrate the positive outcomes on a larger scale. We began sharing our experiences in company-wide communications and invited government stakeholders to participate in the workshops we held following each session.
Requests came in for us to run Model Offices as a way of testing updates across a number of projects. While it was great to see enthusiasm and a belief in the approach, this came with the responsibility of determining the most appropriate research method for each scenario. It was also a chance to educate others on the benefits of understanding user needs before deciding how to move forward.
Continuing the journey
Although our immediate users were within the organisation, we needed to consider how our end users would be impacted by the usability of internal tools. This meant understanding each and every touch point our external customers would encounter throughout their journey, both online and offline.
We gathered pain points from discussions with the users who handled customer queries, giving us valuable findings to share with the wider UX teams delivering public-facing products.
By building this awareness, delivery teams across the organisation could focus on the shared, holistic goal of providing consistent end-to-end experiences.