At NHS.UK we have specialist user-researchers – such as myself.

We’re not the first to believe that everyone should spend time with users.

We observe and analyse research as a team, and encourage everyone to moderate interviews in our pop-up research.

I sat down with Kim, a content designer, and Dom and Paul, two of our developers, to hear what they felt about the experience of observing in the GDS user research lab, attending a diabetes day, and running pop-up guerrilla testing in a GP waiting room.

Martin: Tell me about talking to strangers.

Paul: There’s a lot of adrenaline – it’s tiring.

I felt clunky, approaching complete strangers, saying “Hello. I work for the NHS. Can I come and ask you some questions?”

It was also exciting – every single one of them said yes – and told me their life story.

Martin: Did that surprise you?

Paul: Not really, but I still had to overcome my fear of approaching people.

Martin: What did you learn?

Paul: I had a suspicion, from the research we had done before – but it was amazing how unlike me people think. People approach the web completely differently from how I do it. That’s really good to see.

It’s a reminder – you can build anything you like. It doesn’t matter if I think it’s intuitive. Until I’ve shown it to someone who isn’t me, I don’t really know if it’s working or not.

Martin: You’re a developer. How are you different from the audience?

Paul: I’m probably more analytical – looking for patterns. Being a technical person gives me a different perspective.

When you build something, having a real user in mind helps.

You can make something that pleases you but that’s not helpful. If you’re making something for a user you can visualise in your head, that’s really helpful to make the right thing.

Martin: What surprised you?

Paul: One thing that was surprising was a lady who whizzed to the bottom of some concise content, without reading it, and said she liked it.

[Martin: We find that, in understanding and predicting user behaviour, what people do is more reliable evidence than what they say.]

Martin: Tell me how it felt to do the interviews yourself. How did that compare with watching the moderated lab sessions?

Dom: It’s harder.

There’s a noticeable difference in quality of conversation, the sort of stuff the participant will offer up when it was me or Joe, versus when it was you.

We were in one room, so I could listen to how we all were getting on.

The script is very useful, but there was time pressure – being worried they’re going to get called in to their appointment.

Part of it is not having the interviewing experience to guide the conversation back to where I need it to be.

In the lab I get a comfortable experience – sitting there, listening, and trying to build insights. When running it myself, I can ask any question I think is appropriate, but in the moment it can be hard to come up with those questions.

Like Paul, I found it very tiring. Speaking to strangers is not my natural forte.

Martin: How about you, Kim?

Kim: Firstly, I was pushed completely out of my comfort zone. It felt quite nerve-racking, at first.

That soon dissipated. I found that people were really keen to talk and help, where they could. It was good to have those conversations face to face.

It’s difficult – I found it difficult to keep the questions open – that’s a challenge. 

Martin: Did you feel me as your conscience, at your shoulder?

Kim: <laughs> Yes!

It’s in your head, it’s a natural thing to lead, to slip things into conversation. Afterwards you say “Why did I say that?!”

It’s really interesting and useful, to see people reading and commenting on your content. You can’t be precious about it – although your natural reaction inside, is to be a bit defensive.

It’s important. You are writing content for people; to see how they react teaches you something. You take that into when you iterate, or when you’re writing your next piece.

Martin: Did you find user testing discouraging when it failed, or encouraging – motivating you to fix things?

Dom: I have this belief that we can’t get anything right to start with, and even after a few iterations, it’s going to be a long way from great. We need to put things in front of people; we find things to make better – often it’s face-palm obvious: “How didn’t we see that?”

That gives me the energy to fix things, rather than being depressing.

It helps if stuff is broadly well received.

We were showing a GP booking and registration journey. Had the idea been poorly received, it would have been depressing.

Martin: What’s the difference between reading an interview transcript, and being the interviewer?

Dom: I’ve learnt most from doing the pop-up testing, closely followed by the lab – because although I’m slightly more removed, the quality of the interview is much higher.

A transcript is of less value, and a report back – a summary analysis of something that I didn’t witness – has less value again.

Obviously when reports are from people I’ve done research with before, I trust them, so consciously I accept their arguments as valid.

But there’s less of that emotional “I know this to be right”.

Martin: About analysis… is interpreting what you saw clear-cut? Is it a challenge?

Paul: We got a lot of different things. It was good coming back together, and discovering that my sample wasn’t totally representative. My six people were just a small part. You had had quite different experiences – some overlap, some different.

I enjoyed that process, of crystallising what we had learned, turning that into action.

Dom: It was helpful that we managed to catch up several times during the day. We spoke to six people each – gradually building up a picture of how the thing we had built was being received, how it was working – and failing. At the end of the day I had a pretty clear idea of what we should change, and how we had reached that decision.

It gave a much clearer idea than analysis for research I wasn’t present at – when someone comes back and says ‘Here’s what we learned and here’s what we’ve got to change.’ That feels more detached.


Martin: We talk about data-driven design. What’s the difference between the soft, squishy things in conversations, and harder numbers, statistics and analytics?

Paul: I don’t think they’re at odds, at all. I think the empathy stuff feeds in.

There’s a misconception that development isn’t creative – but no, you look at a problem, and then you have to jump to a solution.

It really helps, then, if you can apply some empathy.

The next stage – the quantitative bit – is testing if that solution performs.

That leap, from a goal, of booking, to a solution that might work. That’s the bit where having met the user is very useful.

Martin: There’s a question of how much evidence you need. Sometimes people try to hold qualitative interviews to statistically significant standards. Do you feel, when you come back from talking to six people, that it’s… valid?

Paul: Six versus nothing is huge. Six versus twenty? It may be huge, but I don’t know what the other 14 would have said.

Dom:  I wouldn’t want to roll out a national programme on the basis of having spoken to six users and doing no more than that.

But that’s not what we do.

The six is enough to help us correct course, slightly – and make more progress. And then we repeat the process. Do another pop-up research session. Speak to another 18, and then do some online, remote testing.

Martin: So, confidence that we’re getting better is good enough?

Dom: Yeah – we’re not trying to boil the ocean. I don’t think we’re trying to get the perfect answer. A final answer. We’re asking “What is the direction we’re going in?”

We’re taking the insights that we gained from those six, seeing how they interact, and listening to their experiences – but looking at that through the light of the total experience we have. We all have a lot more experience than this project. Collectively, we’ve got a century or more of experience in software product design.

Those users remind us of the bits of our experience we should be pulling on.

Paul: There are diminishing returns. The first six, you get so much feedback – nothing works as you’re expecting it would. The next time you iterate there are smaller surprises.

You start off with this massive gap of uncertainty, then you reduce that to something quite slim, pretty quickly. With the next users it’s more tweaky, little things.

Martin: Has our team taken to the mantra “user research is a team sport”?

Dom: Yes. The lab days go really well. There’s always a lot of energy in the observation room – getting all the transcripts typed up – and jotting down our top-three observations after each session.

We’ve done those two pop-up days in Leeds, there was the diabetes day, and lots of speaking to, and interviews with, clinicians.

Martin: I think the buy-in’s been great. You’ve all made my job really easy…

Kim: But people have been volunteering. People haven’t been told “you have to do this”. You’ve said “Who’s up for this?” and you’ve had more people than you’ve taken.

Dom: Everyone volunteered. After the last pop-up session, you asked who wants to do the next one, and everyone said “me”.

Paul: And that’s been in the same spirit in which everyone’s done everything on this project.

Martin: You’ve not let me write any code!

Dom: You haven’t asked!

Paul: We did invite you…  No, we’ve all done sketching and paper prototyping – and we’ve built things together. It’s been really good.

Martin: Thank you everyone.

