All opinions expressed in these episodes are personal and do not reflect the opinions of the organizations for which our guests work.
For the latest episode of FWDThinking, I had a chance to dive into a subject that’s near to my heart: Analytics. Measurement is how you know it’s working—and government needs to work in the open. Getting your metrics right is critical in any sector, but in an era of rapid change and increased scrutiny of the public service, analytics are an essential part of digital transformation.
My guests for this episode were:
- Cori Zarek, a lawyer and journalist who’s worked on civic tech, public policy, and today serves as the Director of the Digital Service Collaborative at the Beeck Center for Social Impact + Innovation. When it comes to metrics, “I think we can achieve the goals that we want in our major institutions better, faster, and cheaper than the ways they’re currently being carried out,” she said. “Those seem like pretty good metrics.”
- Kate Tarling, a user experience designer with a background in digital strategy and product in both the public and private sector. She works with “leadership and delivery teams to help them understand what the services are that they’re offering to people, to understand how well they’re performing and to bring real clarity to what we actually want to happen as a result of that service existing.”
We touched on a wide range of topics: How to make measurements properly; how to tie policies to outcomes; and what the perverse and sometimes unexpected results of policies can be when we overanalyze things. Perhaps most importantly, we discussed the need to ensure that the analytics we use to track progress in the applications we build dovetail with the original intent of the laws and legislation that drove those analytics.
In analytics, vanity metrics are numbers that celebrate meaningless achievements. In the private sector, “number of followers” is a vanity metric—until those followers are willing to do something that has a material impact on the business model.
In the public sector, Kate says, vanity metrics tend to be more about launches and dates and delivery. “It gives a story, it demonstrates productivity, demonstrates progress—it’s a sort of firm story to get behind. But as we know, loads of these things don’t necessarily impact outcomes in the way that we might think of them.”
Instead, she suggests that the way to push back against this tendency is to “give a name to the service, say what the users are doing, and the key user need that is fulfilling, describe the intent of the policy in a short statement.” A simple sentence such as “ensure that the right payments [are] successfully made to people who are vulnerable and eligible” can overcome the tendency towards vanity metrics that don’t actually speak to the efficiency or effectiveness of the service.
Cori says finding these stories can be hard. “We can’t possibly imagine that a large government that has been operating under a patchwork of policies and legislation and directives pieced together and overlapped over a very, very long time horizon, is going to be able to have a very quick succinct business case for every single thing it must do, every single service it must deliver, every single aspect of mission it must execute.” The pursuit of metrics can be tyrannical in this situation.
One of the big reasons for measurement is improvement—but that requires experimentation. “We can’t just stop everything and iterate and test and experiment with something new,” said Cori. “We also have to keep delivering all of the services that people rely on and keep all of those mission-critical balls in the air.” There’s less room for taking risks and testing things out.
Kate does see a big shift in service delivery as we move to digital, though. “That shift is in mindset from ‘well, I’m government. I set the policy. I tell you what to do. You do the right thing. Otherwise you’re going to get in trouble’ to ‘Okay. You need to drive safely. Here’s a way to get a license, to be able to do that.’ That is a service rather than me controlling you from doing something.”
We touched on the tension between politics and metrics; trust in government; the need for better data education; and the importance of multidisciplinary teams. There’s plenty more in this conversation, but I wanted to close with one thing I learned from some of Kate’s past work in defining metrics: Starting with services.
Start with services
Analytics is the analysis of metrics: Did it get better, or worse? How does it compare to similar things? Is it exceeding our expectations? Kate has a great model for thinking about choosing good metrics:
- Define the service, and what “good” looks like
- Split it into stages
- Collect metrics on each stage
Here’s an example, drawing from Kate’s post “Types and stages of services.”
Government services, Kate says, fall into several broad categories: Get permission to do something, start something, stop something, move something, claim something, or become something; learn, share, or check something; provide information; and so on. So first, pick the service in question. What does a successful use of that service look like?
Each service can be split into stages. Getting permission, for example, involves discovery, routing, eligibility, suitability, issuing, and meeting rules, as shown in this table from the 2015 GDS-led ‘Government as a Platform — enabling strategy’ project.
Then ask, “what measurement of this stage indicates whether it’s delivering on its objective?” The result is a set of metrics that measure the effectiveness of the service.
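To make the model concrete, here is a minimal sketch of the three steps in Python. The service, stage, and metric names below are hypothetical examples, not drawn from Kate’s work; the structure simply mirrors the pattern of naming a service, splitting it into stages, and attaching one measurement to each stage.

```python
from dataclasses import dataclass, field

@dataclass
class Stage:
    """One stage of a service, with the measurement that indicates
    whether the stage is delivering on its objective."""
    name: str
    metric: str
    value: float = 0.0  # e.g. a completion or success rate

@dataclass
class Service:
    """A named service, what 'good' looks like, and its stages."""
    name: str
    good_outcome: str
    stages: list[Stage] = field(default_factory=list)

    def report(self) -> dict[str, float]:
        # Collect the metric value recorded for each stage
        return {s.name: s.value for s in self.stages}

# Hypothetical "get permission" service split into some of its stages
licence = Service(
    name="Get a driving licence",
    good_outcome="Eligible applicants are licensed quickly and safely",
    stages=[
        Stage("discovery", "share of applicants who find the right service", 0.92),
        Stage("eligibility", "share of applications from eligible people", 0.80),
        Stage("issuing", "share of approved licences issued within 10 days", 0.95),
    ],
)

print(licence.report())
```

The point of the structure is that no stage is measured by launches or dates: each metric answers “is this stage delivering on its objective?”, so the report as a whole speaks to the effectiveness of the service.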