Guest post co-authored by Pia Andrews & Alistair Croll.
Trust in the public sector is at an all-time low and declining, with enormous ramifications for social cohesion and stability. To rebuild it, we need to start with a trustworthy foundation. It follows, then, that restoring confidence in the public sector needs to involve the public in establishing what trustworthy structures, processes, oversight, and public scrutiny would look like today.
If we don’t create a trustworthy sector, public trust will continue to erode. We’ll become unable to craft policies, implement laws, administer public services, and ultimately run functioning democracies. After all, how can we deliver the public good if we don’t have the public’s trust?
Truth and trust are the basis of the relationship between citizens and their governments. Modern technology has made it easy for a few bad actors—or an enemy nation-state—to undermine these foundations with deepfakes, misinformation, and myriad other tactics that distract us from what’s important to society, and pit us against our own best interests. In the past, we relied on independent media and broadcast models to identify and mitigate such risks, but the Internet bypasses the constraints that traditional media imposed on alternate narratives. Moreover, the amplification of extremist views through algorithms and self-reinforcing echo-chambers has serious implications for public policy, social cohesion, and elections.
The two questions every democracy must tackle today are: 1) What role should the public sector or the judiciary play in trying to help citizens navigate these treacherous, and unexplored, waters? And 2) How can the public sector operate in a way the public finds trustworthy today?
Restoring public trust faces three big challenges: An industrial-era paradigm; the gaming of public opinion; and the opacity of automation.
The industrial-era paradigm
Despite plenty of digital transformation theatre, public institutions maintain centralised, need-to-know mindsets where citizens are treated as outsiders (or at best, “customers”). This leads to self-referential governance, and a pattern of seeking social licence to operate with impunity rather than engaging citizens in the process.
The public sector seeks permission from its denizens. A department first gets public signoff in the form of a “social licence,” and then acts independently from the civil society it serves. The result is citizen disenfranchisement: People feel, and are, left out of the policies, decisions, and services that affect them.
The public sector needs to do more than just deliver better services. It needs to create services and policies that are designed and delivered in a trustworthy way. That starts with more open, engaged, inclusive, equitable design processes where the public acts alongside the government, rather than abdicating its needs to the public sector. Only by building with, rather than for, the public can we restore trust in public institutions.
The gaming of public opinion
The escalation of algorithmic misinformation and the rise of “parallel truths” have perverted many of the systems of policy-making, media coverage, and public engagement that once allowed governments to build consensus among the population.
In the digital world, fakery is indistinguishable from truth. When we can’t verify a fact independently, we rely on our trust of the person or organisation delivering that information—we believe the source rather than the message. Rehashing old content in new contexts, creating false narratives, and using algorithms to generate entirely fake content are easy ways to create alternate truths that have hastened mistrust in traditional institutions such as news media, science, academia and the public sector.
This is only going to get worse. Deepfake technology can automate the creation of believable videos of anyone saying anything, no matter how offensive, outrageous, or, worse, subtly misleading. This heralds a dark age in which saboteurs, criminals, trolls and bots act to game individuals, communities, and entire governments en masse for profit, crime—or just upvotes.
The New Zealand Law Society commissioned an excellent report into deepfakes in 2019, which offered a range of regulatory recommendations worth considering domestically. But it also makes clear that the dominant threat will always be “overseas”, so such laws may not provide much protection on their own. Robust institutions, citizen empowerment, and better tools are also needed.
The opacity of automation
Widespread adoption of technology as a replacement for paper processes, compounded by black-box AI decision-making, has given us impenetrable processes, systems that are hard to audit, and an inability to trace outcomes back to laws. This makes it hard for all citizens—particularly our most marginalised—to appeal, or even understand, the decisions of departments whose job is to serve them.
The thing about digital is that it’s cheap and fast. Unlike physical products (such as a car) or in-person interactions (such as a service call), the marginal cost of another digital user is vanishingly small. And that makes it catnip for cash-strapped governments seeking to rein in spiralling budgets.
There’s a risk, however. As we digitise things, we automate them. And as they become more complex, we apply machine learning and AI to them, relying on algorithms to triage and classify and respond and approve outcomes. In so doing, we make those processes opaque.
Government needs a better vision of what an “augmented public sector” looks like. It must, above all else, have trust and real accountability designed in at the outset. Machines are fine for making services more responsive, proactive, and ubiquitous, but should supplement an empowered and humane workforce. Done right, AI can actually help humanise government services.
But as it stands, increased adoption of algorithms and automation often does just the opposite: Mechanising services and marginalising citizens. Without strong and ethical service frameworks measured according to human outcomes—like the NZ Wellness Framework—we may miss an amazing opportunity to modernise public service in a way that gets humans and machines working together better than either can on its own.
So what can public servants do?
The public sector needs to be more accessible, transparent, responsive to, and engaged with, the people and communities it serves. It won’t be easy: Generating trust is difficult and complex due to the collective experiences and personal nature of the relationships on which trust is built. But we need to do it if we want a civilized, open, free society.
Start with these four questions
One way to make initiatives inherently trust-creating is to have service designers ask four questions of every policy and service they create:
- How would you audit the process and the decisions it makes, in real time, with independent oversight?
- How would an end user appeal a decision?
- How would you know whether this action/process is having a positive or negative impact?
- What do the public and the participant need for you to be considered trustworthy?
Every government system demands a solid answer to all of these questions. Mapping the ‘user journey’ for the first three inevitably forces us to dig into several facets of service design:
- Decision capture: Have we recorded the events leading up to the decision, the data on which the decision was based, and the decision itself?
- Traceability of authority: Can we tie the process and its outcome back to legislation, delegation, or policy?
- Discoverability of decision: How do we communicate the decision to the end user?
- Immediate and long-term timeframes: Can we investigate the decision in real time, at an individual level, when troubleshooting? Can we also aggregate the results in the legislative timeframe to inform policy improvements?
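As a rough sketch, the first two facets—decision capture and traceability of authority—can be thought of as a structured decision record. Everything below is an illustrative assumption (the field names, the `LegalAuthority` structure, the example statute and rule identifiers), not a real government schema:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class LegalAuthority:
    """Ties a decision back to its source of power (traceability of authority)."""
    instrument: str   # the Act or regulation invoked (hypothetical here)
    provision: str    # the specific section relied on
    delegate: str     # who held the delegation when the decision was made

@dataclass
class DecisionRecord:
    """Captures the inputs, rules, and outcome of one decision (decision capture)."""
    decision_id: str
    inputs: dict           # the data on which the decision was based
    rules_applied: list    # identifiers of the rules invoked
    outcome: str
    authority: LegalAuthority
    appeal_route: str      # how the end user can contest the decision
    made_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def audit_view(self) -> dict:
        """Flattened view suitable for real-time, independent audit."""
        return asdict(self)

# Hypothetical example record
record = DecisionRecord(
    decision_id="D-2021-0001",
    inputs={"declared_income": 41_000},
    rules_applied=["rule:income-threshold-v2"],
    outcome="benefit-approved",
    authority=LegalAuthority("Social Security Act", "s 24(1)", "Case Officer, Tier 2"),
    appeal_route="internal-review-then-tribunal",
)
```

A record like this serves the short timeframe (an individual case can be inspected while troubleshooting) and, aggregated, the long one (patterns across records can inform policy improvement).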
Focusing on the user’s experience with auditing and appealing decisions forces you to create inherently trustworthy systems. These questions will ensure that you do so—but they’re still assumptions about trustworthiness. The ultimate test is to ask people what would make a relationship with a particular agency trustworthy (which will vary according to the mandate of the agency) rather than simply asking them to trust you.
Don’t compromise on explainability
There’s a reason Byzantine is both a term for complexity and the name of an empire: Government is complex. It’s the hard work of turning information into collective behaviour. And with technology, that complexity increases dramatically.
As the public sector embraces tools like AI to deliver better outcomes for greater taxpayer value, we need to understand how these technology stacks interact with the institutions of state. We don’t just want a data-driven government—we want an informed and engaged democracy. And this requires that our processes, however complex, must be explainable.
Some critics claim explainability is too hard. They worry that opening the black box of automated decision-making would somehow infringe the intellectual property of its creator; or that it would weaken the competitive position of its owner, or that it would expose users’ private data, or that it would reveal the personally identifiable information in the training data.
Yes, getting data privacy right is hard. But unless algorithmic systems are accountable to those they were created to serve, they’ll be untrustworthy. And every service a public institution delivers must generate trust. Users have to clearly see the basis for the advice and actions of the public service. Explaining a decision is vital if we want to audit, appeal, and maintain the integrity of our public institutions. It’s also critical for ensuring the actions and decisions are lawful, permitted, and correctly executed.
All public sector technology—particularly when it impacts citizens in areas such as service delivery, taxation, justice, regulation, or penalties—must be so explainable that it generates, rather than undermines, trust. To be the trusted advisor to an informed democracy, the public sector has always been required to explain its decisions. Technology is no reason to stop.
A trustworthy digital government checklist
There’s plenty that service designers can do to build services that generate trust. Here’s a checklist:
Traceability and accountability
- Agree on and document the principles and practices of Administrative Law across government, to guide the ethical and transparent use of digital, data and AI practices as they have evolved in recent decades.
- Establish a Better Rules approach for all new legislation and regulation, with publicly available reference implementations of all legislation and regulation as code.
- Create a public record of all decisions, based on what rules were invoked and with what authority to drive ease of auditing, records recall and visibility by individual citizens.
- Build in automated monitoring and escalation of transactional services to ensure compliance with administrative law internally and externally.
- Keep improving. Develop active and continuous feedback loops from delivery back into policy and legislation to drive continuous improvement.
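To make “legislation as code” concrete, here is a toy sketch of an eligibility rule expressed as code, where each function carries the provision it implements so that a public record of decisions can cite the exact rule invoked. The statute name, section numbers, and thresholds are invented for illustration:

```python
# Registry mapping provisions to their reference implementations
RULES = {}

def rule(provision: str):
    """Register a function as the reference implementation of one provision."""
    def wrap(fn):
        fn.provision = provision
        RULES[provision] = fn
        return fn
    return wrap

@rule("Hypothetical Assistance Act s 5(1)")
def income_eligible(income: int) -> bool:
    # s 5(1): assistance is payable where income is below the threshold
    return income < 50_000

@rule("Hypothetical Assistance Act s 5(2)")
def residency_eligible(years_resident: int) -> bool:
    # s 5(2): the applicant must have been resident for at least two years
    return years_resident >= 2

def assess(income: int, years_resident: int) -> dict:
    """Apply each rule and record exactly which provisions were invoked,
    producing an auditable trail back to the (invented) legislation."""
    results = {
        income_eligible.provision: income_eligible(income),
        residency_eligible.provision: residency_eligible(years_resident),
    }
    return {"eligible": all(results.values()), "provisions": results}
```

A real reference implementation would, per the Better Rules approach, be published openly alongside the legislation it encodes, so citizens and auditors can check the code against the law.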
Measurably good human outcomes
- Don’t ask for trust; build trustworthy systems together. Engage with diverse communities to create measurement frameworks, to co-design policies and services, and to ensure programs and delivery align with public values and the public good.
- Build human systems from the start. Create and implement Government Service Standards that embed and normalise human outcomes.
- Measure the outcomes that matter. Proactively monitor quality-of-life outcomes at both the process and line-of-business/service scale, and link all activities to purpose, human outcomes and policy intent in a publicly accessible framework. Implement something like the NSW Human Services Outcomes Framework across government, including in service delivery measures, budgets, business case analysis, prioritisation frameworks, and policy assurance.
Be smart about AI and automation implementation
- Get ready for machines to talk to machines. Always assume machines will be “users” of your services, and plan for both good and bad examples.
- Measure the impact of algorithms. Complement the Algorithmic Impact Assessment with a measurement of human impact based on quality of life, and monitor for changes that indicate improving or worsening life indicators to identify harm. Require quality-of-life measures to be part of budget bids and project reporting, and actively plan for “good” machine usage while mitigating “bad” machines.
- Assume humans can’t do it by hand: Technology scales. That also means when it breaks, it breaks at scale, and humans can’t easily intervene. Include real-time monitoring for patterns in government services and policy interventions.
- Get agile. Use agile, test-driven, and scalable techniques to create a policy-service spectrum that meets the evolving needs of the public.
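The “real-time monitoring” point above can be sketched concretely: a simple drift monitor on decision outcomes that flags a service for human escalation when its approval rate moves sharply away from an expected baseline. The window size, baseline, and tolerance below are invented parameters, and a real deployment would monitor many more signals than a single rate:

```python
from collections import deque

class OutcomeMonitor:
    """Flags when the approval rate of an automated service drifts sharply,
    so humans can intervene before a fault scales."""

    def __init__(self, window: int = 100, baseline: float = 0.8,
                 tolerance: float = 0.15):
        self.recent = deque(maxlen=window)  # rolling window of recent outcomes
        self.baseline = baseline            # expected long-run approval rate
        self.tolerance = tolerance          # acceptable deviation from baseline

    def record(self, approved: bool) -> bool:
        """Record one decision; return True if the service needs escalation."""
        self.recent.append(1 if approved else 0)
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough data yet to judge drift
        rate = sum(self.recent) / len(self.recent)
        return abs(rate - self.baseline) > self.tolerance
```

Because technology breaks at scale, a monitor like this is less about catching single bad decisions than about noticing when an entire automated process has gone wrong at once.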
Safe & ethically motivated teams and organisational structures
- Thinking counts as work. Make thinking through hard problems mandatory, rather than settling on the most expedient approach. Simple tactics like building a ten percent innovation time factor into “business as usual” operations are a good start.
- Teach critical thinking skills. Create situational awareness of emerging trends, and respond strategically and in a timely way in the interests of the public. Teach forecasting and critical thinking.
- Choose good governance. Evolve proactive and collaborative governance to implement Te Tiriti o Waitangi and empower service owners.
- Develop “by default” mindsets. Pick your fundamental requirements—accessibility, trustworthiness, openness, and so on—and create systemic incentives that drive teams towards those outcomes by default.
- Encourage candour. Help teams build cultures that value peer review, transparency, and a sense of purpose.
- Tie executive KPIs to human outcomes. Include human measures in executive performance metrics, mandates, and reporting for agencies to help nudge good decisions.
Our job is to generate trust
In an era of unprecedented mistrust of public institutions, it’s not enough that a service is faster, or more cost-effective. The only way we will restore trust is to create systems that generate trust itself, and that starts with involving citizens in service design, demanding explainability, and solving for meaningful human outcomes.
After all, the first job of government is to create trust. Without that, nothing else matters.