
AI in the public sector: white heat or hot air?

The new Government needs to work out what it wants from AI in the public sector

Imogen Parker, Matt Davies

18 July 2024

Reading time: 10 minutes

[Image: Downing Street and Whitehall building signs]

The UK’s new administration is warming up to the ‘white heat’ of technology. During the election campaign, Labour politicians announced plans to use AI[1] to tackle truancy, support jobseekers and analyse hospital scans. Peter Kyle, the incoming Secretary of State for Science, Innovation and Technology, has spoken warmly about the power of technology to save time and make interacting with public services a ‘more satisfying experience’.

This tempered ambition is welcome, particularly when set against more speculative claims of AI’s ‘truly revolutionary’ potential. And the change of Government is certainly an opportunity to shift public sector AI policy towards longer-term, more strategic ends. But seizing this opportunity will require a clear-eyed account of where we are and a realistic assessment of what makes these technologies work when embedded in public services.

At Ada, we’ve built up extensive evidence over the past five years about how the public sector is using AI on the ground, from NHS and local government use of data analytics right through to AI-powered genomic health prediction, COVID-19 technologies and live facial recognition. Later this year we will be publishing a review of cross-cutting findings from this body of evidence, with lessons for the new Government’s agenda. As a prelude to that, we offer some first steps for the new administration.

The status quo: AI and the public services ‘doom loop’

It’s no surprise that Government is taking an interest in whether AI can help to shore up stretched and struggling public services.

The National Audit Office (NAO) has reported that the UK is wasting ‘tens of billions of pounds on crumbling infrastructure and poorly-run projects’. Recent years have seen local councils placed under increasing financial pressure, while strikes across key services continue to take place in response to low pay and poor conditions. Ageing populations, economic turbulence, COVID-19 legacy problems and the ongoing impacts of an austerity programme that saw the UK make steeper cuts to public spending than most of our European peers are all putting huge pressure on public services and the welfare state.

Set against this daunting inheritance, the rhetoric of an AI revolution that will improve services and administrative efficiency sounds appealing. The new Government faces considerable fiscal and resource constraints, both external and self-imposed, which leave little room for significant uplifts in spending on public services.

If – and it’s a big if – AI can square this circle, then it would be a mistake not to think about how these technologies could be used in the public interest, and the role they could play in reviving public services.

So how exactly could AI help the UK public sector escape the ‘doom loop’ of underfunding and struggling services? Usually, the claims made by proponents of greater public sector AI use fall into one of two categories:

  • Using AI to improve the efficiency or productivity of a service through automation and triage (for example, supporting a civil servant with document analysis), thereby saving time and/or money.
  • Using AI to improve the quality of a service, through innovations in what it is possible to offer (like tools to improve cancer diagnostics) or by providing a more personalised or joined-up service (for example, through a chatbot to guide job seekers into training or employment).

On paper, these claims sound plausible enough. And they are echoed by some working in public services: a rapid evidence review carried out by Ada last year found optimism that generative AI might be able to support tasks such as document analysis, policy design and answering public enquiries.

But are they borne out in practice? To find out, we need to ask three fundamental questions about any deployment of AI. First, does it work? Second, does it work well enough for everyone? And finally, does it work well in context – not just under test conditions, but on the street, in the hospital or in the classroom?

It’s particularly important that we ask these questions of AI deployments in the public sector. The state is responsible for profoundly important decisions related to complex social issues – from decisions about who to prioritise for surgical operations; to the assessment of child welfare, asylum claims and disability benefits applications; to strategic decisions about the country as a whole. Public services are the ‘front of house’ for most people’s engagement with the state – their experience of democracy beyond the ballot box – and so any intervention that alters that experience needs to be considered with care.

Understanding the present: mapping existing uses of AI in government

This is where things get challenging. Despite growing political and media interest, there is little empirical evidence about how AI and other data-driven tools are being used in the public sector. What we have are snapshots: for example, an NAO survey carried out last year found that over a third (37%) of public bodies were actively using AI, and a further 37% were actively piloting (25%) or planning (11%) uses of AI.[2]

This is likely to be an underestimate, but we have no way of knowing because there is currently no common resource or obligation to monitor where and how AI tools are being used across the public sector, even within central Government. As a consequence, no person or team within Government has a systematic understanding or ‘bird’s-eye view’ of what AI tools are being deployed and where, let alone what is working across the public sector. We lack the basic information, and the evaluation of what works, that we would expect of any other major public service intervention.

An urgent priority for the new Government should therefore be to develop a more detailed understanding of where and how AI is being used in government and public services, and what is currently working. Bringing digital, data and AI delivery under one roof in the Department for Science, Innovation and Technology (DSIT) should help with this by improving coordination, but building up a comprehensive picture of AI use in the public sector will require input from across central Government and beyond.

Accordingly, Government should redouble efforts to roll out the Algorithmic Transparency Recording Standard (ATRS) across the public sector. Delivery teams should be required to complete the recording standard, records should be promptly uploaded once an AI tool enters piloting or production, and those records should be kept up to date as the tool is updated, refined or decommissioned. This will support our understanding of what AI tools are being used and where, making scrutiny and evaluation easier.

Building on this, the new, consolidated DSIT should be tasked with an immediate review of the state of AI in government and public services, and with ongoing monitoring and evaluation of AI deployment. To catalyse this, Government could consider giving DSIT stronger levers to ensure departmental buy-in: for example, spend controls over the roll-out of AI in central departments – a model used in the past by the Government Digital Service (GDS) – and a role in scrutinising Spending Review bids.

Cutting through the hype: bringing transparency to AI development

The lack of monitoring and evaluation of AI within the public sector is compounded by the opaque market environment that exists outside of it. Significant information asymmetries exist between the large firms selling AI systems and those procuring, using or building on top of them, including in the public sector. Too often, important information about AI systems – how they work, and their safety and effectiveness when deployed in particular contexts – either does not exist or is kept private.

The ability of public sector decision-makers – from procurers in academy chains and NHS foundation trusts to senior leaders in the Civil Service – to make effective decisions about AI is limited by this one-sided market. They are left overly reliant on sales pitches from private providers, supplemented by the limited investigative research carried out by the media and civil society.

All of this means that while the regulation of AI firms in the private sector is often seen as a separate policy agenda from rolling out AI across public services, the two are in fact deeply linked. Bringing in ‘binding regulation’ on foundation model developers to ensure these systems are tested before coming to market, as committed to in the 2024 King’s Speech, is an urgent first step towards providing the assurance public sector leaders need to procure, deploy and use these models with confidence. As well as establishing a statutory footing for the AI Safety Institute (AISI) and making its access agreements with foundation model developers mandatory, the Government may need to consider giving it an expanded remit to support the testing and piloting of AI products across the public sector.

Alongside regulation, there are complementary measures Government could take that would improve its ability to critically assess the value of different AI products and services. It could set stricter conditions for companies supplying AI products and services to the public sector, giving procurers the powers to demand relevant information about systems and the underlying data on which they are trained. It could invest in closing the skills gap between industry and public sector organisations, enabling more critical engagement with any information provided and with the results of evaluations. And, in the longer term, it could consider establishing independent advice functions to support public sector organisations looking to procure AI technologies, taking inspiration from earlier examples such as the British Educational Communications and Technology Agency.

Building a vision: engaging with the politics of public sector AI

Much of this risks sounding technocratic, and up to a point it is. Many of the challenges associated with AI in the public sector would be eased, if not solved entirely, by getting the basics right. But if there is a single overarching lesson from our research, it is that AI is irreducibly sociotechnical. It influences and is influenced by the social contexts in which it is deployed, often creating unintended and profound ripple effects. When used in government, AI technologies have the potential to transform, for better or for worse, the relationship between people and public services. Conversely, they risk entrenching existing business-as-usual approaches to delivery at the expense of fresh thinking and systemic solutions.

One upshot of this is that rolling out AI across the public sector should not be seen as a ‘quick win’. While there is optimism about the potential for cost savings and productivity gains, there is little evidence yet of these being realised in practice. Quantitative improvements may need to be weighed up against qualitative changes in the experience of those who deliver and use public services, and in some cases approaches such as ‘ringfencing’ may be desirable to safeguard particular tasks from automation.

These aren’t calculations that can be made on a balance sheet, but fundamentally values-based questions about how and for whom the state is run. The answers to these questions ought to be shaped not by hype or vested interests but by political leadership, informed by independent expertise and democratic input. The task is nothing less than to develop a new consensus vision for the state and public services in the AI era.

The views of the public – particularly those of the diverse groups of people who use and work in public services – should be at the heart of this process. Deliberative and participatory approaches to policymaking, coupled with more traditional modes of input such as consulting worker representatives, can help ministers and senior public sector leaders to understand how frontline professionals envisage their roles changing, and what members of the public expect and want from their services.

The skill and craft of public sector AI

The new Government is right to explore how AI could be used for public benefit, just as Harold Wilson once sought to forge a ‘new Britain’ in the white heat of a previous technological revolution. But by the same token, white heat must be handled with care. It takes skill, craft and consideration to forge something useful from it.

All of which is to say that we need to be more precise – more careful – about the role AI can play in Government and public services. Fixing the Government’s knowledge gaps and addressing the steep asymmetries in AI procurement will provide the foundations for the intricate political work needed to forge a new consensus for AI in public services. This work may be less glamorous than the rhetoric of easy wins, revolutionary transformations and off-the-shelf solutions, but – carried out with humility and attentiveness – it could yet yield sustained value for the public.

Footnotes

[1] There is no commonly accepted scientific definition of ‘AI’, and the term is used to refer to a wide range of technologies, from analytics systems used to make predictions and judgements about individuals to so-called ‘general purpose’ AI systems or foundation models. Throughout this blog, when we refer to ‘AI’, we mean this wide variety of systems.

[2] Numbers are rounded.
