Report

Buying AI

Is the public sector equipped to procure technology in the public interest?

Anna Studman, Mavis Machirori

1 October 2024



Executive summary

Public services in the UK are stretched and struggling. A record number of local authorities have declared effective bankruptcy,[1] largely as a result of central Government funding cuts, compounded by other factors including the COVID-19 pandemic and an increased demand for statutory services.

Similar financial strains on the NHS have left it facing serious resource challenges. Across the public sector, thousands of staff have been striking in response to low pay and poor conditions.

Against this backdrop, policymakers have been raising expectations about the potential role of AI in the public sector. Government departments and public-sector organisations – including local authorities – are considering how AI and data-driven systems could help address societal problems such as the cost-of-living crisis, as well as enable innovation or improve efficiency within government at all levels.[2]

While there is optimism around the potential for AI to enhance public services, the understanding and adoption of these technologies are at a relatively immature stage. The use of AI in the public sector must be carefully assessed to ensure it is fit for purpose and used with public legitimacy.

As the public sector is responsible for spending public money and delivering statutory services, it is subject to higher levels of scrutiny and accountability than the private sector, particularly around issues such as legitimacy, trust, fairness and equality. Higher levels of transparency and explainability are required regarding important decisions about public services such as welfare, healthcare and education.[3]

Most AI and data-driven systems are created by, or in partnership with, the private sector, which is less incentivised (than the public sector) to focus on improving societal benefits and mitigating harms. Throughout our past research on AI and data-driven systems in public services – spanning digital healthcare in the NHS, local government use of data analytics, and AI in education – procurement has emerged as an important process for scrutinising technology.

It can help anticipate and mitigate potential risks, to ensure any use of AI is effective, proportionate, legitimate and in line with broader public-sector duties. However, there are high-profile examples where the public sector has struggled to successfully use procured technology, and the evolving nature of AI systems raises new questions for public-sector scrutiny and confidence.

Ada has undertaken research to explore whether procurement processes in the public sector are fit for purpose when it comes to AI, and to identify where and how they could be strengthened. This is based on a document analysis of guidance and legislation, and workshops with experts and practitioners involved in public-sector procurement.

This first paper presents the findings from the document analysis, which examines what the existing guidance and legislation say about how public-sector procurement decisions can be made with consideration of principles such as equity, fairness, public engagement and community impact. This is explored in the local government context.

A review of guidance, legislation and policy documents

In this paper, we consider the procurement of newer AI technologies such as generative AI, and also of data-driven systems that are more widely used by local government, such as predictive analytics and automated decision-making tools.

Although the effective procurement of AI and data-driven systems is essential across the public sector, this review focuses primarily on local government in England and the information available to staff about how to consider safety and ethics when procuring these technologies.

However, given that private-sector companies are a key part of technology provision across the public sector, many of our findings are applicable to public-sector technology procurement as a whole.

This project was completed before the UK general election in July 2024. All Government guidance and policy documents in our analysis were published under the 2010–24 Conservative Government. They were identified through scoping work with those involved in procurement processes in the public sector.

Through reviewing 16 pieces of guidance, legislation and policy documents (‘the documents’) relating to the procurement of AI and data-driven systems, we found that local government does not have access to a clear or comprehensive account of how to procure AI in the public interest.

This leaves a significant burden on local government to navigate and interpret different parts of the guidance and legislation, and to determine the practical implementation of themes like transparency, fairness and public benefit.

Various risks arise from the use of AI and data-driven technologies. These range from contributing to poor labour practices and environmental harms in the supply chain, to biased outputs and the spread of misinformation.[4] Getting procurement right is a prerequisite for mitigating these risks and leveraging AI for the improvement of public services.

However, without improved cross-cutting support from central Government on how to implement procurement guidance, local government faces a challenge when making decisions about procuring AI and data-driven systems.

Findings

The full list of guidance, legislation and policy documents in our analysis can be found in the Methodology chapter. Excerpts relating to identified themes have been listed in the Appendix.

  • Based on the information about procurement that is available to local government, we found that many different terms are used to measure societal benefit throughout the procurement guidance and legislation. We grouped these terms under five key themes:
    • (In)equalities / fairness
    • Transparency
    • Public engagement
    • Public benefit / social value
    • Impact assessments

These themes relate to societal benefits from different angles: some refer to outcomes of technology (inequalities or public benefit / social value) and some refer to mechanisms that could enable these outcomes (transparency, public engagement, impact assessments).

  • The guidance available to local authorities lacks specificity about how and where to operationalise these themes for societal benefit. For example, public-sector organisations are legally responsible for considering questions of inequalities and fairness, but it is unclear how to build this into the procurement process. Additionally, legal obligations – for example, under the UK General Data Protection Regulation (GDPR) or the Public Sector Equality Duty (PSED) – and Government guidance – for example, A guide to using artificial intelligence in the public sector – do not always align.[5] This leaves organisations to navigate multiple processes that are not always complementary. There is also little practical guidance for local government teams on how to engage suppliers in conversations about possible social impacts of their technologies, by requesting access to underlying data for testing, for example. There are also no clear structures for supplier accountability.
  • This is further complicated by a lack of clarity on the definitions of key terms, including ‘AI’. In our document analysis, there were 14 terms that related to fairness, eight that related to transparency and more than 50 terms overall that related in some way to achieving societal benefit. These were sometimes used interchangeably, sometimes in conjunction with each other, and some were better defined than others. This can make it difficult for procurement teams to know how to assess these technologies.

Conclusions

It is crucial to improve practices around the procurement of AI and data-driven systems in local government to help ensure that technology works equitably for people and society.

Procurement decisions can have significant implications for how people access and experience public services in the UK. When faced with limited human and financial resources – against the backdrop of rapidly evolving technology and enthusiasm about the potential of AI to improve public services – procurement teams must ensure that procured technologies will benefit the public and the public sector.

We acknowledge the extremely difficult financial situation faced by many local authorities, and understand the potential challenges of embedding a robust, ethical procurement process under existing resource constraints. But it is important to also consider the cost of not doing this, financially and ethically. This cost has been demonstrated most recently by the Post Office–Horizon scandal and by procured technologies that have caused harm in high-risk settings, including visa decisions, child welfare allocation and fraud prediction.[6]

AI and data-driven systems might appear to reduce administrative burden, for example by automated decision-making, but can severely damage public trust and reduce public benefit if the predictions or outcomes they produce are discriminatory, harmful or simply ineffective. Procurement teams must take this into consideration, even when faced with imperatives to innovate or keep costs down.

These negotiations are often taking place in the context of an imbalance of expertise between private companies and under-resourced local authorities. This makes it even more important to have clarity around guidelines and responsibilities, and enforceable redress. As explained in the Findings chapter, procurement teams need clearer support so that they can procure AI that is effective and ethical.

This paper provides practical steps for improving procurement of AI and data-driven systems in local government. These include:

  • Reviewing and streamlining Government guidance on procurement of AI and data-driven systems.
  • Gaining consensus on definitions, leveraging existing data ethics frameworks and Government AI regulatory principles to clarify and consolidate relevant terminology.
  • Improving governance, including the planned rollout of the Algorithmic Transparency Recording Standard and implementing the Government’s AI regulatory principles.
  • Piloting an Algorithmic Impact Assessment Standard for local government to use when procuring AI and data-driven systems.[7]
  • Setting out metrics for success at procurement stage that technologies can be assessed against post deployment.
  • Clarifying when and how to engage with publics and experts in this process.
  • Supporting local government to upskill teams to ensure effective AI use and auditing.
  • Enabling transparency mechanisms so local government teams and suppliers have clarity and coherence on what transparency means for them, and procurers are equipped to engage with suppliers.
  • Defining responsibilities across the AI procurement process, including between public- and private-sector actors.

These initial findings will be developed in a second (forthcoming) output, based on qualitative and collaborative research with procurement stakeholders from across the public and private sectors. This output will describe the barriers to effective procurement and will include recommendations to help local governments make procurement decisions that lead to positive social impact.

How to read this paper

…if you’re involved in AI or procurement in local government:

  • Read the Executive summary and Conclusions to understand our findings at a glance to help inform your procurement practices.
  • The Methodology chapter describes the legislation and guidance we reviewed and our approach to analysis in more detail.
  • Read the sections on What the documents say for an overview of each theme identified in the guidance and legislation.
  • Read ‘Implications’ under each theme heading for a detailed discussion of the omissions and inconsistencies in the guidance and legislation.
  • The Definitions of AI section provides a sense of the lack of concrete descriptors for this range of technologies.

…if you’re involved in AI or procurement in central Government:

  • The Executive summary, Introduction and Conclusions provide an overview of the AI procurement landscape and the key challenges. These sections also describe the document analysis.
  • Read the sections on What the documents say for an overview of each theme identified in the guidance and legislation.
  • Read ‘Implications’ under each theme heading for a detailed discussion of the omissions and inconsistencies in the guidance and legislation.

…if you are a researcher interested in AI and public services:

  • The Executive summary and Introduction discuss why we see procurement as an important lever for ensuring that AI is adopted responsibly in public services.
  • The Methodology chapter describes the legislation and guidance reviewed and our approach to analysis in more detail.
  • The Conclusions chapter and the section on Next steps outline unanswered questions and suggest areas for further research.

Introduction

Public services in the UK are at crisis point: the Institute for Government (IfG) has warned that short-term policies are contributing to a ‘doom loop’ as a result of capital underinvestment, funding cuts and resultant strike disruption.[8]

Some local government budgets have reduced by half since 2010,[9] with some councils declaring effective bankruptcy and many others on the precipice.[10] The IfG reported that there were more Section 114 (bankruptcy) notices in 2023 than in the 30 years before 2018. And a survey from the Local Government Association (LGA) showed that almost one in five councils think it is ‘very or fairly likely that [they] will need to issue a section 114 notice this year or next due to a lack of funding to keep key services running.’[11]

Still, local government has an obligation to provide crucial services to residents, many of whom face growing challenges such as income from work or benefits not keeping pace with the true cost of living,[12] and declining health outcomes compared to a decade ago.[13]

The Levelling Up, Housing and Communities Committee (now the Housing, Communities and Local Government Committee) said that while local authorities’ resources have shrunk, demand is rising for their services, such as adult social care, child protection, homelessness and special education needs. It called the current local government funding system ‘broken’.[14]

To many of these problems, digital transformation – and AI and data-driven systems in particular – is seen as a promising solution. While the Institute for Public Policy Research (IPPR) has predicted that public services in the UK will not recover to historic levels of access or performance until the 2030s, it argued that rolling out AI tools like ChatGPT could save billions of pounds.[15] The Society for Innovation, Technology and Modernisation (Socitm) said in its Digital Trends 2024 report that ‘generative AI […] and large language models promise huge value to the public sector’, representing a ‘radical shift’ in how data can be harnessed.

If AI in the public sector is to realise these ambitions, it is crucial to look at how the systems are being procured and how we can ensure they are working effectively. The procurement stage provides an important opportunity for local authorities to interrogate suppliers on the possible societal impacts of their technologies.

The National Audit Office has highlighted the importance of procurement in reforming public services, particularly noting that ‘maximising the Government’s buying power in [the IT market] dominated by global giants is essential.’[16] In its report, Use of artificial intelligence in government, it noted that ‘building assurance in public procurement of AI is a way of ensuring AI risks are mitigated’.[17]

One challenge to mitigating these risks is that it is typically private companies that are supplying AI solutions to the public sector (except for comparatively rare instances where AI technologies are built in-house). These companies often have a fiduciary duty to their shareholders and do not have the same incentives as public bodies for considering societal benefit.

To ensure that AI in public services works in the interest of people and society, procurement guidance and legislation must be fit for purpose. In scoping this project we spoke to people involved in public procurement across local and central government and the NHS. A salient challenge emerged around the utility and cohesiveness of procurement guidance in the age of AI.

Our findings show that without improved cross-cutting support from central Government on how to implement procurement guidance, local authorities face a challenge when making decisions about procuring AI and data-driven systems.

In this paper, we consider the procurement of newer AI technologies such as generative AI, and also of data-driven systems that are already in use by some local authorities, like predictive analytics and automated decision-making.

Given that private-sector companies are a key part of technology provision across the public sector, many of our findings are applicable to public-sector procurement more broadly.

The importance of getting procurement right

Getting the procurement process right for AI is challenging. This is in part because terminology in this area is contested: there is no clear consensus even on how AI is defined. There is also not yet a standardised approach for testing and evaluating AI and data-driven systems.[18]

Many of these challenges are not new and represent fundamental issues around data ethics, transparency and trust in technology. Indeed, automated decision-making and predictive analytics systems are already in use across the public sector[19] and present many of the same issues as newer technologies such as generative AI.

Research from the Data Justice Lab in 2022 found that the use of automated decision-making systems in public services (for example, risk-based verification for benefits claims) could exacerbate inequalities. It found that greater transparency is required around how the systems make decisions and what data is used.[20]

We have already seen examples in public-sector procurement of ‘simpler’ data-driven systems having adverse effects in communities.

Examples include North Tyneside Council’s now-discontinued predictive system for checking benefits, which wrongly identified some low-risk claims as high-risk, and Hackney Council’s Early Help Profiling System, which was dropped as it ‘did not deliver expected benefits’.[21] The Metropolitan Police recently decommissioned their Gangs Violence Matrix, which was criticised for over-representing young Black men.[22] In these cases, harms resulted specifically from data quality and algorithm design that did not accurately reflect reality.

More complex AI systems using larger amounts of data will make oversight and accountability even more difficult. For example, while it might be relatively straightforward to understand the decision-making process of a risk-prediction algorithm, the processes used by a generative AI chatbot that is powered by a large language model are more complex.

AI-related guidance and documents issued by central Government acknowledge that AI and data-driven systems present additional challenges in ensuring ethical public-sector procurement. This is also evidenced by previous Ada Lovelace Institute research – including Foundation models in the public sector,[23] Mission critical,[24] and A knotted pipeline [25] – that demonstrates the need for careful planning and deployment to ensure that AI in the public sector results in equitable and beneficial outcomes for communities.

Document analysis

We conducted a document analysis of 16 pieces of guidance, legislation and policy documents to understand whether local authorities have sufficiently clear and comprehensive documentation to support the procurement of AI and data-driven systems that will benefit people and society. The full list of documents can be found in the Methodology chapter.

This project was completed before the UK general election in July 2024. All Government guidance and policy documents in our analysis were published under the 2010–24 Conservative Government.

Throughout this paper, the legislation and guidance in our analysis are collectively referred to as ‘the documents’.

Though we reference local government throughout, we are aware of the significant variation when it comes to capabilities and resource for digital transformation.[26] Our research highlights overarching principles and dynamics that can be applied across local government and the public sector in general.

This paper is the first output from an Ada Lovelace Institute project on procurement of AI and data-driven systems in local government. It explores how procurement decisions by local government can be made with consideration of principles such as equity, fairness, public engagement and community impact. A second output will be based on collaborative research with procurement stakeholders and will focus on how to operationalise these principles into day-to-day processes.

Glossary

AI: AI is not a precisely defined group of technologies. Throughout the paper, we use ‘AI and data-driven systems’ as an umbrella term to encompass the technologies outlined below.

Types of AI

Data-driven systems: A range of technologies including advanced data analytics, predictive analytics and algorithms.

Automated decision-making: A function of technology that uses data and algorithms to produce decisions, predictions or outputs without human input.

Foundation models: Foundation models, sometimes called a ‘general-purpose AI’ or ‘GPAI’ system, are capable of a range of general tasks (such as text synthesis, image manipulation and audio generation). Notable families of foundation models are Google’s Gemini 1, Anthropic’s Claude 3 and OpenAI’s GPT-4. The latter underpins the conversational chat agent ChatGPT and many other applications via OpenAI’s application programming interface (API). Foundation models are designed to work across many complex tasks and domains, and can exhibit complex, unpredictable and contradictory behaviour when prompted by human users.

Generative AI: A type of AI system that can create a wide variety of data, such as images, videos, audio, text and 3D models. Some generative AI applications are built ‘on top of’ foundation models (like OpenAI’s DALL-E image generator).

Large language models (LLMs): LLMs are trained on significant amounts of text data and can generate natural language responses to a wide range of inputs. They are the basis for most of the foundation models we see today (though not all, as some are being trained on vision, robotics, or reasoning and search, for example), performing a wide range of text-based tasks such as question-answering, autocomplete, translation and summarisation, in response to a wide range of inputs and prompts.

Findings

In this chapter, we summarise what the documents say about each theme identified through our document analysis, and highlight gaps, inconsistencies and challenges.

Definitions of AI

There are a range of definitions of AI in the documents. These include descriptions of types of AI systems (such as foundation models), technical capabilities (such as pattern recognition in large datasets), and illustrative examples (such as screening CVs for job hires).

What the documents say about definitions of AI

The 2010–24 Conservative Government’s AI White Paper, A pro-innovation approach to AI regulation, acknowledges that there is a need for a common definition of AI to ensure effective regulation.

The paper is not formal guidance, but does play a role in showing how the Government has most recently approached definitions of AI. For example, two defining characteristics are identified: adaptability, which refers to the training of algorithms through learning from data patterns; and autonomy, which refers to AI systems making decisions without direct human control.[27]

Similarly, in Assessing if artificial intelligence is the right solution – from the Department for Science, Innovation and Technology (DSIT), the Office for Artificial Intelligence (OAI) and the Centre for Data Ethics and Innovation (CDEI)[28] – a variation in functions is recognised.

This guidance says: ‘There is no one “AI technology” […] currently, widely available AI technologies are mostly either supervised, unsupervised or reinforcement machine learning. The machine learning techniques that can provide you with the best insight depends on the problem you’re trying to solve.’ It then provides a more detailed explanation of different machine learning techniques, applications and examples. It cautions that: ‘AI is not an all-purpose solution. Unlike a human, AI cannot infer, and can only produce an output based on the data a team inputs to the model.’[29]

DSIT, OAI and CDEI’s Understanding artificial intelligence focuses on machine learning, the most widely used type of AI,[30] which involves digital systems improving performance on tasks over time through experience. AI is initially defined in a broad sense: ‘The use of digital technology to create systems capable of performing tasks commonly thought to require intelligence.’[31] The document then describes its technical capabilities as ‘machines using statistics for pattern recognition in large datasets, and the independent performance of repetitive tasks which reduces constant human guidance’, but acknowledges that the field is constantly evolving.

DSIT, OAI and CDEI also published Understanding artificial intelligence ethics and safety alongside the guidance above.[32] The definition of AI focuses on the shift in tasks from humans to AI systems. It also defines four ethical principles of fairness, accountability, sustainability and transparency and includes suggestions of how these may be applied in practice. However, these suggestions are limited and vague. For example, regarding fairness, one suggestion is to ‘use only fair and equitable datasets’. It otherwise directs readers to more detailed guidance of the same name from the Alan Turing Institute.[33]

The OAI’s Guidelines for AI procurement begins by positioning AI as a ‘set of technologies that have the potential to greatly improve public services by reducing costs, enhancing quality, and freeing up valuable time for frontline staff’.[34] The definition focuses on machine learning, highlighting ‘the development of digital systems that improve their performance on a given task over time through experience’.

Implications: Definitions of AI

As there is no general consensus on the definition of AI, the documents tend to define AI by describing a collection of functions, applications or characteristics, and by providing example use cases. Though this method is illustrative and helpful within each document, it differs slightly between the documents – which means there is no cohesive reference point stating what ‘counts’ as AI.

This lack of a clear and consistent definition of AI could make it difficult for procurers to assess or categorise the technology they are aiming to acquire, consequently making it difficult to identify relevant guidance.

Additionally, in Assessing if artificial intelligence is the right solution,[35] the lack of a concrete explanation of machine learning – compared to Understanding artificial intelligence[36] – requires procurers to cross-reference different pieces of guidance for a complete picture.

Providing illustrative case studies, such as those in Guidelines for AI procurement,[37] could also complement any definition of AI, helping procurers situate the technology in real-life applications and understand how procurement processes should adapt in relation to AI and data-driven systems.

Definitions of AI should also include explanations of training data – the data an AI model learns from during development – and how this might affect outputs. And while Guidelines for AI procurement mentions that AI systems improve ‘over time through experience’,[38] there is no explicit recognition that this process of continuous learning may also lead to an AI system’s outputs changing or its capabilities and uses developing beyond the original purpose.

This can happen through ‘model drift’ – where the quality of outputs gets worse because the input data no longer reflects reality (for example, due to a shift in circumstances like the start of the pandemic). It can also happen through ‘scope creep’, where a system is implemented for one purpose and is then used beyond its intended scope – for example, facial recognition tools being used for emotion recognition. It is important that post-deployment considerations, including risk of scope creep or evolving capabilities, are made clear to procurers so this can be accounted for when drafting contracts.
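To make the idea of model drift more concrete, the sketch below shows one common way a team could check whether a system’s live input data still resembles the data the model was validated on, using the population stability index (PSI). This is our own illustration, not drawn from the guidance: the data, variable names and alert threshold are all hypothetical assumptions.

```python
# A minimal sketch (assumed, not from the report) of drift monitoring using the
# population stability index (PSI), a common measure of distribution shift.
import numpy as np

def population_stability_index(baseline: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Compare the live input distribution against the validation-time baseline."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    live = np.clip(live, edges[0], edges[-1])  # treat the end bins as open-ended
    baseline_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    baseline_pct = np.clip(baseline_pct, 1e-6, None)  # avoid division by zero
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - baseline_pct) * np.log(live_pct / baseline_pct)))

rng = np.random.default_rng(0)
validation_data = rng.normal(50, 10, 5_000)  # e.g. a model input at validation time
live_data = rng.normal(58, 12, 5_000)        # the same input after circumstances shift

psi = population_stability_index(validation_data, live_data)
if psi > 0.25:  # a commonly used, but context-dependent, alert threshold
    print(f"PSI = {psi:.2f}: inputs no longer reflect validation data - review the model")
```

A contract could, for instance, require a check of this kind to run at agreed intervals, with results reported to the procuring authority, though the guidance reviewed here does not prescribe any particular method.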

As we have noted, many of the ethical challenges and considerations around AI are closely related to data ethics challenges. So, when thinking about cohesive and useful definitions of AI, the link between data and AI must be made explicitly. This was not always clear across the guidance. If AI is defined as a separate category to data analytics, this could make it harder for local authorities to know what information is most relevant.

(In)equalities / fairness

The use of AI in public services can lead to inequalities and unfair outcomes for different groups. For example, there may be unequal access or poorer quality services for groups underrepresented in training data.[39] Fairness is a term commonly cited in guidance on AI and data-driven systems, when discussing mitigating against these harms.

We have considered these two themes together – (in)equalities and fairness – as they are often used in reference to similar aims: ensuring equitable impact and mitigating biases.

Local authorities procuring AI and data-driven systems have obligations towards equalities and fairness under law, primarily through the UK General Data Protection Regulation (GDPR) and the Public Sector Equality Duty (PSED). But across the documents reviewed in this study, definitions of inequalities and fairness are contextual and not fixed.

What the documents say about (in)equalities / fairness

A pro-innovation approach to AI regulation,[40] the AI white paper, refers to fairness, highlighting ways to assess and ensure it: ‘Actors involved in all stages of the AI life cycle should consider definitions of fairness that are appropriate to a system’s use, outcomes and the application of relevant law […] including equality and human rights, data protection, consumer and competition law, public and common law, and rules protecting vulnerable people.’[41]

The Information Commissioner’s Office (ICO), in its guidance on AI and data protection, challenges the notion that fairness can be achieved by technology in isolation from social context. It recommends thinking about factors like ‘the power and information imbalance between you and individuals whose personal data you process’ and ‘the underlying structures and dynamics of the environment your AI will be deployed in’.[42]

The Central Digital and Data Office (CDDO) guidance, Data Ethics Framework, provides questions that should be asked – for example, about an algorithm or AI foundation model – to ascertain its fairness. This includes questions about unintended consequences, possible impacts on human rights, and ongoing monitoring of outcomes. It explicitly connects the idea of fairness with obligations under the PSED – that ‘data analysis or automated decision making must not result in outcomes that lead to discrimination as defined in the Equality Act 2010’.[43]

Some guidance documents draw links between questions of inequalities and fairness, and steps in the procurement process – for example, the Equality and Human Rights Commission (EHRC) guidance, Buying better outcomes, discusses embedding equality considerations into the procurement process.

It advises embedding equality requirements in contractual specifications, stating, ‘the specification could require year-on-year improvements, such as increased take up of services by people with certain protected characteristics who were previously under-represented amongst users’.[44]

There are also recommendations in Buying better outcomes for how to assess equality requirements at the pre-qualification questionnaire and invitation-to-tender stages, and through supplier method statements, which can be requested from suppliers to demonstrate their understanding of equality criteria and how they propose to deliver on them.[45]

OAI’s Guidelines for AI procurement suggests ‘developing an internal AI ethics approach, with examples of how it has been applied to design, develop, and deploy AI-powered solutions’, as well as having ‘processes to ensure accountability over outputs of algorithms’ and ‘avoiding outputs that could be unfairly discriminatory’.[46]

Implications: (In)equalities / fairness

The documents include a broad range of definitions of and perspectives on fairness and inequalities – with some lacking definitions of these terms altogether. This means it may be challenging for local authorities to clarify responsibilities and timelines around ensuring that fairness and equality measures are considered in the procurement of AI and data-driven systems.

In some instances, ‘fairness’ and ‘equality’ are used to refer to similar aims within the guidance, but they have different legal implications. For example, under the PSED, the terms can relate to ensuring equality for groups with protected characteristics. UK GDPR refers to fairness in processing data. The terms can also be used outside legal compliance – for example, procurers can require that suppliers audit for algorithmic fairness.

In the UK, case law shows that the assessment of bias that can lead to inequalities is ultimately the responsibility of public authorities, as the PSED is a non-delegable duty. This was shown in the case of R (Bridges) v Chief Constable of South Wales, a challenge to police use of live facial recognition technology in crowds.

The Court of Appeal held that the police had failed to comply with the PSED as it could not rely on the technology manufacturer’s assessment of the fairness of the system. In this case, the manufacturer did not provide access to the datasets used to train the algorithms. The datasets should have been supplied to the police so they could assess whether there was bias relating to race or sex in the operation of the software.[47]

Reading the documents, it is sometimes difficult to differentiate between legal obligations and best practice. Clarity around these differences would be useful for procurement teams.

The documents in our analysis do not specify how to incentivise suppliers to review how their technologies impact on fairness and inequalities. There is little information on how local government can effectively hold suppliers to account in this area. More support and guidance on this is needed, and regulation could help strengthen the position of procurement teams in negotiations with potential suppliers.

The Data Ethics Framework recommends ‘[being] aware of fairness issues throughout the design and implementation of a model,’ and ‘[ensuring] that the project and its outcomes respect the dignity of individuals, are just, non-discriminatory, and consistent with the public interest’.[48] But if suppliers are not transparent about development processes, local authorities may not be able to effectively assess these for fairness and inequality.

When local authorities procure bespoke systems, suppliers should provide information about the design and development of the underlying technology. Effective procurement processes could help facilitate this, by ensuring suppliers are transparent about the design and development of their technologies, and risks of any associated harmful impacts (see our section on Transparency).

This raises the question about technical expertise within local government, and how procurement teams can be equipped and empowered to interrogate suppliers at procurement stage about the impacts of their technology and routes for redress. We see this as a gap in current guidance.

The Electronic Privacy Information Center (EPIC) in the USA recommends that public-sector procurers ‘include specific AI testing or auditing provisions [in contracts] targeting common sources of AI errors: biased or unrepresentative training data, unreliability across different use contexts, and loss of model accuracy over time’.[49] It suggests that stronger language in contracts could be an important part of embedding equality considerations and rebalancing power between suppliers and procurers.
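As a sketch of the general idea behind such a testing provision (our illustration, not EPIC’s wording), a contract could require a supplier’s model to pass an error-rate parity check on a held-out dataset that the authority controls. The function, data and tolerance value below are hypothetical.

```python
# A hypothetical acceptance test a contract clause could require: the model's
# error rate must not differ materially across demographic groups.
from collections import defaultdict

def error_rate_gap(records, tolerance=0.05):
    """records: iterable of (group, predicted, actual) tuples from a held-out set."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        errors[group] += int(predicted != actual)
    rates = {g: errors[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap <= tolerance

# Illustrative data only: (group label, model prediction, true outcome).
held_out = [("A", 1, 1), ("A", 0, 0), ("A", 1, 0),
            ("B", 1, 0), ("B", 1, 0), ("B", 0, 0)]
rates, gap, passed = error_rate_gap(held_out, tolerance=0.1)
print(rates, f"gap={gap:.2f}", "passed" if passed else "failed")
```

What counts as an acceptable gap, and which groups and error measures to use, are contextual judgements that would themselves need to be specified in the contract.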

The guidance could more explicitly encourage running equality impact assessments at the beginning of the process, to help teams consider whether using these technologies is justifiable, or if there is an alternative that could be used instead.

Transparency

Transparency – about training data, funding, development and deployment – is crucial to ensure proper oversight of AI and data-driven systems. Definitions of transparency shift with context. There are variations across the guidance and legislation about affected stakeholders and the purpose of transparency.

What the documents say about transparency

A stated objective of the Procurement Act 2023 is to improve transparency to ensure fair competition for suppliers and ‘provide the public with insight into how their money is being spent.’[50] The act, which is planned to come into effect in October 2024, is intended to ‘create a fully transparent procurement system’.[51] The stated purpose of the legislation is to ensure that everyone will ‘be able to view, search and understand what the UK public sector wants to buy, how much it is spending, and with whom,’ and it will ‘drive value for money’.[52]

In OAI’s Guidelines for AI procurement, transparency is similarly described primarily as ensuring fair competition for suppliers, avoiding vendor lock-in, and ensuring internal teams understand how the technology they are procuring works.[53]

In ICO guidance on AI and data protection, the idea of transparency relates to informing people about the provenance of data used for a model, how the model makes decisions and how to contact a human for a review of a decision.[54]

DSIT, OAI and CDEI’s Understanding AI ethics and safety guidance states that teams should be able to ‘explain to affected stakeholders how and why a model performed the way it did in a specific context, and justify the ethical permissibility, the discriminatory non-harm, and the public trustworthiness of its outcome and of the processes behind its design and use’.[55]

Guidelines for AI procurement also notes that being clear about what data these systems will use is central to achieving transparency.[56] It advises that local government teams should set clear requirements in the procurement process to ensure that they have access to training data and information about how a model works.

Under the PSED, local authorities must ‘publish equality information at least once a year to show how they’ve complied with the equality duty’.[57] The EHRC, in its guidance Buying better outcomes, explains that, in procurement, this monitoring information can ‘help a public authority meet its duty to be transparent in reporting how it uses its resources, and to what effect’.[58]

Transparency may also refer to local authorities having a good overview of what types of AI (broadly interpreted, see Glossary) they are currently using. DSIT, OAI and CDEI’s Assessing if AI is the right solution suggests keeping a central record – sketched after this list – of:

  • where AI is in use
  • what the AI is used for
  • who is involved
  • how it is assessed or checked
  • what other teams rely on the technology.[59]
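To illustrate, the sketch below shows one way such a central record could be structured in practice. It is our own hedged example, not part of the guidance: the field names and the entry are hypothetical.

```python
# A minimal, illustrative sketch of a central AI register, assuming Python.
# Field names mirror the list above; the example entry is hypothetical.
from dataclasses import dataclass, field

@dataclass
class AIRegisterEntry:
    system_name: str       # where AI is in use
    purpose: str           # what the AI is used for
    owner: str             # who is involved
    assurance: str         # how it is assessed or checked
    dependent_teams: list[str] = field(default_factory=list)  # teams relying on it

register = [
    AIRegisterEntry(
        system_name="Benefits enquiry triage assistant",
        purpose="Prioritising incoming benefits queries",
        owner="Revenues and Benefits team",
        assurance="Quarterly accuracy review; DPIA on file",
        dependent_teams=["Customer services"],
    ),
]
```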

Implications: Transparency

Transparency in the context of procurement of AI and data-driven systems falls into three broad categories across the documents:

  • First, transparency in relation to the process of procurement itself, to ensure that suppliers have a clear understanding of the competition and what is expected.
    • Relevant documents: Procurement-focused legislation and guidelines
  • Second, transparency in relation to the technology itself, so that it is explainable to local government teams, other suppliers and the public.
    • Relevant documents: Data and AI ethics legislation
  • Third, the obligation for local authorities to be transparent about procured technologies’ impacts on communities.
    • Relevant documents: PSED, placed on public authorities by the Equality Act 2010

Something not captured in the documents is that there are different ways that the public might want to be informed about the technologies. The ICO’s Explaining decisions made with AI, although not specific procurement guidance, sets out best practice around transparent decision making.

Pursuing a high standard of transparency for decisions made with AI is valuable for local government: it contributes to public trust, and can support fairness – that is, both procurers and companies will be incentivised to ‘build in’ fairness, knowing that processes will be transparent and open to scrutiny.[60]

Ensuring transparency becomes more complicated with newer forms of AI such as foundation models and generative AI, because of the scale of data these systems and models use, and their ability to be adapted for a broad range of tasks.[61]

In our policy briefing Foundation models in the public sector, we note: ‘Procurement of foundation models for public sector use is likely to be challenging.’ This is because ‘the public sector risks overreliance on private-sector providers’, and there is a ‘potential lack of alignment between applications developed for a wider range of private-sector clients and the needs of the public sector’, which includes ‘higher levels of transparency and explainability in important decisions around welfare, healthcare, education and other public services’.[62]

There is a clear link between transparency and addressing inequalities: transparency and explainability of technologies enable scrutiny and help ensure that risks of inequitable outcomes are flagged and mitigated.

When AI capabilities are ‘slipstreamed’ into existing systems and do not go through procurement processes, this can mean there is a lack of transparency about what technology is being used and how.

For example, local authorities using Microsoft software may now find themselves working with Copilot – an ‘AI assistant’ powered by a large language model.[63] This trend is likely to continue as technology developers scramble to add generative AI functionality to existing products.

Auditing what AI functionality is used across the organisation, as suggested in Assessing if AI is the right solution above, could help local government see where this is happening and decide on how to maintain transparency when the procurement process might not be applied.[64]

Public engagement

In the context of local government procurement of AI and data-driven systems, public engagement refers to actively involving the community, the wider public and stakeholders in decision-making processes related to the acquisition and implementation of these technologies.

Public engagement is a key mechanism for accountability and transparency and is mentioned in several of the documents, including in EHRC guidance on the PSED[65] and in the Public Services (Social Value) Act 2012 (‘the Social Value Act’), but it is not clear how it fits with procurement processes.

What the documents say about public engagement

Public engagement is only mentioned in some of the legislation and guidance, most significantly in the EHRC guidance on the PSED. It is also mentioned in the procurement-related sections of the PSED under the Equality Act and the Social Value Act. Public engagement is mentioned more generally in the Conservative Government’s AI White Paper A pro-innovation approach to AI regulation[66] and the CDDO’s Data Ethics Framework.[67]

The EHRC guidance The essential guide to the public sector equality duty says that understanding the needs of service users, including any needs due to having a protected characteristic, is ‘important for effective procurement, as well as for meeting the general equality duty, and will often involve engaging with existing or potential service users and using equality information’.[68]

EHRC guidance on embedding equality obligations in procurement (Buying better outcomes) says that public authorities will ‘need to decide at the pre-procurement stage whether or not to consult publicly on the application of social value considerations’.[69] (Read more on the themes of public benefit and social value.)

The Social Value Act requires public authorities to ‘consider whether to undertake any consultation’ with the public, but does not require them to undertake consultation – consideration is sufficient. It notes that public engagement in procurement might be more relevant for some contracts than others.

It says: ‘Consultation will be particularly relevant when considering procurement for services which are delivered directly to citizens. The voluntary and community sector, along with other providers and interested groups, should be engaged from the earliest stage to help shape policies, programmes and services.’ It then caveats that ‘consultation may be less relevant in procurements for back office services’.[70]

Throughout many of the other documents, public engagement is described mostly as something that would happen after a service has been deployed (ex post), in monitoring, or if something has gone wrong.

Buying better outcomes says that contracts could stipulate what public engagement looks like. It says that procurement teams ‘may consider specifying [in a contract] equality monitoring of people who use the service but also consultation with, or surveys of, those who use and those who do not use the service,’ and ‘should also specify that the contractor has procedures for dealing promptly and sensitively with complaints about discrimination’.[71]

Like the Social Value Act, Buying better outcomes also caveats that ‘it may not be cost-effective or prudent from a risk perspective for the provider to monitor delivery or outcomes on certain contracts due to their low value and/or low contact with the public. In this case the purchasing body may wish to draw on other sources of intelligence, such as consulting with service users or trade unions, reviewing complaints, undertaking mystery shopping or site visits’.[72]

The Data Ethics Framework recommends that teams working with data consider what channels have been established for public engagement and scrutiny throughout the duration of the project as part of a ‘plan to continuously evaluate if insights from data are used responsibly’.[73]

Implications: Public engagement

The Cabinet Office, in its Procurement Policy Note on the Social Value Act, encourages engagement with the voluntary and community sector, as well as other providers and interested groups, but does not provide advice on methodology. In Buying better outcomes, the potential challenges of balancing multiple perspectives and incorporating diverse insights into a cohesive business case are not addressed.[74]

Buying better outcomes and the Social Value Act note that public engagement may be more relevant when considering procurement for services that are delivered directly to communities. However, this distinction may not be immediately clear with the use of AI and data analytics systems because they may be used indirectly to make decisions about people, with potential adverse impacts (for example, summaries of case notes in social services).[75]

In the Cabinet Office’s Procurement Policy Note on the Social Value Act, the term ‘back office services’ is used without clear definition. Procurers may therefore face challenges in determining which services fall into this category and, consequently, whether consultation is relevant.

This shows significant gaps in guidance on how best to approach public engagement when procuring AI and data-driven systems. Indeed, there is uncertainty about the extent to which public engagement fits within procurement processes at all. This is an important area for further strategic thinking, as effective public engagement could help local government maintain legitimacy and trust among the community in their use of AI and data-driven systems.

Public benefit / social value

Public benefit and social value are two terms that describe positive impacts that a procurement may have, beyond elements of cost and efficiency. Legislation and guidance for procurement in general, and of AI and data-driven systems specifically, state that public benefit and social value may be influential factors in procurement decisions.

What the documents say about public benefit / social value

Public benefit and social value are two terms that in practice mean similar things, though social value is officially enshrined as a concept in the Social Value Act and is widely used in local government procurement. The Social Value Act requires ‘public authorities to have regard to economic, social and environmental well-being in connection with public services contracts; and for connected purposes’.[76]

The Social Value Act ‘places a requirement on commissioners to consider the economic, environmental and social benefits of their approaches to procurement before the process starts’. They also have to consider whether they should consult on these issues, which encourages consideration of public engagement in procurement (see the section on public engagement above).[77]

In Buying better outcomes, the EHRC’s guide to the PSED in procurement, it is noted that social value criteria can be specified at contract specification stage. It says: ‘Case law recognises that the criteria for the evaluation of contracts need not be purely economic but can, in appropriate cases, include social and environmental criteria. It may be possible to include the provision of clearly identifiable and measurable social benefits as part of the contract specification and develop appropriate evaluation criteria accordingly.’[78]

Public benefit is referenced in the technology-focused guidance as a desirable outcome or something to aim for, but the term ‘public benefit’ is not defined. The OAI’s Guidelines for AI procurement advises that ‘defining the public benefit goal provides an anchor for the overall project and procurement process that the AI system is intended to achieve’. It recommends that procurers ‘explain in [their] procurement documentation that the public benefit is a main driver of [their] decision-making process when assessing proposals’.[79]

The Local Government Association’s (LGA’s) A Social Value Toolkit for District Councils notes that: ‘Ongoing contract management is extremely important to ensure that the Council receives the benefits of Social Value it agreed when it accepted the offer from the supplier.’[80] It also suggests that if a contract is not being fulfilled along the social value commitments, then remedies may be sought.

The CDDO’s Data Ethics Framework similarly advises that those working with data in the public sector ‘repeatedly revisit the user need and public benefit throughout the project’. It includes questions to ask, for example about the level of human oversight of an automated project.[81]

Implications: Public benefit / social value

The theme of public benefit and social value highlights the important connection between the commissioning stage, procurement stage and ongoing monitoring of the technology. The procurement stage can be a lynchpin for enshrining desired social value or public benefit outcomes in contracts.

Our document analysis suggests that the term ‘public benefit’ is more commonly used in guidance about technology and data, and the term ‘social value’ is used in legislation and guidance that applies to procurement in the public sector more broadly.

Though they may be taken to mean the same thing in practice, it is not immediately obvious that mentions of ‘public benefit’ in guidance would align with obligations to consider social value under law.

As shown in our section on (in)equalities and fairness, the language used in guidance and legislation implies who is responsible for achieving these goals. ‘Social value’ is a legal obligation for local government teams,[82] whereas ‘public benefit’ may not signal the same message when mentioned separately in technology-focused guidance.

The OAI’s Guidelines for AI procurement provides a useful broad approach to thinking about public benefit. It suggests that ‘as a general principle any AI procurement should be investigated with the mindset of “how could AI technologies potentially benefit us?” rather than “how can we make our problem fit an AI system solution?”’.[83]

Because social value is enshrined in law, there is clearer guidance on how procurers may adhere to the requirement at the contract stage. The LGA’s A Social Value Toolkit for District Councils provides guidance on how to evaluate and assess the social value of contracts in general. It uses ‘TOMs’ (Themes, Outputs and Measures) to provide a minimum reporting standard for measuring social value.[84]

It provides examples of areas in which social value might be achieved by a supplier, including contributing to skills and employment in the local area, supporting growth of local businesses, strengthening relationships with voluntary and social enterprise organisations, protecting and improving the environment and promoting social innovation.

AI and data-driven technologies might require different thinking, however. Currently, social value is not necessarily understood as an outcome of the procured technology itself, and is often seen as a separate ‘good’ that the supplier provides to the community.

For example, some of the national TOMs include steps like supplier donations to community projects, offering work placements to people who are long-term unemployed, or delivering talks at schools. These can be delivered by suppliers, but can be completely unrelated to the impact of the technology being supplied.

There is a question of whether current approaches to social value are adequate when it comes to AI and data-driven systems used in local governments. Arguably, the definition of social value should be expanded to include the impact of the technologies themselves (beyond, for example, basic adherence to the PSED and UK GDPR). For example, the technologies could help by improving the accessibility of local government advice services, or by identifying inequalities in service provision for certain groups.

The nature of AI can mean it is difficult to evaluate social value and public benefit as the technology evolves. This is something that the Data Ethics Framework addresses. It advises that user needs (the user being either a member of the public or a frontline worker) are regularly reassessed for any changes, and that any unintended consequences – like reinforcing societal inequalities and discrimination – are monitored.[85]

For example, remote monitoring technology for older people to predict fall risk could be considered as providing social value at procurement stage. However, an unintended consequence could be that it becomes a tool for over-surveillance at the implementation stage.

Procurement teams have a responsibility to consider how to embed social value in contracts and they need to be empowered to ask questions of the supplier to help them assess whether AI is providing public benefit. In a context where innovation is highly desirable, and transparency and evaluation are not well developed, procurement teams may struggle with introducing guardrails and will be conscious of criticisms around introducing red tape and stifling innovation.

The LGA’s A Social Value Toolkit for District Councils mentions that remedies may be sought if a supplier is not fulfilling its social value obligations under the contract.[86] However, it can be difficult to allocate responsibility for a system’s outputs and, without clear pathways to redress through regulation, local government might not be empowered to pursue these conversations and challenges with suppliers.

Achieving social value for communities is clearly important for local government, and is a central part of local government guidance on procurement. Still, the realities of low budgets, stretched resources and potential power imbalances between local government and private companies mean that consideration of these identified themes may become ‘check-box’ exercises.

Clarity about expectations for public benefit or social value outcomes at the point of procurement will help local authorities hold suppliers to account. Regulation could help equip procurers with enforceable ways to achieve this, for example by codifying guides like the Data Ethics Framework into mandatory statutory requirements.[87]

Impact assessments

During the procurement phase, impact assessments are a key tool to help teams identify potential issues and compare suppliers. Impact assessments should take place throughout the procurement process, including before deployment (for example, via pilots), and should form part of ongoing monitoring after a technology is in use.

What the documents say about impact assessments

Impact assessments are mentioned in the guidance and legislation in relation to assessing a technology as a whole, and in relation to assessing the data that drives the technology.

The AI White Paper, A pro-innovation approach to AI regulation,[88] says that: ‘Assurance techniques like impact assessments can help to identify potential risks early in the development life cycle, enabling their mitigation through appropriate safeguards and governance mechanisms.’[89]

OAI’s Guidelines for AI procurement covers more detailed considerations for conducting impact assessments. It says AI impact assessments should outline factors including public benefit and social value delivery, data quality, and unintended consequences.[90]

The guidance is clear that impact assessments are an iterative process. This is because ‘without knowing the specification of the AI system [acquired], it is not possible to conduct a complete assessment’.[91] This is particularly apparent with the use of large language models such as ChatGPT or other generative AI tools, where outputs transform and develop as the systems are deployed and used.

Guidelines for AI procurement says teams should ‘conduct initial AI impact assessments at the start of the procurement process, and ensure that […] interim findings inform the procurement.’ It advises that teams ‘revisit the assessments at key decision points’.[92] It also goes further, explaining that a procurer needs to have pre-specified points where decisions to continue or discontinue the use of an AI system take place.[93]
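The guidance does not prescribe a format for these decision points. As a loose illustration of the idea, the sketch below records pre-specified review points at which a continue/modify/discontinue decision must be taken; the stage names and questions are entirely hypothetical.

```python
# Illustrative sketch of pre-specified review points for an AI procurement.
# Stage names and questions are hypothetical, not taken from the guidance.
from dataclasses import dataclass


@dataclass
class ReviewPoint:
    stage: str                 # e.g. "pre-tender", "pilot review"
    questions: list[str]       # what the impact assessment must answer here
    decision: str = "pending"  # later set to "continue", "modify" or "discontinue"


review_points = [
    ReviewPoint("pre-tender", ["Is AI the right solution at all?",
                               "Are the initial DPIA and EIA complete?"]),
    ReviewPoint("pilot review", ["Do outputs meet agreed accuracy thresholds across groups?"]),
    ReviewPoint("annual review", ["Have any unintended consequences emerged in use?"]),
]

for point in review_points:
    print(f"{point.stage}: {point.questions} [{point.decision}]")
```

Fixing these points in advance, rather than improvising them later, is one way to give a procurer a clearer basis for deciding whether to continue with a system.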

When it comes to assessing the data that is put into a system, the UK GDPR requires local authorities to conduct data protection impact assessments (DPIAs) to identify where a type of data processing is likely to result in a high risk to the rights and freedoms of individuals.[94]

Guidance from the ICO on AI and data protection notes that DPIAs are an opportunity for procurement teams to ‘consider and demonstrate [their] accountability for the decisions [they] make in the design or procurement of AI systems’.[95] DPIAs are also a mechanism for transparency where they are made publicly available.

Buying better outcomes, the EHRC guidance on embedding equality considerations into procurement, says that: ‘The obligation to give “due regard” to the PSED continues through the whole of the procurement cycle, and so must be included in the monitoring and management stage.’[96] Equality impact assessments (EIAs) are assessments that public authorities can carry out before implementing policies, with a view to predicting their impact on equalities. EIAs can therefore help demonstrate an authority’s compliance with the PSED.[97]

Within Buying better outcomes is a caveat that ‘it may not be cost-effective or prudent from a risk perspective for the provider to monitor delivery or outcomes on certain contracts due to their low value and/or low contact with the public’.[98]

Implications: Impact assessments

Procurement teams must both assess the likely impact of a technology during procurement (ex ante), and embed mechanisms for monitoring a technology once it has been procured (ex post). This reinforces the importance of links between procurement teams and other roles in the public sector, including commissioners and digital teams.

A range of tools is available, including DPIAs and EIAs, but the guidance is not clear on which to use or how to prioritise them. There is a risk that an oversaturation of impact-assessment frameworks overloads local authorities and dilutes the frameworks’ efficacy. To be effective, impact assessments need to be backed by regulation that requires the supplier to act if something goes wrong.

The entire process is complicated by the fact that impacts from more complex AI and data-driven systems are not always direct and clear. In the Ada Lovelace Institute’s policy briefing Mission Critical, we note that harms and risks from AI are still not well defined.[99] Cross-sector learning and knowledge sharing by those who use and are affected by AI and data-driven systems could bolster the understanding of these systems and make their assessments more holistic.

Conclusions

It is crucial to improve practices around the procurement of AI and data-driven systems in local government to help ensure that technology works equitably for people and society.

Procurement decisions can have significant implications for how people access and experience public services in the UK. When faced with limited human and financial resources, against the backdrop of rapidly evolving technology and enthusiasm about the potential of AI to improve public services, procurement teams must ensure that procured technologies will benefit the public and the public sector.

We acknowledge the extremely difficult financial situation faced by many local authorities, and understand the potential challenges of embedding a robust, ethical procurement process under existing resource constraints. But it is important to also consider the cost of not doing this, financially and ethically. This cost has been demonstrated most recently by the Post Office–Horizon scandal and by procured technologies that have caused harm in high-risk settings, including visa decisions, child welfare allocation and fraud prediction.[100]

AI and data-driven systems might appear to reduce administrative burden, for example through automated decision-making, but they can severely damage public trust and reduce public benefit if the predictions or outcomes they produce are discriminatory, harmful or simply ineffective. Procurement teams must take this into consideration, even when faced with imperatives to innovate or keep costs down.

These negotiations are often taking place in the context of an imbalance of expertise between private companies and under-resourced local authorities. This makes it even more important to have clarity around guidelines and responsibilities, and enforceable redress. As explained in the Findings chapter, procurement teams need clearer support so that they can procure AI that is effective and ethical.

Here, we identify several potential responses to these challenges.

Challenge: Lack of comprehensive advice on implementation and how documentation intersects

This puts pressure on procurement teams to bring together and prioritise multiple pieces of legislation and guidance.

What can be done?

  • Consolidation of guidance: Government guidance on procurement of AI and data-driven systems must be more coherent, so it is easier for procurement teams to follow. This includes providing clarity on legal obligations and best practice across the procurement lifecycle.
  • Improve governance: In the Ada Lovelace Institute’s response to the 2024 Spring Budget, we said that investment in adopting AI across the public sector should be complemented by urgent progress on governance. This would include the planned rollout of the Algorithmic Transparency Recording Standard across the public sector and a clear roadmap for applying the Government’s AI regulatory principles (safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress) to public services.[101] This could help the public sector to consolidate and operationalise the guidance.
  • Algorithmic impact assessments: The Crown Commercial Service could design and pilot an algorithmic impact assessment standard for local authorities to use when procuring AI and data-driven systems. These assessments would be performed in the early stages of the design and development process, and could help identify potential risks or issues for the local authority to address with the supplier. This could bring together varied guidance about assessing impact for AI and data-driven systems.

Challenge: Gaps and inconsistencies on how to ensure that procured technologies provide societal benefit

This makes it difficult for procurement teams to synthesise the guidance and robustly assess the societal impact of the technologies they are buying, while holding suppliers to account.

What can be done?

  • Outline metrics for success: Our analysis shows that metrics for success need to be carefully considered during the procurement of AI and data-driven systems. Responsibilities for ensuring these are met should also be clearly defined in guidance.
  • Clarity on public engagement: Guidance on whether and how to involve publics and frontline experts in the procurement of AI and data-driven systems is vague. More support for procurement teams on how and when to undertake public engagement would be helpful, including under what circumstances it should be considered a priority.
  • Upskill local government teams: Upskilling local government teams in using and auditing AI systems could help them operationalise guidance and assess impacts on communities.
  • Enable transparency mechanisms: Our analysis suggests that procurers need clarity and coherence on what transparency means for procurers and suppliers, and on how to enact mechanisms for transparency.
    • When considering transparency, local government will need to take a holistic approach: not only looking at internal processes and fair competition, but also ensuring that communities are informed about how their data is used and how to query decisions made about them by systems using that data.
  • Review existing guidance: The National Audit Office has recommended that the Central Digital and Data Office ‘should work with the government functions to review existing guidance, government standards and assurance processes to ensure they adequately address the opportunities and risks of AI use and provide sufficient levers to promote safe and responsible use of AI across government, including reviewing arrangements for providing independent technical assurance for procured AI’.[102]

Challenge: No consensus on definitions of key terms, such as ‘AI’, ‘fairness’, ‘transparency’ and ‘public benefit’

This further complicates the task for procurement teams of addressing these in their decision making.

What can be done?

  • Get clarity on definitions: There should be more consensus around what the public sector defines as ‘AI’ and how AI solutions are assessed against alternative options.
  • Revisit data ethics: Revisiting fundamental data ethics considerations, like those in the Data Ethics Framework,[103] would help consolidate definitions of, and solutions to, the ethical issues presented by AI use in the public sector. A clear roadmap for applying the Government’s AI regulatory principles to the public sector would also help build consensus on these terms and how to address them.

What is the Government already doing?

In its response to the AI White Paper consultation, A pro-innovation approach to AI regulation, the Conservative Government announced that the Department for Science, Innovation and Technology (DSIT) would launch an AI Management Essentials scheme, which would set a minimum good practice standard for companies selling AI products and services.[104] It is possible this could become a mandatory requirement for public-sector procurement.

It also announced that it would make the Algorithmic Transparency Recording Standard a mandatory requirement for government departments, and for the broader public sector ‘in time’. This is a standardised method for public-sector organisations to proactively publish information about how and why they are using algorithmic methods in decision making.
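For illustration only, a record under such a standard might capture information along these lines. The field names and values below are approximations invented for this sketch, not the official ATRS schema.

```python
# Illustrative sketch of the kind of information a public transparency record
# about an algorithmic tool might contain. Field names and values are
# hypothetical, not the official Algorithmic Transparency Recording Standard.
record = {
    "tool_name": "Housing repairs triage model",
    "organisation": "Example District Council",
    "purpose": "Prioritise repair requests by predicted urgency",
    "role_in_decisions": "Advisory only; a caseworker makes the final decision",
    "data_sources": ["historical repair logs", "property records"],
    "supplier": "Procured from an external vendor",
    "human_oversight": "Weekly review of overridden recommendations",
    "contact_for_queries": "digital-team@example.gov.uk",
}

for field, value in record.items():
    print(f"{field}: {value}")
```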

The National Procurement Policy Statement, which is due to come into force in October 2024, sets out social value as one of the Government’s priorities for public procurement. It emphasises that ‘commercial and procurement teams across the public sector do not have to select the lowest price bid, and that in setting the procurement strategy, drafting the contract terms and evaluating tenders they can and should take a broad view of value for money that includes the improvement of social welfare or wellbeing’.[105]

The July 2024 General Election resulted in a change of Government. One of the first acts of the new administration was to incorporate the parts of the Cabinet Office responsible for data, digital and AI (including the Government Digital Service, the Incubator for Artificial Intelligence and the Central Digital and Data Office) into DSIT. At the time of writing, all the initiatives described above continue to be Government policy.

Next steps

This work provides an overview of the key legislative and guidance documents available to procurement teams in local government when buying AI and data-driven systems. It is hoped this will be a catalyst for further exploration and research into how procurement of these technologies can benefit communities.

This paper is part one of a larger Ada Lovelace Institute project on local government procurement of AI and data-driven systems.

Part two is forthcoming. It looks at where the barriers and levers are in practice for operationalising the themes explored in this analysis, based on in-depth interviews with people working in public-sector procurement and a workshop with public- and private-sector stakeholders from across the AI procurement supply chain. It will examine how to fill the gaps highlighted in this document analysis, and how to make sure procurement teams are equipped to do this work. It will also explore other areas affecting a procurer’s ability to make decisions that lead to societal benefit, including the infrastructure, the processes and the people involved in procurement.

Methodology

We completed a document analysis of guidance, legislation and policy documents on procurement of AI and data-driven systems. We also looked at broader legislation that related to impacts on people and society, such as the Public Sector Equality Duty (PSED). In compiling this list, we also sought input from central and local government stakeholders.

The documents apply UK-wide unless otherwise stated.

  • Note: Government documents listed here were published under the 2010–2024 Conservative UK Government
Guidance, legislation or policy document – Organisation

  • Guidelines for AI procurement – Department for Science, Innovation and Technology (DSIT); Department for Culture, Media and Sport (DCMS); Department for Business, Energy and Industrial Strategy (BEIS); Office for AI (OAI)
  • A guide to using artificial intelligence in the public sector (Collection) – DSIT; OAI; Centre for Data Ethics and Innovation (CDEI) [now the Responsible Technology Adoption Unit (RTA)]
  • Understanding artificial intelligence – DSIT, OAI and CDEI
  • Understanding artificial intelligence ethics and safety – DSIT, OAI and CDEI
  • Assessing if artificial intelligence is the right solution – DSIT, OAI and CDEI
  • A pro-innovation approach to AI regulation – 2010–24 Conservative Government
  • Data Ethics Framework – Central Digital and Data Office (CDDO)
  • UK General Data Protection Regulation (UK GDPR)
  • Guidance on AI and data protection – Information Commissioner’s Office (ICO)
  • Public Sector Equality Duty (PSED) (applies to England, Scotland and Wales)
  • Buying better outcomes (applies to England) – Equality and Human Rights Commission (EHRC)
  • The essential guide to the public sector equality duty (applies to England) – EHRC
  • Procurement Act 2023 (summary) – Government Commercial Function
  • Public Services (Social Value) Act 2012
  • Procurement Policy Note 10/12: The Public Services (Social Value) Act 2012 – Cabinet Office; Efficiency and Reform Group; Crown Commercial Service
  • A Social Value Toolkit for District Councils (applies to England and Wales) – Local Government Association (LGA)

The 2010–24 Conservative Government’s AI White Paper, A pro-innovation approach to AI regulation[106] – unlike the other documents in our analysis – is not guidance, nor does it set out obligations for local government. However, it was included in our analysis because it provides an overview of the Government’s thinking at the time about the challenges around AI that will impact people and society, and of its consideration of next steps for potential regulation and oversight.

We identified all the terms used in the documents that were key markers of, and mechanisms for, scrutinising AI technologies to ensure positive outcomes for people and society. We began with more than 50 terms, which we then grouped under five key themes. These themes were used as a lens to analyse the documents.

  • (In)equalities / fairness
  • Transparency
  • Public engagement
  • Public benefit / social value
  • Impact assessments.

We also analysed the various definitions of AI that are used in the documents, to show the scope of these technologies. It is important to recognise how the current interest in AI, along with the lack of a clear definition of AI, can be confusing for public-sector organisations procuring these technologies.

For the purposes of this work, we focused on technologies that use large datasets to learn, make decisions or generate new outputs. Some AI and data-driven systems, such as automated decision-making and predictive analytics tools, are already more widely used by local authorities than newer generative AI technologies.

See the Glossary for detailed definitions of these terms.

The full document analysis is available in the Appendix, with excerpts from guidance, legislation or policy documents mapped to identified themes.

Acknowledgements

This paper was lead authored by Anna Studman, with input from Hannah Claus, Mavis Machirori and Imogen Parker.

We are grateful to Mia Leslie, Jenny McEneaney and Tina Holland for their review of the work.

Appendix: Document analysis

This Appendix contains the detail of our document analysis. It is organised by theme and contains corresponding text excerpts from the documents. Not all documents related to all of our identified themes.

Note: Italicised text indicates text that is quoted directly.

Theme: Definitions of AI
A pro-innovation approach to AI regulation

Department for Science, Innovation and Technology (DSIT)

 

Office for AI (OAI)

AI or AI system or AI technologies: products and services that are ‘adaptable’ and ‘autonomous’ in the sense outlined in our definition in section 3.2.1.

 

AI supplier: any organisation or individual who plays a role in the research, development, training, implementation, deployment, maintenance, provision or sale of AI systems.

 

AI user: any individual or organisation that uses an AI product.

 

AI life cycle: all events and processes that relate to an AI system’s lifespan, from inception to decommissioning, including its design, research, training, development, deployment, integration, operation, maintenance, sale, use and governance.

 

AI ecosystem: the complex network of actors and processes that enable the use and supply of AI throughout the AI life cycle (including supply chains, markets, and governance mechanisms).

 

Foundation model: a type of AI model that is trained on a vast quantity of data and is adaptable for use on a wide range of tasks. Foundation models can be used as a base for building more specific AI models.

 

From section 3.2.1

39. To regulate AI effectively, and to support the clarity of our proposed framework, we need a common understanding of what is meant by ‘artificial intelligence’.

 

There is no general definition of AI that enjoys widespread consensus. That is why we have defined AI by reference to the 2 characteristics that generate the need for a bespoke regulatory response.

  • The ‘adaptivity’ of AI can make it difficult to explain the intent or logic of the system’s outcomes:
    • AI systems are ‘trained’ – once or continually – and operate by inferring patterns and connections in data which are often not easily discernible to humans.
    • Through such training, AI systems often develop the ability to perform new forms of inference not directly envisioned by their human programmers.
  • The ‘autonomy’ of AI can make it difficult to assign responsibility for outcomes:
    • Some AI systems can make decisions without the express intent or ongoing control of a human.
AI systems can operate with a high level of autonomy, making decisions about how to achieve a certain goal or outcome in a way that has not been explicitly programmed or foreseen.
Understanding artificial intelligence

DSIT

 

OAI

 

Centre for Data Ethics and Innovation (CDEI) [Now the Responsible Technology Adoption Unit – RTA]

At its core, AI is a research field spanning philosophy, logic, statistics, computer science, mathematics, neuroscience, linguistics, cognitive psychology and economics.

 

AI can be defined as the use of digital technology to create systems capable of performing tasks commonly thought to require intelligence.

 

AI is constantly evolving, but generally it:

  • involves machines using statistics to find patterns in large amounts of data
  • is the ability to perform repetitive tasks with data without the need for constant human guidance

 

There are many new concepts used in the field of AI and you may find it useful to refer to a glossary of AI terms.

 

This guidance mostly discusses machine learning. Machine learning is a subset of AI, and refers to the development of digital systems that improve their performance on a given task over time through experience.

 

Machine learning is the most widely-used form of AI, and has contributed to innovations like self-driving cars, speech recognition and machine translation.

 

Recent advances in machine learning are the result of:

  • improvements to algorithms
  • increases in funding
  • huge growth in the amount of data created and stored by digital systems
  • increased access to computational power and the expansion of cloud computing

 

Machine learning can be:

  • supervised learning which allows an AI model to learn from labelled training data, for example training an AI model to help tag content on GOV.UK
  • unsupervised learning which is training an AI algorithm to use unlabelled and unclassified information
  • reinforcement learning which allows an AI model to learn as it performs a task
Understanding artificial intelligence ethics and safety

DSIT, OAI and CDEI 

AI systems increasingly perform tasks previously done by humans. For example, AI systems can screen CVs as part of a recruitment process. However, unlike human recruiters, you cannot hold an AI system directly responsible or accountable for denying applicants a job.
A guide to using artificial intelligence in the public sector

DSIT, OAI and CDEI 

Every day, artificial intelligence (AI) is changing how we experience the world. We already use AI to find the fastest route home, alert us of suspicious activity in our bank accounts and filter out spam emails.
Assessing if artificial intelligence is the right solution

DSIT, OAI and CDEI 

AI is just another tool to help deliver services.
It’s important to remember that AI is not an all-purpose solution. Unlike a human, AI cannot infer, and can only produce an output based on the data a team inputs to the model. (page 3)
There is no one ‘AI technology’. Currently, widely-available AI technologies are mostly either supervised, unsupervised or reinforcement machine learning. The machine learning techniques that can provide you with the best insight depends on the problem you’re trying to solve.
Guidelines for AI procurement

Office for AI

Artificial Intelligence (AI) comprises a set of technologies that have the potential to greatly improve public services by reducing costs, enhancing quality, and freeing up valuable time for frontline staff.

 

Theme: Public engagement
A pro-innovation approach to AI regulation

DSIT and OAI

Principle: Contestability and redress

 

Definition and explanation

 

Where appropriate, users, impacted third parties and actors in the AI life cycle should be able to contest an AI decision or outcome that is harmful or creates material risk of harm.

 

Regulators will be expected to clarify existing routes to contestability and redress, and implement proportionate measures to ensure that the outcomes of AI use are contestable where appropriate.

 

We would also expect regulators to encourage and guide regulated entities to make clear routes (including informal channels) easily available and accessible, so affected parties can contest harmful AI outcomes or decisions as needed.

Guidance on AI and data protection

Information Commissioner’s Office (ICO)

Independent domain expertise and lived experience testimony will help you identify and address fairness risks. This includes relative disadvantage and real-world societal biases that may otherwise appear in your datasets and consequently your AI outputs over time.

 

This approach is known as “participatory design” and is increasingly important to AI systems. It can include citizens’ juries, community engagement, focus groups or other methods. It is particularly important if AI systems are deployed rapidly across different contexts, creating risks for a system that may be fairness compliant in its country of origin to be non-compliant in the UK for instance.

Buying better outcomes

Equality and Human Rights Commission (EHRC)

It may not be cost-effective or prudent from a risk perspective for the provider to monitor delivery or outcomes on certain contracts due to their low value and/or low contact with the public. In this case the purchasing body may wish to draw on other sources of intelligence, such as consulting with service users or trade unions, reviewing complaints, undertaking mystery shopping or site visits.
Engaging with service users and networks of people with shared protected characteristics can help you understand the issues [when building a business case]. The current service provider may monitor use of a service by different groups, as might officers managing the contract. Trade unions and employees may provide information about equality issues in employment.
The public authority will also need to decide at the pre-procurement stage whether or not to consult publically [sic] on the application of social value considerations, as per the Public Services (Social Value) Act 2012. This is where social value can be considered to greatest effect.
If you specify the achievement of certain performance targets, you may want to make explicit how you expect the contractor to monitor their performance against these targets. For example, you may consider specifying equality monitoring of people who use the service but also consultation with, or surveys of, those who use and those who don’t use the service. You may also make it a requirement for the contractor to make adjustments in light of the monitoring results. You should also specify that the contractor has procedures for dealing promptly and sensitively with complaints about discrimination, and should adjust the service if complaints highlight significant deficiencies.
The essential guide to the public sector equality duty

EHRC

Before you design and commission a service, it is helpful to understand the needs of the service users, including any needs due to having a protected characteristic. This information can be used to improve the design of your service. This is important for effective procurement, as well as for meeting the general equality duty, and will often involve engaging with existing or potential service users and using equality information.
Public Services (Social Value) Act 2012
 
(7) The authority must consider whether to undertake any consultation as to the matters that fall to be considered under subsection (3) (i.e. the main provisions of the Act).
Consultation will be particularly relevant when considering procurements for services which are delivered directly to citizens. The voluntary and community sector, along with other providers and interested groups, should be engaged from the earliest stage to help shape policies, programmes and services.
Consultation may be less relevant in procurements for “back office” services
Procurement policy note 10/12: The Public Services (Social Value) Act 2012
 
The Act does not set out how consultation should take place so commissioners should consider the most appropriate form of consultation bearing in mind the needs and requirements of people and organisations being consulted, the size of the procurement and the likely social, environmental and economic impact of the procurement. The Cabinet Office publishes principles on consultation exercises. Authorities may wish to take account of those principles when deciding whether to consult and how to do it. The expectation is that consultations should be “digital by default” and carried out online if at all possible but authorities should consider the types of services they are looking to procure and the best way of getting the views of potential users who may not be familiar with modern IT.
Data Ethics Framework
Central Digital and Data Office (CDDO)
What channels have you established for public engagement and scrutiny throughout the duration of the project?

 

Theme: (In)equalities and fairness (sub-themes: bias, fairness and negative impacts)
A pro-innovation approach to AI regulation

DSIT and OAI

5. Public trust in AI will be undermined unless these risks, and wider concerns about the potential for bias and discrimination, are addressed.

Principle: Fairness

 

Definition and explanation

 

AI systems should not undermine the legal rights of individuals or organisations, discriminate unfairly against individuals or create unfair market outcomes. Actors involved in all stages of the AI life cycle should consider definitions of fairness that are appropriate to a system’s use, outcomes and the application of relevant law.

 

Fairness is a concept embedded across many areas of law and regulation, including equality and human rights, data protection, consumer and competition law, public and common law, and rules protecting vulnerable people.

 

Regulators may need to develop and publish descriptions and illustrations of fairness that apply to AI systems within their regulatory domain, and develop guidance that takes into account relevant law, regulation, technical standards,[footnote 99] and assurance techniques.

 

Regulators will need to ensure that AI systems in their domain are designed, deployed and used considering such descriptions of fairness. Where concepts of fairness are relevant in a broad range of intersecting regulatory domains, we anticipate that developing joint guidance will be a priority for regulators.

Box 1.2. [Illustrative AI risks] An AI tool assessing credit-worthiness of loan applicants is trained on incomplete or biased data, leading the company to offer loans to individuals on different terms based on characteristics like race or gender.

We expect that regulators’ interpretations of fairness will include consideration of compliance with relevant law and regulation, including:
  1. AI systems should not produce discriminatory outcomes, such as those which contravene the Equality Act 2010 or the Human Rights Act 1998. Use of AI by public authorities should comply with the additional duties placed on them by legislation (such as the Public Sector Equality Duty).
  2. Processing of personal data involved in the design, training, and use of AI systems should be compliant with requirements under the UK General Data Protection Regulation (GDPR), the Data Protection Act 2018, particularly around fair processing and solely automated decision-making.
  3. Consumer and competition law, including rules protecting vulnerable consumers and individuals.
  4. Relevant sector-specific fairness requirements, such as the Financial Conduct Authority (FCA) Handbook.
 
Box 4.1. [Supporting a layered approach to AI technical standards] For example, standards for bias mitigation could be promoted by the Financial Conduct Authority (FCA) and the Equality and Human Rights Commission (EHRC) as practical tools for providers of AI scoring models to identify and mitigate relevant sources of bias to ensure the fairness of the outcomes when the AI model is applied to financial services (credit scoring) and HR practices (candidate scoring) respectively.
Understanding artificial intelligence

DSIT, OAI and CDEI

Fairness – are the models trained and tested on relevant, accurate, and generalisable datasets and is the AI system deployed by users trained to implement them responsibly and without bias  
Understanding artificial intelligence ethics and safety

DSIT, OAI and CDEI

Fair and non-discriminatory – consider its potential to have discriminatory effects on individuals and social groups, mitigate biases which may influence your model’s outcome, and be aware of fairness issues throughout the design and implementation lifecycle (page 4)

The main ways AI systems can cause involuntary harm are:

 

misuse – systems are used for purposes other than those for which they were designed and intended

 

questionable design – creators have not thoroughly considered technical issues related to algorithmic bias and safety risks

 

unintended negative consequences – creators have not thoroughly considered the potential negative impacts their systems may have on the individuals and communities they affect

Carefully reviewing the FAST Track Principles helps you:
  • ensure your project is fair and prevent bias or discrimination
  • safeguard public trust in your project’s capacity to deliver safe and reliable AI
To build and maintain a culture of responsibility you and your team should prioritise 4 goals as you design, develop, and deploy your AI project. In particular, you should make sure your AI project is:
  • ethically permissible – consider the impacts it may have on the wellbeing of affected stakeholders and communities
  • fair and non-discriminatory – consider its potential to have discriminatory effects on individuals and social groups, mitigate biases which may influence your model’s outcome, and be aware of fairness issues throughout the design and implementation lifecycle
  • worthy of public trust – guarantee as much as possible the safety, accuracy, reliability, security, and robustness of its product
  • justifiable – prioritise the transparency of how you design and implement your model, and the justification and interpretability of its decisions and behaviours
If your AI system processes social or demographic data, you should design it to meet a minimum level of discriminatory non-harm. To do this you should:

 

use only fair and equitable datasets (data fairness)

 

include reasonable features, processes, and analytical structures in your model architecture (design fairness)

 

prevent the system from having any discriminatory impact (outcome fairness)

 

implement the system in an unbiased way (implementation fairness)

You should make sure designers and users remain aware of:

 

the transformative effects AI systems can have on individuals and society

 

your AI system’s real-world impact

Assessing if artificial intelligence is the right solution

DSIT, OAI and CDEI

When assessing if AI could help you meet users’ needs, consider if: it’s ethical and safe to use the data – refer to the Data Ethics Framework
Guidelines for AI procurement

Office for AI

Require the successful supplier(s) to assemble a team with the right skill sets, and to address the need for diversity to mitigate bias in the AI system. Robust practices may include, but are not limited to:
  • Having an internal AI ethics approach, with examples of how it has been applied to design, develop, and deploy AI-powered solutions.
  • Processes to ensure accountability over outputs of algorithms.
  • Avoiding outputs that could be unfairly discriminatory.
 
Have suppliers highlighted and/or addressed any issues of bias within the data?    
As part of the evaluation process also review the specialist skills, qualifications and diversity of the team that will develop and deploy the AI system. This can also help to anticipate or detect unfair bias in the system.    
Procurement Act (Summary)

Government Commercial Function

  The Act introduces a new procedure for running a competitive tendering process – the competitive flexible procedure – ensuring for the very first time that contracting authorities can design a competition to best suit the particular needs of their contract and market.  
Guidance on AI and data protection

ICO

  If you are procuring AI as a service or off-the-shelf models, asking for documentation could assist you with your fairness compliance obligations as the controller for processing your customer data. This could include: information around the demographic groups a model was originally or continues to be trained on; what, if any, underlying bias has been detected or could emerge; or any algorithmic fairness testing that has already been conducted. You must therefore implement risk management practices designed to ensure that data minimisation, and all relevant minimisation techniques, are fully considered from the design phase. Similarly, if you buy in AI systems or implement systems operated by third parties (or both), these considerations should form part of the procurement process due diligence.
  Establishing clear policies and good practices for the procurement and lawful processing of high-quality training and test data is important, especially if you do not have enough data internally. Whether procured internally or externally, you should satisfy yourself that the data is representative of the population you apply the ML system to (although this is not enough to ensure fairness). If you outsource an AI service to another organisation, this could also make the process of responding to rights requests more complicated when the personal data involved is processed by them rather than you. When procuring an AI service, you must choose one which allows individual rights to be protected and enabled, in order to meet your obligations as a controller.
  Fairness is not a goal that algorithms can achieve alone. Therefore, you should take a holistic approach, thinking about fairness across different dimensions and not just within the bounds of your model or statistical distributions.

 

You should think about:

  • the power and information imbalance between you and individuals whose personal data you process;
  • the underlying structures and dynamics of the environment your AI will be deployed in;
  • the implications of creating self-reinforcing feedback loops;
  • the nature and scale of any potential harms to individuals resulting from the processing of their data; and
  • how you will make well-informed decisions based on rationality and causality rather than mere correlation.
 
  When an AI system is involved in a decision that impacts individuals in a legal or similarly significant way, you must ask:
  •  what kind of decision is it (ie is it solely automated)?;
  • when does the decision take place?;
  • what is the context in which the system makes the decision?; and
  • what steps are involved in reaching it?
 
Public Sector Equality Duty
 
Public authorities must have due regard to the need to:
  • eliminate unlawful discrimination
  • advance equality of opportunity between people who share a protected characteristic and those who don’t
The essential guide to the public sector equality duty

EHRC

You may find it useful to include the following contract conditions:
  • Prohibit the contractor from unlawfully discriminating under the Equality Act 2010
  • Require them to take all reasonable steps to ensure that staff, suppliers and subcontractors meet their obligations under the Equality Act 2010.
When advertising the contract, set out how the ability to meet any relevant equality-related matters will be assessed in the competition. Engaging with potential suppliers can help them to better understand your equality-related requirements and encourage a more diverse range of suppliers to tender for the contract. You must not, however, give any potential supplier an advantage over another.
The Act explains that the second aim (advancing equality of opportunity) involves, in particular, having due regard to the need to:
  • Remove or minimise disadvantages suffered by people due to their protected characteristics.
  • Take steps to meet the needs of people with certain protected characteristics where these are different from the needs of other people.
  • Encourage people with certain protected characteristics to participate in public life, or in other activities where their participation is disproportionately low. […] It describes fostering good relations as tackling prejudice and promoting understanding between people from the different groups.
Buying better outcomes

EHRC

Equality and procurement strategy

The corporate approach to procurement: You should ensure that your procurement processes include consideration of equality issues, and clarify areas of responsibility. Increasing supplier diversity: This may be part of a procurement strategy, as it has the potential to widen the pool of bidders and result in more creative and cost effective proposals. (Plus suggestions on how to do this.)

 

Identifying need and building a business strategy

To build a business case you need to identify legitimate and reasonable need. This information should help you to establish how relevant equality is to the procurement and whether it needs to be a core requirement. Some questions to ask about the current provision:

 

Do current arrangements adversely affect some people with shared protected characteristics or unlawfully discriminate against them?

 

Do differences in service take up or satisfaction levels indicate that it is not being provided fairly or that there is unlawful discrimination in the way it is delivered? If there have been cuts or changes to the service or related resources (such as the voluntary sector), has this affected some people disproportionately, relative to others, as a consequence of their protected characteristics?

 

Are there population changes that might indicate new needs?

 

Are there alternative ways of meeting your requirements that could advance equality? 

 

Other issues you may consider when building your business case are:

  • Strategic fit: Does the inclusion of equality measures add value to and help meet your authority’s vision and objectives including its equality objectives?
  • Cost and benefits: What are the costs of meeting equality measures and are they justified in terms of the expected immediate or wider social benefits? Is your approach affordable, proportionate and value for money?
  • Options: What procurement and contract options are available to you and what effect might they have on equality? 
Buying better outcomes [cont.]

Equality requirements in contract specifications – For example: A police force decides to introduce an artificial intelligence system for automatic facial recognition (AFR), as part of its work on managing public order offences. The force is aware that recent case law[1] has highlighted the need for PSED to be considered from the very first stage of thinking about whether or not to introduce an AFR system and throughout the decision making process. The force also recognises that it will have to monitor how the AFR works in practice once it is introduced.

 

Therefore, the force starts by researching the available information on the equality and human rights risks to introducing AFR and considers how these will be mitigated. The police force includes in the contract specification that:

  • the AFR supplier must have a way of ensuring there is a live monitoring process available with the AI system, and
  • the supplier must be able to demonstrate that the AI provides the same accuracy in facial recognition for ethnic minorities, women and other protected groups, as for white males.
Risk: You should consider any legal, financial, reputational or even political risks that may be incurred by yourself and potential suppliers. Non-compliance with the PSED may lead to legal challenge and affect your authority’s reputation as well as incur financial costs.
Writing the specification – the requirements on promoting equality, like the rest of the specification, should be objective, and stated in terms that are clear and explicit.

For example, the specification could require year-on-year improvements, such as increased take up of services by people with certain protected characteristics who were previously under-represented amongst users.

Specifying positive action and reasonable adjustments – The Act allows employers or service contractors to take positive action[1] measures to improve equality for people who share a protected characteristic. Positive action means that services can be provided to encourage people from disadvantaged groups or those who are under-represented to access services.

 

[1] Positive action is not the same as positive discrimination, which is unlawful. Positive discrimination occurs when one person or a group of people with particular protected characteristics is treated more favourably than another person, or group with different characteristics, would be treated in the same situation.

Preparing the contract notice – The contract notice […] should set out the equality requirements clearly so that any potential supplier can understand them.
Assessing contractor technical capacity and ability – The pre-qualification questionnaire (PQQ) is a good opportunity to find out about a potential supplier’s track record on equality, both in terms of their technical competence or to determine any grounds of exclusion as permitted by relevant procurement law. You may exclude a prospective tenderer who has been found in breach of laws about equal treatment of workers unless they can show they are taking steps to remedy the issue.
Invitation to tender – Equality requirements in an ITT should be objective, and stated in clear, explicit terms. You should be able to verify, monitor, and evaluate whatever you specify.

 

The ITT can ask how the tenderer intends to meet equality obligations or other social requirements through method statements.

Developing an award process – Method statements[1] can be an effective way of assessing equality performance. They provide the tenderer with the opportunity to demonstrate their understanding of equality criteria, and how they propose to deliver this.

 

[1] A method statement is a statement, usually annexed to a contract, detailing the contractor’s proposal for the performance of the service they are contracted to provide. These are part of the contract and can be used as a way of monitoring the contractor’s performance of the contract.

Data Ethics Framework

CDDO

4.3: Bias in data
  • How has the data being used to train a model been assessed for potential bias? You should consider:
    • Whether the data might (accurately) reflect biased historical practice that you do not want to replicate in the model (historical bias)
    • The data might be a biased misrepresentation of historical practice, for example because only certain categories of data were properly recorded in a format accessible to the project (selection bias)
  • If using data about people, is it possible that your model or analysis may be identifying proxy variables for protected characteristics which could lead to a discriminatory outcome? Such proxy variables can potentially be a cause of indirect discrimination; you should consider whether the use of these variables is appropriate in the context of your service (i.e. is there a reasonable causal link between the proxy variable and the outcome you’re trying to measure?; do you assess this to be a proportionate means to achieve a legitimate aim in accordance with the Equality Act 2010?)
  • What measures have you taken to mitigate bias?
2.2 Ensure diversity within your team
  • How have you ensured diversity in your team? Having a diverse team helps prevent biases and encourages more creativity and diversity of thought.
  • Avoid forming homogenous teams, embrace diversity of lived experiences of people from different backgrounds. If you find yourself in a homogenous team, challenge it.
 
Fairness definition: It is crucial to eliminate your project’s potential to have unintended discriminatory effects on individuals and social groups. You should aim to mitigate biases which may influence your model’s outcome and ensure that the project and its outcomes respect the dignity of individuals, are just, non-discriminatory, and consistent with the public interest, including human rights and democratic values.
  Score the fairness of your project from 0 to 5 where:
  • 0 means there is a significant risk that the project will result in harm or detrimental and discriminatory effects for the public or certain groups
  • 5 means the project promotes just and equitable outcomes, has negligible detrimental effects, and is aligned with human rights considerations
  3.6 Ensure the project’s compliance with the Equality Act 2010

Data analysis or automated decision making must not result in outcomes that lead to discrimination as defined in the Equality Act 2010.

  • How can you demonstrate that your project meets the Public Sector Equality Duty?
  • What was the result of the Equality Impact Assessment of the project?
 
  1.2 Understand unintended consequences of your project
  • What would be the harm in not using data? What social outcomes might not be met?
  • What are the potential risks or negative consequences of the project versus the risk in not proceeding with the project?
  • Could the misuse of the data or algorithm or poor design of the project contribute to reinforcing social and ethical problems and inequalities?
  • What kind of mechanisms can you put in place to prevent this from happening?
  • What specific groups benefit from the project? What groups can be denied opportunities or face negative consequences because of the project?
1.3 Human rights considerations
  • How does the design and implementation of the project or algorithm respect human rights and democratic values?
  • How does the project or algorithm work towards advancing human capabilities, advancing inclusion of underrepresented populations, reducing economic, social, gender, racial, and other inequalities?
  • What are the environmental implications of the project? How could they be mitigated?

 

 

 

Theme: Transparency
A pro-innovation approach to AI regulation

DSIT and OAI

Principle: Appropriate transparency and explainability

 

Definition and explanation

 

AI systems should be appropriately transparent and explainable. Transparency refers to the communication of appropriate information about an AI system to relevant people (for example, information on how, when, and for which purposes an AI system is being used).

 

Explainability refers to the extent to which it is possible for relevant parties to access, interpret and understand the decision-making processes of an AI system. An appropriate level of transparency and explainability will mean that regulators have sufficient information about AI systems and their associated inputs and outputs to give meaningful effect to the other principles (for example, to identify accountability). An appropriate degree of transparency and explainability should be proportionate to the risk(s) presented by an AI system.

Understanding artificial intelligence

DSIT, OAI and CDEI

If you want to use automated processes to make decisions with legal or similarly significant effects on individuals you must follow the safeguards laid out in the GDPR and DPA 2018. This includes making sure you provide users with:

 

specific and easily accessible information about the automated decision-making process

 

a simple way to obtain human intervention to review, and potentially change the decision

Understanding artificial intelligence ethics and safety

DSIT, OAI and CDEI

Designers and implementers of AI systems should be able to:

 

explain to affected stakeholders how and why a model performed the way it did in a specific context

 

justify the ethical permissibility, the discriminatory non-harm, and the public trustworthiness of its outcome and of the processes behind its design and use

Assessing if artificial intelligence is the right solution

DSIT, OAI and CDEI

It can be useful to keep a central record of all AI technologies you use, listing:
  • where AI is in use
  • what the AI is used for
  • who’s involved
  • how it’s assessed or checked
  • what other teams rely on the technology
Guidelines for AI procurement

Office for AI

As a guiding principle, be transparent about your AI project and the tools, data and algorithms you will be using, working in the open where possible.
Maximise transparency in AI decision-making to give users confidence that an AI system functions well.
All requirements should be transparent and should not discriminate against particular types of suppliers, for instance, SMEs and VCSEs, or those from countries with which the UK has trade agreements with procurement obligations.
Encourage explainability and interpretability of algorithms and make this one of your design criteria. This means using methods and techniques that allow the results to be understood by your team. Highly ‘explainable’ outputs from your AI system will be able to be interpreted by your team, and by other suppliers. This will also make it more likely for you to be able to engage with other suppliers to continue or build upon your AI system in the future, limiting the risk of vendor lock-in.
Consider the use of anonymisation techniques to help safeguard data privacy, including data aggregation, masking, and synthetic data.
Avoid relying on ‘black-box’ algorithms. Underline the need for an ‘explainable approach’ to AI development (the extent to which an AI system’s decision making process can be understood) in your invitation-to-tender. Highly ‘explainable’ outputs from your AI system will be able to be interpreted by your team, and by other suppliers. This will increase the likelihood for you to be able to engage with other suppliers in the future to continue or build upon the initial AI system, limiting the risk of vendor lock-in. Consider addressing these issues in your procurement documentation. Good practice could involve adopting open standards, royalty-free licensing agreements, and public domain publication terms.
  • Designing for reproducibility.
  • Testing the model under a range of conditions.
  • Defining acceptable model performance.
  • Robust and proportionate security provision.
Procurement Act (Summary)

Government Commercial Function

Running throughout the Act are requirements to publish notices. These are the foundations for the new standards of transparency which will play such a crucial role in the new regime. Our ambitions are high and we want to ensure that procurement information is publicly available not only to support effective competition, but to provide the public with insight into how their money is being spent. Part eight of the Act provides for regulations which will require contracting authorities to publish these notices, resulting in more transparency and greater scrutiny.
Guidance on AI and data protection

ICO

Explaining decisions made with AI

Responsibility explanations help people understand ‘who’ is involved in the development and management of the AI model, and ‘who’ to contact for a human review of a decision. If your system, or parts of it, are procured, you should include information about the providers or developers involved.

Data explanations are about the ‘what’ of AI-assisted decisions. They help people understand what data about them, and what other sources of data, were used in a particular AI decision. Generally, they also help individuals understand more about the data used to train and test the AI model, including who took part in choosing the data to be collected or procured, who was involved in its recording or acquisition, and how procured or third-party data was vetted.
Public Sector Equality Duty

Public authorities must:
  • publish equality information at least once a year to show how they’ve complied with the equality duty
  • prepare and publish equality objectives at least every 4 years
Buying better outcomes

EHRC

The monitoring information helps the contract manager ensure successful delivery of a contract. But the information can also help a public authority meet its duty to be transparent in reporting how it uses its resources, and to what effect.
A Social Value Toolkit for District Councils

Local Government Association (LGA)

To ensure that Social Value offers from Bidders are treated in an open, fair and transparent manner in accordance with the Public Procurement Regulations, the National TOMs as set out in Section B above are used as the basis for enabling bidders to submit their offers and for officers to carry out their evaluation in a fair, open and transparent manner.
Data Ethics Framework

CDDO

1.5 Make your user need and public benefit transparent (transparency)
  • Where can you publish information on how the project delivers positive social outcomes for the public?
  • How have you shared your understanding of the user need with the user?
3.5 Transparency

Publish your DPIA and other related documents.

Document | Public benefit / social value
Understanding artificial intelligence

DSIT and OAI

AI can benefit the public sector in a number of ways. For example, it can:
  • provide more accurate information, forecasts and predictions leading to better outcomes – for example more accurate medical diagnoses
  • produce a positive social impact by using AI to provide solutions for some of the world’s most challenging social problems
  • simulate complex systems to experiment with different policy options and spot unintended consequences before committing to a measure
  • improve public services – for example personalising public services to adapt to individual circumstances
  • automate simple, manual tasks which frees staff up to do more interesting work
A guide to using artificial intelligence in the public sector

DSIT, OAI and CDEI

There are huge opportunities for government to capitalise on this exciting new technology to improve lives. We can deliver more for less, and give a better experience as we do so.

 

For citizens, the application of AI technologies will result in a more personalised and efficient experience. For people working in the public sector it means a reduction in the hours they spend on basic tasks, which will give them more time to spend on innovative ways to improve services.

Guidelines for AI procurement

Office for AI

It [data science] offers huge public benefits in creating better evidence-based policy and in making government operations more targeted and efficient.
Defining the public benefit that the AI system is intended to achieve provides an anchor for the overall project and procurement process. AI technology also brings specific risks which must be identified and managed early in the procurement phase. Explain in your procurement documentation that the public benefit is a main driver of your decision-making process when assessing proposals.
As a general principle any AI procurement should be investigated with the mindset of ‘how could AI technologies potentially benefit us?’ rather than ‘how can we make our problem fit an AI system solution?’.
Procurement Act (Summary)

Government Commercial Function

Contracting authorities must have regard to delivering value for money, maximising public benefit, transparency and acting with integrity.
Buying better outcomes

EHRC

There may be additional equality or social outcomes that generate added value, but are not absolutely necessary for fulfilment of the contract. These might include training and employment opportunities, regeneration objectives, improved labour standards or a more diverse supplier base.

 

Public authorities seeking to gain added value from the contract or to contribute to the wider objectives of the authority can do so in three ways:

  • making them part of the specification
  • including them as part of the terms and conditions, or
  • using voluntary measures.

 

Case law recognises that the criteria for the evaluation of contracts need not be purely economic but can, in appropriate cases, include social and environmental criteria. It may be possible to include the provision of clearly identifiable and measurable social benefits as part of the contract specification and develop appropriate evaluation criteria accordingly.

Procurement Policy Note 10/12: The Public Services (Social Value) Act 2012

Cabinet Office, Efficiency and Reform Group and Crown Commercial Service

Commissioners should consider social value before the procurement starts because that can inform the whole shape of the procurement approach and the design of the services required. Commissioners can use the Act to re-think outcomes and the types of services to commission before starting the procurement process.
The Act places a requirement on commissioners to consider the economic, environmental and social benefits of their approaches to procurement before the process starts. They also have to consider whether they should consult on these issues.
When considering how a procurement process might improve the social, economic or environmental well-being of a relevant area, the authority must only consider matters which are relevant to what is proposed to be procured. The Act also provides that if there is an urgent need to arrange procurement, the requirements to consider consultation and the impact on social, environmental and economic well-being can be disregarded if it is impractical to consider them.
A Social Value Toolkit for District Councils

LGA

Developing a Social Value Policy

The aim of the National TOMs Framework is to provide a minimum reporting standard for measuring Social Value; it has been approved by the LGA National Advisory Group for Procurement.

 

National TOMs – to identify and measure the social value being delivered by a contract:

  • Promoting Skills and Employment: To promote growth and development opportunities for all within a community and ensure that they have access to opportunities to develop new skills and gain meaningful employment.
  • Supporting the Growth of Responsible Local Businesses: To provide local businesses with the skills to compete and the opportunity to work as part of public sector and big business supply chains.
  • Creating Healthier, Safer and More Resilient Communities: To build stronger and deeper relationships with the voluntary and social enterprise sectors whilst continuing to engage and empower citizens.
  • Protecting and Improving our Environment: To ensure the places where people live and work are cleaner and greener, to promote sustainable procurement and secure the long-term future of our planet.
  • Promoting Social Innovation: To promote new ideas and find innovative solutions to old problems.
Social Value has been defined as the additional benefit to the community from a commissioning/procurement process over and above the direct purchasing of goods, services and outcomes.  
The toolkit gives some more detailed guidance on building SV requirements into contracts:

 

The Social Value Portal recommends that a standalone weighting of 10-20% for Social Value is included alongside the Quality/Price matrix for evaluating procurements to ensure that contractors take social value seriously in their bids. Social value bids should be assessed against the criteria laid out within the ITT based on a combination of a quantitative and qualitative assessment.
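
For illustration only, the arithmetic behind a standalone social value weighting could look like the sketch below. The 60/30/10 split and the scores are assumptions made for the example, not figures from the toolkit:

    # Hypothetical weightings: quality 60%, price 30%, social value 10%.
    WEIGHTS = {"quality": 0.60, "price": 0.30, "social_value": 0.10}

    def weighted_score(scores):
        """Combine 0-100 criterion scores using the published weightings."""
        return sum(WEIGHTS[criterion] * score for criterion, score in scores.items())

    bid_a = {"quality": 80, "price": 70, "social_value": 90}
    bid_b = {"quality": 85, "price": 75, "social_value": 40}

    print(weighted_score(bid_a))  # 78.0
    print(weighted_score(bid_b))  # 77.5 (the stronger social value offer is decisive here)
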

The toolkit gives some information on ongoing contract management and remedial measures:

 

Ongoing contract management is extremely important to ensure that the Council receives the benefits of Social Value it agreed when it accepted the offer from the supplier.

 

If, during the delivery phase of the contract, it is considered that the SV commitments and actions committed to by a contractor have not been delivered, remedies may be sought, provided these have been allowed for in the contract.

Data Ethics Framework

CDDO

The Framework involves scoring projects (0–5) against overarching principles of transparency, accountability and fairness.

 

Accountability is part of effective governance, and includes public oversight of stated objectives and decision-making. This ensures that initiatives ‘respond to the needs of the communities they are designed to benefit’.

 

There is emphasis on:

  • understanding user needs
  • being clear on potential benefits and on mitigating potential harms.

 

‘Specific actions’ to support reflection include:

1. Define and understand public benefit and user need

When starting a public sector data project, you must have a clear articulation of its purpose. This includes having clarity on what public benefit the project is trying to achieve and what the needs are of the people who will be using the service or will be most directly affected by it.

Document | Impact assessments
A pro-innovation approach to AI regulation

DSIT and OAI

Assurance techniques like impact assessments can help to identify potential risks early in the development life cycle, enabling their mitigation through appropriate safeguards and governance mechanisms.
Box 3.1: Functions required to support implementation of the framework

 

Monitoring, assessment and feedback

 

Activities

Develop and maintain a central monitoring and evaluation (M&E) framework to assess cross-economy and sector-specific impacts of the new regime.

 

Ensure appropriate data is gathered from relevant sources – for example, from industry, regulators, government and civil society – and considered as part of the overall assessment of the effectiveness of the framework.

 

Support and equip regulators to undertake internal M&E and find ways to support regulators’ contributions to the central M&E function.

 

Monitor the regime’s overall effectiveness, including the extent to which it is proportionate and supports innovation.

 

Provide advice to ministers on issues that may need to be addressed to improve the regime, including where additional intervention may be required to ensure that the framework remains effective as the capability of AI and the state of the art develops.

 

Rationale

This function is at the heart of our iterative approach. We need to know whether the framework is working – for example, whether it is able to respond to and mitigate prioritised risks and whether the framework is actively supporting innovation – and we need the ability to spot issues quickly so we can adapt the framework in response.

 

M&E needs to be undertaken centrally to determine whether the regime as a whole is delivering against our objectives. M&E will assess whether our regime is operating in a way that is pro-innovation, clear, proportionate, adaptable, trustworthy and collaborative.

 

Support for innovators (including testbeds and sandboxes as detailed in section 3.3.4)

Identify cross-cutting regulatory issues that are having real-world impacts and stifling innovation, and identify opportunities for improvement to our regulatory framework.

Guidelines for AI procurement

Office for AI

Conduct initial AI impact assessments at the start of the procurement process, and ensure that your interim findings inform the procurement. Be sure to revisit the assessments at key decision points.
Your AI impact assessment should be initiated at the project design stage. Ensure that the solution design and procurement process seeks to mitigate any risks that you identify in the assessment. Your AI impact assessment should be an iterative process, as without knowing the specification of the AI system you will acquire, it is not possible to conduct a complete assessment. Your AI impact assessment should outline:
  • Your user needs and the public benefit of your AI system.
  • Human and socio-economic impacts of your AI system – this will help to ensure it delivers social value benefits.
  • Consequences for your existing technical and procedural landscape.
  • Data quality and any potential inaccuracy or bias.
  • Any potential unintended consequences.
  • Whole-of-life cost considerations, including ongoing support and maintenance requirements.
  • Associated risks and their respective mitigation strategies must be provided and agreed upon within the impact assessment, and should include ‘go/no go’ key decision points where applicable. Review your impact assessment at these decision points, or every time a substantial change to the design of an AI system is made.
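
One way to make the iterative, ‘go/no go’ structure concrete is sketched below; the record format and the rule that every identified risk needs an agreed mitigation before proceeding are illustrative assumptions, not requirements from the guidelines:

    from dataclasses import dataclass

    @dataclass
    class DecisionPoint:
        """Illustrative record of one go/no-go review of an AI impact assessment."""
        stage: str             # e.g. "pre-tender", "pilot", "go-live"
        unresolved_risks: int  # identified risks with no agreed mitigation strategy

        def go(self):
            """Proceed only when every identified risk has an agreed mitigation."""
            return self.unresolved_risks == 0

    for review in [DecisionPoint("pre-tender", 2), DecisionPoint("pilot", 0)]:
        print(review.stage, "->", "go" if review.go() else "no go")
    # pre-tender -> no go
    # pilot -> go
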
Guidance on AI and data protection

ICO

They [DPIAs] are also an ideal opportunity for you to consider and demonstrate your accountability for the decisions you make in the design or procurement of AI systems.
Buying better outcomes

EHRC

Equality considerations are typically weighted towards the start of the procurement process with less attention given at the monitoring and management stage. However, the obligation to give ‘due regard’ to the PSED continues through the whole of the procurement cycle, and so must be included in the monitoring and management stage.  
It may not be cost-effective or prudent from a risk perspective for the provider to monitor delivery or outcomes on certain contracts due to their low value and/or low contact with the public. In this case the purchasing body may wish to draw on other sources of intelligence, such as consulting with service users or trade unions, reviewing complaints, undertaking mystery shopping or site visits. 
The essential guide to the public sector equality duty

EHRC

Where relevant and proportionate, it may be useful for the contract specification to set out what equality outcomes you require the contractor to achieve. For example, how the goods, service or works that are being procured will meet the needs of people with the protected characteristics, or how take-up will be increased for different groups that may face barriers in accessing the service. 
You may also need to specify what information you need the contractor to collect and report on. For example, you might need to monitor health outcomes for people with learning disabilities in relation to contracted-out health services. If you are covered by the specific duties, this will help you to meet your obligation to publish annual information on your service users.  
A Social Value Toolkit for District Councils

LGA

The National TOMs are supported by a set of ‘Proxy Values’ that allow users to assess the financial impact that the measures will have on society in terms of fiscal savings and local economic benefits. It is of course recognised that social value is not all about ‘money’ but nonetheless this is an important metric to help understand the scale and breadth of impact that a measure can make. Importantly, it allows procuring bodies to compare tenders in a way that is proportional and relevant to the bid, and to better justify a procurement decision.
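
To illustrate how proxy values support tender comparison, the sketch below totals a hypothetical bid's committed measures against made-up proxy figures (the actual National TOMs proxy values differ):

    # Made-up proxy values (GBP per unit of delivery) for illustration only.
    PROXY_VALUES = {
        "local_employee_hired": 25_000,   # fiscal saving plus local economic benefit
        "apprenticeship_week": 150,
        "volunteering_hour": 15,
    }

    def social_value_total(offer):
        """Convert a bidder's committed measures into a single financial figure."""
        return sum(PROXY_VALUES[measure] * units for measure, units in offer.items())

    bid_offer = {"local_employee_hired": 2, "apprenticeship_week": 52, "volunteering_hour": 100}
    print(social_value_total(bid_offer))  # 59300
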
The contractor should also be asked to provide evidence during contract delivery that the SV offer has been delivered. This helps councils to keep track, on an annual (or more frequent) basis, of whether offers have been delivered. The Social Value Portal can assist with this process by ensuring that the winning bidder is required to regularly upload evidence, and by providing reporting such as the example set out below to demonstrate progress by the successful contractor in delivering against the agreed targets.
Data Ethics Framework

CDDO

4. Review quality and limitations of data

Insights from new technology are only as good as the data and practices used to create them. You must ensure that the data for the project is accurate, representative, proportionally used, of good quality, and that you are able to explain its limitations.

 

5. Evaluate and consider wider policy implications

It is essential that there is a plan to continuously evaluate if insights from data are used responsibly. This means that both development and implementation teams understand how findings and data models should be used and monitored with a robust evaluation plan and effective accountability mechanisms. 


Footnotes

[1] ‘The Government Has Abdicated Responsibility for Public Services’ (Institute for Government, 24 November 2023) <https://www.instituteforgovernment.org.uk/comment/autumn-statement-public-services> accessed 24 June 2024.

[2] Elliot Jones, ‘Foundation Models in the Public Sector’ (Ada Lovelace Institute 2023) <https://www.adalovelaceinstitute.org/evidence-review/foundation-models-public-sector/>.

[3] Ibid.

[4] Julia Smakman, Matt Davies and Michael Birtwistle, ‘Mission Critical’ (Ada Lovelace Institute 2023) <https://www.adalovelaceinstitute.org/policy-briefing/ai-safety/> accessed 24 June 2024.

[5] ‘A guide to using artificial intelligence in the public sector’ (GOV.UK) <https://www.gov.uk/government/publications/understanding-artificial-intelligence/a-guide-to-using-artificial-intelligence-in-the-public-sector> accessed 25 June 2024.

[6] ‘Home Office Drops “racist” Algorithm from Visa Decisions’ BBC News (4 August 2020) <https://www.bbc.com/news/technology-53650758> accessed 25 June 2024; Burns, ‘Council Algorithms Mass Profile Millions, Campaigners Say’ BBC News (20 July 2021) <https://www.bbc.com/news/uk-57869647> accessed 25 June 2024; Redden and others (n 20).

[7] Ada Lovelace Institute, Critical Analytics? (2024) <https://www.adalovelaceinstitute.org/report/local-authority-data-analytics/>.

[8] Rachel Hall, ‘UK Public Services in “Doom Loop” Due to Short-Term Policies, Thinktank Warns’ The Guardian (30 October 2023) <https://www.theguardian.com/society/2023/oct/30/uk-public-services-policy-institute-for-government-report> accessed 24 June 2024.

[9] Michael Goodier, Carmen Aguilar García and Richard Partington, ‘How a Decade of Austerity Has Squeezed Council Budgets in England’ The Guardian (29 January 2024) <https://www.theguardian.com/uk-news/2024/jan/29/how-a-decade-of-austerity-has-squeezed-council-budgets-in-england> accessed 24 June 2024.

[10] Eugenio Vaccari and Yseult Marique, ‘One in Five Councils at Risk of “Bankruptcy” – What Happens after Local Authorities Run out of Money’ (The Conversation, 14 February 2024) <http://theconversation.com/one-in-five-councils-at-risk-of-bankruptcy-what-happens-after-local-authorities-run-out-of-money-222541> accessed 24 June 2024.

[11] ‘The Local Government Finance Settlement Is Unlikely to End Council “Bankruptcies”’ (Institute for Government, 21 December 2023) <https://www.instituteforgovernment.org.uk/comment/local-government-finance-settlement-council-bankruptcies> accessed 24 June 2024.

[12] Meri Åhlberg and others, ‘The National Red Index: How to Turn the Tide on Falling Living Standards’ (Citizens Advice Bureau 2024) <https://www.citizensadvice.org.uk/policy/publications/the-national-red-index-how-to-turn-the-tide-on-falling-living-standards/> accessed 24 June 2024.

[13] ‘Health State Life Expectancies in England, Northern Ireland and Wales – Office for National Statistics’ (Office for National Statistics 2024) <https://www.ons.gov.uk/peoplepopulationandcommunity/healthandsocialcare/healthandlifeexpectancies/bulletins/healthstatelifeexpectanciesuk/between2011to2013and2020to2022> accessed 24 June 2024.

[14] Patrick Butler, ‘English Councils Need £4bn to Prevent Widespread Bankruptcy, MPs Say’ The Guardian (1 February 2024) <https://www.theguardian.com/uk-news/2024/feb/01/english-councils-need-4bn-to-prevent-widespread-bankruptcy-mps-say> accessed 24 June 2024.

[15] Harry Quilter-Pinner and Halima Khan, ‘Great Government: Public Service Reform in the 2020s’ (2023) <https://www.ippr.org/articles/great-government> accessed 24 June 2024.

[16] ‘Improving Productivity Could Release Tens of Billions for Government Priorities – NAO Insight’ (National Audit Office, 16 January 2024) <https://www.nao.org.uk/insights/improving-productivity-could-release-tens-of-billions-for-government-priorities/> accessed 25 June 2024.

[17] Gareth Davies, ‘Use of artificial intelligence in government’ (National Audit Office 2024) <https://www.nao.org.uk/wp-content/uploads/2024/03/use-of-artificial-intelligence-in-government.pdf>.

[18] ‘Evaluation of Foundation Models’ (Ada Lovelace Institute) <https://www.adalovelaceinstitute.org/project/evaluation-foundation-models/> accessed 25 June 2024.

[19] Jones (n 2).

[20] Joanna Redden and others, ‘Automating Public Services: Learning from Cancelled Systems’ (Data Justice Lab 2022) <https://d1ssu070pg2v9i.cloudfront.net/pex/pex_carnegie2021/2022/09/21101838/Automating-Public-Services-Learning-from-Cancelled-Systems-Final-Full-Report.pdf>.

[21] Ed Sheridan, ‘Town Hall Drops Pilot Programme Profiling Families without Their Knowledge’ (Hackney Citizen, 30 October 2019) <https://www.hackneycitizen.co.uk/2019/10/30/town-hall-drops-pilot-programme-profiling-families-without-their-knowledge/> accessed 25 June 2024.

[22] Mark Say, ‘Met Police Decommissions Matrix Gang Database’ (UKAuthority) <https://www.ukauthority.com/articles/met-police-decommissions-matrix-gang-database/> accessed 25 June 2024.

[23] Jones (n 2).

[24] Smakman, Davies and Birtwistle (n 4).

[25] Ada Lovelace Institute, A knotted pipeline (2022) <https://www.adalovelaceinstitute.org/report/knotted-pipeline-health-data-inequalities/> accessed 4 June 2024.

[26] ‘2022 Local Government Workforce Survey’ (Local Government Association 2023) <https://www.local.gov.uk/publications/2022-local-government-workforce-survey>.

[27]  Department for Science, Innovation and Technology and Office for AI, A pro-innovation approach to AI regulation (2023) <https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper> accessed 25 June 2024.

[28] The CDEI has since been renamed as the Responsible Technology Adoption Unit (RTA).

[29] Department for Science, Innovation and Technology, Office for Artificial Intelligence and Centre for Data Ethics and Innovation, ‘Assessing if artificial intelligence is the right solution’ (2019) <https://www.gov.uk/guidance/assessing-if-artificial-intelligence-is-the-right-solution> accessed 25 June 2024.

[30] Department for Science, Innovation and Technology, Office for Artificial Intelligence and Centre for Data Ethics and Innovation, ‘Understanding artificial intelligence’ (2019) <https://www.gov.uk/government/publications/understanding-artificial-intelligence> accessed 22 July 2024.

[31] Ibid.

[32] Department for Science, Innovation and Technology, Office for Artificial Intelligence and Centre for Data Ethics and Innovation, ‘Understanding artificial intelligence ethics and safety’ (GOV.UK) <https://www.gov.uk/guidance/understanding-artificial-intelligence-ethics-and-safety> accessed 2 August 2024.

[33]  David Leslie, ‘Understanding Artificial Intelligence Ethics and Safety’ (The Alan Turing Institute 2019) <https://www.turing.ac.uk/sites/default/files/2019-06/understanding_artificial_intelligence_ethics_and_safety.pdf>.

[34] ‘Guidelines for AI procurement’ (Office for Artificial Intelligence 2020) <https://assets.publishing.service.gov.uk/media/60b356228fa8f5489723d170/Guidelines_for_AI_procurement.pdf>.

[35] ‘Assessing if artificial intelligence is the right solution’ (n 29).

[36] ‘Understanding artificial intelligence’ (n 30).

[37] ‘Guidelines for AI procurement’ (n 34).

[38] Ibid.

[39] Jones (n 2).

[40] A pro-innovation approach to AI regulation (n 27).

[41] Ibid.

[42] ICO, ‘What about Fairness, Bias and Discrimination?’ (19 January 2024) <https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/how-do-we-ensure-fairness-in-ai/what-about-fairness-bias-and-discrimination/> accessed 25 June 2024.

[43] Central Digital and Data Office, ‘Data Ethics Framework’ (2020) <https://www.gov.uk/government/publications/data-ethics-framework/data-ethics-framework-2020> accessed 25 June 2024.

[44] Equality and Human Rights Commission, Buying better outcomes: mainstreaming equality considerations in procurement (2022) <https://www.equalityhumanrights.com/guidance/public-sector/guidance-public-sector-procurement>.

[45] Ibid.

[46] ‘Guidelines for AI procurement’ (n 34).

[47] Matthew Ryder and Jessica Jones, ‘Facial Recognition Technology Needs Proper Regulation – Court of Appeal’ (Ada Lovelace Institute, 14 August 2020) <https://www.adalovelaceinstitute.org/blog/facial-recognition-technology-needs-proper-regulation/> accessed 25 June 2024.

[48] ‘Data Ethics Framework’ (n 43).

[49] Grant Fergusson, ‘Outsourced and Automated: How AI Companies Have Taken over Government Decision-Making’ (Electronic Privacy Information Center 2023) <https://epic.org/wp-content/uploads/2023/09/FINAL-EPIC-Outsourced-Automated-Report-w-Appendix-Updated-9.26.23.pdf>.

[50] ‘The Procurement Act – a Summary Guide to the Provisions’ (GOV.UK) <https://www.gov.uk/government/publications/the-procurement-bill-summary-guide-to-the-provisions/the-procurement-bill-a-summary-guide-to-the-provisions> accessed 30 July 2024.

[51] ‘How to Prepare for the Procurement Act 2023 – Procurement Essentials | CCS’ <https://www.crowncommercial.gov.uk/news/procurement-essentials-procurement-act-2023-crown-commercial-service> accessed 30 July 2024.

[52] ‘Transforming Public Procurement – Our Transparency Ambition’ (Cabinet Office 2022) Policy paper <https://www.gov.uk/government/publications/transforming-public-procurement-our-transparency-ambition/transforming-public-procurement-our-transparency-ambition> accessed 25 June 2024.

[53] ‘Guidelines for AI procurement’ (n 34).

[54] ICO, ‘How Do We Ensure Transparency in AI?’ (19 May 2023) <https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/how-do-we-ensure-transparency-in-ai/> accessed 25 June 2024.

[55] ‘Understanding artificial intelligence ethics and safety’ (n 32).

[56] ‘Guidelines for AI procurement’ (n 34).

[57] The Equality Act 2010 (Specific Duties) Regulations 2011.

[58] Buying better outcomes (n 44).

[59] ‘Assessing if artificial intelligence is the right solution’ (n 29).

[60] ICO and The Alan Turing Institute, ‘Explaining Decisions Made with AI’ (3 February 2024) <https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/explaining-decisions-made-with-artificial-intelligence/> accessed 25 June 2024.

[61] Jones (n 2).

[62] Matt Davies and Elliot Jones, ‘Foundation Models in the Public Sector [Policy briefing]’ (Ada Lovelace Institute 2023) <https://www.adalovelaceinstitute.org/policy-briefing/foundation-models-public-sector/>.

[63] Anna Humpleby, ‘Using Copilot in Local Government: Lessons from the LOTI AI Meetup’ (LOTI, 10 January 2024) <https://loti.london/blog/using-copilot-in-local-government-lessons-from-the-loti-ai-meetup/> accessed 25 June 2024.

[64] ‘Assessing if artificial intelligence is the right solution’ (n 29).

[65] Buying better outcomes (n 44).

[66] A pro-innovation approach to AI regulation (n 27).

[67] ‘Data Ethics Framework’ (n 43).

[68] ‘The Essential Guide to the Public Sector Equality Duty: England (and Non-Devolved Public Authorities in Scotland and Wales)’ (Equality and Human Rights Commission 2022) <https://www.equalityhumanrights.com/essential-guide-public-sector-equality-duty>.

[69] Buying better outcomes (n 44).

[70] Public Services (Social Value) Act 2012.

[71] Buying better outcomes (n 44).

[72] Ibid.

[73] ‘Data Ethics Framework’ (n 43).

[74] Cabinet Office Policy Notes are guides that local authorities can use to ensure they are adhering to legislation.

[75] Buying better outcomes (n 44).

[76] Public Services (Social Value) Act 2012.

[77] ‘Procurement Policy Note 06/20 – Taking Account of Social Value in the Award of Central Government Contracts’ (Cabinet Office, Department for Culture, Media and Sport and Department for Digital, Culture, Media & Sport 2020) <https://www.gov.uk/government/publications/procurement-policy-note-0620-taking-account-of-social-value-in-the-award-of-central-government-contracts>.

[78] Buying better outcomes (n 44).

[79] ‘Guidelines for AI procurement’ (n 34).

[80] ‘A Social Value Toolkit for District Councils’ (Local Government Association) <https://www.local.gov.uk/sites/default/files/documents/District%20Councils%20Social%20Value%20Toolkit%20Final_0.pdf>.

[81]  ‘Guidelines for AI procurement’ (n 34).

[82] Public Services (Social Value) Act 2012.

[83] ‘Guidelines for AI procurement’ (n 34).

[84] ‘A Social Value Toolkit for District Councils’ (n 80).

[85] ‘Data Ethics Framework’ (n 43).

[86] ‘A Social Value Toolkit for District Councils’ (n 80).

[87] Ada Lovelace Institute, Regulate to innovate: A route to regulation that reflects the ambition of the UK AI Strategy (2021) <https://www.adalovelaceinstitute.org/report/regulate-innovate/>.

[88] A pro-innovation approach to AI regulation (n 27).

[89] A pro-innovation approach to AI regulation (n 27).

[90] ‘Guidelines for AI procurement’ (n 34).

[91] Ibid.

[92] Ibid.

[93] Ibid.

[94] General Data Protection Regulation 2016.

[95] ICO, ‘What Are the Accountability and Governance Implications of AI?’ (19 May 2023) <https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/what-are-the-accountability-and-governance-implications-of-ai/> accessed 25 June 2024.

[96] Buying better outcomes (n 44).

[97] Doug Pyper, ‘The Public Sector Equality Duty and Equality Impact Assessments’ (2020) <https://researchbriefings.files.parliament.uk/documents/SN06591/SN06591.pdf>.

[98] Buying better outcomes (n 44).

[99] Smakman, Davies and Birtwistle (n 4).

[100] BBC News (n 6); Burns (n 6); Redden and others (n 20).

[101] Imogen Parker, ‘Ada Lovelace Institute Statement on Spring Budget 2024’ (Ada Lovelace Institute, 6 March 2024) <https://www.adalovelaceinstitute.org/press-release/spring-budget-2024-ai/> accessed 25 June 2024.

[102] Davies, ‘Use of artificial intelligence in government’ (n 17).

[103] ‘Data Ethics Framework’ (n 43).

[104] A pro-innovation approach to AI regulation (n 27).

[105] ‘National Procurement Policy Statement’ (HM Government 2021) <https://assets.publishing.service.gov.uk/media/60b0c048d3bf7f4355c1b800/National_Procurement_Policy_Statement.pdf>.

[106] A pro-innovation approach to AI regulation (n 27).


Image credit: Drazen_