Report

Critical analytics?

Learning from the early adoption of data analytics for local authority service delivery

Laura Carter

21 June 2024



Foreword

Like other parts of our ever-changing world, local governments are grappling with advanced data analytics and artificial intelligence (AI), and their potential to change lives for the better. This comes at a time when local authorities across the UK face unprecedented demand and increasingly complex needs across housing, welfare support and social care services. Our drive to be innovative is born out of necessity. Since 2010, local governments have absorbed real-terms cuts to core spending power of 27%.[1] If we are going to provide services that support people at the right moment and help prevent personal issues from becoming social problems, the use of AI and intelligent data is an avenue worth exploring.

At the London Borough of Barking & Dagenham (LBBD), we began our advanced data analytics journey in late 2019, just before the pandemic hit. Local authorities are known for running schools and collecting bins. But we also hold a wide range of data about our residents to help deliver services more effectively and protect the most vulnerable people in society. Our analytics platform, OneView, combines data from children’s and adults’ social care, housing, the school census, revenues and benefits to create a ‘single view of the resident’. Access to this data is restricted, and it is used on a need-to-know basis by different services. But it has transformed how we work in three important ways:

Case summaries: Our emerging localities model is supporting thousands of residents each month by creating community hubs, staffed by council officers and members of the community sector, across a range of locations in our borough. If residents provide consent when they contact us, frontline officers can access core information about services such as council tax, debt advice and housing. When crisis hits, it tends to touch people’s lives in multiple ways. Seeing this data means we can intervene in a more holistic and preventative way. We have more comprehensive data about residents’ situations, helping us provide a better service for residents, and saving hundreds of hours of staff time each month.

Dashboards: Dashboards have transformed our approach to delivering the Department for Levelling Up, Housing and Communities ‘Supporting Families’ programme. Prior to the introduction of OneView, we relied on Microsoft Excel data-matching exercises – a resource-intensive process that consumed considerable time and effort from teams across the Council.

OneView has significantly automated our processes, freeing up staff time. This has contributed substantially to LBBD continuing to meet increasingly challenging programme targets and has seen us become one of only 14 local authorities awarded Earned Autonomy status, meaning we receive funding upfront rather than after results are delivered.

Cohort identification: This is the ability to identify residents who meet certain criteria and to undertake proactive outreach. This was vital to our COVID-19 response. It meant we were able to identify thousands of residents at risk and contact them to offer shielding support weeks before data was shared by central government and health partners, potentially saving countless lives.

We want to make sure we share the lessons we have learned as we have tested new approaches to service delivery using data. This report is based on research done in 2020, in the first few months of the OneView programme, and during the first months of the COVID-19 pandemic. We have learned – and changed – a lot since researchers from the Ada Lovelace Institute took this snapshot of the programme.

The long tail of the pandemic has highlighted intransigent inequalities, ingrained need and the everyday realities of a fast-changing, increasingly young East End community that has more in common with Blackburn and Bradford than Wandsworth or Westminster.

Our broad universal service offer works for most but can fail to meet the needs of residents heading towards crisis. That is why today, we use data to target our prevention efforts. As rising costs of care services and the cost-of-living crisis continue to play out in households and town halls up and down the country, work that brings health, social care, and public and voluntary sector partners together in focused locality settings has never been more important. Data-driven services have a vital role to play in this.

Another lesson is the importance of good governance. This goes hand-in-hand with innovation. As we have developed our model, our approach to information governance, ethics and procurement has evolved and matured. We have engaged extensively with the Information Commissioner’s Office and the Equalities and Human Rights Commission about our work and continue to develop our ethics programme. We have reprocured the platform using a different methodology and have changed the underlying business case for the OneView programme. As such, the documentation around information governance, ethics and procurement referred to in the report is out of date. Nevertheless, this report will help local authorities that are at the beginning of their data analytics journey and highlight the challenges they are likely to face.

Local authorities need to be clear-eyed about the potential uses for this type of infrastructure: holistic support, automation, prevention, crisis response and so on. But we should avoid overspecification in advance of delivery. We couldn’t have foreseen the pandemic (or at least the risk was too low to justify major upfront investment in data infrastructure), but having good data infrastructure is likely to have saved lives. Priorities will change; good data infrastructure will support delivery across these.

Our experience with OneView so far has centred on more effective data-sharing across the Council. In future, we are looking to use more sophisticated approaches, such as using advanced analytics to identify who would benefit from early intervention or who is at risk of falling through the cracks in public services.

Ultimately, we understand there are risks to working in more data-driven ways. But there are also risks in not doing so. What might the next pandemic or cost-of-living crisis response look like if each council could quickly identify which of its residents were at risk or in need? Can we effectively prevent poor outcomes like debt and homelessness by waiting for people to come to us when they are in crisis? Will we be able to keep children safe or work effectively with healthcare services without improved use and sharing of our data? We remain convinced that improved use of data will change the way that government works in the same way it has disrupted many other sectors. And we see this as a good thing: more efficient delivery for government, better services for residents.

Fiona Taylor
Chief Executive, London Borough of Barking & Dagenham

 

Executive summary

Data analytics for decision-making

As the use of data-driven systems and artificial intelligence (AI) in public services accelerates, it is vital to understand how these technologies can be used safely, equitably and beneficially in the public sector. The growing use of advanced data analytics to automate or support decision-making will shape the future delivery of public services and will have a substantial effect on individual people and broader society.

In addition, the recent boom in applications of newer technologies such as generative AI and large language models (LLMs) has increased the importance of understanding the impact of data-driven technologies. Yet governments, regulators, public bodies and researchers lack the evidence to fully understand how these systems are designed and deployed, or whether intended goals translate into positive outcomes for services and society.

This report offers evidence drawn from a case study of data analytics in public services. It documents the experience of one UK local authority – the London Borough of Barking & Dagenham (LBBD or the Council) – beginning to roll out a major programme of data analytics as part of its delivery of services. It examines how Council staff working in children’s social care and on the COVID-19 response experienced the early deployment of data analytics between May and September 2020.

While this report offers only a snapshot of the development and deployment of these tools, the rare opportunity to understand how they were perceived and used by different actors across a public service provides valuable evidence to inform debates. We have used these findings to draw out insights and recommendations that will build knowledge and support future decision-making when it comes to the use of these tools in public services.

We recognise and appreciate the Council’s willingness to open up its practices at an early stage of development. This supports transparency and enables wider learning about data analytics in context.

Data sharing and analytics

Policymakers from central and local government, as well as academics and researchers, have identified significant potential benefits of using data more effectively in local government. Better data sharing could help services understand the communities they seek to support, improving stretched and disjointed services. Data sharing and analytics could bring information together, enable earlier and more targeted interventions, tailor services to individuals, forecast and anticipate future requirements, offer better-connected care and triage support to those most in need. The claim that data analytics can reduce the cost of delivering services is particularly salient for local authorities because of the significant decline in real-terms grant income over the past decade.

As well as these promised benefits, however, there is emerging evidence of the limitations of data analytics systems. These exist in relation to functionality, fairness, legitimacy and public acceptability in domains as diverse as facial recognition in policing,[2] the level of scrutiny applied to visa applications[3] and exam grading in education.[4] While some of these specific uses of AI have attracted public attention in the UK, public awareness of the use of data analytics tools in the public sector remains low: just 19% of the public, for example, are aware of the use of AI for assessing eligibility for welfare benefits.[5] There is no clear mechanism for ensuring transparency of data analytics across the public sector and there is no systematic understanding of how data practices are evolving.

The observations and analysis of this report aim to recognise and support the potential benefits of these systems, while filling in knowledge gaps and identifying areas of practice and policy where there is potential for harm and a need for more research. It offers evidence about the complex realities of rolling out these types of systems.

While this report provides only a snapshot of the Council’s practice at an early stage of development in 2020, it will be useful to local authorities considering or actively deploying similar data analytics systems, and to the Government and regulators currently grappling with data and AI governance. We hope it supports a collective understanding across policymakers and practitioners that informs decision-making about when and how these systems can be used.

Data analytics at the London Borough of Barking & Dagenham

In 2020, at the beginning of the COVID-19 pandemic, the Ada Lovelace Institute was invited to undertake a short, independent piece of ethnographic research into the early-stage deployment of a data analytics system in the London Borough of Barking & Dagenham. We investigated the Council’s experience of using the OneView data analytics system, developed by private technology provider Xantura, in two areas: children’s social care and the response to COVID-19. LBBD presented a unique site of research, as it was at that time a pioneer in using data analytics in the delivery of public services.

Our research took place against a backdrop of challenging circumstances for local authorities seeking to implement data-driven technologies, which still persist today. Ethical frameworks to guide the use of data and AI were still developing, and law and regulation had yet to catch up with deployed technologies. Councils were under financial pressure with decreasing centrally provided funding. It was in this context that LBBD decided to put data at the heart of its strategies for public service provision, investing in an internal data science team to lead its work.

At LBBD we observed the OneView system bringing together multiple Council data sources – Adult Social Care, Children’s Social Care, Housing, Revenue & Benefits and Education – and the analysis and predictive modelling carried out on that aggregated data. We saw this data used to support borough-wide service provision, forming part of the information that frontline social workers used to make decisions about interventions and care.

To help create a more comprehensive picture of service use for the Council, the OneView system matched data from different sources. This data could be combined into new analyses to better understand factors that affected resident populations. For example, the system could produce a summary of information held by different Council services about an individual to identify those ‘at-risk’, allowing social workers and other frontline staff to understand what support was already being offered, and to intervene earlier than they would otherwise have done. This could mean identifying someone in contact with both children’s services and housing services, or making predictions about future outcomes for individuals or groups.

Methodology

To gather evidence on these practices, we used a combination of ethnographically informed research methods – including online organisational ethnography, semi-structured online interviews, informal conversations and documentary analysis. The research surfaces real-life detail that provides a nuanced, contextualised understanding of how the Council used advanced data analytics between May and September 2020 to support the provision of local government services.

We did not seek to fully document the internal modelling used by the OneView system, nor did we assess the impact on residents whose data was used in the system. To understand in depth how different staff understood and used advanced data analytics in their work, the study focused on three outputs from OneView:

  1. Case summaries synthesising information from multiple data sources in a single view for frontline social workers.
  2. Predictive alerts about individuals who, according to predictive modelling, were at risk of specific events – such as becoming homeless, being stepped up or down in children’s social care, or being admitted to hospital – within the next 12 months.
  3. COVID-19 case management to filter and group residents according to COVID-19 risk factors, deployed in the early months of the COVID-19 pandemic in the UK.

Beyond the technology

It is clear from this research that ensuring a data analytics system delivers value to frontline users and the public is complex, and requires expertise and extensive investment beyond the technology itself. Currently the public sector does not have clear, definitive guidance on how to navigate the technical and ethical issues arising from novel technologies, and there is no agreed assessment or evaluation system for these tools. This leaves individual frontline services grappling with complex sectoral challenges. These challenges will be compounded as more complex, novel and opaque AI applications are adopted.

We found that in this early phase of the Council’s data analytics rollout:

  • There was a clear, consistent and valued high-level vision for service transformation based on greater use of data analytics. Rising demand for services and financial pressures may have catalysed new data practices, but they were not predominantly seen as a cost-cutting measure. Interviewees understood the aim to be improving relationships with residents.
  • Impact and evaluation tools were still being developed after the analytics tools had been rolled out and become operational.
  • There wasn’t one perspective on analytics use. Staff views differed based on how and where they were using the system, and how it performed in relation to their roles and responsibilities.
  • OneView proved useful for staff as a COVID-19 case management system. Factors that contributed to its value were its clear and narrow purpose, transparent and visible risk factors, and staff confidence in how to use the outputs in their work and how to describe the benefits.
  • Case summaries were felt to be potentially useful for frontline staff engaging with residents in early stages of intervention where the staff had less information.
  • The use of case summaries and predictive alerts for children’s social care was not valued or accepted by most frontline staff interviewed. This was partly due to concerns about the accuracy of predictive models in the context of children’s social care.
  • Staff felt there wasn’t adequate transparency around OneView predictive modules, in particular on what factors contributed to case summaries and predictive alerts, and the rationale for recommendations and outputs. Some frontline staff were unconvinced that the analytics were as objective, neutral or accurate as had been described to them.
  • These transparency questions – and concerns about accuracy, objectivity and legitimacy – contributed to a lack of trust and decreased use by frontline social workers of the tool designed to assess child risk.
  • Staff across the Council recognised that ethics and values were important for OneView but lacked a coherent account of what ethical practice meant and how it was being embedded. The multiple different conceptions of ethical practice ranged from good intentions to improve outcomes for residents, to compliance with information governance obligations. The lack of shared, accessible and clearly defined processes to operationalise ethics across the system meant that there was a risk that some harms would not be mitigated.

These findings reflect only a snapshot period in the rollout of these tools for the Council rather than an assessment of their final version.

Conditions for beneficial use of data analytics in local authorities

 

While the approach taken by LBBD was uniquely tailored to its services, the questions, approaches and challenges arising from embedding tools like these will be common across many local services. The observations from this snapshot will therefore be useful to inform the future of data and AI practice in public services. Using the insights from this research and the existing literature, we have identified several conditions that should be met before deploying data analytics tools in public services.

  • Our research uncovers several prerequisites for data analytics to be used and trusted by frontline workers. First, the required output from the system must be clearly specified and understood by all users. Secondly, the tools must be seen by the public as legitimate. And finally, the accuracy of the system must be high enough to be trusted. When risk scoring is applied to more complex social situations, where terminology is unclear or value-laden and where affected publics may be unaware or uncomfortable with the use of data analytics, the tools are less likely to be trusted and used by the frontline professionals they are intended to support.
  • To assess the efficacy of data analytics, it is necessary to have a clear articulation of successful outcomes specific to different stakeholders and a strategy for measuring impact. Without these it is not possible to determine whether the deployment of data analytics has delivered the anticipated benefits or is working effectively. This is also important for pilot programmes, where success criteria should be used to assess whether a data analytics system should be widely deployed.
  • Ethical principles for the use of data analytics should be defined, holistic, accessible and usable by everyone involved in using the analytics system. These principles should be consistent with – but not limited to – other obligations, including equalities and data protection obligations.
  • Local authorities require better guidance to ensure that their use of data analytics complies with both data protection and equalities obligations (particularly the monitoring obligations). It was not within the scope of our work to assess LBBD’s compliance with the UK General Data Protection Regulation (UK GDPR), nor did we observe breaches of this legislation. Nonetheless, we observed that the lack of clear regulatory guidance about the similarities and differences between ‘special category data’ under the UK GDPR and ‘protected characteristics’ under the Equality Act 2010 – as well as the different obligations under these pieces of legislation – meant that internal discussions about data protection and equalities obligations risked conflating and confusing these concepts and obligations.
  • The development, implementation and evaluation of data analytics must look at any tool in the context of the whole system into which it has been introduced – including the technical and social elements. Our research also found that the development and deployment of the OneView system had an impact not only on IT systems in the Council but also on the day-to-day work and practice of frontline workers. Interviewees highlighted how the system could affect the relationships between residents and council staff that are crucial for effective, trusted social work and which require data use that is seen as legitimate. For example, they cited the potential damage to trust if a resident felt that the social worker had information about sensitive issues that the resident had not shared with them.
  • Procuring and implementing a system like OneView needs to be well thought through, consulted on, tested, discussed and evaluated against defined success criteria, with the outcome that all staff should be able to understand, describe and use the system to support their day-to-day work. Data analytics systems may prove to be useful in providing local authority services, but they should not be seen as a quick, cheap or easy solution.
  • In conclusion, we found that introducing a data analytics system into an existing, complex context such as a local authority is a significant task that requires considerable resource, effort and the involvement of multiple in-house and externally contracted staff – including technical staff, decision-makers and frontline staff.

 

To support the public sector in meeting the conditions listed above, we have developed a series of specific recommendations. These are for local authorities seeking to implement data analytics systems, regulators and policymakers providing guidance and support on data protection and procurement, and companies developing and supplying systems to the public sector.

Recommendations

We recommend that local authorities implement the following actions:

  • Ensure that data analytics systems are explainable, in line with the guidance produced by the Information Commissioner’s Office (ICO) and the Alan Turing Institute.[6] These explanations should:
    • be accessible to all stakeholders, including frontline workers and the people whose data is used in the system
    • include the purpose and target group, factors and underlying values that are used as features in models, and the rationale for using those factors
    • include mechanisms for human review where data-analytics-informed decisions produce undesirable outcomes and redress may be required.
  • Complete algorithmic transparency reports for all data analytics systems, providing clear information for residents about each system; upload these to the repository overseen by the Responsible Technology Adoption Unit (RTA) and the Central Digital and Data Office (CDDO); and regularly review and update the reports.
  • Include the development of clear and actionable success criteria and plans for how these will be evaluated in the procurement and implementation of analytics systems, including in pilot deployments. In developing success criteria and evaluation plans, local authorities should:
    • develop success criteria and evaluation methods for the system as a whole with the participation of those who will be most affected by the use of the system
    • where benefits are anticipated for a particular group – for example, frontline social workers or service users – ensure that this group participates in developing success criteria and evaluating whether the benefits have been achieved.
  • Carry out equalities impact assessments when developing and deploying data analytics systems.
  • Develop, share and train users in ethical principles for the use of data analytics that are holistic, accessible and usable by everyone involved in using the system. To realise this, local authorities should:
    • consider the needs of different communities
    • ensure the principles are consistent with – but not limited to – other obligations, including equalities and data protection obligations
    • develop and implement clear practices that operationalise ethical principles, such as documentation practices and testing/evaluation schemes that support understanding of the impact of these systems
    • clearly assign practices to particular stakeholders, including the ‘upstream’ developer of that system where necessary.
  • During the procurement process, establish clear requirements and processes to ensure that technical teams can access the underlying data and model of the system for algorithmic auditing and testing purposes.
  • Develop, implement and evaluate data analytics in the context of the whole system into which it has been introduced – including both technical and social elements. This includes data analytics systems and tools developed by private companies.

We recommend that regulators and policymakers consider the following points:

  • The Equality and Human Rights Commission (EHRC) and the ICO should continue to collaborate to ensure their guidance is accessible, fit-for-purpose and enables staff across a wide range of local authority functions (and other public-sector institutions) to handle the use of, or exclusion of, special category data, in particular with regard to:
    • its use in data analytics and predictive analytics systems
    • its use in equalities monitoring of the use of these systems
    • compliance with the Equality Act 2010, the UK GDPR and Article 14 of the European Convention on Human Rights (as given effect by the Human Rights Act 1998).
  • The CDDO and the RTA should continue the push for the Algorithmic Transparency Recording Standard to be a mandatory requirement and extend that requirement to local government.
  • The Crown Commercial Service (CCS) should develop model contract clauses for the use of data analytics in local authorities. The clauses should:
    • state that developers must ensure that tools are compliant with EHRC and ICO guidelines
    • ensure local authorities have a contractual right to gain the appropriate level of access to the underlying model and training data, so that they can perform evaluations and test accuracy and efficacy.
  • The CCS should also design and pilot an Algorithmic Impact Assessment (AIA) standard for local authorities to use when procuring data analytics systems (and other AI-powered systems).[7] These assessments are performed in the early stages of the design and development process of a data analytics tool and can help identify potential risks or issues for the local authority to address with the developer. AIAs could also enable more public participation in the technology procurement process.
  • Relevant regulators and central Government departments should be resourced and empowered to improve processes and standards for data analytics use in public-sector delivery.

We recommend that companies developing and supplying data analytics tools and systems to the public sector implement the following actions:

  • Provide clear explanations for how tools and systems work, as well as access to systems to enable audits and evaluations of how a tool produces outputs. Failing to provide this information may make tools and systems unusable, as frontline staff will lack confidence in their use. To deliver on this, companies must provide public-sector clients with:
    • the access needed to audit and evaluate tools and systems before procurement, and at regular intervals afterwards
    • clear information on where data used to train systems comes from, available via a document such as a datasheet
    • easily understandable documentation explaining how a system operates.
  • Allow for independent evaluation of the efficacy of data analytics systems in practice, rather than only in lab settings.
  • Design these systems in close consultation with frontline workers and residents who may be impacted by their use. Specifically:
    • Work with local authorities to design data analytics systems with the participation of residents who will be impacted by them, to ensure that systems better reflect the lived experiences of those they are meant to serve.
    • Work with frontline workers from the early design stages to study how a data analytics system will be used in practice.
    • Create ways for frontline workers and residents to identify and report errors and issues from the beginning of deployment, including in pilots.
  • Ensure their practices are compliant with laws and ethical obligations, and enable regulatory compliance for public-sector clients. Specifically, companies should ensure they:
    • understand and operate within the ethical and legal obligations of public-sector clients, and work to enable clients to meet those obligations
    • where necessary, give members of a local authority’s data science or technical team access to the underlying models and training data, so that they can perform bias auditing and evaluations
    • support public engagement efforts with residents and frontline workers who will be impacted by these tools.

Acknowledgements

This report was written by Laura Carter with significant contributions from Imogen Parker, Renate Samson and Octavia Reeve.

The research was carried out by former Ada staff members Tom Walker and Jenny Brennan, and independent consultant Dan Artus.

The initial analysis of this research was carried out by Tom Walker and reviewed by Tim Adams, Beverley Barnett-Jones, Vicky Clayton, Luke Geoghegan, Sarah Gorin, Josefine Magnusson, Paul Waller, Calum Webb and Thomas Vogl.

We are grateful to staff at the London Borough of Barking & Dagenham for sharing learnings and being open about their practices at an early stage of the organisation’s data analytics development.

We would also like to thank Professor Hannah Knox, Associate Professor of Anthropology at UCL (University College London) for her oversight, and independent consultant Jennifer Cearns for her engagement with the project.

Publication of the report was overseen by Fran Bennett, a member of the Oversight Board at the Ada Lovelace Institute and interim Director of the Institute from May 2023 to June 2024. Fran has previously worked with local government data science, including as a Director of Mastodon C (a public-sector data science consultancy) until May 2019.

How to read this report

If you are part of a technical team responsible for procuring, developing and/or implementing data analytics in a local authority:

  • See Insight 4 for the impact of opacity in data analytics systems on frontline workers’ trust in their outputs.
  • See Insight 5 for the benefits of explainable and understandable outputs on trust.
  • See Insight 5 for our recommendations to local authorities on explainability of data analytics systems.
  • We also recommend that local authorities complete algorithmic transparency reports and upload these to the repository overseen by the RTA and CDDO (as described in the Recommendations section).

If you are a local authority decision-maker involved in procuring data analytics, or if you are in central Government and providing guidance to local authorities:

  • Insight 1 describes the challenges of evaluating data analytics deployment without a clear and consistent articulation of success criteria.
  • Our recommendations to local authorities on success criteria and evaluation are in the Key recommendations section of Insight 1.
  • We also recommend that local authorities, as part of the procurement process, establish clear requirements and processes to ensure that technical teams can access the underlying data and model of the system for algorithmic auditing and testing purposes (see Recommendations in the Conclusions section).

If you are an elected official or manager with overall responsibility for local authority data analytics:

  • See Insight 3 for our findings on the impact of the development and deployment of data analytics on the day-to-day work and practice of frontline workers.
  • See the Key recommendations section of Insight 3 for our recommendations on evaluating data analytics systems in the context of the whole system into which they are being introduced – including both technical and social elements.

If you are responsible for the ethical use of local authority data analytics:

  • See Insight 6 for our findings on the wide range of concepts and practices which were understood to be part of ‘ethical’ practice across the entire system.
  • See the Conclusions section for our recommendation on developing ethical interventions that are holistic, accessible and usable by everyone involved in using the system.

If you are responsible for data protection or equalities monitoring in a local authority:

  • See Insight 2 for a discussion of the lack of clarity about the similarities and differences between ‘special category data’ under the UK GDPR and ‘protected characteristics’ under the Equality Act 2010.
  • In the Key recommendations section of Insight 2, we make specific recommendations to the ICO and the EHRC to ensure guidance for local authorities and other public-sector institutions is accessible and fit-for-purpose to help them comply with data protection and equalities legislation.
  • In the same section, we also recommend that local authorities carry out equalities impact assessments when developing and deploying data analytics systems.

If you are part of a company building technology for local authorities:

  • Insight 4 describes the impact of a lack of clarity in how outputs are generated, and Insight 3 describes the impact of the development and deployment of data analytics on the day-to-day work of frontline workers.
  • Our recommendations to companies on the design of, access to and explanations of data analytics systems are within the Recommendations in the Conclusions section.

If you are a frontline worker interested in the opportunities and challenges of data analytics in local government services:

  • We found that data analytics systems may be useful: however, they are not a quick, cheap or easy solution to local authority problems. Rolling out a data analytics system is a complex task that requires considerable time and effort from everyone involved – including technical staff, decision-makers and frontline social services staff – to embed the system into an existing complex system such as social care.
  • We describe the landscape of the developments, uses, opportunities and challenges of predictive analytics in the public sector in the Introduction.
  • Throughout the report, we illustrate our findings with quotes from interviewees in LBBD with direct experience of the OneView system.

If you are a researcher:

  • We describe our understanding of the OneView system as it was in use in LBBD in 2020 in The OneView system section.
  • We used a combination of ethnographically informed research methods, including online organisational ethnography, semi-structured interviews, informal conversations and documentary analysis, which we describe in detail in the Methods section.

Introduction

Local authorities are increasingly looking to data analytics to help them improve the delivery of public services and to reduce expenditure. In the context of a decade of reductions to real-terms local government funding, data analytics are attractive: they promise efficiency improvements, better forecasting, and targeted and effective services. High-profile examples of harmful outcomes from uses of data analytics,[8] however, have prompted questions about functionality, public acceptability and the possibility of discrimination.

The social impact of data analytics use in public service delivery is hard to estimate, as there are currently no accepted practices for assessing it. Between May and September 2020, the Ada Lovelace Institute undertook ethnographically informed research into the deployment of one data analytics system in a local authority: the use of the OneView system in the London Borough of Barking & Dagenham (LBBD or the Council).

We aimed to document how LBBD – an early adopter of data-driven approaches to delivering local government services – used the OneView system, and to understand how Council staff experienced the deployment of different outputs from this system.

Definitions used in this report

Data analytics is an umbrella term that refers to a broad range of activities involving generating, collecting and using data. In local government services, it can include a wide range of activities, which may overlap or directly interact with each other. Below are the specific activities we discuss in this report (a brief illustrative sketch follows this list):

  • Data matching: comparing or combining data from at least two different datasets.[9]
  • Synthesising analytics: processing data (including data that has been through a matching process) into a summary output (including text-based outputs).
  • Predictive analytics: using current and historical data (including data that has been through a matching process), combined with statistical modelling, data-mining techniques and machine learning, to output a prediction about future outcomes.[10]
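
The minimal Python sketch below illustrates the three activities in sequence on invented data. The dataset names, fields, summary wording and scoring weights are hypothetical: they are not drawn from OneView or from LBBD’s data, and a real deployment would use models trained and validated on historical outcomes rather than fixed weights.

```python
# Hypothetical illustration only: invented fields, weights and thresholds,
# not OneView's data model or methods.
import pandas as pd

# Data matching: combine two departmental extracts on a shared identifier.
housing = pd.DataFrame({
    "resident_id": [1, 2, 3],
    "rent_arrears_weeks": [0, 6, 12],
})
social_care = pd.DataFrame({
    "resident_id": [2, 3, 4],
    "open_referral": [True, False, True],
})
matched = housing.merge(social_care, on="resident_id", how="outer")
matched["rent_arrears_weeks"] = matched["rent_arrears_weeks"].fillna(0)
matched["open_referral"] = matched["open_referral"].fillna(False)

# Synthesising analytics: condense each matched record into a text summary.
def summarise(row: pd.Series) -> str:
    flags = []
    if row["rent_arrears_weeks"] > 0:
        flags.append(f"{int(row['rent_arrears_weeks'])} weeks of rent arrears")
    if row["open_referral"]:
        flags.append("an open social care referral")
    return "Records show " + (" and ".join(flags) if flags else "no current flags") + "."

matched["summary"] = matched.apply(summarise, axis=1)

# Predictive analytics: a toy score standing in for a statistical model
# trained on historical outcomes (e.g. past homelessness presentations).
matched["risk_score"] = (
    0.05 * matched["rent_arrears_weeks"]
    + 0.30 * matched["open_referral"].astype(int)
).clip(0, 1)

print(matched[["resident_id", "summary", "risk_score"]])
```

Even this toy example shows where effort concentrates in practice: the matching step has to reconcile duplicate and inconsistent records held by different systems, and the predictive step only has value if its outputs are validated against real outcomes.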

Using data analytics in the public sector

Local authorities have increasingly become interested in using data analytics to help deliver or improve local government services, from identifying at-risk children,[11] to allocating school places,[12] to fixing potholes.[13] These analytics are used both at an aggregate level, in the form of statistical analysis to better understand factors that affect their populations, and at an individual level, to inform the work done by social workers and other frontline staff.

Local authorities use a variety of data analytics methods. Some applications involve matching data from different sources: for example, combining information about a single individual who is in contact with both children’s services and housing services,[14] or linking datasets to create a more comprehensive picture of service use.

This data might be combined into new outputs: for example, a new database entry for an individual person that includes information held by multiple council services, or a dashboard showing service use over time. It could bring together information already held in departmental silos. This could create a more comprehensive and legible case file on a resident or family to help Council staff better understand individuals’ situations and tailor services accordingly (currently often a time-consuming, manual process).[15] It could also be used to generate a numerical risk score to inform decisions made by social services, or justify increased or decreased scrutiny of applications for benefits (a process called ‘risk-based verification’).[16]

Data may also be used to make predictions about future outcomes for individuals or groups. Some parts of the public sector have piloted or begun using different forms of analytics to try to identify people with a high likelihood of experiencing particular events that might lead to a need for support or intervention. This can range from people understating their income in tax returns,[17] to older people at risk of frailty,[18] to individuals assessed as likely not to pay rent.[19]
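
As a purely illustrative sketch of this pattern, the Python snippet below trains a simple classifier on synthetic ‘historical’ records and uses it to score current cases against a review threshold. The features, outcome label, threshold and library choice (scikit-learn) are assumptions made for the example and do not describe any specific council’s deployed model.

```python
# Illustrative only: synthetic data and an arbitrary threshold, not a
# description of any deployed local-authority model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented historical features, e.g. weeks of rent arrears and number of
# prior service contacts, with a known outcome label for each household.
X_history = rng.integers(0, 20, size=(500, 2)).astype(float)
y_history = (X_history.sum(axis=1) + rng.normal(0, 5, 500) > 20).astype(int)

model = LogisticRegression().fit(X_history, y_history)

# Score current (unlabelled) cases and flag those above a review threshold.
X_current = np.array([[2.0, 1.0], [15.0, 8.0]])
risk = model.predict_proba(X_current)[:, 1]
flagged = risk > 0.7  # the threshold would need validating against outcomes
print(list(zip(risk.round(2), flagged)))
```

In a real deployment, the flagged cases would feed the kind of early-intervention routing described below, and the model, features and threshold would all need evaluation against the concerns about accuracy discussed later in this report.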

The logic is to identify at-risk individuals so that local authorities can intervene, to respond earlier than they would have done otherwise. This intervention can include directing the attention of dedicated early-intervention services to those individuals,[20] or changing the way that other services operate so that they are able to intervene in individuals’ lives at an earlier point.[21]

The goals for predictive analytics in this context include improving outcomes for residents, saving staff time, improving the quality of data and reducing costs by managing demand for local government services.[22]

Policymakers from central and local government, as well as academics and researchers, have argued for the benefits of data analytics in local government. These include enabling earlier and more-targeted interventions, tailoring local government services to individuals, offering more connected care, forecasting and anticipating future requirements, and triaging support to those most in need. A partnership led by the University of Essex argued in 2019 that in local government, predictive analytics in particular could ‘help to focus the allocation of scarce resources, identify adverse events, and ascertain the effectiveness of tested interventions’.[23]

In 2021, then Minister for Supporting Families Eddie Hughes MP wrote that: ‘Data sharing projects can make real world improvements to support for children and families.’[24] In other words, data and algorithmic systems could lead to a more responsive government, with more effective and efficient services that reduce the burden on the state.[25]

The claim that data analytics can reduce the cost of delivering services is particularly attractive for local authorities.[26] Even prior to the COVID-19 pandemic, local authorities were already responding to a 33% reduction in central government funding over the previous decade and a series of council tax freezes, which had contributed to an 18% decline in real-terms revenues and a 21% decline in spending since 2009.[27] The pandemic increased the level of need for council services while simultaneously reducing council revenues.[28]

The Institute for Fiscal Studies estimated in August 2020 that councils across England had a shortfall of £2 billion relative to forecasts of spending pressures,[29] and the National Audit Office reported in 2021 that 94% of councils expected to make cuts to spending on services in 2022.[30] These predictions were made before the cost-of-living crisis the UK is experiencing in 2023.[31] From this perspective, it is understandable that local government services and local authorities saw, and may still see, data-driven services as a way of managing the current financial pressures.

As well as these promised benefits, however, there is emerging evidence of the limitations of data analytics systems.

These exist in relation to functionality,[32] fairness,[33] legitimacy and public acceptability,[34] in domains as diverse as policing, the health system, immigration and borders, and education. The deployment of data analytics in delivering public services in many different countries has been the subject of specific critique.

In 2019, a review of such systems in social protection and assistance by the United Nations Special Rapporteur on Extreme Poverty and Human Rights, Philip Alston, warned of a grave risk of ‘stumbling zombie-like into a digital welfare dystopia’ in response to the rise in data-driven technologies being used to ‘automate, predict, identify, surveil, detect, target and punish’.[35] There is also a lack of comprehensive evidence about the cost–benefit analyses of these technologies, and a lack of robust evaluation.[36]

High-profile examples of predictive analytics, in particular, in the UK and abroad have triggered important technical, social and ethical debates. The capacity of predictive analytics systems to identify risk accurately and reliably has received scrutiny from What Works for Children’s Social Care, an initiative to foster evidence-informed practice in England, which in September 2020 published the results of an 18-month project to develop models with four local authorities that could predict outcomes for individuals.

Summarising its findings, the report stated: ‘We do not find evidence that the models we created using machine learning techniques “work” well in children’s social care.’ It noted that on average, the models failed to identify four out of five children at risk; when they did identify a child as being at risk, the models were wrong on six out of ten occasions.[37] Private-sector providers responded by suggesting that their systems draw on a wider range of data sources than those used in the What Works for Children’s Social Care project.[38]

There is currently no coherent set of approaches to consider, understand and monitor the social impact of the wide variety of data analytics being used in public service delivery.[39] While specific cases have garnered public attention in the UK,[40] we lack a systematic understanding of how data practices are evolving, and the landscape of transparency mechanisms to tackle this is fragmented.[41] Taken individually or combined in the limited ways currently possible, these transparency measures leave us far from being able to scrutinise and evaluate the functions – or effects on communities and individuals – of data analytics in use in the public sector.

Data analytics in the London Borough of Barking & Dagenham

LBBD is situated around nine miles east of central London. It is a local authority[42] responsible for delivering services[43] to approximately 218,900 residents.[44]

The borough has faced multiple challenges in the last two decades. In 2002, the Ford plant in Dagenham ceased vehicle assembly, resulting in a large number of local job losses. A 2016 report on the borough listed a series of challenges associated with the plant closure: low wages and labour market insecurity, ‘accompanied by the longer-term consequences, including ill health, a sense of rootlessness and a loss of ambition’.[45] On the 2019 Index of Multiple Deprivation – an official measure of relative deprivation for small areas across England – LBBD was ranked as the most deprived borough in London and the fifth most deprived local authority in England.[46]

Chris Naylor (chief executive of LBBD between 2015 and 2021) characterised the Council’s ‘new normal’ as ‘perma-austerity; unsustainable rises in demand for services, conceived for different times, now struggling to cope; mega changes in expectations and the erosion of trust, driven in part by new technology, but also the rapid decline of old world power paradigms; environmental degradation; rapid and unpredictable demographic changes that challenge prevailing patterns of cohesion and identity; and an economy that isn’t working for too many people’.[47]

In response to these multiple challenges, as well as to austerity measures,[48] the Council launched a major transformation initiative in 2015. In 2017 the Council launched Community Solutions, which integrated multiple Council services into a ‘universal front-door’ model[49] with the goal of ‘identifying the root cause of a person’s or family’s problems and helping to resolve those problems before they escalate’.[50] The 2020 Corporate Plan reiterated the Council’s focus on preventative work, focusing on ‘addressing the root causes of poverty, deprivation and health inequality’.[51]

Key actors

  • Community Solutions: Council service comprising integrated frontline services supporting individuals and families in LBBD experiencing problems including unemployment, homelessness and domestic violence.[52]
  • Insight Hub: Council team using data and behavioural science to understand and forecast resident needs.
  • EY: consultancy contracted by the Council to deliver the OneView project.
  • Xantura: developers of the OneView software, and subcontractors to EY in the LBBD OneView project.

For the Council, data analytics was key to this transformation ambition. Senior leadership publicly stated that by making more intensive use of data and insight about the borough’s residents, the Council could build more ‘intimate’, trusting relationships with its residents.[53] Internal documentation revealed that the Council’s approach was defined by a proactive, rather than reactive, approach to service delivery: ‘A lot of the demand on our services is entirely preventable yet we must realise the potential of data and insight, in order to reduce preventable demand via earlier identification of risk, thus bringing benefits to both the resident and the Council.’[54]

In 2016, the Council created an internal Insight Hub: this was a team of people working to use data to understand and forecast resident needs and to develop behavioural interventions.[55] The Insight Hub’s work was described using two slogans: ‘Turning data into insight-led action’ and ‘Putting predictive analytics into the front line’. The Insight Hub team was described within the Council not as a ‘data team’ but as one focused on ‘insight’, in part because it included a behavioural scientist. The aim was to help the Council understand how residents might respond to interventions and, in turn, to tailor how those interventions were provided.

The activities of the Insight Hub included extracting and tidying data, analysing and visualising, and finding new datasets that could be used to support the Council’s goals. In its early stages, members of the Insight Hub team engaged with staff in predictive modelling in a variety of areas, with data scientists gathering advice from housing enforcement officers on variables to include in a machine learning model to identify rogue landlords, and with waste collection crews to test a route optimisation model for collection of bulky waste such as sofas and mattresses.

Staff members positioned the Insight Hub as separate from the day-to-day data work of reporting, consciously diverging from what interviewees saw as the common practice in local government of having a single team. ‘A lot of boroughs have derived their predictive analytics or their data scientists from their performance teams’, one interviewee said. ‘We absolutely said there were two separate functions […] One was about running the business, and one was about looking at the future and looking at where we should target our resources.’

Within the Council, the Insight Hub used the metaphor of the Council as a ‘ship’, in which its team members were ‘lookouts in the crow’s nest’, scanning the horizon, while other data analysts, focused on reporting and performance monitoring, were on deck keeping the ship moving forward. The Insight Hub had strong support from senior management in the Council, and financial support as part of £24 million allocated for transformation funding.

Timeline of OneView implementation in LBBD to 2020

  • April 2017: Creation of Insight Hub
  • December 2018: Procurement of EY Xantura system approved by LBBD[56]
  • July 2019: Homelessness predictive model launched
  • September 2019: Selected predictive models launched in Children’s Social Care
  • December 2019: Completion of the Build Phase of OneView for children’s services[57]
  • January 2020: Use of Children’s Social Care models paused
  • February 2020: Hospital admission predictive model launched for testing in Adult Social Care
  • April 2020: Homelessness and hospital admissions models paused for capacity and COVID-19 reasons

Community Solutions, the integrated frontline services at LBBD, procured the OneView system in 2018. The system was built and is maintained by Xantura, as part of a contracted partnership between LBBD and EY,[58] a company providing consultancy, tax and audit services.[59] EY had been working in partnership with London Councils (the local government association for Greater London)[60] since 2013 on the London Ventures programme, which identified private-sector companies with new ideas related to the public sector and found opportunities for local authorities to use them.[61]

Procurement documents from December 2018 state that the value of the contract to implement OneView was £1.025 million over an initial four years.[62]

Within the Council, the Insight Hub was responsible for day-to-day implementation, with the head of Community Solutions holding overall responsibility for the project.

Chris Naylor, writing in 2019, stated that the Council’s approach ‘will require the use of data and insight on a scale never-before seen in the public sector’,[63] while internal documents regularly describe the Council as ‘a data-driven organisation’. By 2020, interviewees across the Council felt that the use of data was integrated into the way that the Council thought: one blog post stated that the Council saw ‘insight as one of our greatest assets’.[64]

Against this backdrop, the Ada Lovelace Institute was invited to undertake independent research into LBBD’s data practices.

About this research

This report aims to document how LBBD adopted data analytics in the delivery of local government services, and to understand how Council staff experienced the deployment of different data analytics methods and outputs. It represents our observations about the system between May and September 2020.

In late 2019, LBBD approached the Ada Lovelace Institute with an invitation to undertake independent research into its data practices. LBBD was one of the first local authorities to adopt data analytics as part of its delivery of Council services. Consequently, its experiences offer insights for other local authorities who are considering similar uses of data analytics.

Although the Council’s own practices have evolved, the evidence presented in this report has the potential to advance collective understanding of how data analytics is being developed and used in the public sector – in particular for those interested in how analytics can be deployed in the pursuit of positive societal outcomes and with the legitimacy, acceptance and support of frontline workers and those people affected by them.

Understanding the interplay between data analytics and the delivery of local government services, and interrogating how issues such as transparency, accountability, ethics, privacy, trust and bias are navigated by users and deployers of predictive analytics systems, is critical to ensure that data and data-driven technologies are used in ways that are responsible and transparent and that work for people and society.

This study is not a comprehensive account of every aspect of the deployment of a data analytics system. Importantly, it is explicitly not an evaluation of the accuracy, effectiveness or outcomes of predictive analytics for families, data subjects, residents or local government services. It is beyond the scope of this research to understand the perspectives of residents affected by local authorities’ use of predictive analytics. Also beyond the scope of this report is the extent to which data analytics can address – or exacerbate – existing inequalities. These topics remain important for future research.

Independence of the study

This is an independent research study. The Ada Lovelace Institute has neither sought, nor accepted, any funding or benefits from the Council or actors involved in implementing predictive analytics within the Council. The study’s independence was established in a memorandum of understanding between the Ada Lovelace Institute and LBBD, signed at the outset of the project, which states that the research aims to shed light on the development and use of predictive analytics in the Council.

During the period between the completion of the research in September 2020 and the publication of this report in 2024, staff from the Ada Lovelace Institute presented preliminary findings to staff at LBBD and offered recommendations to the Council, some of which have since been adopted. LBBD had the opportunity to review and offer comments on this report, but editorial control remained with Ada. The recommendations in this report to local authorities draw on the experiences of LBBD as an early adopter, and are targeted at local authorities which are deploying – or considering deploying – predictive or synthesising analytic systems.

For more information, see the Methods section.

The OneView system

OneView[65] is a software platform[66] developed by the private technology company Xantura. LBBD procured the system in 2019 to bring together data from multiple Council sources and develop and deploy ‘data analytics and predictive demand models for children’s social care (including early help), homelessness and adult social care’, with the aim of prioritising support for ‘the children, young people and households who are most vulnerable and at greatest risk’.[67]

For frontline workers in social services, OneView produced case summaries that brought together information about an individual from multiple, siloed Council data sources, which otherwise involved an intensive manual process to access. OneView brought that data together into a single document, as well as predictive alerts about individuals who were at risk of certain events, such as presenting as homeless in the next 12 months.

OneView also produced dashboards which displayed aggregate information about services. During the research period, we observed an additional output: in response to the 2020 COVID-19 pandemic, LBBD began using OneView as a case management system to filter and group residents according to a set of risk factors related to COVID-19, with the intent of contacting residents at high risk who could be in need of support.

This section describes the OneView system to the extent that we were able to observe it in effect during our research period (including elements of the system which had been deployed and then paused). As discussed in the Methods section, in October 2023 Xantura withdrew, on behalf of its employees, consent to participate in this research: our interviews with Xantura staff did not contain any proprietary information, but in response we have removed quotes from Xantura employees which served to illustrate the non-proprietary workings of OneView. Interviewees quoted in this section are representatives of LBBD.

Data sources

During the research period, researchers saw 49 data-sharing agreements (DSAs) between the Council and Xantura for OneView, across five departments: Adult Social Care, Children’s Social Care, Housing, Revenue & Benefits and Education. They covered data from social care case notes to council tenancies to the School Census.[68] Together, they detailed 1,418 data elements that could be shared, and interviewees at the Council stated that these accurately reflected the data currently shared with the OneView system. The Council would not have had all this data for every resident, as many of the data elements would have been completed only if a resident had been part of an active case in a department. There would also have been duplicate fields – name and address being collected multiple times by multiple systems, for instance – that would have been merged during the matching process in OneView. The types of data included:

  • data considered ‘personal data’ under UK GDPR, for example:
    • name
    • address
    • date of birth
    • National Insurance number
    • NHS number
  • data considered ‘special category personal data’ under UK GDPR, for example:
    • gender
    • ethnicity
    • sexual orientation
    • religion
    • hospital admission and discharge dates
    • benefits entitlements
    • homelessness circumstances and needs
    • reason for referral to a particular service
    • case notes from a particular service
  • other data used for insights but not classified as personal or special category data, for example:
    • number of tenants in a household
    • housing arrears
    • property type
    • estate and ward where a household is located
    • school details.

The DSAs stipulated that the LBBD data was shared with Xantura for ‘modelling purposes’ or ‘display purposes’, or both. ‘Modelling purposes’ appeared to refer to both modelling and training, while ‘display’ referred to deployment and monitoring.

This data was extracted from different Council software and case management systems,[69] and cleaned and structured by two data scientists in the Council’s Insight Hub (see ‘Key actors’ box). Some scripts were run against this data by Insight Hub data scientists to remove sensitive data from the datasets before they were uploaded to the OneView Information Governance Bridge. This was typically based on flags against restricted data in the relevant case management system, such as the addresses of domestic violence safehouses.
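The scripts themselves were not available to the research team. A minimal sketch of this kind of pre-upload filtering, assuming a hypothetical `restricted` flag of the sort set in the source case management systems, might look like the following; the field names and data are invented for illustration.

```python
# Illustrative only: hypothetical column names and data; the Council's actual
# scripts and schemas were not available to the research team.
import pandas as pd

def strip_restricted_records(extract: pd.DataFrame) -> pd.DataFrame:
    """Remove rows flagged as restricted in the source case management system
    (e.g. addresses of domestic violence safehouses) before upload."""
    if "restricted" not in extract.columns:
        return extract
    return extract.loc[~extract["restricted"].fillna(False)].drop(columns=["restricted"])

housing_extract = pd.DataFrame(
    {
        "person_id": [101, 102, 103],
        "address": ["1 Example Rd", "Safehouse A", "3 Example Rd"],
        "restricted": [False, True, False],  # flag set in the case management system
    }
)

cleaned = strip_restricted_records(housing_extract)  # row 102 is dropped before upload
```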

Multiple interviewees agreed that the initial stage of collecting and processing data from various Council services was a time-consuming process, as data scientists in the Insight Hub learned how to extract data from the different systems. The data scientists involved explained that while extracting data from some Council services’ internal systems could be straightforward, in other cases it was a ‘laborious’ process that took several months.

Information Governance Bridge

The pseudonymising of personal data, such as names and dates of birth, occurred in the Information Governance Bridge. Information produced by Xantura, as part of a public webinar, stated:

The platform automatically ‘pseudonymises’ data to comply with all data protection and ethical responsibilities. It does this by taking the personal data only (names, addresses, dates of birth) and matching it across the separate data extracts to create a single view of resident, family and household over time.

This is then ascribed a unique identifier, encrypted and added to the sensitive data on the Council’s database. Only the Council can then re-identify that data according to the agreed data-sharing protocols. This process means that we can safely and ethically include data from any Council or wider partner.[70]

We were not able to access more detailed information about the algorithms used to match data.
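In the absence of that detail, the sketch below illustrates only the general pattern the webinar describes: matching records on personal identifiers, replacing them with a stable pseudonymous identifier, and keeping the re-identification key on the Council side. The keyed-hash approach and field names are assumptions for illustration, not a description of Xantura’s proprietary matching (which, in practice, would also need to handle variant spellings and partial matches).

```python
# Illustrative only: a generic keyed-hash pseudonymisation pattern, not
# Xantura's proprietary matching algorithm (which we could not observe).
import hashlib
import hmac

COUNCIL_SECRET_KEY = b"held-by-the-council-only"  # hypothetical re-identification key

def pseudonym(name: str, date_of_birth: str, address: str) -> str:
    """Derive a stable pseudonymous identifier from personal data, so that the
    same person appearing in separate extracts receives the same identifier."""
    canonical = "|".join(part.strip().lower() for part in (name, date_of_birth, address))
    return hmac.new(COUNCIL_SECRET_KEY, canonical.encode("utf-8"), hashlib.sha256).hexdigest()

# Records from two separate extracts match on the derived identifier,
# while the personal data itself is not shared onwards.
social_care_record = pseudonym("Sam Example", "1980-01-01", "1 Example Rd")
housing_record = pseudonym("sam example ", "1980-01-01", "1 Example Rd")
assert social_care_record == housing_record
```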

Internal LBBD documents produced during July 2020 noted, for example, that household composition records were ‘not always accurately matched’, with particular issues with hostels and houses in multiple occupation (HMOs), meaning that ‘decisions may be based without a correct picture of household composition and/or [this] reduces confidence in tool’.[71] Staff said that the OneView implementing team was constantly receiving feedback and adjusting OneView to respond to issues of this sort.

Internal documents and interviewees indicated that Xantura has no access to de-pseudonymised data, and that keys for de-pseudonymising the data are kept in the Council’s system.

Processing, modelling and prediction

 

In data analysis, a model is the translation of a social question into quantifiable, computational variables.[72] Increases in computational power over the last decade have enabled the use of machine learning to develop models by looking for patterns in existing datasets,[73] a process sometimes called ‘training’. These models are then used to transform new input data into new outputs using algorithms: sequences of computational steps.[74] This use of machine learning to develop models and algorithms is sometimes referred to using the broad term artificial intelligence or AI.

OneView used a range of different data analysis models and algorithms to produce its outputs. These included:

  • matching algorithms deployed in the Information Governance Bridge which link data about the same individuals from different datasets
  • natural language processing (NLP)[75] algorithms which undertake unstructured text analysis on case notes to identify risk factors
  • predictive models which issue alerts about individual cases, based on identified risk factors.

Once the pseudonymised, matched data from the Council was in OneView, the system applied a range of models to generate the predictions used to trigger alerts and the case note summaries used by Council teams. The models were applied both to unstructured data, in the form of social worker case notes, and to structured data – the rest of the labelled data shared by the Council, such as a person’s council tax debt figure.

Risk modelling

A key component of the OneView modelling process was ‘risk modelling’, which took inputs from both the analysed case notes and the structured data to identify risk factors in a case.

The modelling was proprietary. The Council’s Data Ethics Workbook states: ‘The algorithm itself is protected for the purposes of intellectual property, however it demonstrates the risk factors it has considered as key when generating the natural language summaries which service professionals can access.’[76]

Identifying risks to children is an important element of the duties established by the Children Act 2004. When a child dies or is seriously harmed because of abuse or neglect, a case review takes place to identify how the various agencies involved worked together, in order to improve the way they safeguard children. Interviewees at both senior management and frontline levels referred to case reviews on multiple occasions, noting that failures to share information or identify risks were often part of the problem identified in retrospect.

One interviewee involved at a frontline level said: ‘When serious incidents happen, they were not on what were considered to be high-risk children. More often than not […] people were not alerted to something that had changed or got quite quickly more concerning and they’d missed it. So I would definitely welcome any support.’ In interviews and on its website, Xantura also describes its mission as, in part, to ‘prevent cases like Peter Connelly and Victoria Climbié from happening again’.[77]

OneView applied NLP to unstructured text from social care case notes, using a combination of keyword identification and sentiment analysis to assess whether any of a list of risk factors were present. This analysis informed the risk modelling and formed part of the process of automatically generating case summaries.[78]
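The specific keywords, synonyms and sentiment methods used in OneView cannot be published here (see below). Purely as an illustration of keyword-based risk-factor tagging in general, a sketch with invented keywords and labels might look like this; it does not reflect Xantura’s actual models.

```python
# Illustrative only: hypothetical keywords and risk-factor labels, not the
# terms or models actually used in OneView.
import re

RISK_KEYWORDS = {
    "housing_instability": ["eviction", "arrears", "notice to quit"],
    "school_disengagement": ["exclusion", "persistent absence"],
}

def tag_risk_factors(case_note: str) -> set[str]:
    """Return the risk-factor labels whose keywords appear in a case note."""
    text = case_note.lower()
    return {
        label
        for label, keywords in RISK_KEYWORDS.items()
        if any(re.search(r"\b" + re.escape(kw) + r"\b", text) for kw in keywords)
    }

note = "Family report rent arrears; school notes persistent absence this term."
print(tag_risk_factors(note))  # {'housing_instability', 'school_disengagement'}
```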

Further information on the risk factors was provided to the research team in interviews during the research period, but Xantura’s withdrawal of consent prevents us from publishing this information.

We were not able to obtain information about whether there was an attempt to standardise social workers’ use of terminology and the terminology used by Xantura’s models to define ‘risk’ and ‘risk indicators’. There was no information about how sentiment analysis was undertaken or what synonyms were applied to the keyword identification to associate text with predefined risks or risk indicators.

Some fields, such as ethnicity, were said not to be used directly in modelling or display in OneView, but instead for monitoring.

Outputs

The main OneView system produced three types of output:

  • dashboards displaying aggregate data, designed to model future service demand and show how caseloads were distributed at the time
  • case summaries including structured information (such as names) as well as computer-generated text about causes for concern, contextual factors and possible interventions
  • predictive alerts about individuals who the model predicted were at risk of specified events – for example, presenting as homeless – in the next 12 months.

 

The separate COVID-19 module also provided a tool to group and filter residents according to COVID-19 risk criteria.

Dashboards

Our research did not examine the dashboards in detail. Internal LBBD documentation states that OneView ‘provides management with reporting and dashboards to support demand management and decision making’.[79] Dashboards displaying aggregate data were available to managers and those involved in commissioning services. They were designed to model future service demand and to show how types of caseload were distributed across the team in terms of volume, severity and geography.

One LBBD staff member, describing this in a public presentation on OneView for Children’s Social Care, said:

‘We can use the dashboard to see the future expected demand in each tier of service. And where these cases are coming from to enable us to better manage our resources. They also provide an understanding of how our current caseloads are distributed among the team, both in terms of volume, severity and geography, to enable us to better support those resources. We have the ability to build specific cohorts of children and families with different risk characteristics, which enables us to provide proactive support where appropriate.’[80]

Case summaries

Of all the outputs, frontline staff were most familiar with the case summaries. Many staff described OneView as a means of bringing together data from multiple sources within the Council – or providing a ‘single view of a resident’, as internal documents often describe it. Internal documents also describe OneView as exclusively or primarily a ‘data-sharing tool’.[81]

Case summaries took the form of a text document generated by the OneView system, which used NLP to bring together data about a specific individual or family and express it as narrative text. We were not able to obtain detailed information about exactly how OneView selected which information to include and which to leave out.

Each case summary included both structured information (such as the names of other people in a household) and automatically generated text. The content of the case summaries varied depending on the service: for Children’s Social Care, it included sentences under the headings of:

  • causes for concern
  • historic and wider risks for consideration
  • contextual factors (such as factors that might make it harder for the family to look after individuals in the household)
  • intervention analysis (what the service thinks may work for the individual).

Internal LBBD documents stated that OneView provides caseworkers with ‘access to [a] summarised case record, including information from multiple systems, supporting triage of the case which is unlikely to be known by the referral officer or social worker’[82] and specified how officers within different services were expected to use these case summaries.

Table 1: December 2019 presentation of how OneView benefits were anticipated to be realised[83]

Community Solutions – Multi Agency Risk Hub: ‘Officers to use OneView once at start of every case they action (and as appropriate)’
Community Solutions – Early Help: ‘Officers to use OneView when allocated a new case, and then monthly or at agreed review points (e.g. case closure) to check for changes that may be unidentified’
Care & Support – Assessment: ‘Officers to use OneView once at start of every case they action & if Single Assessment goes on for 30–45 days, check at least once a month’
Care & Support – Care Management: ‘Officers to use OneView when allocated a new case, and then monthly or at agreed review points (e.g. case closure) to check for changes that may be unidentified’

Case summaries were accessible to case workers from their case management platform. It was not possible to gauge the extent to which these procedures were followed in all services. One manager said that they had made reviewing case summaries mandatory for their team, while another said: ‘We are not pushing workers too hard to implement it […] this is still early days.’

Predictive alerts

The OneView system generated alerts when the models identified individuals thought to be at risk of events up to 12 months in the future, such as presenting as homeless, being stepped up or down in Children’s Social Care[84] or being admitted to hospital. These alerts were sent by email to specific frontline teams at agreed intervals (weekly or fortnightly), in the form of a link to a specific case summary for that individual.[85] As of 2020, OneView was ‘the most mature form of predictive analytics’ in use at the Council.[86]

Multiple factors were used to determine whether an alert was issued, including criteria suggested by the relevant Council service. For example, internal documents show that as of November 2019, for an alert to be sent to the Early Help service – identifying children who were not currently known to social care but who were at risk of requiring statutory intervention in the next nine months – two types of criteria needed to be met. First, the probability of a statutory intervention (such as a child being placed on a child protection plan) needed to be greater than 80%, and second, there had to have been a ‘predefined triggering event’, such as the child being excluded from school.[87]
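Taking those November 2019 Early Help criteria as an example, the two-part gating can be sketched as follows. The function and the list of triggering events are hypothetical, and the real logic sat inside OneView rather than in Council-written code.

```python
# Illustrative only: a sketch of the two-part gating described for the Early
# Help alerts (probability threshold plus a predefined triggering event).
PROBABILITY_THRESHOLD = 0.80
TRIGGERING_EVENTS = {"school_exclusion", "homelessness_application"}  # hypothetical list

def should_alert(predicted_probability: float, recent_events: set[str]) -> bool:
    """An alert is issued only if the model's predicted probability of statutory
    intervention exceeds the threshold AND a predefined triggering event occurred."""
    return (
        predicted_probability > PROBABILITY_THRESHOLD
        and bool(recent_events & TRIGGERING_EVENTS)
    )

print(should_alert(0.86, {"school_exclusion"}))  # True: both criteria met
print(should_alert(0.91, set()))                 # False: no triggering event
```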

Interviewees described similar criteria that were introduced for alerts in other services, such as a requirement in housing services that alerts were only issued if an individual had specific levels of Council Tax debt recorded in OneView. The OneView ‘Risk Alerts User Guide’ states that: ‘Based on historical data (up to 3 years based on current data extract), each model utilises several variables and predictors to establish a list of risk indicators based on pattern and characteristics of historical cases – e.g. what were the factors present in the 12 months before an individual receives crisis intervention’.[88]

As of November 2019, five alert models had been deployed:

  1. Early Help targeting: probability that a family in Universal+ will be a Child in Need (CIN)/Child Protection (CP) or Looked After Children (LAC) case in the next nine months.
  2. Child in Need step-up/step-down model: probability that a family Child in Need case will be in CP or LAC in the next nine months.
  3. Child Protection step-up/step-down model: probability that a family Child Protection case will be in LAC in the next 12 months.
  4. Looked After Children step-down model: probability that a family Child in Need case will be in CP or LAC in the next nine months.
  5. Homelessness prediction: probability that a family or individual will present as homeless in the next two months.[89]

Three further models had been developed but had not yet been deployed:

  1. Exclusion risk: probability of a school exclusion incident in the next nine months.
  2. Attendance risk: probability of school attendance dropping below 85% over three rolling terms in the next nine months.
  3. Arrears risk: probability of arrears worsening in the next nine months.[90]

A further model was launched in February 2020 in Adult Social Care, to ‘identify individuals at an increased risk of a hospital admission in the next six months’.[91]

Staff tended to see the alerts that OneView generated as the most complex element of the system. One interviewee involved in the implementation of OneView said: ‘The alerts are a lot more technical, almost at the black box end. There are all sorts of indicators that go in. It goes through an artificial neural network that spits out the probability that this person will become homeless in the next two months.’
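We did not have access to the architecture or features of the network described in this quote. The sketch below shows only the general shape of such a model (a small classifier trained on synthetic data that outputs a probability for a binary outcome, here using scikit-learn’s MLPClassifier); it is not a reconstruction of Xantura’s model.

```python
# Illustrative only: a small neural network producing a probability for a
# binary outcome, trained on synthetic data (not OneView's features or model).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_train = rng.random((200, 4))          # e.g. arrears level, prior contacts, ...
y_train = (X_train[:, 0] + X_train[:, 1] > 1.0).astype(int)  # synthetic label

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

new_case = rng.random((1, 4))
probability = model.predict_proba(new_case)[0, 1]  # probability of the outcome
print(f"Predicted probability: {probability:.2f}")
```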

Alerts were paused in January 2020 in Children’s Social Care and in April 2020 in Housing and Adult Social Care due to capacity constraints.[92] Interviewees, however, continued to discuss the risk alerts as part of the OneView system. In many interviews, staff did not reference OneView’s predictive analytics function. An interviewee at director level suggested that staff ‘don’t get the predictive bit: OneView as an operational tool that can support assertive outreach to people who might need our support in future, to prevent them from ever needing that support at all. Understandably, that hasn’t clicked for people.’

COVID-19 module

The COVID-19 case management system, described in the Council as a separate module within OneView, was accessed by discrete teams involved in the pandemic response. It allowed staff to filter and group residents by risk factors and to triage cases to help prioritise who to contact first to offer support. People identified in these lists were assigned to case workers, who could then access the COVID-19 OneView module to see contextual information about them before making contact.

The COVID-19 module also contained health and community data sources that remained separate from the main datasets in OneView – namely data from the NHS shielding list,[93] and data that residents shared with the NHS when referring themselves as ‘vulnerable’. These datasets were not used in the other predictive modelling processes in Children’s Social Care, Adult Social Care and Housing.
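The module’s filtering rules were not documented for us in detail. A minimal sketch of cohort filtering and prioritisation of this general kind, with hypothetical field names and criteria, is shown below.

```python
# Illustrative only: hypothetical risk-factor fields, not the actual criteria
# used in the OneView COVID-19 module.
import pandas as pd

residents = pd.DataFrame(
    {
        "person_id": [1, 2, 3, 4],
        "on_shielding_list": [True, False, True, False],
        "self_referred_vulnerable": [False, True, False, False],
        "age": [82, 45, 67, 30],
    }
)

# Filter to residents meeting at least one COVID-19 risk criterion, then order
# the call list so that case workers contact the highest-priority people first.
at_risk = residents[
    residents["on_shielding_list"]
    | residents["self_referred_vulnerable"]
    | (residents["age"] >= 70)
]
call_list = at_risk.sort_values(by=["on_shielding_list", "age"], ascending=[False, False])
print(call_list[["person_id", "age"]])
```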

Developing and implementing OneView

As well as the technical components of OneView, the Council invested time and resources to build understanding among its staff about its use of data, as part of broader efforts to create an organisational culture that recognises the value of data.

Important steps taken by the Council to ensure that there was buy-in and support for the use of OneView from the start included:

An extended design process: An initial design process in mid-2018 defined the overall scope of the project, through at least three meetings and several workshops involving staff at management level within the Council, as well as representatives from EY and Xantura. This was followed by a scoping phase in Children’s and Adult Social Care.

Involvement of Council technical staff in model design and assessment: Responsibility for the project was held internally, by the Insight Hub, the Council team focused on the use of predictive analytics to inform the Council’s strategy. The Insight Hub played a role in developing some of the predictive models in OneView. Describing the set-up of OneView, one technical staff member said: ‘We were constantly questioning the way that [Xantura and EY] were doing predictive analytics. We set up workshops where we asked them to explain the models and how they worked: it was really important to understand how they were doing it.’

Dialogues and working groups with frontline workers: Where the Council was not involved in developing certain predictive models, such as those used in Children’s Social Care, Adult Social Care or Housing, it facilitated dialogues between Xantura and frontline workers to ensure that workers’ needs were taken into account.

Training for staff: Interviewees said that everyone using OneView was required to complete a 90-minute training session. One staff member in Children’s Care and Support described their experience of this process:

‘We’ve got four teams and our service is part of a bigger directorate. We had a meeting of all services and a presentation of OneView, then talking on tables, questions. Then we had questions in our service’s meeting, which has about [30–50] people, looking more specifically at things. In team meetings we’d look at it together.’

Several staff who had participated in the training agreed that it had been managed well. The training materials included an interpretation guide for case summaries, produced in mid-2019, that explained specific elements of a standard case summary.

Use in service delivery

Before contacting a resident on the basis of a OneView case summary or alert, a frontline worker was reportedly required to check whether another member of staff was already working with the resident or their family, and to ask that person to initiate the conversation instead.

If the resident or family had been the subject of a Council intervention but was no longer receiving support, the frontline worker might still contact the staff member who had been responsible for the previous contact to understand the situation. In some services, OneView alerts were designed to identify residents or families who had had no previous contact with the Council. In these cases, the staff member would contact the resident and aim to begin a conversation about their situation.

The following six sections describe the key insights we obtained from our examination of this system.

Insight 1: Local authorities using data analytics should clearly articulate success criteria prior to deployment

This should include articulating what success looks like, for whom, and how it will be measured, as well as the intended standards of accuracy, reliability and validity of system outputs for the use case in question.

Key findings

 

Without a clear and consistent articulation of what success looks like, and a strategy for measuring impact, it is difficult to assess whether the deployment of data analytics can be shown to be delivering improved outcomes. Local authorities should articulate – before adopting or procuring a data analytics system – clear success criteria against which the introduction of new data-driven tools will be evaluated, with measurable indicators identified. Where these tools have an impact on residents, their needs should be the starting point, with input from frontline services providers as well as Council management. This includes the roll-out of pilot or test programmes: local authorities should be able to articulate the success criteria that must be met before a decision is taken to fully deploy a tool.

 

LBBD’s leadership had a vision of what it aimed to achieve by deploying OneView: for frontline workers, for residents and for the Council as a whole. Procurement documents identified a wide range of anticipated benefits of the system, affecting different constituencies. However, to our knowledge, these goals were not articulated in the form of success criteria until the system had already been deployed: the criteria we were able to identify related to staff use of the OneView tool and did not include improved outcomes for residents, despite this being a clear priority for the system. We did not find evidence that, at this stage of the roll-out, the use of predictive or summarising analytics improved overall outcomes for established Council services or for residents.

 

Failing to set and review progress against success criteria risks placing residents and frontline workers in the position of experimental subjects without their knowledge or consent. The people who are affected by data-driven systems should have a role in defining what success – and what risks or harm – looks like.

Our research found that multiple different descriptions and measures of success were suggested or foreseen at different stages of the development and deployment of OneView. These included cost savings, better-informed interventions by Council staff to better support residents and better overall outcomes for residents.

But these anticipated benefits were often described in general terms, typically focusing on ‘improving’ services without identifying how improvement would be measured, either quantitatively or qualitatively. Descriptions of benefits often overlapped: for example, cost savings were listed separately from other benefits that contributed to those savings, such as reduced demand for services or time saved for frontline staff. This suggested the lack of a clear framework for describing and evaluating what successful deployment would look like.

Within the Council, different teams had distinct views about how the success of the OneView system should be measured and documented. Procurement documents emphasised cost savings, implementers saw success in terms of ‘accuracy’ and ‘actionable-ness’, while frontline workers tended to frame success in terms of ‘relevance’ and ‘usefulness’. We did not find evidence that frontline staff or residents had participated in developing success criteria.

We found that data analytics systems could usefully complement the work of frontline staff by providing a sense-check and backup to their own analysis.

This was particularly valuable to frontline workers given the known risk of serious incidents arising primarily in unknown or overlooked cases rather than in established high-risk cases. Much of the benefit perceived by frontline workers was connected to better use of and easier access to data, and data matching, rather than to the predictive or synthesising analytic functions of the system.

Procuring the OneView system: anticipated benefits

Prior to implementing the OneView system, the Council identified areas that might benefit from the implementation of data analytics and predictive modelling. The report that supported the Council decision to procure systems from EY and Xantura for the scoping phase of implementation identified the following potential benefits and service enhancements:[94]

  • For the Council: reduced demand for services, cost savings of £1.2 million by year three of implementation and better information about the impact of interventions.
  • For frontline workers: access to data from different agencies, and a single view of a household or an individual to help them select appropriate interventions.
  • For residents: improved outcomes and quality of life.

However, the Council did not set targets, success criteria or metrics for measuring progress against these intended impacts at the time of procurement, except in the case of cost savings. As a result, it was difficult to identify whether progress was being made towards these benefits.

Our research also found that, as the implementation of OneView proceeded, benefits continued to be articulated in internal documents, but without concrete measures to evaluate whether they were being achieved. We did not find evidence that frontline staff or residents participated in developing success criteria.

Developing the OneView system

Our research found that measurements of success were discussed in a number of different areas – for the Council as a whole, for frontline workers and for residents – between the procurement decision and the deployment of the OneView system for use in service delivery. This included refining – and lowering – the cost estimates given in the procurement document.

Cost savings: stepping down and productivity gains

The procurement report in December 2018 had stated a ‘potential savings case of £1.2m by year 3 with potential £1.2m annually thereafter’,[95] across children’s, adults’ and homelessness services. By December 2019, the ‘Build Closure Report’ for children’s services[96] presented a revised benefits case[97] that predicted total savings of £1.132 million by the end of the third year, made up of £300,000 in cost reduction by stepping down cases to lower levels of support; £777,000 in cost avoidance by using earlier intervention to prevent escalating need; and £55,000 in productivity gains as a result of staff spending less time collating information from different sources.[98] The same benefits case appeared in a May 2020 review of the OneView service.[99]

In LBBD, the productivity cost savings in Children’s Social Care were based on an expected reduction in the average time spent ‘gathering information from multiple systems and sources and writing case notes for an individual’, from 6 hours to 5.4 hours per case, as of December 2019.[100] However, the estimates for productivity gains varied enormously across different documents: the Data Ethics Workbook, dated two months later in February 2020, stated the expected benefit as a ‘40% reduction in effort for front-line workers in the creation of case notes and the processing of applications’.[101] Neither document gave a source for these figures or explained how they were calculated.
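For reference, the December 2019 per-case figures correspond to a reduction of roughly 10%, well below the Workbook’s 40% figure (this comparison is ours, not the Council’s):

\[ \frac{6 - 5.4}{6} = 0.10 = 10\% \]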

Factors of success

The December 2019 ‘Build Closure Report’ for the deployment of OneView in Children’s Social Care (an internal document)[102] set out six ‘key factors of success’ for the ‘Run’ phase of development.

Two of these referred to technical adoption of the system within LBBD:

  • ‘One View is embedded within Children’s, Housing and Adult teams and is being used to inform operational and strategic decisions.’
  • ‘One View is a core part of business as usual and included within training and operational procedures.’

Three referred to user confidence in the OneView system:

  • ‘One View users speak positively of the capabilities and recognise the value it brings to their work.’
  • ‘Stakeholders understand the benefits delivered to date and those anticipated by One View.’
  • ‘The process for obtaining decisions related to One View is clear and transparent.’

Only one referred to improvements in service delivery:

  • ‘Insights are identified from the data and there is evidence of how this is being used to improve service delivery.’

The document did not specify how these ‘factors of success’ had been determined, or how their implementation would be measured.

Wider benefits

The ‘Build Closure Report’ also proposed a set of four ‘wider identified benefits’ of implementing the OneView system:

  • Improved data quality from the identification of errors, which according to the document had already been observed during the implementation phase.[103]
  • Improved quality of decision-making: the document suggests that this could be measured by a reduction in serious incidents of harm, evidenced through ‘reduced numbers of serious case reviews, reduced numbers of case reviews that have information gathering as the main concern, and fewer case audits outcomes of “poor multi-agency working”’.[104]
  • Improved staff morale, which the document suggests could be evidenced using a staff survey, or through reduced turnover.[105]
  • Improved effectiveness of commissioning through the use of dashboards.[106]

Accuracy

LBBD documents use the term ‘accuracy’ when discussing the effective deployment of OneView, referring to a comparison between risk alerts about case escalation and whether cases were in fact escalated by frontline workers. For example, documents such as the August 2020 Data Protection Impact Assessment reference ‘accuracy rates’ in another London borough’s deployment of Xantura’s software: ‘In Children’s, the platform is currently demonstrating 80–85% accuracy in identifying cases that would have escalated to crisis point without early support in another London borough.’[107] (It is not clear who assessed this accuracy rate: the other borough, Xantura or a third party.)

During testing, the OneView team reviewed historical data from January 2019 and found that: ‘88% of risk alerts that the model would have generated did step up to Looked After Children, providing a high level of confidence [in] our ability to predict which children and families are at risk.’[108]
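The documents do not set out how these percentages were calculated. The sketch below shows one common way such a back-test could be computed from historical alerts and outcomes; the data and the choice of metric are assumptions, not a description of the method actually used.

```python
# Illustrative only: one way to back-test alerts against historical outcomes
# (made-up data; not the method actually used by the Council or Xantura).
historical_alerts = [
    # (case_id, alert_would_have_fired, case_later_stepped_up)
    ("A", True, True),
    ("B", True, True),
    ("C", True, False),
    ("D", False, True),
]

fired = [case for case in historical_alerts if case[1]]
stepped_up = [case for case in fired if case[2]]

# Share of would-have-fired alerts where the case did later step up
# (analogous to the "88% of risk alerts ... did step up" figure).
precision_like_rate = len(stepped_up) / len(fired)
print(f"{precision_like_rate:.0%}")  # 67% on this made-up data
```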

Improved resident experiences

The implementing team identified potential benefits to residents: in particular, the possibility of allowing residents to only tell their story once. Interviewees said that residents regularly expressed frustration with the need to explain their situation on multiple occasions to different Council services.

One interviewee who headed a Council service also described instances where a resident was involved with up to five separate Council services that were failing to connect with each other. Here, the aim was to reduce the amount of time that the resident spent explaining their needs to the Council and thereby improve their overall experience of the interaction.

Improving outcomes

In interviews, meetings and public presentations, interviewees emphasised the overarching goal of improving outcomes as the primary motivation for the Council’s use of OneView.

One interviewee said that the Council’s leadership would not necessarily see the need to quantify impact in terms of outcomes, because of the perceived potential for predictive analytics to support prevention, a key element of the Council’s strategy: ‘We’re doing what we believe will work, and not what we can fill in a Key Performance Indicators form […] The Council believe that they need to do something different, and that means taking a risk.’

Using the OneView system: relevance and usefulness

In June 2019, a partner at EY wrote in a LinkedIn blog post that OneView had supported successful outcomes for residents in Hackney and Maidstone, and was in the process of helping LBBD to reduce demand for social care and housing services.[109] While there were efforts to measure OneView’s impact on the Council’s housing service in February 2020, an attempt to measure the impact of OneView as a whole did not take place until mid-2020.

Interviewees said that the need to ensure that data was collected, matched and analysed appropriately meant that measuring OneView’s impact had not been possible earlier on. Council staff at multiple levels of seniority described OneView as a system that was still in development during the research period and that needed further refinement. Several interviewees said that during the research period, the Council was not yet able to measure the impact of OneView on individual residents: ‘We can’t measure how [many] of those people walked into the Homes and Money Hub on the basis of a OneView risk alert […] and walked out the door with a good outcome […] We know it’s working but we don’t know how.’

One interviewee suggested that the collection of regular data would help here: they suggested that once they were able to compare contemporary data with that from six months previously, the Council would start to be able to make statements about what impact OneView was having.

Internal discussions about how the Council should define and measure OneView’s impact continued during the research period. Multiple interviewees suggested that there was a need for more evidence that the use of predictive analytics was having a positive impact. Supporting papers for a review meeting of the OneView system in May 2020 laid out criteria for ‘minimum viable products’ in three domains where the system was used: Children’s Social Care, Adult Social Care and Housing. These were titled ‘What does good [look] like?’.[110] For Children’s Social Care, the criteria were:

  • Case Summaries bring together information from the core data sets and provide the essential information staff require to understand the case and wider council interactions with the family
  • The language used in Case Summaries is clear and easy to understand
  • ‘Case Summary ‘risk summary’ provides an accurate view of the risk to the child based on information held within source systems
  • Case Summaries are trusted by staff as an accurate picture of the needs and circumstances of the individual and time sensitive information is up to date
  • Household composition is reliable and staff can rely on the matching rules to provide an accurate view of household composition in the most part; matching discrepancies are the exception not the norm
  • There is a clear process for identifying those at risk and that may benefit from an early intervention or preventative service, either in the form of targeted risk alerts or operational dashboards
  • Dashboards provide commissioning insight to support service planning, by understanding trends and typical risk factors
  • Dashboards provide key operational metrics to enable managers to prioritise the workload of staff and have visibility of the pipeline

Indicators for how these could be measured were not included.

One interviewee involved in implementing OneView noted in June 2020: ‘There is a level of scepticism on the ground: “Actually, is this really going to help me, and tell me more than I already know?” We haven’t proved that yet.’ In a meeting of senior managers for OneView in June 2020, staff members agreed that ‘we all need to agree what success is. If this isn’t meaningful and doesn’t add value we might as well pack up and go home.’ One interviewee emphasised the importance of ensuring that this reflected ‘how we help the service to do what they want to do’. By July 2020, the LBBD OneView Working Group was noting in team meetings the need to show examples of the impact that the system was having.

‘Actionable-ness’

While ‘accuracy’ was used in discussions of success criteria during the build phase, interviewees were keen to emphasise that this was not how users of the software thought about success. OneView was considered to be effective if it identified an individual as being at risk and a Council service subsequently provided them with an offer of support.

This method built on work conducted on similar projects in the past. For example, the Data Ethics Workbook, describing OneView’s impact in children’s services in another London borough, records that 55% of ‘alerted cases’ were ‘subsequently allocated to a statutory service, indicating the model correctly identified family in need of support’.[111] Xantura has also said publicly that ‘the number of proactive contacts suggested by OneView that resulted in an offer of support’ is one of the business case metrics that it tracks with other clients.[112]

The Council focused on assessing this ‘actionable-ness’ by tracking what happened to a resident after OneView identified them as being at risk. An internal presentation notes: ‘The rate of effectiveness has been high, despite ongoing data issues and areas for improvement.’[113]

Interviewees also used a more difficult-to-measure interpretation of actionable-ness: that OneView alerts and case summaries could act as a ‘prompt’, stimulating frontline workers to undertake further investigation. One interviewee, explaining this point of view further, said:

‘We cannot rely solely on an algorithm to detect children at risk of harm, or who no longer need our help. But that doesn’t mean to say that it can’t be useful in terms of helping us look at a situation and say “This algorithm is suggesting that this family are at more risk – have we checked out everything we need to check out? Let’s check our assumptions.”’

This was often referenced alongside comments expressing the hope that providing more data would help to limit bias in human decision-making (discussed further under Insight 5).

‘Relevance’ and ‘usefulness’

A range of staff felt that the OneView case summary was most helpful to frontline staff engaging with residents in the earlier stages of intervention. They generally agreed that this was partly because services at higher tiers in social care (such as Family Support and Safeguarding) already had access to significant amounts of information about a family, whereas Early Help services often began looking at a family with much less information to work with. One LBBD interviewee said that OneView was ‘the tool that has hooked people because it’s operational: it’s very useful and tangible.’

Interviewees said that OneView was less useful in service areas where more complex sets of circumstances give rise to vulnerability and, in turn, intervention, such as the higher tiers of intervention in Children’s Social Care:

‘What we’re finding is that for the kinds of datasets that we actually need to help in decision-making in social care, OneView is not set up to provide it. So what OneView is helpful for is for early help, for getting them to understand or to target, where […] there might be debt coming up or housing issues or whatever it might be, and they can then get in and disrupt some of those kind of poor practices or poor behaviours that might then subsequently lead to a safeguarding concerns which then would come into social care’s world.’

However, frontline workers tended to consider the success of the predictive components of OneView in terms of whether the alerts they received were relevant and useful in helping them to identify residents or families in need of support from their specific service. In one implementing team meeting in June 2020, the team described testing a model that aimed to identify families who were not known to existing Council services but who were at risk of needing statutory intervention from Children’s Care and Support in the future.

The test had aimed to identify 33 families using the predictive model, and then to discuss with the service whether they were the type of families that the service would want to work with. However, the test was stopped after the first nine families had been identified, because the frontline workers felt that all nine were already known to their service and therefore they considered the identifications ‘incorrect’ – in the sense that they were not useful. The implementing team disagreed, reporting that ‘the nine they thought were incorrect are actually correct’.

Seven of the nine families had previously received support from the Council but were no longer doing so (in the implementing team’s definition, they were not ‘known to services’). In the other two cases, the families’ status as being known to Council services had changed because inaccurate information about each family in the underlying Council systems had since been corrected. ‘It’s jumped around, the definitions bit’, noted one staff member, while another said: ‘We need to have another session [with the service] to make sure they’re really clear on the ask’.

Accuracy

Frontline workers considered the accuracy of the OneView system in terms of the accuracy of the underlying data (rather than the accuracy of the model, as the term tended to be used during the development phase). An interviewee in Children’s Social Care, however, discussing a case summary for a family with which they were already familiar, was concerned that the document did not mention a key risk that the service was actively considering at the time: domestic violence. ‘Reading this it feels like an average family that are known to us, but this is not an average family. We’re really concerned about [the lack of mention of domestic violence].’

In considering the predictive alerts, some interviewees focused on the idea that OneView’s predictive alerts were not sufficiently accurate at this stage: ‘If I ask 20 social workers who they think is mostly likely to fall over next week, I would guarantee that’s more accurate than OneView.’

Staff also questioned whether it was possible for predictive analytics itself, however it was implemented, to capture the complexity of a family’s situation: ‘When you’re dealing with families with multiple complex factors, members coming in and out of families, different dynamics, there are so many things that have to be taken into account that I think it’s quite difficult to pick up on any kind of predictive analysis system.’

Key recommendations

Local authorities should include the development of clear and actionable success criteria and plans for how they will be evaluated when they procure and implement analytics systems, including in pilot deployments.

Local authorities should develop success criteria and evaluation methods for the system as a whole with the participation of those who will be most affected by the use of the system. Where benefits are anticipated for a particular group – for example, frontline social workers or service users – this group should participate in developing success criteria and evaluating whether the suggested benefits have been achieved.

 

Insight 2: Local authorities need better guidance to support compliance with data protection and equalities monitoring obligations

Key findings

 

Local authorities need more support to ensure that their use of data analytics complies with both their data protection obligations under the UK GDPR and their equalities obligations (particularly the monitoring obligations) under the Equality Act 2010 and the Human Rights Act 1998.

 

We did not seek to assess LBBD’s compliance with the UK General Data Protection Regulation (UK GDPR), nor did we observe breaches of this legislation. Nonetheless, we observed that a lack of guidance from the relevant regulators about the similarities and differences between ‘special category data’ under the UK GDPR and ‘protected characteristics’ under the Equality Act 2010, as well as the different obligations under these pieces of legislation, meant that discussions risked conflating and confusing these concepts and obligations.

 

This is compounded by the involvement of private-sector-developed technologies that include technical bias-monitoring capabilities which may not align with legal equalities monitoring obligations.

Equality is a key concern for LBBD. The LBBD draft Corporate Plan 2020–22 described how the borough aims to build preventative services that ‘identify and address the root causes of poverty, deprivation and health inequality’.[114]

This is reflected in the documents we obtained relating to the OneView system. The Data Ethics Workbook mentions the Equality Act 2010 as a relevant piece of legislation, stating that it covers ‘restrictions around discriminating based on protected types of data’,[115] and that the system would ‘never create proxies’ for the use of any variables which are ‘covered by the conditions of the Human Rights Act (i.e. Gender, Sexual orientation etc.)’.[116]

The Data Protection Impact Assessment (DPIA) submitted to the ICO in August 2020 states that: ‘Promoting equalities is a key part of our wider strategic objective – we believe the One View system has considered unintended data bias in the way the system has been designed, we do not, at this stage know what, if any unintended bias might become evident.’[117] Our researchers were also informed that an equalities impact assessment had been completed for the OneView project.

Equalities and bias

Many of the discussions related to equalities in LBBD documentation, and in our interviews, were framed in terms of ‘bias’ in the OneView system itself. In statistics, bias is a technical concept: it measures systematic deviation from an accurate result.[118] This is different from fairness, which is a normative concept,[119] and from non-discrimination and equality, which are legal concepts.[120] In a public webinar, Xantura acknowledged concerns about bias but argued that its technology offers an improvement on biased human decision-making in this respect: ‘At least with the systems we’re building, we can monitor their bias on an ongoing basis and we can also systematically address it.’[121]

The Data Ethics Workbook distinguishes three kinds of bias:[122]

  • Unconscious/practitioner bias: bias that ‘can be inadvertently built into the algorithms that drive an analytic process, often due to underlying unconscious bias by the writer or programmer’.
  • Social bias: bias which results from using training data that itself is biased due to historic decisions arising from ‘human cognitive bias’.
  • Measurement bias: bias that arises from using only data about ‘residents who interact with Council services’ rather than data which is representative of the population.

The Data Ethics Workbook also explains how the OneView project aims to counter these different sources of bias:[123]

  • Minimising the risk of unconscious bias through the ‘large and diverse team’ at EY and Xantura.
  • Regular reviews of outputs with LBBD to ‘highlight any potential concerns’.
  • Not including ‘protected characteristics’ in risk modelling ‘unless absolutely necessary’, and ‘never creat[ing] proxies’ for variables covered by the Human Rights Act (although the DPIA notes that ‘Protected characteristics not being included in risk modelling is not an assumption of protection from bias’).[124]
  • Using ‘absolute objective information wherever possible’.
  • Including data from ‘generic and universal services such as council tax teams and Council registrar offices’, as well as identifying additional datasets that could be added in the future to address ‘specific concerns about under/over-representation of a certain demographic’.

The final point reflects language in the DPIA: ‘Our aim is to incorporate as full a representation of the Barking and Dagenham population which helps to ensure that the data is not biased towards cohorts of the population who may appear more frequently within local data sets.’[125] The DPIA also states that the Council ‘believe[s] the OneView system has considered unintended data bias in the way the system has been designed’ but does not ‘at this stage know what, if any unintended bias might become evident’.[126]

Interviewees at multiple levels – including LBBD staff involved in developing OneView for use at the Council, and those using its outputs – said that bias was not discussed in any depth during the development and implementation of OneView. Neither bias nor equalities is mentioned in the OneView User Guide[127] or the general FAQ document provided to staff.[128]

Staff working more directly with OneView also noted that bias was not a topic of discussion: staff members who had been part of the development of OneView noted that ‘it hasn’t directly come up in workshops that we’ve been involved with, but [has] been dispersed in other conversations’, and ‘it’s more about equality impact assessments’. When interviewers raised the question of whether there might be bias in a dataset, several interviewees said that this was the first time that they had heard it discussed. As one put it: ‘That’s a very interesting concept; I’ve not thought about it before.’

When asked directly about bias, most interviewees said that they were not aware of it being discussed (but that others might have discussed it). Broadly speaking, interviewees did not report knowing much about how OneView monitors, mitigates or addresses bias. When asked whether Xantura was responsible for monitoring for bias, one interviewee at a more senior level said: ‘I don’t believe so. I know that it is monitored, but who does the monitoring I’m not quite sure.’

A small number of interviewees discussed bias specifically in relation to predictive analytics. One staff member, acknowledging that the data collected by the Council contained bias, said: ‘The model’s going to learn from data that’s got bias in it.’ Another staff member discussed the risks in more detail: ‘Bias is built into our collection of data […] We could reinforce those biases, and start targeting people for reasons that they can’t control. That we start making decisions on the basis of prediction, that’s the dystopian future.’

Staff tended to agree that Xantura was responsible for monitoring issues related to bias and that it had done so effectively. Some of these interviewees felt that bias had been eliminated. An interviewee involved in the development of OneView said: ‘I feel that [the Insight Hub] are mitigating it to ensure that there is no bias, because obviously that is an understandable concern or risk.’

Protected characteristics

The most common response among those who were aware of risks related to bias was that Xantura had dealt with the issue by not including certain identity characteristics, except those which the DPIA states ‘may be significant in the modelling process’.[129]

However, while the DPIA states that ‘protected characteristics not being included in risk modelling is not an assumption of protection from bias’,[130] interviewees frequently gave the opposite perspective. As one put it: ‘That’s how Barking and Dagenham have addressed the issue, by making sure that those categories aren’t included.’ The Data Ethics Workbook describes the exclusion of protected characteristics from risk modelling as a protective measure:

‘To further protect from bias, and unless absolutely necessary, protected characteristics are not included in the risk modelling process. While age and disability may be significant in the modelling process, other characteristics such as ethnicity or religion are not relevant to modelling and are entirely discounted.’[131]

The Data Ethics Workbook discusses ‘protected types of data’ in the same terms:

‘We don’t include most protected types of data in our platform (e.g. ethnicity, religion, gender reassignment, sex, sexual orientation). Age, disability and pregnancy have been shown to have a direct impact on risk of homelessness and likelihood of needing care and support services and are therefore included. Factors such as marriage and civil partnership are necessary to include for the household composition.’[132]

However, interviewees working more directly with data expressed uncertainty about this approach. ‘If you make a model with the police stop and search data, there’s a bias there’, one interviewee said. ‘Even if you take race out, there’s still bias within the data.’

Bias-monitoring procedures

Screenshots provided by the Council indicated that during the research period, Xantura was monitoring for bias according to age, as well as eight other factors. Xantura-produced bias reports were available for viewing in the Fusion Data Exchange, a platform that shows a range of dashboards on model performance and bias. The Fusion Data Exchange was available to either one or two Council staff members during most of the research period.

During the research period, OneView was producing bias reports for the predictive alerts which were in use: for example, one bias report generated by Xantura showed that predictions to inform a ‘step-down’ alert in the Early Help Children’s Services were skewed, with disproportionately fewer predictions for children aged one and below due to a lack of data on children this age.

The DPIA provided a more detailed description of OneView’s approach to monitoring for bias – ‘Outputs from the model are compared with historical case diversity profile (e.g. to identify any variation)’ – and notes that any such variations would be reviewed with staff involved in implementing OneView.[133]
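The report does not detail how Xantura performs this comparison. Purely as an illustration of the kind of check the DPIA describes – comparing the demographic profile of model outputs with a historical case profile and flagging variations – a minimal sketch might look like the following; all field names, categories and thresholds here are hypothetical and are not drawn from the OneView system.

```python
# Minimal sketch of the kind of comparison the DPIA describes: the demographic
# profile of cases flagged by a model is compared with a historical baseline.
# All field names, categories and thresholds are hypothetical illustrations;
# this is not Xantura's actual implementation.
from collections import Counter

def profile(cases, attribute):
    """Return the share of cases falling into each category of `attribute`."""
    counts = Counter(case[attribute] for case in cases)
    total = sum(counts.values())
    return {category: n / total for category, n in counts.items()}

def flag_variations(historical_cases, flagged_cases, attribute, tolerance=0.05):
    """Return categories whose share among flagged cases deviates from the
    historical baseline by more than `tolerance` (absolute difference)."""
    baseline = profile(historical_cases, attribute)
    flagged = profile(flagged_cases, attribute)
    return {
        category: (baseline.get(category, 0.0), flagged.get(category, 0.0))
        for category in set(baseline) | set(flagged)
        if abs(baseline.get(category, 0.0) - flagged.get(category, 0.0)) > tolerance
    }

# Example: disproportionately fewer predictions for the youngest age band.
historical = [{"age_band": "0-1"}] * 20 + [{"age_band": "2-5"}] * 80
flagged = [{"age_band": "0-1"}] * 5 + [{"age_band": "2-5"}] * 95
print(flag_variations(historical, flagged, "age_band"))
# e.g. {'0-1': (0.2, 0.05), '2-5': (0.8, 0.95)} (ordering may vary)
```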

This approach was also reflected in discussions of bias reports: interviewees did not mention any instances of such variations arising during the research period. A member of the Insight Hub, however, indicated that they were not aware of how Xantura checked for bias. ‘We know protected characteristics aren’t used [in the predictive models], but they are used by Xantura to check the outputs to check there isn’t bias. It would be good to understand the process that they go through to do that.’

Human and machine bias

When asked directly about bias, interviewees tended to discuss the risk of bias in human decision-making. Some staff said that the predictive models used as part of OneView would be likely to display the same levels of bias that exist in current Council decision-making.

One interviewee felt that the issue of bias in data could be resolved by providing more data: ‘It’s the same problem as human bias; it happens when the information is not there.’

Conflating protected characteristics and special category data

Interviewees recognised that there are different grounds on which a person can be discriminated against – but documentation and interviewees used terminology which may have confused distinct (albeit related) concepts. For example, the Data Ethics Workbook states that the Equality Act 2010 contains: ‘restrictions around discriminating based on protected types of data’.[134] This potentially conflates ‘protected characteristics’ (on the grounds of which the Equality Act prohibits discrimination) and ‘special category data’ (which merits special protection under the UK GDPR).[135] The Data Ethics Workbook also mentions another set of data: ‘variables which are covered by the conditions of the Human Rights Act’.[136]

Key recommendations

We recommend that local authorities carry out equalities impact assessments when developing and deploying data analytics systems.

We recommend that the EHRC and the ICO continue to collaborate to ensure their guidance is accessible, fit-for-purpose and enables staff across a wide range of local authority functions (and other public-sector institutions) to handle the use, or exclusion, of special category data, in particular with regard to:

  • its use in data analytics and predictive analytics systems
  • its use in equalities monitoring of these systems
  • compliance with the Equality Act 2010, the UK GDPR and Article 14 of the Human Rights Act 1998.

We also recommend that the Crown Commercial Service develops model contract clauses that local authorities could use to ensure developers of these tools make them compliant with EHRC and ICO guidelines. This model contract language could also help to ensure local authorities have a contractual right to gain the appropriate level of access to the underlying model and training data, so that they can perform evaluations and test its accuracy and efficacy.

Companies developing and supplying technology tools and systems to the public sector must ensure their practices comply with laws and ethical obligations, and must ensure that they enable regulatory compliance for public-sector clients. This may entail giving members of a local authority’s data science or technical team access to the underlying models and training data of the data analytics system, so that they can perform bias auditing and evaluations.

 

Insight 3: Introducing data analytics into public-service delivery changes the day-to-day work of frontline staff and this should be evaluated by local authorities as part of implementation

Key findings

 

Our research found that the development and deployment of the OneView system had an impact not only on IT systems in the Council but on the day-to-day practice of frontline workers. Interviewees highlighted in particular how the system could impact the social behaviours and relationships which are crucial for social work.

 

As a result, the development, implementation and evaluation of data-driven tools must be done in the context of the whole system into which they have been introduced – including both technical and social elements.

We found wide variation in the opinions of Council staff about data analytics, ranging from strong resistance to the idea of analytics, through optimism about the systems’ potential, to positive views of the tool even in its early stages of deployment. Some staff welcomed the additional information provided by the case summaries but also expressed concerns about the impact of new sources of information on residents’ trust in the Council and their relationships with social workers. More broadly, some staff expressed concerns that deploying the OneView system could shift power away from residents and back towards Council services.

Opinions of data analytics varied by role, and also by type of analytics

There was clearly widespread approval across the Council of the way its use of data had been transformed in recent years, and staff generally agreed that this had played a role in improving local government services in the borough. Staff often praised the Insight Hub – the team leading the use of predictive data analytics – in particular for their role in enabling this process. For example, interviewees pointed to multiple instances in which data had been used to target the Council’s resources more effectively and rethink the design of services,[137] or to influence national legislation. One example of the latter is the Council’s use of a model to estimate how many vulnerable people were living close to betting shops, which the Council submitted as evidence to support (ultimately successful) lobbying by other actors to reduce stakes on fixed-odds betting terminals to £2.[138]

Interviewees often suggested that attitudes on the use of data differed between frontline (or ‘operational’) and other staff, with frontline staff being less supportive. There was a distinction made in numerous conversations between groups who were excited about exploring the potential of using data in their roles and those who wanted to continue with longstanding professional practices.

Most interviewees who articulated benefits of the OneView system tended to ascribe these to the case summaries rather than the predictive alerts. Staff who did mention the predictive functionality tended to be at more senior levels. One emphasised that the Council had intentionally avoided a route wherein:

‘The minute that somebody comes up on that data we go knocking on the door going “Excuse me, the data has told us that you’re going to experience domestic violence in five years’ time, and we want to help you now”. Of course, that’s not what we do. What we do is work with the services who are best placed to have the conversation with that family about their support needs.’

Changes to practice

A variety of interviewees said that initially, fears among some frontline staff centred on the idea that algorithms would replace entire decision-making processes, or even individual roles. The Council’s Frequently Asked Questions document, which staff said was developed following initial conversations, lists ‘Is this automated decision-making? Is this going to change my role?’ as one of its questions.[139]

LBBD internal documentation for OneView users emphasised that it was not intended to replace human decision-making, but frontline staff expressed concerns about undue prioritisation of OneView outputs, and about the risk that the system could divert attention away from other sources of information, both of which could impact social work practice.

While the Data Ethics Workbook includes among OneView’s potential benefits the ‘refocusing of staff on higher value activity through automation of case note generation’,[140] interviewees from both the Council and its partners repeatedly stressed that OneView was not intended to replace professional judgement. Like multiple other internal documents, the FAQ document addresses such concerns directly, stressing that: ‘The focus of OneView is all about supporting our staff and services to improve outcomes for residents, and not replacing roles and processes.’[141]

Interviewees mentioned worries that despite the fact that staff would be trained to prioritise their personal judgement, they might rely too heavily on the outputs generated by OneView. This was raised both because of concerns that not all information available in OneView was correct or up to date, and because of uncertainty about the accuracy of OneView’s predictive alerts. An interviewee at frontline level with knowledge of several services in Children’s Care and Support suggested that over-reliance was a low risk because ‘the tool is still in its infancy and is not working so well’, but that this risk was likely to increase ‘as it becomes more reliable and it helps more’.

Changes to relationships

When Council staff were asked to discuss how OneView affected their relationships with residents, two themes emerged, both arising from concerns about data sharing. The importance of data sharing for safeguarding is well-established and reflected in Government guidance, serious case reviews and legal duties placed on practitioners.[142] Despite that, the use of OneView raised concerns.

Some staff members felt that residents might be concerned about how information shared with frontline workers would affect frontline services’ view of them. This was noted by an interviewee in relation to Children’s Social Care: ‘People are often quite wary of information and what is being shared with us, and how that would impact on their assessment. Especially when it’s things that are difficulties, obviously. That’s a major concern of social workers.’ One interviewee reported that staff across the service they worked in were ‘quite suspicious of [OneView], and find it very intrusive […] You’ve got a hard job to convince us’.

Another interviewee said: ‘I can imagine there being an issue at one point where [a Council worker says to a resident]: “You lost your house because of this [referencing data they had seen on OneView]”, and the resident says “Where has that come from? Why do you know?”’ Multiple interviewees described fears of a similar scenario, with one suggesting ‘there is a high risk that we are damaging our relationship with our communities’ if OneView was not implemented carefully.

Staff also discussed concerns in relation to trust in general, and how bringing together greater quantities of data about an individual or family might conflict with the Council’s relational practice model for social work. Interviewees repeatedly talked about the importance of consent to use people’s data, and reported that these questions were also discussed at senior levels. However, interviewees told the research team that consent forms and processes did not explicitly mention the use of predictive analytics or OneView.

Changes to power dynamics between the Council and residents

A wide range of frontline workers discussed instances where they felt it had helped them to have access to data on a family’s levels of council tax debt. Parents tended not to share such financial information, and one interviewee said this meant it was harder to provide support to families: ‘Sometimes as a social worker, families hold those things back from you, and suddenly you get a call saying “We’ve been evicted”, or “We’ve got a red letter through” or “There’s a bailiff”.’

Multiple interviewees, however, suggested that staff were concerned that one of OneView’s functions – to bring together data from multiple sources in one place – would give Council officers more power in a situation where power imbalances already exist. One interviewee, describing the concerns of staff members in Children’s Social Care, said: ‘There was a real ethical challenge [for them] about: “I’m working with this family because they’ve got a child protection need. And, you know, I don’t necessarily need to know how much debt they’re in, or whether there’s been any previous contact with the housing service.”’

One frontline interviewee said: ‘One of the things that came up quite a lot [in discussions with the OneView team] was the stuff about debt. I thought “That’s really tricky”. Not to say that one area of your personal data is more precious than another, but […] If we start saying: “There’s an issue of your schooling, we’ve come to discuss that”, and I can see a mental health issue is there [in OneView], I can wiggle that into my questioning. But then to speak about debt is very personal. I don’t know how comfortable people would feel about that.’

They speculated that other residents might perceive that data about their housing situation was equally personally sensitive: ‘It’s just a very tricky thing; I don’t think there’s any real answer to that.’

Key recommendations

Local authorities must develop, implement and evaluate data analytics systems in the context of the whole system into which they are introduced – including both technical and social elements – to ensure that their impact on relationships and practice is understood. This includes data analytics systems and tools developed by private companies.

Companies developing and supplying technology tools and systems to the public sector should design these tools and systems in close consultation with frontline workers and residents who may be impacted by their use.

Insight 4: Frontline workers are unlikely to rely on or trust outputs from data analytics systems with unclear rationales for how they were generated

Key findings

 

LBBD put considerable time and effort into building understanding among staff who would be using the system, as discussed in the ‘Developing and implementing OneView’ section. However, at the time of our research, the usefulness of the predictive alerts and case summaries – and therefore staff trust in the OneView system – was undermined by the inability of staff to meaningfully understand the underlying logic, particularly how risk factors were assessed and used.

 

LBBD staff in general reported very little clarity about the factors that contributed to the case summaries and predictive alerts and the rationale which produced the outputs. Researchers for this project were not able to obtain information to clarify this.

 

This lack of understanding – not of the technical details of the algorithms but of the risk factors used as inputs, and how these contribute to outputs – rendered the outputs from the OneView system less useful for frontline workers, as they did not necessarily feel that they could rely on them in their day-to-day work.

 

This was exacerbated by missing information and inaccuracies in some of the outputs, meaning that social workers did not trust – or disagreed with – the outputs.

This section discusses two specific outputs (see the ‘Timeline of OneView implementation in LBBD to 2020’ box) from the OneView system deployed by LBBD during the research period:

  • Predictive alerts about individuals who, according to predictive modelling, were at risk of specific events – such as presenting as homeless, being stepped up or down in Children’s Social Care or being admitted to hospital – in the next 12 months. This is a form of predictive analytics (see the ‘Definitions used in this report’ box): predictions about future outcomes using current and historical data (including data which has been through a matching process) combined with statistical modelling, data-mining techniques and machine learning to output a prediction about the future.[143]
  • Case summaries synthesising information from multiple data sources in a single view. This is a form of synthesising analytics (see the ‘Definitions used in this report’ box): processing data into a summary output.

Both outputs rely on data that has been processed through data matching: comparing or combining data from at least two different datasets.[144]
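Neither the report nor the Council’s documentation specifies the matching technique used. As a purely illustrative sketch of data matching in this sense – comparing records from two datasets and combining those that appear to refer to the same person – the following uses a simple deterministic key of normalised name and date of birth. The field names are hypothetical, and real-world matching is typically probabilistic and far more tolerant of spelling and recording differences.

```python
# Minimal sketch of data matching: combining records from two datasets that
# appear to refer to the same person. Field names and the matching key are
# hypothetical illustrations, not the method used by OneView.
def match_key(record):
    """Build a simple deterministic key from normalised name and date of birth."""
    return (record["name"].strip().lower(), record["dob"])

def match_records(dataset_a, dataset_b):
    """Return combined records for people who appear in both datasets."""
    index_b = {match_key(r): r for r in dataset_b}
    matched = []
    for record_a in dataset_a:
        record_b = index_b.get(match_key(record_a))
        if record_b is not None:
            # Fields from dataset_b overwrite duplicates from dataset_a.
            matched.append({**record_a, **record_b})
    return matched

housing = [{"name": "A. Resident", "dob": "1980-01-01", "rent_arrears": 250}]
council_tax = [{"name": "a. resident ", "dob": "1980-01-01", "ct_debt": 400}]
print(match_records(housing, council_tax))
# [{'name': 'a. resident ', 'dob': '1980-01-01', 'rent_arrears': 250, 'ct_debt': 400}]
```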

Our research found that despite the lengthy engagement process, including training, few staff interviewed felt that they could meaningfully understand or explain the logic behind OneView alerts and case summaries – even those staff who had participated in model design workshops. The following sections detail the areas where staff described gaps in their understanding of the OneView system, or concerns about the system.

Lack of shared understanding about risk factors

As discussed in the Risk modelling section, a key component of the OneView system in 2020 was its risk modelling, which takes inputs from both analysed case notes and structured data to identify risk factors in a case. These risk factors were used in the generation of both predictive alerts and case summaries.

The Council’s Data Ethics Workbook, based on a central government Data Ethics Framework template from 2018,[145] was completed at the outset of the project. It stated that ‘it is possible to see the factors that have been considered to be significant’ in OneView’s identification of an individual or family for intervention.[146] However, some interviewees using OneView in the Council’s Children’s Care and Support service said they did not feel this was the case.

Interviewees, including both implementing team members (more technical staff) and frontline workers, complained of a lack of access to information about both the ‘risks’ and the ‘risk indicators’ which are central to the operation of OneView’s case summaries and alerts. Almost all interviewees said that they did not know how the risk factors were defined, or where they could find a set of definitions for them.

One interviewee who had been involved in a series of workshops to develop OneView said that they ‘weren’t part of choosing keywords or deciding what constitutes a risk. Maybe someone else in Social Care was, but I wasn’t’. Council interviewees also did not have a clear idea of whether all data was used in the risk modelling process. Some fields, like social worker case notes, were generally known to be used in modelling, but several staff involved in implementing OneView said they were not sure exactly which other fields were used in modelling.

Case summaries shared with the research team displayed the following top-level risks: child exploitation, family stress, financial exclusion, homelessness, mental health, missing person, physical abuse, physical health, sexual abuse, substance misuse, risky / undesirable behaviour, abuse or neglect, criminality, debt, domestic violence, educational issues and robbery.

For example, a sentence in a case summary might take the following form: ‘In terms of trends, concerns around Undesirable Behaviour (behavioural issues, disruptive behaviour, risky behaviour) appear to be becoming more significant.’ Other risks are also listed in the case timeline (chronology), but it is not clear if or how they relate to the indicators.

The research team requested definitions for risk factors such as ‘general concerns’, ‘disruptive’ or ‘risky’ behaviour during the research period, but did not receive them. There was no available information on how a risk was calculated or how an individual was assessed to be ‘at risk’.

Several interviewees referred to situations in which frontline workers might read a case summary before having engaged with the individual, such as when a new staff member looked at a case. Another interviewee suggested that different local government services might respond to the terminology used in different ways: ‘I’d like to think that a social worker would look at [the word “criminality”] and see “Here are some issues that need addressing” rather than “Red flag, red flag”. For a school, “criminality” is an immediate red flag. For us, we would want to see more details about what that means.’

Key information missing

A common view among both social workers and management staff was that a broader range of data was needed to inform OneView’s predictions. Interviewees felt that OneView needed to include data from health services and the police, with one comment typifying interviewees’ views on the topic: ‘Social care needs the health stuff and the police stuff before it becomes a more robust tool that operations will use more frequently.’

The Council’s July 2020 ‘Value Framework’ noted the proposed next step of adding data from Adult Physical Disability and Youth Offending Services, as well as ‘further intervention data’ from Early Help, all of which ‘had been identified as of value to’ the Family Support and Safeguarding team, with potential to explore adding data from the police, probation, substance misuse services, adult mental health and refuge services once initial steps had been made.[147]

One interviewee working in Children’s Social Care, however, emphasised the inability of data to capture the nuances of a person’s relationships: ‘The quality of the analysis we do is going to be much superior, because of the information that we have access to. The machine is not going to observe a child with their parents and assess the bond that happens between them.’

Opaque outputs

Staff felt that they were unable to identify how case summaries and predictive alerts had been generated from data held about individuals. Frontline interviewees frequently said that they were unsure which data had been used to generate specific comments about a child such as ‘grief’. There was speculation from staff about whether including specific words when writing case notes might make OneView more likely to detect an issue and include it in a case summary.

Staff also consistently said they did not know how the system generated predictive alerts, what information alerts were based on or how they might have been calculated. Several staff described trying to ask for more information about how the models worked, and being unsuccessful.

Staff also disagreed with some of the outputs from OneView. In one case, the research team observed frontline workers responding to predictive alerts that a case would ‘step up’ from ‘Child in need’ to ‘Child protection’ in the next six months. One frontline worker noted that the case summary highlighted several risk factors, such as ‘criminal behaviour’, but that: ‘The challenge for us is that [the case summary] doesn’t really tell us what those things are about.’ Discussing the overall prediction that ‘stepping up’ was likely, they said that they wanted to understand more about how the prediction had been made:

‘I know the machine is picking up something, but I don’t understand what it is, and I can’t see from the case summary what it is. I can’t see anything helping me to understand why the data that OneView has picked up suggests that the children are at higher risk now than they were six months ago.’

They then added that, in their professional judgement, the case was likely to require a lower level of intervention rather than (as OneView predicted) higher levels of intervention: ‘From our perspective, the case is going in the right direction. We were not in any shape or form considering stepping up the case […] we were working towards closure.’

There were diverging views within the Council about the amount of information that should be provided about how OneView and its predictive models work. When asked questions about the technical details of OneView, interviewees at both leadership and frontline levels tended to suggest that the research team speak to members of the team implementing OneView (chiefly the Insight Hub). Several interviewees involved in implementing OneView suggested that sharing detailed information about a model would not be effective, because staff lacked the knowledge about predictive modelling and data analytics needed to be able to critique such models.

Asymmetries in power also existed between Xantura and the implementing team. Technical staff said that they did not always have access to all the technical information they needed about a system, with some noting that they would like to have a better understanding of what techniques Xantura was using to produce models or monitor bias.

Xantura was described by several interviewees as explaining systems ‘in a very technical way’, with multiple Insight Hub team members indicating that their role was to ‘translate’ descriptions into language that Council services could easily understand. Staff involved in designing and implementing OneView, including in the Insight Hub, often took on a gatekeeper role, and some felt that explaining the processes behind the system was one of the most difficult challenges they faced.

Impact on service delivery

Staff reported that the opacity of case summaries and predictive alerts made it difficult to integrate the system into their work. One frontline worker expressed uncertainty about whether they could include information from a OneView case summary in a report about a family because of difficulties explaining that OneView was in use: ‘How would I use that in a report? Somebody like a family would look at me and say “What’s OneView?”’

Other staff involved in implementation suggested that OneView’s role in bringing an individual or family to the Council’s attention might not be explained to the resident, even if the resident specifically asked why they had been identified as being at risk. One manager with experience of a service using OneView to contact individuals said that residents rarely asked where the Council had gathered information from, and that part of the Council’s overall strategy was to change this:

‘Some obviously have said no, they don’t want to talk to us. “Where did you get my information?” You know, the usual. But largely, they’ve not queried that. It’s something that we expected because of the demographic makeup of Barking and Dagenham, if you look at our demographics, unemployment, levels of education, we get little challenge from our residents for a number of reasons, which we try to change – part of what we’re trying to do is to work on aspiration and stuff like that.’

A staff member compared the OneView system to a ‘traditional approach’ in which a school might refer a family to the Council: because the school would also have spoken to the family, the interviewee felt that ‘the conversation kind of just flows’. In contrast, they said, ‘One of the areas that we were a little bit reluctant with was, if OneView identifies the family, and we pick up the phone, what do we actually say to them? […] Nothing has actually happened to this family. There’s been no incident.’

One interviewee said that they found it hard to understand the origin of some data in OneView’s case summaries and explained that this could be an issue for some tiers of Children’s Services if they needed to present data in court. Meetings and internal documentation refer to plans to include more information of this type in the future. The impact of OneView on practice is discussed further under Insight 6.

Transparency is necessary for trust

The link between understanding and trust featured strongly in discussions about the use of predictive analytics with interviewees. One interviewee summarised their concern: ‘It’s not that I don’t trust, but we always need to know why.’

Some staff involved in frontline service provision emphasised that a clear explanation of where the data included in OneView came from, and the processes undertaken to produce an output, was as important as a model’s predictive capacity: ‘It’s not just data for us – it’s about understanding what is actually happening and what that’s telling us.’

Another interviewee said they would not always feel confident in using information from the case summary about issues such as debt, because they did not know where the information had come from. They also cited uncertainty about the definitions used, such as whether statements about average levels of debt were made in relation to households in OneView, in the borough or in the ward.

As a result, as one interviewee told us, frontline workers were not able to rely on the outputs from OneView in their work:

Interviewer: ‘Has OneView ever changed the way you assessed an individual?’

Interviewee: ‘If it changed the way we practised it would have meant that we put a lot of value and trust into the system – and the system is not there yet, I’m sorry. We can’t base our decisions on what the system tells us, because what it tells us […] we can’t rely on it.’

The Council was aware early on in the deployment of OneView of the scepticism, hesitation and concern experienced by some frontline workers and users of the system. A December 2019 document, reviewing feedback from multiple users in Children’s Care and Support, found that ‘users are more confident in the tool when there is transparency on the data sources and date of refresh’,[148] and that they are ‘keen to understand the method of the underlying modelling and risk factors used’.[149]

Other frontline staff expressed a similar view, saying that giving them more information about how OneView works would improve their impressions of it:

‘It would probably help us see the value of the tool if we could see what it was based on. It’s about the trust […] If you can’t understand how it works, you can’t trust it to see the steps it’s taken.’

They suggested that scepticism among staff meant that OneView ‘is at quite a high risk of being ignored’.

Key recommendations

Local authorities deploying systems like OneView, which produce synthesising (such as case summaries) or predictive (such as alerts) outputs, should ensure that the systems are explainable, in line with the guidance produced by the ICO and the Alan Turing Institute.[150] These explanations should:

  • be accessible to all stakeholders, including frontline workers and the people whose data is used in the system
  • include the purpose and target group, factors and underlying values that are used as features in models, and the rationale for using those factors
  • include mechanisms for human review where data-analytics-informed decisions produce undesirable outcomes and redress may be required.

Local authorities deploying data analytics systems should complete algorithmic transparency reports and upload these to the repository overseen by the Responsible Technology Adoption Unit (RTA) and the Central Digital and Data Office (CDDO).[151] They should regularly review the reports and update them as necessary.

Companies developing and supplying data analytics tools and systems to the public sector should provide clear explanations for how tools and systems work, as well as access to systems to enable audits and evaluations of how the tool produces outputs. They should provide public-sector clients with all the access needed to audit and evaluate tools and systems before procurement, and at regular intervals afterwards.

Insight 5: Outputs that are explainable and understandable can benefit frontline work

Key findings

 

The OneView COVID-19 tool had a clear purpose: to identify vulnerable individuals who might need additional support from the Council in the early weeks and months of the COVID-19 pandemic. In contrast to the case summaries and predictive alerts (see Insight 1), Council staff had a clear understanding of the factors which contributed to the outputs and how the model was prioritising individuals: as a result, staff were able to use the outputs in their work and describe the benefits.

 

The differences between the use of OneView in Children’s Social Care and in the COVID-19 response illustrate that having a transparent, explainable and accessible reason for using data analytics increases the trust that users have in the system.

Purpose: contacting vulnerable people to proactively offer support

Our research period (May–September 2020) meant that we observed the Council’s use of OneView during the early months of the COVID-19 pandemic in the UK, as part of a broader programme of support.

A key intention of the Council during the initial stages of the COVID-19 pandemic was to engage in ‘proactive contact’, which involved identifying and offering support to vulnerable residents earlier than would otherwise have been the case. For example, if the Council was aware that a resident’s council tax debt was accruing, they could then contact that resident to ask if they would like support managing their finances (provided through the Council’s Homes and Money Hub), or to assess if the resident might be eligible for additional financial support.

During the initial stages of the COVID-19 response, responsibilities for proactive outreach activity were split according to whether a resident had an existing point of contact within the Council (meaning that they were ‘in service’). If so, that staff member would then contact them.

Those who were not ‘in service’ would be contacted by the Extended Intake Team (EIT). The EIT’s normal work focused on assessing people for services and making referrals on to different Council services: the team was ‘extended’ – increased in size – through the redeployment of staff from other services that had been scaled back or suspended during the initial lockdown.

Output: identifying residents to contact

To define who the Council needed to contact, services needed to identify and prioritise residents who were vulnerable. To do this, the Council used external as well as internal data:

  • Lists produced by Clinical Commissioning Groups (CCGs): Local NHS CCGs produced lists of their patients that they deemed to be at high risk from COVID-19 and whom they believed should isolate (known as ‘shielding’).
  • Self-referral to the NHS: residents who considered themselves to be clinically vulnerable but had not been informed of their shielding status could complete an online self-referral; CCGs asked the local authority to call people on this second list to check that any additional needs they had were being met.

Together, the CCG and self-referral lists made up what was known as the ‘shielding cohort’. This numbered more than 8,000 residents. Initially, the Council focused on contacting individuals on the CCG’s shielding list. The EIT was responsible for contacting those over the age of 65 who had not used Council services.
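As a purely illustrative sketch of the allocation rule described above – residents with an existing Council point of contact are contacted by that service, while others over 65 fall to the EIT – a minimal version might be coded as follows. The record fields and return values are hypothetical; the Council’s actual process was operational rather than codified in this form.

```python
# Minimal sketch of the outreach allocation rule described above. Field names
# and return values are hypothetical illustrations.
def allocate_contact(resident):
    """Decide who should contact a resident in the shielding cohort."""
    if resident.get("in_service"):
        return resident["existing_contact"]  # the service already working with them
    if resident.get("age", 0) > 65:
        return "EIT"                         # Extended Intake Team
    return "unallocated"                     # handled outside this simple rule

shielding_cohort = [
    {"id": 1, "in_service": True, "existing_contact": "Adult Social Care", "age": 58},
    {"id": 2, "in_service": False, "age": 72},
]
print([allocate_contact(r) for r in shielding_cohort])  # ['Adult Social Care', 'EIT']
```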

Data sources

The separate COVID-19 case management module included additional health and community data sources: data from the NHS shielding list, and data that residents shared with the NHS when referring themselves as ‘vulnerable’. These sources were not fed into the main instance of OneView but remained separate. Staff stated that they expected that the Council would be asked to delete data at a later date, as it included ‘really sensitive data: permissions were hastily drawn up. We would never have been provided with that data in the past, [only] when saving lives depends on it’.

Risk factors and model

An internal LBBD presentation described the understanding of risk factors in this context as being based on three ‘lines of enquiry’:

  • risk of being ‘directly affected’ by COVID-19: this included elderly residents as well as those with medical conditions
  • risk of being ‘indirectly affected’ by COVID-19 as a result of isolation and social distancing, for example, residents experiencing domestic violence
  • residents likely to experience a knock-on effect: for example, financial vulnerability.

The COVID-19 module used data already in OneView to identify residents who had one or more of these risk factors, and to prioritise them by number of risk factors in a simple counting algorithm. Those the Council recorded as having more risks were allocated to a higher level of priority.
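To make the counting-and-prioritising approach concrete, the sketch below counts how many recorded risk factors apply to each resident and orders the contact list accordingly. The factor names and resident records are hypothetical illustrations; the report only states that a simple count was used, not which fields fed into it.

```python
# Minimal sketch of the simple counting algorithm described above: residents
# are prioritised by the number of recorded risk factors. Factor names and
# records are hypothetical illustrations.
RISK_FACTORS = ["clinically_vulnerable", "over_70", "lives_alone",
                "domestic_abuse_history", "council_tax_arrears"]

def risk_count(resident):
    """Number of recorded risk factors that apply to this resident."""
    return sum(1 for factor in RISK_FACTORS if resident.get(factor))

def prioritise(residents):
    """Order residents so those with the most recorded risks are contacted first."""
    return sorted(residents, key=risk_count, reverse=True)

cohort = [
    {"id": 1, "over_70": True},
    {"id": 2, "over_70": True, "lives_alone": True, "council_tax_arrears": True},
]
print([r["id"] for r in prioritise(cohort)])  # [2, 1]
```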

Impact on practice

Frontline workers were able to clearly explain how the COVID-19 system prioritised residents for contact. As two managers overseeing the EIT put it: ‘Early on we were doing the most vulnerable; people with ten vulnerabilities, more complex needs first. The less needs, the lower priority you are. When you have a list of 8,000 people – and that’s growing – unless you have that filter, you’re the rabbit in the headlights.’

Staff were also able to explain to residents why they were being contacted. One interviewee said that the COVID-19 pandemic had been ‘quite helpful’ when starting these conversations because ‘it has just been an easy explanation to say: “This is what we do because of COVID”’.

Key recommendations

Our key recommendations for local authorities echo those in the previous section.

Local authorities deploying systems like OneView, which produce synthesising (such as case summaries) or predictive (such as alerts) outputs, should ensure that the systems are explainable, in line with the guidance produced by the ICO and the Alan Turing Institute.[152] These explanations should:

  • be accessible to all stakeholders, including frontline workers and the people whose data is used in the system
  • include the purpose and target group, factors and underlying values that are used as features in models, and the rationale for using those factors
  • include mechanisms for human review where data-analytics-informed decisions produce undesirable outcomes and redress may be required.

Local authorities deploying data analytics systems should complete algorithmic transparency reports and upload these to the repository overseen by the Responsible Technology Adoption Unit (RTA) and the Central Digital and Data Office (CDDO).[153] They should regularly review the reports and update them as necessary.

Insight 6: The lack of standardised understanding of ‘ethical data analytics’ among local authorities risks inadvertent harm as new technologies are deployed

Key findings

 

We identified multiple different conceptions of what ‘ethical’ practice looked like in LBBD’s use of data and predictive analytics in social care settings during the research period in 2020. These included good intentions and compliance with information governance obligations, as well as improving outcomes for residents. Discussions of ethics also included consideration of whether the OneView system exhibited bias, as well as the transparency of its use and the avoidance of actively harmful uses.

 

We identified a general consensus among interviewees that the ultimate responsibility for ethical use of OneView rested with the Council rather than being delegated to EY or Xantura. We also identified a wide range of specific interventions put in place to ensure ethical practice – including data protection measures, specifically placing OneView as a decision-support, rather than decision-making, system within service provision, and the production of ethical guidance and a workbook. However, these interventions did not necessarily address all the ethical concerns, nor were they visible to all individuals involved in deploying and using OneView. The resulting lack of clarity about what constituted ‘ethical’ practice across the entire system meant that there was a risk that some possible harms would not be mitigated.

 

Ethical principles for the use of data analytics should be holistic, accessible and usable by everyone involved in using the analytics system. They should be consistent with – but not limited to – other obligations, including equalities and data protection obligations.

Location of ethical responsibility

Our research found that ethical use of local authority data was considered crucial. Xantura’s public communications emphasised its focus on ethical practice. The company regularly uses the term ‘ethics’ when marketing its work to local authorities, stating in a September 2020 blog post that: ‘Partnership working and ethical data sharing is exactly the reason why Xantura was created 12 years ago.’[154] As such, the company stresses that the way data analytics is used is critically important. Xantura’s CEO explained in an external webinar in 2020: ‘The use of algorithms isn’t new. And the ethical problem isn’t new; the devil really is in the detail of the implementation.’[155]

Xantura suggests that local authorities determine the way in which its technology is used. In the same 2020 webinar, the CEO explained: ‘Our ethical approach and principles are framed in our context in children’s services through support tiers that already exist.’[156]

Broadly speaking, interviewees described social workers as being the people most critically engaged in ethical discussions within the Council. ‘That’s the thing about AI’, one interviewee said. ‘Social workers are not traditionally fans.’ Often, social workers were presented as having their own conception of ethics, with a perceived gap between data teams, and those engaged with social work and relational practice. Others actively presented managers of social care services as the guardians of these approaches, describing them as ‘the buffer that anything needs to get through […] [so that the Council can] genuinely use data without compromising social work practices’.

Different ideas about what constitutes ethical use of data

Ethical considerations were visible in many of the interviews, and ethical use of the OneView technologies was seen as very important. Several interviewees indicated that the Council was aiming to create a positive narrative about the use of data analytics. This included efforts to generate certainty that the Council’s approach was ethical and could be justified with ‘confidence’.

However, we identified multiple different conceptions of what ‘ethical’ practice looked like in the use of data and predictive analytics in social care settings. This included good intentions and compliance with information governance obligations, as well as improving outcomes for residents. Council staff did not necessarily use terms like ‘ethical’ when discussing the use of predictive analytics, but they were nonetheless keen to talk about issues that needed to be considered in this use.

Many of the interviewees who talked in greatest depth about concerns with predictive analytics, including those working in Adult or Children’s Social Care, also said that they were not opposed in principle to the use of predictive analytics. One interviewee, after outlining a series of potential problems, stressed: ‘This conversation sounds very negative, but that’s not my intention.’ Another cited several concerns about the system, but then said: ‘I think there is a place for it. It’s not because we’re dealing with humans in social care that we can’t be using these tools. I think we just need to be careful […] it’s how they’re being used.’

Discussions of ethics also included consideration of whether the OneView system exhibited bias, as well as the transparency of its use and the avoidance of actively harmful uses, including using OneView to take decisions unilaterally.

Good intentions

LBBD staff often felt that the Council’s intent in using OneView influenced whether the use of data analytics was ethical. This is typified by a passage in the Data Ethics Workbook for OneView: ‘The capability is designed to be positive in nature; data is used only for positive outcomes and there will be no negative impact on any resident’s life.’[157] A Council interviewee gave a more detailed account:

‘It’s the intent as well as the practice. The intent is for residents to be empowered and enabled to help themselves […] And then you have to go through the caveats of making sure that by the decisions you’re making when you’re sharing information or modelling stuff, you are not undermining that basic assumption.’

In general, Council employees felt that using data analytics to offer residents support could be more readily justified than using it to sanction individuals, though they recognised that offers of support were not always welcomed: for example, one staff member noted that an individual might perceive what a local authority sees as ‘support’ as a punishment, saying in response to questions in a public webinar that: ‘We accept [that] offers of support can appear punitive and we continue to work with our clients around the narrative and conversations that are triggered with residents.’[158]

Council staff were wary about any use of data analytics which could be seen as ‘profiling’. The word ‘profiling’ was mentioned only occasionally and was universally seen as having negative connotations.[159] One staff member said: ‘What we did not want to do was to end up simply profiling families because of generalised characteristics.’ This uncertainty was also visible among frontline staff, particularly when a resident had not been contacted by the Council before.

This view was not shared universally, as one interviewee suggested: ‘I can’t see how we’re not profiling people to see who needs help. I don’t think there’s an inherent bias in there, but we are absolutely using this system to see who needs help in different ways.’ The project’s DPIA addresses this directly: ‘The risk model does not predict the likelihood of behaviour within certain cohorts. This is profiling and OneView does not do this. It looks at the level of safeguarding risk in a household, independent of personal identifiers.’[160]

Compliance with information governance obligations

Compliance with regulatory requirements also featured strongly in understandings of ethics.

When questions about ethics arose, most LBBD interviewees started by discussing compliance with regulations – with the terms ‘data sharing’ and ‘data protection’ referenced most regularly. This was the case across different areas of the Council, from leadership to frontline staff. When asked about the Council’s processes for ensuring ethical practice, interviewees tended to list documents such as data-sharing checklists or the DPIA.[161] Typifying this tendency, an interviewee with a management position in the Council said:

‘Have we had some challenges around ethics? Of course we have. But we’ve taken a very clear line on this, which is that Barking and Dagenham Council wants to help people and in order to do that, we have used the data that we have permission to use and that’s quite important, because we do have permission to use this data.’

Council interviewees often described ethics as a ‘hurdle’ or a ‘barrier’ to be negotiated – and sometimes suggested that the Council had already done so by completing information governance processes. When asked about processes within the Council to discuss ethics, on several occasions interviewees began by describing the Council’s discussions with the ICO.

The research team frequently heard accounts from senior managers of initially sceptical Council employees ‘converting’ to support the use of predictive analytics because they were satisfied with the regulatory compliance efforts. ‘Some of the people that I thought would never have changed their minds around this […] because of their views around ethical issues have really, really turned’, one manager suggested. This also fed into a frequently referenced narrative that the Council was on a ‘journey’ towards using data and predictive analytics more regularly in its work.

One interviewee gave a detailed description of their thought process:

‘At first I was thinking, “Oh dear, is this actually AI? Are we allowed to do that?” Then I got into attending some meetings, looked at the DPIA and thought: “Well, they have everything in place, it’s not relying on a computer to decide, there is human intervention”. Any triggers the social workers get, it’s the same as if an individual phones into the Council and is worried about somebody. It’s all internal; my understanding is it’s doing the same job as what a social worker does.’

Interviewees who described this change in perspective often said that they were reassured by the Council’s attention to information governance processes. A small number of interviewees even felt that the Council should be doing more to encourage data sharing.

Interviewees stated that the Insight Hub added more information to the Council’s website about the OneView system in 2020, as well as listing the five service areas from which data was extracted, highlighting that access permissions mean: ‘Council staff can only see the information that is relevant for them to do their job.’ The Council’s website also reiterates the Council’s compliance with data-sharing legislation and references the fact that: ‘Data analytics is only undertaken using anonymised or pseudonymised data.’[162]

Improving outcomes

Xantura has stated publicly that its mission ‘is to improve outcomes for vulnerable people’.[163] In one webinar, Xantura’s CEO states that more than 90% of its work is focused on preventing situations from worsening.[164]

LBBD staff often spoke about ethics in similar terms: staff regularly noted that because using data analytics was designed to prevent residents from experiencing problems, it was justifiable from an ethical standpoint. This was described elsewhere by senior managers and policy staff as fulfilling the Council’s duty of care. As one staff member put it at the Overview and Scrutiny Committee discussion on ethics and transparency: ‘Our application of these things creates greater safety as opposed to less.’[165] When asked whether the ethical difference lay in terms of whether data was used for punitive or supportive purposes, one interviewee at a senior leadership level said: ‘I think where you’re helping people and it’s relational, I think that is positive.’

Transparency

Interviewees in senior leadership positions agreed that being transparent with the public about the Council’s use of data was important to the Council, in part because they saw transparency as integral to the Council’s overall strategy of building trusting relationships with residents, but also because ensuring understanding among residents was seen as important to OneView’s success as a project. A staff member said: ‘We need to be transparent to allay concerns [from members of the public]. When people don’t understand automated decision-making, you can see why there’d be concerns […] I think that’s really important that the Council is transparent about how we’re using data.’

During the research period, staff described plans to run multiple public deliberation events in the future, and then to introduce a ‘data transparency charter’, described by one interviewee as ‘a publicly visible thing that tells residents how we use their data and what we’re using their data for’ that would be publicised widely across the borough. The DPIA also noted this, stating that the charter ‘must address the concerns of residents’ that had been identified in public deliberations.[166]

Internal documents cited work conducted with other local authorities to suggest that the Council’s own residents were likely to support the use of OneView. In response to the question: ‘Would data subjects expect you to use data this way?’ the DPIA stated: ‘We know from the work Xantura have undertaken with Tower Hamlets that residents were supportive of the use of data in this way if it improved services and outcomes, and that residents expect the Council to share information.’[167]

Some interviewees, however, were uncertain whether the public would consider it acceptable to use OneView to link their data together. As one interviewee put it: ‘I’m not sure, necessarily, that the man in the street would expect the data that that’s held on them in a social care system to be held alongside other data as well for other uses.’ Staff at all levels said that the Council needed to do more to explain to residents how data analytics was being used. ‘I think there’s something about us all being signed up to how it will be used. And that’s not just us’, one interviewee said. ‘You would want to know [how OneView is being used], wouldn’t you, as a resident? […] So that the residents feel comfortable with how their data is being used.’

Concerns about transparency were partly attributable to a perception from multiple interviewees that broader public reporting on predictive analytics did not reflect the realities of its use.

Avoiding harm

The Council focused its assurances on the idea that the use of predictive analytics would improve outcomes for residents. The FAQ document describing OneView was emblematic of this approach, saying: ‘The use of data will be restricted, and only the essential information will be shared that is necessary to fulfil the function. It is not envisaged that there will be unwarranted detriment, harm or distress to any data subject or individuals.’[168]

The idea of measuring whether OneView could cause harm to the borough’s residents was not mentioned spontaneously by interviewees, and when the research team raised the issue directly, several interviewees said that it was the first time they had thought about the issue. One interviewee, when asked if the Council was looking into whether OneView could have a negative impact on anyone, said: ‘Not that I am aware of. I don’t know what negative impact would look like. It could be something [to think about].’

However, interviewees were able to identify hypothetical actions by the Council, and to exclude them on ethical grounds. As an example of an action that staff saw as being clearly against residents’ interests, several interviewees emphasised that the Council would never sell data about its residents to third parties. Frontline workers also mentioned potential impacts on practice and on power relationships between the Council and residents, which they feared might have a harmful effect (discussed further in Insight 3).

Supporting decision-making

Interviewees regularly talked about ‘automated decision-making’ as a term with strong negative connotations and took the view that ethical use of OneView would be using it to support decisions by staff – rather than as a decision-making tool in and of itself. This was viewed as reducing the risk of negative outcomes. Interviewees agreed that use of data analytics was justifiable if it was intended only to help Council staff decide whether to offer residents support – rather than sanctioning or punishing residents.

Ethical interventions

The importance of ethical considerations can be seen in the interventions put in place, particularly within the Council. Members of the Insight Hub said that they saw questions related to ethics as a key part of their overall role. As in other aspects of their work, they frequently described their role as ‘translating’ between the technical language used by data professionals and the terminology used by various Council services. However, not all of these interventions were well known to interviewees.

Data protection

LBBD implemented a data protection framework, including adding additional access controls on staff accessing data through OneView, and changing the consent form to reassure staff of the legal basis for data processing.

Positioning OneView as a decision-support tool

Council management also actively reassured staff that OneView was not an automated decision-making tool. One Council employee in a management role said:

‘A key thing […] has been the concept of decision-support rather than decision-making. In the early incarnations, one of the most prominent challenges was: “Is this a computer telling us what decision to make?” Which is absolutely not the intention of the approach we were taking at all. It’s entirely about a decision-support tool. If you’re a social worker, to build the best picture that you can about the individual in front of you, why would you not want to draw on all the information you have? Not to make the decision for you, but to inform it. And if you can do it quicker than in the past.’

Xantura states publicly in a webinar about its work more generally: ‘We do not prescribe what action should be taken in these cases and we do not offer opinions in our case summaries: we just present the facts and how they have changed over time.’[169] One frontline interviewee said: ‘I remember reading about it being a support tool. This was the line that we looked over in our team meeting, and I was like “Yes!”’

Internal Council training materials state that the outputs generated by OneView are designed not to contain judgements or to obligate staff to take specific actions. The DPIA notes that ‘the tool acts only as a support to case workers by providing factual information’, while the Data Ethics Workbook (discussed below) emphasises that OneView only presents ‘absolute objective information wherever possible’.[170]

This refers to the inclusion of structured data such as a household’s total council tax debt, or the percentage of time that a child attended school. An FAQ document for frontline workers in Children’s Social Care highlights the view that the inclusion of a ‘score’ would be likely to influence staff: ‘It is important to note that no “risk score” is ever shown to a professional to prevent the platform from influencing or guiding decisions.’[171]

However, not all staff agreed that the information in OneView contained no judgements or assumptions. Notably, a synthetic (i.e. not generated from data about real people) case summary shared with the research team included statements about potential risks, such as the following: ‘The child in the house has a history of poor attendance/exclusion and is now probably at risk of NEET [being ‘not in education, employment or training’] for children who have previous attendance/exclusion issues.’[172]

Ethics documentation

The Council prepared OneView-related documents including a Data Ethics Workbook, the DPIA and data checklists for each element of data extracted for the system. While staff involved in designing and delivering the tool mentioned the above documents on occasion, others – whether at management level or involved in frontline delivery – did not mention them at any point.[173] They were not described as documents that staff used on a day-to-day basis, and it is not clear that all staff were aware of their existence.

Ongoing internal discussions

During the process of developing OneView, several interviewees described conversations, ongoing since 2019, about introducing a sub-group of the Information Governance Group focused on ethics, but said that this sub-group had yet to be tested in practice. The remit of the Data Ethics Sub-Group (DESG) would be to make decisions on applications submitted for ethical approval for ‘any project that will facilitate data-informed insight’, including AI or machine learning, facial recognition and projects involving processing personal data or systematically sharing special category information.

During the research period, interviewees shared documents indicating ongoing internal discussions about the membership of the sub-group, how often it should meet, what levels of authority it should include and how to mitigate conflicts of interest. There was particular interest in whether to involve people from outside the Council, including representatives of data subjects such as support workers, officers from other Councils or staff from the NHS or the Metropolitan Police. As one staff member explained: ‘If you just have Barking and Dagenham Council people, there’s immediately perhaps an unconscious bias there, so maybe we need to bring external people in.’

However, at the end of the research period some staff felt that the Council still needed to do more to clarify ethical questions linked to OneView: ‘There’s a bit of work that needs to be done there […] whether we need the ethical framework that we all sign up to, that sits in the heart of this.’

Some felt that as a next step, the Council should be looking in more depth at potential risks: ‘A lot of discussion still needs to be taking place about some of the risks […] If we were to take it forward we’d need to […] continue to review the impact that it has, and look at things like bias.’ Several interviewees who were directly involved in OneView’s development felt that there was a lack of accessible frameworks or resources to support them to undertake this work, noting the length of the ICO’s Explaining Decisions Made with AI report in particular.[174]

Key recommendations

When developing ethical principles for the use of a data analytics system, local authorities should ensure that they are holistic, accessible and usable by everyone involved in using the system.

Ethical principles must be translated into clear practices for local authorities, identifying who within a particular team is responsible for evaluating risks and taking the appropriate actions. In some cases, those practices may need to be assigned to the upstream developer, who may be best placed to identify or mitigate a risk.

Ethical principles should consider the needs of different communities and be consistent with – but not limited to – other obligations, including equalities and data protection obligations.

The broader context for our insights and recommendations

The previous insight chapters draw on our ethnographic research: in this chapter, we look at our insights in the context of the broader research and findings on data analytics in local government and the public sector, including developments since our research period in 2020.

Insight 1: Success criteria for data analytics

We found that while LBBD’s leadership had a vision of the impact of OneView for frontline workers, for residents and for the Council as a whole, clear goals were not articulated in the form of success criteria, to our knowledge, until the system had already been deployed. The criteria we were able to identify related to staff use of OneView and did not include improved outcomes for residents.

Studies of algorithms in use in the public sector have criticised their deployment in ways that effectively place individuals – in this case, Barking and Dagenham residents whose data is used in case summaries or predictive analytics – in the role of research subjects, without the protections afforded by ethics review processes.[175] In other parts of the public sector, analytics systems have sometimes been deployed as ‘pilots’ or ‘trials’,[176] without clear limits on how long these experimental deployments will last and under what conditions, or clarity on how the pilots will be evaluated or the potential consequences for individuals.

As of 2020, an estimated 10% of local authorities were piloting predictive modelling in social work areas: it is not clear how many of these ‘pilots’ were informed by research ethics guidance or approval, or whether the residents whose data was used in these models were aware of, or consented to, this use.[177] Recent research has also critiqued the ways in which ‘pilots’ and ‘experiments’ of algorithmic systems can create moral and legal ambiguity in particular settings, particularly in cases where people affected by these pilots are not involved in their design and implementation.[178]

The use of predictive analytics in LBBD can be seen as an example of a ‘risk-focused prevention paradigm’: a form of early intervention based on the assumption that risk factors can be identified now for problems later.[179] Evaluating risk-focused prevention paradigms is difficult, particularly when there are multiple forms of intervention that can result from a OneView output.[180] As interviewees pointed out in our ethnographic research, it is particularly difficult to evaluate programmes where we do not know the counterfactual: what would have happened without the OneView output.

However, there is a large and growing literature on evaluating algorithmic processes, and on accountability for decisions made using outputs from algorithmic systems. Our previous work with the AI Now Institute and the Open Government Partnership identified several different mechanisms that are already in place at the level of individual systems, including impact assessments, audits and regulatory inspection, and procurement conditions.[181] We have previously identified effective participation as a key element to improving understanding, risk anticipation and management, and data governance in data-driven systems.

Participation should involve those with a stake in the outcomes from these systems, those likely to be directly affected (both positively and negatively), and those likely to be under- or over-represented in the data that underpins these systems.[182]

Success criteria are particularly important for predictive analytics, the usefulness of which in a social care context has been challenged by more recent research. A few months after our research period, What Works for Children’s Social Care built predictive models using natural language processing (NLP) and machine learning techniques and found that these models missed four out of every five children at risk.[183] Machine learning techniques may be limited in their power to predict outcomes more generally.

One study, in which 160 teams of researchers used machine learning methods to build predictive models using data from the longitudinal ‘Fragile Families and Child Wellbeing Study’ to predict outcomes for children at age 15, found that no models were very accurate. It also found that predictive models using thousands of variables and complex machine learning methods were only slightly better than simple linear or logistic regressions using four selected variables.[184]
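As an illustration only, the sketch below (Python with scikit-learn, on synthetic data) shows the shape of the comparison the study describes: a flexible model trained on hundreds of variables set against a logistic regression restricted to four selected variables. The dataset, the choice of gradient boosting and the selected feature indices are all hypothetical assumptions, not details of the Fragile Families Challenge.

```python
# A minimal, hypothetical sketch of a 'complex model vs simple baseline' comparison.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical stand-in for a rich longitudinal dataset with many variables.
# shuffle=False keeps the informative variables in the first columns.
X, y = make_classification(n_samples=2000, n_features=500, n_informative=10,
                           shuffle=False, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Complex model trained on all available variables.
complex_model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
complex_auc = roc_auc_score(y_test, complex_model.predict_proba(X_test)[:, 1])

# Simple baseline: logistic regression restricted to four selected variables.
selected = [0, 1, 2, 3]  # hypothetical indices of four domain-chosen variables
simple_model = LogisticRegression(max_iter=1000).fit(X_train[:, selected], y_train)
simple_auc = roc_auc_score(y_test, simple_model.predict_proba(X_test[:, selected])[:, 1])

print(f"Complex model AUC: {complex_auc:.3f}")
print(f"Four-variable baseline AUC: {simple_auc:.3f}")
```

On the real longitudinal data, the study found the gap between the two approaches was small and neither was very accurate; the sketch simply shows how such a comparison can be set up.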

Algorithmic systems also have the potential to create unforeseen harmful consequences: academics at Cardiff University’s Data Justice Lab have compiled a list of these ‘data harms’, ranging from automated job application software which discriminates against applicants with a history of mental health problems, to data matching errors resulting in individuals being denied food aid.[185] [186] When developing success criteria, inclusive and participatory methods of defining these criteria can help to ensure that harmful consequences are foreseen and mitigated during procurement and deployment.

Participatory methods of developing algorithmic accountability frameworks have been shown to increase their effectiveness, and can overcome gaps in technological literacy among residents and other affected groups.[187] Nonetheless, even the most inclusive criteria may still miss potential consequences: as a result, it is important that evaluations of how a system performs against predefined success criteria also identify where a system has caused – or has the potential to cause – harm.

Insight 2: Data protection and equalities obligations

The risk that algorithmic models could perpetuate discrimination, including on the grounds of race and gender, is well documented.[188] As a result, it is important to monitor and mitigate this risk, particularly in the delivery of public services. However, it is also important to note that collecting data about protected characteristics is one of the ways that discrimination can be identified, as part of equalities obligations.
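As a purely illustrative example of the kind of monitoring this enables, the hedged sketch below compares how often a hypothetical analytics-informed flag is raised across groups sharing a protected characteristic. The records, group labels and disparity threshold are invented for illustration; real monitoring would follow EHRC and ICO guidance and local policy.

```python
# A minimal sketch of equalities monitoring: comparing flag rates across groups.
from collections import defaultdict

# Hypothetical records: (protected-characteristic group, flagged by the system?)
records = [
    ("group_a", True), ("group_a", False), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", True), ("group_b", True), ("group_b", False),
]

counts = defaultdict(lambda: {"flagged": 0, "total": 0})
for group, flagged in records:
    counts[group]["total"] += 1
    counts[group]["flagged"] += int(flagged)

rates = {g: c["flagged"] / c["total"] for g, c in counts.items()}
for group, rate in rates.items():
    print(f"{group}: flagged in {rate:.0%} of cases")

# A simple disparity check: raise for review if one group's rate is much higher.
if max(rates.values()) - min(rates.values()) > 0.2:  # hypothetical threshold
    print("Large difference between groups - review for potential discrimination.")
```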

Discussions of bias in the LBBD OneView system were often clouded by confusion between at least three sets of individual characteristics that may or may not be used in the system: data about protected characteristics under the Equality Act 2010, special category data under the UK GDPR and variables covered by the Human Rights Act 1998. These may overlap, but local authority responsibilities under the three pieces of legislation differ.

Equality Act 2010

The Equality Act 2010 prohibits discrimination on the grounds of nine protected characteristics: age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex and sexual orientation.

Section 149 of the Equality Act lays out a further duty on the public sector: the Public Sector Equality Duty (PSED):

(1) A public authority must, in the exercise of its functions, have due regard to the need to –

(a) eliminate discrimination, harassment, victimisation and any other conduct that is prohibited by or under this Act

(b) advance equality of opportunity between persons who share a relevant protected characteristic and persons who do not share it

(c) foster good relations between persons who share a relevant protected characteristic and persons who do not share it.

The Equality Act 2010 does not require public authorities to carry out equalities impact assessments, but case law indicates that documentation of compliance with the PSED is useful for local authorities.[189]

The PSED is non-delegable.[190] In August 2020 (after our research period) the Court of Appeal found in R (Bridges) v South Wales Police (2020) that it was not sufficient for South Wales Police to discharge their duties under the PSED by relying on the manufacturer’s guarantee that a data-driven system (in this case a facial recognition system) was unbiased.[191] In 2022, the EHRC published guidance for public bodies using artificial intelligence,[192] which clarified that ‘the PSED applies even if you are: commissioning someone outside of your organisation to develop the AI for you; buying an existing product; [or] commissioning a third party to use the AI on your behalf.’[193]

Human Rights Act 1998

Article 14 of the Human Rights Act 1998 prohibits ‘discrimination on any ground such as sex, race, colour, language, religion, political or other opinion, national or social origin, association with a national minority, property, birth or other status’. Guidance on the Act for public authorities states that ‘other status’ can include, for example, ‘sexual orientation; whether a person was born inside or outside of marriage; disability; marital status; age; trade union membership; homelessness’.[194]

The guidance states that a public authority should ‘assess its policies and functions which are relevant to the rights under the Convention for discriminatory impact’, and document these decisions.[195] While the guidance does not prescribe how this should be done, it notes that ‘in relation to characteristics protected by the Equality Act 2010, this closely overlaps with its obligations under the public-sector equality duty’.[196]

UK General Data Protection Regulation (GDPR)

The UK GDPR gives extra protections to certain kinds of data, termed ‘special category data’. This includes, for example, personal data that reveals someone’s ethnic origin or religious belief, data on sexual orientation, and health and genetic data.[197] This data is considered to require special protections because of the possibility that collecting and using it could interfere with fundamental rights and freedoms.[198] Special category data includes explicit mention of this data (for example, a field in a database for ‘ethnicity’), and can also cover information used to infer data that would be classed as special category, for example if surname data is being used to infer ethnicity.[199] Guidance from the ICO makes clear that ‘some of the protected characteristics under the Equality Act are classified as special category data’, either directly (for example, sexual orientation) or because they relate to information about a person’s health (for example, pregnancy).[200]

The UK GDPR prohibits the processing of special category data outside of 10 specified exceptions. These exceptions – ‘conditions for processing special category data’ – include processing for the purposes of employment, social security and social protection.[201] The ICO recommends carrying out a DPIA, and considering other data protection principles, including data minimisation (collecting and retaining the minimum required amount of data), security measures for protecting the data and transparency measures.[202]

Managing perceived tensions between data protection and equalities monitoring

There has been a lack of clarity about how data protection requirements under the UK GDPR interact with the duties of public-sector organisations to monitor their compliance with equalities and human rights legislation.

The CDEI (now the Responsible Technology Adoption Unit (RTA), renamed in February 2024) has noted that this uncertainty about regulatory compliance has presented a barrier to organisations collecting demographic data specifically for the purposes of monitoring for bias in AI systems.[203]

The EHRC has produced guidance to manage this challenge. It states that ‘data protection law does not prevent public authorities from processing personal data for the purposes of the general or specific [equalities] duties’,[204] but notes that a substantial amount of equalities monitoring data is likely to be special category data under the GDPR, and so requires special protections. The ICO has also produced guidance on fairness and discrimination in the use of AI technologies but notes that this guidance does not address compliance with the Equality Act 2010.[205]

Insight 3: Data analytics and practice

Our research found that the development and deployment of the OneView system had an impact not only on IT systems in the Council but on the day-to-day work and practice of frontline workers, including the social behaviours and relationships which are crucial for social work.

The provision of social services by a local authority is a sociotechnical system: a complex system ‘of social and technical components intertwined in mutually influencing relationships’.[206] It includes technical elements – computer systems, records, data, synthesis algorithms, predictive analytic algorithms – but also social elements – such as residents, frontline social workers, management staff, relationships between different local authority staff internally and with the people they work for, corporate actors, finances, legal requirements, and central and local government policy. It is misleading to try to understand only the technical components (or indeed only the social components) of such a system: doing so fails to give us a full picture of what is actually happening within the system.

Inserting new technology into an existing system produces what has been called a ‘ripple effect’: a changing of behaviours and of embedded values.[207] Human decision-making in systems which include data analytic components can be hampered by both ‘decision-automation bias’, in which users over-rely on outputs because of overconfidence in the system, and ‘automation-distrust bias’, in which users disregard outputs and over-rely on human decisions.[208] This is particularly true for ‘adaptive’ algorithms, which update their own behaviour based on new information, including information influenced by previous versions of the same algorithms.[209]

Broader international research on the use of predictive analytics relating to children identifies that this ripple effect can lead to a loss of trust in the broader systems for children’s services. It also raises concerns about the broader systemic effects of introducing predictive analytics, including potentially undermining practitioner decision-making, wasted resources if a system proves ineffective and the diversion of resources away from broader structural problems.[210]

Insight 4: Limited usefulness of opaque analytics

Research into explainable AI more broadly has argued that there are two routes by which the user of an AI system can gain trust in that system:

  • Intrinsic trust: the user comprehends the AI’s reasoning process, which matches what the user would consider a reasonable human reasoning process
  • Extrinsic trust: the user observes a trustworthy evaluation of the outputs of the AI.[211]

We observed that the lack of transparency about the factors that contributed to the case summaries and predictive alerts, and about the rationale that produced these outputs, meant that the outputs were not useful for frontline workers. The OneView system was not transparent about how these outputs were generated, so staff could not have intrinsic trust in the system. Staff also reported concerns about the outputs themselves – including missing information – which meant they did not have extrinsic trust in the system.

Since our research period in 2020, other studies have noted limitations in the usefulness of data analytics for service delivery. Several months after our research period, What Works for Children’s Social Care published a report documenting its work developing predictive models in children’s social care, which found that there was no clear support for predictive analytics among social workers.[212]

In some cases, deployers have stopped using analytics systems that are not useful. Research by the Data Justice Lab in 2022 identified 61 automated decision-making systems which had been deployed in local government services and then cancelled. In 31 cases, the cancellation had been influenced by ‘government or political concerns about the effectiveness of the systems’.[213]

According to media reports, concerns about effectiveness contributed to the decision by the London Borough of Hackney to stop using another Xantura system, the Early Help Profiling System – which sent monthly alerts to social workers about families it identified as in need of extra support – at the end of the pilot phase in 2019.[214]

Insight 5: Usefulness of transparent analytics with a clear purpose

We observed that users of the OneView COVID-19 module were able to describe and understand the logic underpinning the risk factors which the module identified, as well as the prioritisation of individuals by number of risk factors: as a result, users could be said to have intrinsic trust (as defined above) in the system.
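To illustrate why this logic was easy to follow, here is a minimal sketch of a risk-factor count and prioritisation of the kind interviewees described. It is not the actual OneView COVID-19 module: the risk factors, thresholds and resident records are hypothetical.

```python
# A minimal sketch of transparent, rule-based prioritisation by risk-factor count.
from dataclasses import dataclass

@dataclass
class Resident:
    name: str
    age: int
    shielding_letter: bool
    lives_alone: bool
    known_health_condition: bool

def risk_factors(r: Resident) -> list[str]:
    """Return the named risk factors that apply to a resident."""
    factors = []
    if r.age >= 70:
        factors.append("aged 70 or over")
    if r.shielding_letter:
        factors.append("received NHS shielding letter")
    if r.lives_alone:
        factors.append("lives alone")
    if r.known_health_condition:
        factors.append("known health condition")
    return factors

residents = [
    Resident("A", 82, True, True, False),
    Resident("B", 45, False, False, True),
    Resident("C", 74, False, True, True),
]

# Prioritise by number of applicable risk factors, highest first; the applicable
# factors are kept alongside each record so the reasoning stays visible to staff.
prioritised = sorted(residents, key=lambda r: len(risk_factors(r)), reverse=True)
for r in prioritised:
    print(r.name, len(risk_factors(r)), risk_factors(r))
```

Because each factor is a named, visible rule, staff can see exactly why a resident appears near the top of the list.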

Calls for transparency in the use of algorithms have been growing in recent years, including both the technical components of algorithmic systems and how systems are being used.

In a 2019 review of 84 sets of ethical principles, ‘transparency’ was the most common (though not universally appearing) principle.[215] The UN Special Rapporteurs on Extreme Poverty and Human Rights and on Freedom of Opinion and Expression have both noted that a lack of transparency in the use of algorithmic systems risks infringing human rights.[216]

The term ‘transparency’ can refer to a narrow, technical explanation of the mechanics of a system: how an input is transformed into an output within what is often seen as an ‘efficient, but opaque, black-box system’,[217] especially when the system is proprietary, or where there is a perceived need to avoid actors ‘gaming’ the system.[218] Attempts to implement this narrow definition of transparency are the subject of ‘explainable AI’ (or ‘XAI’) research. This field, however, is predominantly relevant to machine learning engineers, who often use these explanations to debug or improve machine learning models.[219]

Transparency can also, however, be interpreted in a broader sense: as visibility of not just what an algorithmic model does in a technical sense but how the model operates within a broader system. The lack of this broad transparency may prevent people who are at risk of harm from challenging the outputs and results of an algorithmic system.[220] As a result, transparency is particularly crucial in public-sector applications.[221]

Since our research period, the ICO and the Alan Turing Institute have produced guidance on providing explanations for decisions made with the assistance of AI technologies.[222] They state that key principles underlying explanations are transparency, accountability, context-specificity and reflection on impacts,[223] and they suggest specific tasks for implementers to undertake,[224] which include the following (a brief illustrative sketch follows the list):

  • collecting and pre-processing data in an ‘explanation-aware manner’
  • building systems in such a way as to ensure that information can be extracted for different explanation types for different audiences
  • translating ‘the rationale of [the] system’s results into usable and easily understandable reasons’
  • training implementers to use outputs from a model ‘responsibly and fairly’.
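The sketch below is one hypothetical way of reading these tasks in practice: a decision-support output that keeps the factors it used, a plain-language reason for each, and separate renderings for practitioners and for the person the data is about. The class names, fields and example values are assumptions for illustration, not part of the ICO and Alan Turing Institute guidance or of any real system.

```python
# A minimal sketch of an 'explanation-aware' decision-support output.
from dataclasses import dataclass, field

@dataclass
class Factor:
    name: str    # e.g. "school attendance (% of sessions)"
    value: float
    reason: str  # plain-language reason this factor was included

@dataclass
class DecisionSupportOutput:
    subject_id: str
    summary: str
    factors: list[Factor] = field(default_factory=list)

    def explain_for_practitioner(self) -> str:
        # Detailed rendering: each factor, its value and the reason it was used.
        lines = [f"Summary for case {self.subject_id}: {self.summary}"]
        lines += [f"- {f.name} = {f.value}: {f.reason}" for f in self.factors]
        return "\n".join(lines)

    def explain_for_resident(self) -> str:
        # A shorter, non-technical rendering for the person the data is about.
        used = ", ".join(f.name for f in self.factors)
        return (f"This summary was produced using the following information "
                f"about your household: {used}.")

output = DecisionSupportOutput(
    subject_id="case-001",
    summary="Household flagged for early-help contact.",
    factors=[
        Factor("school attendance (% of sessions)", 74.0,
               "attendance below the agreed threshold in the last term"),
        Factor("council tax arrears (GBP)", 430.0,
               "arrears have increased over the last three months"),
    ],
)
print(output.explain_for_practitioner())
print(output.explain_for_resident())
```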

The guidance for the voluntary Algorithmic Transparency Recording Standard, developed by the UK Government’s Central Digital and Data Office (CDDO) and the CDEI (now the RTA), states that the Standard is most relevant for tools that interact with the public directly, or which ‘have a significant influence on a decision-making process with direct or indirect public effect’.[225] It includes two tiers of reporting: one for the general public and one for a more informed audience.

However, it is important to note that while transparency may be a necessary component of trusted, effective data analytics, it may not be sufficient. Previous work by the Ada Lovelace Institute has highlighted that giving people more awareness or understanding of uses of data is not, by itself, enough for them to trust those uses. Concerns about data are correlated with both high and low levels of understanding, and these concerns may persist (or even be strengthened) as more information is provided about data use.[226] Research from the Ada Lovelace Institute and others has also called for developers of AI tools to create ‘trustworthy’ systems, in which clear responsibilities and obligations are placed on different actors in an AI system’s supply chain and lifecycle to address poor performance or remedy harms caused by a system.[227]

Insight 6: Variations in conceptions of ‘ethical’ data analytics

We identified multiple different conceptions of what ‘ethical’ practice looked like in the use of the LBBD OneView system. There are many different ethical frameworks for the use of data-driven technologies in general:[228] AlgorithmWatch has compiled 167 different examples of guidelines which specifically relate to automated decision-making.[229] These include broad statements of principles from academic experts,[230] UN agencies,[231] technical professional bodies[232] and civil society,[233] as well as specific ethical guidelines for domains such as humanitarian work[234] and social care.[235]

The broad variation in what is included across these different sets of principles and guidelines demonstrates that ‘ethics’ is understood to cover many different concepts:[236] there is no single understanding of what ‘ethical’ use of data looks like. As a result, it is not surprising that interviewees working with the LBBD OneView system included a wide range of different concepts in their discussions of ethical use of the system. A lack of clarity about what constitutes ‘ethical’ practice across the entire system, however, means that there is a risk that some possible harms will not be mitigated.

Ethical principles seek to offer an internal heuristic framework for organisations to use in determining what they feel constitutes appropriate design and use of algorithmic systems. But they must be tailored to address the specific contexts in which an organisation operates. Research has shown that many firms have struggled to translate principles into clear practices and responsibilities for different members of an organisation.[237] Principles can act as a series of ‘lenses’ which allow organisations to view a particular decision or problem from different angles.[238]

As a result, codes of ethics should consider the needs of different communities, in order to recognise that there may not be a universally agreed concept of ‘public benefit’.[239] Some researchers argue that ethics should consequently be a practice: considered systemically and holistically in order to recognise and address collective concerns.[240]

It is also important to recognise that ethical guidelines alone are not sufficient to safeguard individuals – nor are they sufficient to address whether analytics systems should be used at all.[241] Ethical frameworks and guidance are tools which contribute to – but are not sufficient to ensure – justice,[242] in the form of tackling systemic inequalities within – and outside of – social services provision.

Conclusions

Our research investigated the OneView system as deployed in the London Borough of Barking & Dagenham (LBBD) between May and September 2020. We examined in detail three types of outputs from this data analytics system: case summaries, predictive alerts and the COVID-19 case management system.

LBBD was one of the first local authorities to deploy such a system. The experiences we observed offer lessons for other local authorities that are considering data analytics systems, or that are in the process of procuring, developing or implementing them.

LBBD’s leadership had a vision of OneView’s goals and what it aimed to achieve: for frontline workers, for residents and for the Council as a whole.

However, our research found that the Council’s use of OneView’s predictive alerts and case summaries did not yet have clear benefits for service provision, according to LBBD staff. We did not find evidence that the use of predictive or summarising analytics improved overall outcomes for established Council services or for residents. The lack of transparency about which information was used, or not used, in the production of the case summaries and predictive alerts resulted in these outputs not being trusted by, or proving useful for, frontline social workers.

In contrast, we did find that the use of OneView as a COVID-19 case management system, which had a clear and narrow purpose for using data analytics as well as transparent and visible risk factors, proved useful for staff, who were able to use the outputs in their work and describe benefits.

These findings demonstrate that data analytics can prove useful when the required output from the system can be clearly specified and understood by all users. When the output is expected to identify or address more complex situations, the system produces summaries or predictions that are more opaque and therefore less likely to be trusted.

As a result, a clear articulation of successful outcomes specific to different stakeholders, and a strategy for measuring impact, are both needed. Without these it is not possible to assess whether the deployment of data analytics has delivered the anticipated benefits.

Articulating clear success criteria against which the introduction of new data-driven tools will be evaluated, and identifying measurable indicators, is crucial to ensure that data analytics tools deliver on their promises. Where benefits are anticipated for residents, they should be involved in developing the success criteria and evaluating the system. This is also important for pilot programmes, where success criteria should be used to assess whether a data analytics system should be widely deployed.

Our research also found that the development and deployment of the OneView system had an impact not only on IT systems in the Council but on the day-to-day work and practice of frontline social workers. Interviewees highlighted in particular how the system could impact the social behaviours and relationships that are crucial for social work. As a result, the development, implementation and evaluation of data analytics must look at any tool in the context of the whole system into which it has been introduced – including both technical and social systems.

We identified multiple different understandings of what ‘ethical’ practice looked like in the use of data and predictive analytics in social care in LBBD in 2020. These included good intentions and compliance with information governance obligations, as well as improving outcomes for residents. Discussions of ethics also included consideration of whether the OneView system exhibited bias, the transparency of its use and the avoidance of actively harmful uses.

However, the lack of clarity about what constituted ‘ethical’ practice across the entire system means that there is a risk that some possible harms will not be mitigated. To address this, ethical principles for the use of data analytics should be holistic, accessible and usable by everyone involved in using an analytics system. They should be consistent with – but not limited to – other obligations, including equalities and data protection obligations.

We did not set out to assess LBBD’s compliance with the UK General Data Protection Regulation (UK GDPR), nor did we observe breaches of this legislation. Nonetheless, we observed that a lack of clear regulatory guidance about the similarities and differences between ‘special category data’ under the UK GDPR and ‘protected characteristics’ under the Equality Act 2010, as well as the different obligations under these pieces of legislation, meant that discussions risked conflating and confusing these concepts and obligations. Local authorities need more support to ensure that their use of data analytics complies with both their data protection obligations under the UK GDPR and their equalities obligations (particularly the monitoring obligations) under the Equality Act 2010 and the Human Rights Act 1998.

Overall, we found that introducing a data analytics system into an existing complex system, such as local authority provision of social care, requires considerable resource, effort and involvement from technical staff, decision-makers and frontline social services staff, both in-house and external contractors. Procuring and implementing a system like OneView needs to be well thought through, consulted on, tested, discussed and evaluated against defined success criteria, with the outcome that all staff should be able to understand, describe and use the system to support their day-to-day work.

Data analytics systems may prove to be useful in providing local authority services, but they should not be seen as a quick, cheap or easy solution.

Recommendations

We recommend that local authorities implement the following actions:

  • Ensure that data analytics systems are explainable, in line with the guidance produced by the Information Commissioner’s Office (ICO) and the Alan Turing Institute.[243] These explanations should:
    • be accessible to all stakeholders, including frontline workers and the people whose data is used in the system
    • include the purpose and target group, factors and underlying values that are used as features in models, and the rationale for using those factors
    • include mechanisms for human review where data-analytics-informed decisions produce undesirable outcomes and redress may be required.
  • Complete algorithmic transparency reports for all data analytics systems that provide clear information for residents about a system, upload these to the repository overseen by the Responsible Technology Adoption Unit (RTA) and the Central Digital and Data Office (CDDO), and regularly review and update the reports.
  • Include the development of clear and actionable success criteria and plans for how these will be evaluated in the procurement and implementation of analytics systems, including in pilot deployments. In developing success criteria and evaluation plans, local authorities should:
    • develop success criteria and evaluation methods for the system as a whole with the participation of those who will be most affected by the use of the system
    • where benefits are anticipated for a particular group – for example, frontline social workers or service users – ensure that this group participates in developing success criteria and evaluating whether the benefits have been achieved.
  • Carry out equalities impact assessments when developing and deploying data analytics systems.
  • Develop, share and train users in ethical principles for the use of data analytics that are holistic, accessible and usable by everyone involved in using the system. To realise this, local authorities should:
    • consider the needs of different communities
    • be consistent with – but not limited to – other obligations, including equalities and data protection obligations
    • develop and implement clear practices that operationalise ethical principles, such as documentation practices and testing/evaluation schemes that support understanding of the impact of these systems
    • clearly assign practices to particular stakeholders, including the ‘upstream’ developer of that system where necessary.
  • During the procurement process, establish clear requirements and processes to ensure that technical teams can access the underlying data and model of the system for algorithmic auditing and testing purposes.
  • Develop, implement and evaluate data analytics in the context of the whole system into which it has been introduced – including both technical and social elements. This includes data analytics systems and tools developed by private companies.

We recommend that regulators and policymakers consider the following points:

  • The Equality and Human Rights Commission (EHRC) and the ICO should continue to collaborate to ensure their guidance is accessible, fit-for-purpose and enables staff across a wide range of local authority functions (and other public-sector institutions) to handle the use of, or exclusion of, special category data, in particular with regard to the:
    • use in data analytics and predictive analytics systems
    • use in equalities monitoring of the use of these systems
    • compliance with the Equality Act 2010, the UK GDPR and Article 14 of the Human Rights Act 1998.
  • The CDDO and the RTA should continue the push for the Algorithmic Transparency Recording Standard to be a mandatory requirement and extend that requirement to local government.
  • The Crown Commercial Service should develop model contract clauses for the use of data analytics in local authorities. The clauses should:
    • state that developers must ensure that tools are compliant with EHRC and ICO guidelines
    • ensure local authorities have a contractual right to gain the appropriate level of access to the underlying model and training data, so that they can perform evaluations and test accuracy and efficacy.
  • The CCS should also design and pilot an Algorithmic Impact Assessment (AIA) standard for local authorities to use when procuring data analytics systems (and other AI-powered systems).[244] These assessments are performed in the early stages of the design and development process of a data analytics tool and can help identify potential risks or issues for the local authority to address with the developer. AIAs could also enable more public participation in the technology procurement process.
  • Relevant regulators and central Government departments should be resourced and empowered to improve processes and standards for data analytics use in public-sector delivery.

We recommend that companies developing and supplying data analytics tools and systems to the public sector implement the following actions:

  • Provide clear explanations for how tools and systems work, as well as access to systems to enable audits and evaluations of how a tool produces outputs. Failing to provide this information may make tools and systems unusable, as frontline staff will lack confidence in their use. To deliver on this, companies must provide public-sector clients with:
    • the access needed to audit and evaluate tools and systems before procurement, and at regular intervals afterwards
    • clear information on where data used to train systems comes from, available via a document such as a datasheet
    • easily understandable documentation explaining how a system operates.
  • Allow for independent evaluation of the efficacy of data analytics systems in practice, rather than only in lab settings.
  • Design these systems in close consultation with frontline workers and residents who may be impacted by their use. Specifically:
    • Work with local authorities to design data analytics systems with the participation of residents who will be impacted by them, to ensure that systems better reflect the lived experiences of those they are meant to serve.
    • Work with frontline workers from the early design stages to study how a data analytics system will be used in practice.
    • Create ways for frontline workers and residents to identify and report errors and issues from the beginning of deployment, including in pilots.
  • Ensure their practices are compliant with laws and ethical obligations, and enable regulatory compliance for public-sector clients. Specifically, companies should ensure they:
    • understand and operate within the ethical and legal obligations of public-sector clients, and work to enable clients to meet those obligations
    • where necessary, give members of a local authority’s data science or technical team access to the underlying models and training data, so that they can perform bias auditing and evaluations
    • support public engagement efforts with residents and frontline workers who will be impacted by these tools.

Methods

The study had two specific aims: to describe how the Council’s data analytics systems worked at the time of the research, both technically and in terms of the organisational and social processes in which they were embedded; and to document understanding among Council staff of the data systems in use. Data for the study was collected between May and September 2020, during the initial response to the COVID-19 pandemic: as a result, the study also aimed to understand how the pandemic affected the Council’s data practices and use of predictive analytics.

To do this, it used a combination of ethnographically informed research methods, including online organisational ethnographic research, semi-structured interviews, informal conversations and documentary analysis. It sought to surface real-life detail that could provide a more nuanced, contextualised understanding of how LBBD was using advanced data analytics, including predictive analytics, to support the provision of local government services.

Selection of Council case study

LBBD has received national awards for its use of data, and in 2020 was a pioneer in using data analytics in the delivery of public services. As a result, it offered a distinctive site for research into the use of predictive analytics in the public sector.

Since 2018, LBBD has procured and used the OneView system, which brings together multiple Council data sources and performs analysis and predictive modelling to generate information and alerts that are displayed to frontline caseworkers. To understand in depth how staff understood and were using advanced data analytics in their work, the study focused on three outputs from OneView:

  1. Case summaries synthesising information from multiple data sources in a single view.
  2. Predictive alerts about individuals who, according to predictive modelling, were at risk of specific events – such as presenting as homeless, being stepped up or down in Children’s Social Care or being admitted to hospital – in the next 12 months.
  3. COVID-19 case management to filter and group residents according to COVID-19 risk factors.

The case summaries and predictive alerts had been in use for several months at the time of our research, while the COVID-19 case management module was a new development which coincided with the outset of the study. Describing these implementations at varying stages of their development allowed researchers to understand how staff understandings of predictive analytics differed according to the way in which the outputs were used.

For each service, the study aimed to interview a range of Council staff with responsibility for different aspects of the implementation of advanced data analytics in the service: frontline workers using the system to help them deliver services to the public; technical staff responsible for managing data or implementing OneView in that service; and management (those with either senior management or operational management responsibilities for a particular service, such as the Assessment and Interventions service within Children’s Care and Support). At least 10 individuals were interviewed for each of the two services, with participants recruited to ensure that all groups above were represented.[245]

Data collection

All data was gathered during the research period agreed with the Council, which ran between mid-May and mid-September 2020. It comprises:

  • 97 one-to-one, semi-structured online interviews with Council employees, employees of companies or providers involved in implementing (or supporting the implementation of) predictive analytics, and councillors. For a breakdown of interviewees by role, see table below.
  • Five online group interviews with between two and five individuals, all of whom were working on a shared project or had shared experiences of service provision. This aimed to create opportunities for a more conversational form of interaction, and to help show how individuals experienced the use of predictive analytics on the ground.
  • Observational research in 14 online meetings (on Microsoft Teams) in which predictive analytics was being developed, shaped or discussed. This aimed to identify how employees of the Council and its partners discuss the functioning and the value of predictive analytics systems.
  • Online walk-throughs of predictive analytics systems with seven Council employees and external partners, explaining how the data systems that they use function (using screensharing). This method was used as a tool to help people talk about their experiences of using data in their work.
  • Ongoing email exchanges and online discussion with research participants about the use of predictive analytics, and the use of data in general, in projects they were working on.
  • Internal documentation requested from the Council, including internal reports, minutes of meetings, protocols for working, job roles and organisational charts. This aimed to build the research team’s understanding of context for interviews, and of the relationships and processes that were in use during the research period.
  • Screenshots taken by research participants, at the team’s request, to illustrate information in interviews or in email exchanges.
  • Collection of available technical documentation on the data systems being used and developed (including internal information such as technical descriptions, contractual agreements, job descriptions and manuals, and publicly available information such as procurement documents and descriptions of relevant Council services).
  • Factual information based on observation of practices and on access to internal documentation, drawn from the researchers’ ethnographically informed fieldwork and included without individual citations.

To avoid placing excessive pressure on public-service providers during the COVID-19 pandemic response, the research team initially proposed postponing the research until later in the year. However, the team’s contacts at LBBD stated that they were keen to proceed according to the original timeline, noting that they were interested in participating in research into new functions of their data analysis and prediction capabilities. As a result, the Ada Lovelace Institute continued with the research, with all data collection shifting to online instead of in-person methods.

Research ethics

In 2020 the proposed research study received ethical approval from the University College London Research Ethics Committee. Participants were under no obligation to take part in the research, and were informed of this and of their right to withdraw from the outset. Participants were made aware that the results of this research project would be published in a public-facing report by the Ada Lovelace Institute, and that aspects of the research might be used in lectures and presentations at conferences and to inform academic publications.

Data was primarily recorded through written field notes. Researchers made audio recordings of some interviews and small group discussions to facilitate transcription at a later date; explicit consent for this was sought from all participants before any data recording took place. Audio/video recordings were used only for transcription. Data generated during the ethnographic research was stored on cloud servers managed by the Nuffield Foundation.

Unattributed and anonymised quotes from these recordings, and excerpts from written field notes, are used in this final report and may also be used in other academic publications and presentations. We endeavoured to remove all identifying information pertaining to individual participants. Where we believed that some information included might have enabled a research participant to be identified, we returned to that individual to obtain explicit consent for its inclusion in this report.

Where interview participants withdrew after our analysis of the data had been completed, we removed their data, including information provided during group meetings. In October 2023, EY and Xantura withdrew consent to participate, including on behalf of their employees. In response, we have:

  • removed analysis and quotes directly attributable to EY and Xantura employees from the report, including commentary on some of the details of the system, technical approach to bias management and early plans for evaluation.
  • written to EY and Xantura research participants to explain that consent had been withdrawn at an institutional level.

Material related to EY and Xantura in this report is based on publicly available sources or on analysis of anonymised data from interviews with LBBD staff and materials provided by LBBD.

Challenges in conducting this research and limitations

The research took place during the early months of the COVID-19 pandemic, during a nationwide lockdown. The extent of this crisis meant that Council staff – like many other workers – were working under unusual pressure and may not have had capacity to be part of this research; this is likely to have had an impact on our recruitment of participants.

To mitigate pressure that the research placed on Council employees, the participant information sheet specified that no one was under any obligation to be involved in the research; that participants could agree to be involved in some, all or none of the activities that were proposed to them; and that this could be changed at any time. The research team took its cue from contacts in LBBD as to who it was appropriate to approach, and which employees were unlikely to be able to participate in the research as a result of their workload.

Our observation of predictive analytics enabled us to take a snapshot in time of a new system in its experimental phase. The Council was constantly tweaking and iterating the system and its implementation throughout and after the research period. Many of those involved in the deployment of OneView during the research period acknowledged that it was in an experimental or testing phase.

We are publishing this report several years after we undertook the research. Between the research period and publication, we reported interim findings to the Council. This report does not address changes to the OneView system which have been made since 2020, whether in response to our interim findings or for other reasons. Our recommendations are not targeted at LBBD; rather, they are for the benefit of other local authorities considering deploying (or beginning to deploy) data analytics.

Gaps in our understanding of the OneView system

We have provided above a description of the technical operation of OneView and the organisational processes which buttress it (see ‘The OneView system’ section). However, it is worth emphasising, even at this descriptive stage, just how difficult it was for the Ada research team to get a nuanced, detailed and complete understanding of OneView, either from the users of the system in the Council or from the technical provider, Xantura. For example, as discussed further in Insight 1, almost all LBBD interviewees said that they did not know how the risk factors against which risk modelling was conducted were defined, or where they could find a set of definitions. The research team requested definitions for these risk factors from the Council and from Xantura during the research period but did not receive them.

Sampling

Researchers began by conducting semi-structured interviews with specific individuals who had been involved in developing and implementing predictive analytics, both within the Council and outside it, to develop an initial understanding of the structures and processes used within the Council.

The study then recruited research participants iteratively throughout the research period, using a combination of snowball sampling (asking interviewees and meeting attendees to suggest other people who could help to build our understanding of the Council’s use of predictive analytics) and purposive sampling (identifying individuals with relevant knowledge or experience who were mentioned in internal documentation or meetings and contacting them directly).

The study included participants from multiple levels of the Council, including senior leadership, operational staff and policy staff, as well as representatives of the Council’s technical providers and partner organisations.[246] Researchers interviewed 72 people, and also observed meetings attended by at least 25 further people.

Interviewees were classified according to the following categories:

Role of interviewee and number of interviews conducted:

  • LBBD technical staff (involved in working directly with data and/or implementing the predictive analytics systems in use): 29
  • LBBD frontline staff (involved in delivering services to the borough’s residents directly): 22
  • LBBD senior leadership (including members of the Council’s senior leadership team, operational directors and heads of Council services)[247]: 21
  • LBBD policy and strategy staff (involved in developing policy or strategy-focused outputs for the Council): 14
  • Other (including representatives of Council partners, technical providers involved in implementing predictive analytics and councillors): 11
  • Total: 97

These numbers include participants who later withdrew from the study.


Footnotes

[1] “Funding Gap Growing as Councils ‘Firmly in Eye of Inflationary Storm’” (Local Government Association, October 20, 2023) <https://www.local.gov.uk/about/news/funding-gap-growing-councils-firmly-eye-inflationary-storm> accessed 7 May 2024.

[2] Matthew Ryder and Jessica Jones, ‘Facial Recognition Technology Needs Proper Regulation – Court of Appeal’ (Ada Lovelace Institute Blog, 14 August 2020) <https://www.adalovelaceinstitute.org/blog/facial-recognition-technology-needs-proper-regulation/> accessed 24 July 2021.

[3] ‘Home Office Drops “racist” Algorithm from Visa Decisions’ (BBC News, 4 August 2020) <https://www.bbc.com/news/technology-53650758> accessed 25 February 2023.

[4] Alex Hern, ‘Do the Maths: Why England’s A-Level Grading System Is Unfair’ The Guardian (14 August 2020) <https://www.theguardian.com/education/2020/aug/14/do-the-maths-why-englands-a-level-grading-system-is-unfair> accessed 12 June 2023.

[5] Ada Lovelace Institute, How do people feel about AI? A nationally representative survey of public attitudes to artificial intelligence in Britain (2023) <https://www.adalovelaceinstitute.org/report/public-attitudes-ai/> accessed 6 June 2023.

[6] ICO and The Alan Turing Institute, ‘Explaining Decisions Made with AI’ (ICO 2022) <https://ico.org.uk/for-organisations/guide-to-data-protection/key-dp-themes/explaining-decisions-made-with-artificial-intelligence/> accessed 26 January 2023.

[7] Ada Lovelace Institute, Algorithmic impact assessment: A case study in healthcare (2022)

https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ accessed 13 June 2023.

[8] See, for example, ‘Home Office Drops “racist” Algorithm from Visa Decisions’ (BBC News, 4 August 2020) <https://www.bbc.com/news/technology-53650758> accessed 25 February 2023; Louise Amoore, ‘Why “Ditch the Algorithm” Is the Future of Political Protest’ The Guardian (19 August 2020). <http://www.theguardian.com/commentisfree/2020/aug/19/ditch-the-algorithm-generation-students-a-levels-politics> accessed 16 February 2021.

[9] ICO, ‘When Do We Need to Do a DPIA?’ (17 October 2022) <https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/data-protection-impact-assessments-dpias/when-do-we-need-to-do-a-dpia/> accessed 16 December 2022. This is sometimes also termed ‘data linkage’; see for example Office for National Statistics, ‘Developing Standard Tools for Data Linkage’ (February 2021). <https://www.ons.gov.uk/methodology/methodologicalpublications/generalmethodology/onsworkingpaperseries/developingstandardtoolsfordatalinkagefebruary2021> accessed 30 June 2023.

[10] Shared Intelligence, ‘Using Predictive Analytics in Local Public Services’ (Local Government Association, 5 November 2020) <https://www.local.gov.uk/publications/using-predictive-analytics-local-public-services> accessed 5 April 2023.

[11] Jonathan Bright and others, ‘Data Science for Local Government’ (Oxford Internet Institute 2019) <https://www.ssrn.com/abstract=3370217> accessed 16 March 2021.

[12] Sarah Marsh and Niamh McIntyre, ‘Nearly Half of Councils in Great Britain Use Algorithms to Help Make Claims Decisions’ Guardian (28 October 2020) <http://www.theguardian.com/society/2020/oct/28/nearly-half-of-councils-in-great-britain-use-algorithms-to-help-make-claims-decisions> accessed 9 December 2020.

[13] Sam Trendall, ‘Blackpool Claims £1m Savings after Using AI to Fix Potholes’ (PublicTechnology.net, 4 February 2020) <https://www.publictechnology.net/articles/news/blackpool-claims-%C2%A31m-savings-after-using-ai-fix-potholes> accessed 5 February 2020.

[14] See, for example, the Camden Resident Index, documented in Lina Dencik and others, ‘Data Scores as Governance: Investigating Uses of Citizen Scoring in Public Services’ (Data Justice Lab 2018) 48.

[15] See, for example, Tom Symons, ‘Wise Council: Insights from the Cutting Edge of Data-Driven Local Government’ (Nesta, 28 November 2016) <https://www.nesta.org.uk/report/wise-council-insights-from-the-cutting-edge-of-data-driven-local-government/>; Lina Dencik and others, ‘The “Golden View”: Data-Driven Governance in the Scoring Society’ (Internet Policy Review, 30 June 2019) <https://policyreview.info/articles/analysis/golden-view-data-driven-governance-scoring-society>. For an earlier example of this, see Ian Shaw and others, ‘An Exemplary Scheme? An Evaluation of the Integrated Children’s System’ (2009) 39 British Journal of Social Work 613 <https://academic.oup.com/bjsw/article/39/4/613/1626938> accessed 24 March 2021.

[16] Arne Hintz and others, ‘Civic Participation in the Datafied Society: Towards Democratic Auditing?’ (Data Justice Lab 2022) 49.

[17] Jonathan Athow, John Lord and Claire Potter, ‘Predictive Analytics: The Science of Non-Compliance’ Civil Service Quarterly (27 January 2015) <https://quarterly.blog.gov.uk/2015/01/27/predictive_analytics/> accessed 5 April 2023.

[18] ‘Using Predictive Analytics in Adult Social Care’ (NHS Digital 2022) <https://digital.nhs.uk/services/social-care-programme/demonstrators-programme-2019-21-case-studies/using-predictive-analytics-in-adult-social-care> accessed 5 April 2023.

[19] Big Brother Watch, ‘Poverty Panopticon: The Hidden Algorithms Shaping Britain’s Welfare State’ (2021) <https://bigbrotherwatch.org.uk/wp-content/uploads/2021/07/Poverty-Panopticon.pdf> accessed 30 June 2023.

[20] Joanna Redden, ‘Predictive Analytics and Child Welfare: Toward Data Justice’ (2020) 45 Canadian Journal of Communication 101.

[21] See, for example, Isabella Pereira, Claudia Mollidor and Ed Allen, ‘Troubled Families Programme: Qualitative Case Study Report: Phase 2: Wave 2’ (Ipsos MORI 2019).

[22] On reducing demand, see Anna Randle and Henry Kippin, ‘Managing Demand: Building Future Public Services’ (RSA 2014) <https://www.thersa.org/globalassets/pdfs/reports/rsa_managing-demand_revision4.pdf> accessed 5 April 2023.

[23] Andrei Toderas and Mina Manning, ‘The Future of Predictive Analytics in Councils’ (Catalyst Project, University of Essex 2019) 4.

[24] Ministry of Housing, Communities & Local Government, ‘Local Data Accelerator Fund for Children and Families: Prospectus’ (Ministry of Housing, Communities & Local Government 2021) 4 <https://www.gov.uk/government/publications/local-data-accelerator-fund-for-children-and-families> accessed 30 June 2023.

[25] Thomas M Vogl and others, ‘Smart Technology and the Emergence of Algorithmic Bureaucracy: Artificial Intelligence in UK Local Authorities’ (2020) 80 Public Administration Review 946 <https://onlinelibrary.wiley.com/doi/abs/10.1111/puar.13286> accessed 18 September 2020; Richard Selwyn, ‘Predictive Analytics’ (Supporting Families Programme, 14 May 2018). <https://supportingfamilies.blog.gov.uk/2018/05/14/predictive-analytics/> accessed 5 April 2023.

[26] Lina Dencik and others, ‘Data Scores as Governance: Investigating Uses of Citizen Scoring in Public Services’ (Data Justice Lab 2018) 116.

[27] David Phillips, Louis Hodge and Tom Harris, ‘English Local Government Funding: Trends and Challenges in 2019 and Beyond’ (Institute for Fiscal Studies 2019) 6 <https://www.ifs.org.uk/publications/14563> accessed 5 April 2023.

[28] ‘Local Government Finance in the Pandemic – NAO Press Release’ (National Audit Office (NAO), 11 November 2022) <https://www.nao.org.uk/press-releases/local-government-finance-in-the-pandemic/> accessed 17 June 2024.

[29] Kate Ogden and David Phillips, ‘COVID-19 and English Council Funding: How Are Budgets Being Hit in 2020–21?’ (Institute for Fiscal Studies 2020) <https://www.ifs.org.uk/publications/15332> accessed 5 April 2023.

[30] ‘Local Government Finance in the Pandemic – NAO Press Release’ (n 28).

[31] Patrick Butler, ‘Swingeing Cuts on Cards as Councils in England Face Funding Crisis, Watchdog Warns’ The Guardian (10 March 2021) <https://www.theguardian.com/society/2021/mar/10/swingeing-cuts-on-cards-as-councils-in-england-face-funding-crisis-watchdog-warns> accessed 5 April 2023.

[32] Inioluwa Deborah Raji and others, ‘The Fallacy of AI Functionality’ (ACM Conference on Fairness, Accountability, and Transparency, 2022) <http://arxiv.org/abs/2206.09511> accessed 5 April 2023.

[33] ‘Universal Credit: Warnings over AI Use to Risk-Score Benefit Claims’ BBC News (11 July 2023) <https://www.bbc.com/news/uk-politics-66133665> accessed 12 July 2023.

[34] Robert Booth, ‘Automated UK Welfare System Needs More Human Contact, Ministers Warned’ The Guardian (22 May 2023) <https://www.theguardian.com/society/2023/may/22/automated-uk-welfare-system-needs-more-human-contact-ministers-warned> accessed 22 May 2023.

[35] UN Special Rapporteur on Extreme Poverty and Human Rights, ‘Report on Digital Technology, Social Protection and Human Rights’ (2019) <https://www.ohchr.org/EN/Issues/Poverty/Pages/DigitalTechnology.aspx> accessed 2 March 2021.

[36] Zara Rahman and Julia Keseru, ‘Predictive Analytics for Children: An Assessment of Ethical Considerations, Risks, and Benefits’ (UNICEF Office of Research 2021) 36 <https://www.unicef-irc.org/publications/1275-predictive-analytics-for-children-an-assessment-of-ethical-considerations-risks-and-benefits.html> accessed 30 June 2023.

[37] Vicky Clayton and others, ‘Machine Learning in Children’s Services: Does It Work?’ (What Works for Children’s Social Care 2020).

[38] Wajid Shafiq, ‘Machine Learning Can Deliver Better Outcomes for Children and Families’ (Community Care, 21 September 2020) <https://www.communitycare.co.uk/2020/09/21/data-sharing-supported-machine-learning-can-deliver-better-outcomes-children-families/> accessed 5 April 2023.

[39] Ada Lovelace Institute and DataKind UK, Examining the black box (2020) <https://www.adalovelaceinstitute.org/wp-content/uploads/2020/04/Ada-Lovelace-Institute-DataKind-UK-Examining-the-Black-Box-Report-2020.pdf> accessed 30 June 2023; Daan Kolkman, ‘Is Public Accountability Possible in Algorithmic Policymaking? The Case for a Public Watchdog’ (Impact of Social Sciences, 24 July 2020) <https://blogs.lse.ac.uk/impactofsocialsciences/2020/07/24/is-public-accountability-possible-in-algorithmic-policymaking-the-case-for-a-public-watchdog/> accessed 5 April 2023.

[40] Ed Sheridan, ‘Town Hall Drops Pilot Programme Profiling Families without Their Knowledge’ (Hackney Citizen, 30 October 2019) <https://www.hackneycitizen.co.uk/2019/10/30/town-hall-drops-pilot-programme-profiling-families-without-their-knowledge/> accessed 24 January 2023.

[41] Ada Lovelace Institute, Transparency mechanisms for UK public-sector algorithmic decision-making systems (2020) <https://www.adalovelaceinstitute.org/report/transparency-mechanisms-for-uk-public-sector-algorithmic-decision-making-systems/> accessed 30 June 2023; Dencik and others, ‘Data Scores as Governance: Investigating Uses of Citizen Scoring in Public Services’ (Data Justice Lab 2018).

[42] Local authorities – also called local councils – are elected bodies responsible for providing a range of services in a geographical area; Local Government Association, ‘What Is Local Government?’ <https://www.local.gov.uk/about/what-local-government> accessed 18 May 2023.

[43] These range from social services to electoral registration to libraries to recycling; Mark Sandford, ‘Local Government in England: Structures’ (House of Commons Library 2022) 25–6. <https://commonslibrary.parliament.uk/research-briefings/sn07104/> accessed 15 September 2022.

[44] As of the 2021 census. Of this population, 26% were children aged between 0 and 15: the highest proportion of all local authorities in England and Wales. See ‘How Life Has Changed in Barking and Dagenham: Census 2021’ <https://www.ons.gov.uk/visualisations/censusareachanges/E09000002/> accessed 4 April 2023.

[45] London Borough of Barking & Dagenham, ‘No-One Left Behind: In Pursuit of Growth for the Benefit of Everyone. Report of the Barking and Dagenham Independent Growth Commission’ (2016) 39. <https://www.lbbd.gov.uk/sites/default/files/2022-08/No-one-left-behind-in-pursuit-of-growth-for-the-benefit-of-everyone.pdf> accessed 9 May 2023.

[46] Ministry of Housing, Communities & Local Government, ‘The English Indices of Deprivation 2019’ (2019) <https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/835115/IoD2019_Statistical_Release.pdf> accessed 30 June 2023; ‘English Indices of Deprivation 2019 File 10: Local Authority District Summaries’ (GOV.UK) <https://www.gov.uk/government/statistics/english-indices-of-deprivation-2019> accessed 3 May 2023.

[47] Chris Naylor, ‘Chris Naylor: Why We Must Address Deficits of Power’ (Local Government Chronicle (LGC), 10 June 2019) <https://www.lgcplus.com/services/community-cohesion/chris-naylor-why-we-must-address-deficits-of-power-10-06-2019/> accessed 4 April 2023.

[48] Cuts to social spending in the UK in the aftermath of the 2008 global financial crisis. See Jay Wiggan, ‘Austerity Politics’ in Pete Alcock and others (eds), The Student’s Companion to Social Policy (5th edn, Wiley-Blackwell 2016).

[49] London Borough of Barking & Dagenham, ‘The Barking & Dagenham Corporate Plan 2020 to 2022’ (2020) 9 <https://www.lbbd.gov.uk/sites/default/files/2022-07/LBBD-Corporate-Plan-2020-2022_0.pdf> accessed 9 May 2023.

[50] Mark Fowler, ‘Procurement of Data Analytics and Predictive Modelling for Children’s, Homelessness and Adult Services’ (11 December 2018) 2. <https://modgov.lbbd.gov.uk/Internet/documents/s127495/Data%20Analytics%20Procurement%20Report.pdf> accessed 21 November 2022.

[51] London Borough of Barking & Dagenham, ‘The Barking & Dagenham Corporate Plan: 2020 to 2022 Appendix 1’ (2020) 3. Emphasis in original.

[52] Fowler (n 50) 2.

[53] For example, at the CIPFA/Xantura webinar ‘Introducing the COVID-19 OneView Service’ (27 May 2020).

[54] London Borough of Barking & Dagenham, Xantura and EY, ‘Data Ethics Workbook (February 2020): B&D One View – London Borough of Barking & Dagenham’ 2.

[55] ‘Data Insight in a Local Authority; What Have We Learnt so Far?’ (New Local, 2 June 2017) <https://www.newlocal.org.uk/articles/data-insight-in-a-local-authority-what-have-we-learnt-so-far/> accessed 10 May 2023.

[56] London Borough of Barking & Dagenham, ‘Decision: Procurement of Data Analytics and Predictive Modelling for Children’s, Homelessness and Adult Services’ (11 December 2018). <https://modgov.lbbd.gov.uk/Internet/ieDecisionDetails.aspx?AIId=76308> accessed 21 November 2022.

[57] London Borough of Barking & Dagenham, ‘One View: Build Closure Report: Children’s’ (16 December 2019).

[58] London Borough of Barking & Dagenham, ‘Barking and Dagenham One View DPIA v3.0’ 1.

[59] ‘EY UK’ <https://www.ey.com/en_uk> accessed 10 May 2023.

[60] Public Sector Executive, ‘London Ventures to Lead Cross-Sector Improvements’ (22 October 2013) <https://www.publicsectorexecutive.com/Public-Sector-News/london-ventures-to-lead-cross-sector-improvements> accessed 16 March 2023.

[61] EY, London Councils and Capital Ambition, ‘Guide to London Ventures’ (2017).

[62] Fowler (n 50) 4.

[63] Chris Naylor, ‘Chris Naylor: Why We Must Address Deficits of Power’ (Local Government Chronicle (LGC), 10 June 2019) <https://www.lgcplus.com/services/community-cohesion/chris-naylor-why-we-must-address-deficits-of-power-10-06-2019/> accessed 4 April 2023.

[64] Monica Needs, ‘Sparking Civic Activism’ (Nesta, 11 September 2019). <https://www.nesta.org.uk/blog/sparking-civic-activism/> accessed 4 April 2023.

[65] Sometimes termed ‘One View’ in internal Council documents.

[66] In July 2020, Xantura’s website described OneView as ‘a cloud-based platform for sharing data in a controlled way’. ‘Xantura (Wayback Machine)’ (23 July 2020). <http://web.archive.org/web/20200723130753/https://xantura.com/> accessed 22 May 2023.

[67] Fowler (n 50).

[68] Data collected about school pupils, including data on educational achievements, absences and exclusions, and free school meal eligibility. It is collected by the Department for Education (DfE) three times a year, and used for monitoring and to determine core funding. ‘The School Census: What You Need to Know’ (The Education Hub, 7 October 2022) <https://educationhub.blog.gov.uk/2022/10/07/the-school-census-what-you-need-to-know/> accessed 25 May 2023.

[69] For example, the Liquid Logic case management system used by Adult Social Care and Children’s Social Care, and the Capita system used by Housing.

[70] Mutual Ventures and Xantura, ‘Questions Submitted during the Harnessing the Power of Data to Transform Children’s Services Webinar’ 2.

[71] London Borough of Barking & Dagenham, ‘OneView Impact Plan Updated July 2020’ (internal documentation, 2020) 11.

[72] Tarleton Gillespie, ‘Algorithm’ in Benjamin Peters (ed), Digital Keywords: A Vocabulary of Information Society and Culture (Princeton University Press 2016) 19–20.

[73] ICO and the Alan Turing Institute ‘Explaining Decisions Made with AI’ (ICO 2022) 7 <https://ico.org.uk/for-organisations/guide-to-data-protection/key-dp-themes/explaining-decisions-made-with-artificial-intelligence/> accessed 26 January 2023.

[74] Jon Kleinberg and Éva Tardos, Algorithm Design (Pearson/Addison-Wesley 2006) 795.

[75] NLP uses computational linguistics as well as statistics and machine learning to process text so that its content, meaning and sentiment can be used in further computation. See J Holdsworth, ‘What Is NLP (Natural Language Processing)?’ (IBM, 23 September 2021) <https://www.ibm.com/topics/natural-language-processing> accessed 20 June 2023.

[76] London Borough of Barking & Dagenham, Xantura and EY ‘Data Ethics Workbook (February 2020): B&D One View – London Borough of Barking & Dagenham’ 17.

[77] Victoria Climbié was a child who was murdered by her great-aunt and her great-aunt’s partner in 2000; Peter Connelly (also known as Baby P) was a child who died in 2007 after suffering more than 50 injuries. The deaths led to significant criticism of Haringey Council, the local authority in both cases. The Laming Inquiry into Victoria Climbié’s death specifically recommended that the Government issue guidance about how data protection and other legislation impacts “the sharing of information between professional groups in circumstances where there are concerns about the welfare of children and families.” See Lord Laming, ‘The Victoria Climbié Inquiry: Report’ (2003) 373.

[78] Further details on this process were not available to the research team.

[79] London Borough of Barking & Dagenham, ‘B&D One View: Frequently Asked Questions’ 3.

[80] 26:58 in Mutual Ventures, ‘Harnessing the Power of Data to Transform Children’s Services, 8th September 2020’ <https://www.youtube.com/watch?v=inFq2lzcbfc> accessed 23 January 2023.

[81] London Borough of Barking & Dagenham, ‘B&D One View: Frequently Asked Questions’ 3.

[82] London Borough of Barking & Dagenham, ‘One View: Build Closure Report: Children’s’ (16 December 2019) 6.

[83] Ibid.

[84] Different tiers of service reflect different levels of need, different legal obligations and therefore different levels of support from social workers. ‘Stepping down’ refers to moving from Early Help to universal services, from Child in Need to Early Help or below, from Child Protection to Child in Need or below, and from Looked After Children to Child Protection or below. ‘Stepping up’ refers to moving in the other direction. London Borough of Barking & Dagenham, ‘One View: Build Closure Report: Children’s’ (16 December 2019) 29.

[85] London Borough of Barking & Dagenham, ‘Risk Alerts User Guide For Teams Receiving One View Risk Alerts’.

[86] London Borough of Barking & Dagenham, ‘Barking and Dagenham One View DPIA v3.0’ 14.

[87] London Borough of Barking & Dagenham, ‘Risk Alerts User Guide For Teams Receiving One View Risk Alerts’ 4.

[88] Ibid 2.

[89] Ibid.

[90] Ibid.

[91] London Borough of Barking & Dagenham, ‘One View Review: Beyond COVID 19 – What Next? Agenda & Supporting Papers’ 5.

[92] Ibid.

[93] People at most risk of becoming seriously ill during the COVID-19 pandemic were advised to ‘shield’ – to stay at home and minimise face-to-face contact. See ‘What Is “Shielding” and Who Needs to Do It?’ (Full Fact, 4 June 2020) <https://fullfact.org/health/coronavirus-shielding-social-distancing/> accessed 11 May 2023.

[94] Fowler (n 50) para 2.7.

[95] Ibid.

[96] This report was prepared in order to document the delivery of the Build phase, identify outstanding issues and seek approval to move to the Run phase. See London Borough of Barking & Dagenham, ‘One View: Build Closure Report: Children’s’ (16 December 2019) 2.

[97] The ‘Build Closure Report’ states that this is revised from the case set out in the original project proposal that predicted savings of £1,640 million across three years. We do not have access to this original project proposal.

[98] London Borough of Barking & Dagenham, ‘One View: Build Closure Report: Children’s’ (16 December 2019) 29–35.

[99] London Borough of Barking & Dagenham, ‘One View Review: Beyond COVID 19 – What Next? Agenda & Supporting Papers’ 7.

[100] A saving of 36 minutes per case, across more than 3,500 cases each year. London Borough of Barking & Dagenham, ‘One View: Build Closure Report: Children’s’ (16 December 2019) 34.

[101] London Borough of Barking & Dagenham, Xantura and EY, ‘Data Ethics Workbook (February 2020): B&D One View – London Borough of Barking & Dagenham’ 3.

[102] London Borough of Barking & Dagenham, ‘One View: Build Closure Report: Children’s’ (16 December 2019) 21.

[103] Ibid 36.

[104] Ibid.

[105] Ibid.

[106] Ibid.

[107] London Borough of Barking & Dagenham, ‘Barking and Dagenham One View DPIA v3.0’ 8.

[108] London Borough of Barking & Dagenham, ‘One View: Build Closure Report: Children’s’ (16 December 2019) 33.

[109] Neil Sartorio, ‘Why We Need to Stop Talking about Vulnerable Citizens and Start Building a Stronger, More Resilient Society’ (10 June 2019) <https://www.linkedin.com/pulse/why-we-need-stop-talking-vulnerable-citizens-start-more-neil-sartorio> accessed 22 November 2022.

[110] London Borough of Barking & Dagenham, ‘One View Review: Beyond COVID 19 – What Next? Agenda & Supporting Papers’ 18.

[111] London Borough of Barking & Dagenham, Xantura and EY, ‘Data Ethics Workbook (February 2020): B&D One View – London Borough of Barking & Dagenham’ 3.

[112] Mutual Ventures and Xantura, ‘Questions Submitted during the Harnessing the Power of Data to Transform Children’s Services Webinar’ 5.

[113] London Borough of Barking & Dagenham, ‘One View Insight: Debt, Homelessness and Housing Risk Alerts: Draft for Discussion (Updated)’ 23.

[114] London Borough of Barking & Dagenham, ‘The Barking & Dagenham Corporate Plan: 2020 to 2022 Appendix 1’ (2020) 9.

[115] London Borough of Barking & Dagenham, Xantura and EY, ‘Data Ethics Workbook (February 2020): B&D One View – London Borough of Barking & Dagenham’ 3–4.

[116] Ibid 14.

[117] London Borough of Barking & Dagenham, ‘Barking and Dagenham One View DPIA v3.0’ 10.

[118] Alan Dix, ‘Sufficient Reason’ (2018).

[119] Based on a moral or value judgement – in this case, the definition of what is ‘fair.’

[120] Songül Tolan, ‘Fair and Unbiased Algorithmic Decision Making: Current State and Future Challenges’ (Joint Research Centre, European Commission 2018).

[121] 37:35 in CIPFA, ‘How to Maintain High Ethical Standards and Fight Corruption in the Public Sector – 20 November’ <https://www.youtube.com/watch?v=FkgivOraz60> accessed 25 January 2023.

[122] London Borough of Barking & Dagenham, Xantura and EY, ‘Data Ethics Workbook (February 2020): B&D One View – London Borough of Barking & Dagenham’ 11–12.

[123] Ibid 3, 11–12, 14.

[124] London Borough of Barking & Dagenham, ‘Barking and Dagenham One View DPIA v3.0’.

[125] Ibid 20.

[126] Ibid 10.

[127] London Borough of Barking & Dagenham, ‘B&D One View User Guide’.

[128] London Borough of Barking & Dagenham, ‘B&D One View: Frequently Asked Questions’.

[129] London Borough of Barking & Dagenham, ‘Barking and Dagenham One View DPIA v3.0’ 30.

[130] Ibid.

[131] London Borough of Barking & Dagenham, Xantura and EY, ‘Data Ethics Workbook (February 2020): B&D One View – London Borough of Barking & Dagenham’ 12.

[132] Ibid 3–4.

[133] London Borough of Barking & Dagenham, ‘Barking and Dagenham One View DPIA v3.0’ 30.

[134] London Borough of Barking & Dagenham, Xantura and EY, ‘Data Ethics Workbook (February 2020): B&D One View – London Borough of Barking & Dagenham’ 3.

[135] ICO, ‘What Is Special Category Data?’ (17 October 2022) <https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/special-category-data/what-is-special-category-data/> accessed 1 February 2023.

[136] London Borough of Barking & Dagenham, Xantura and EY, ‘Data Ethics Workbook (February 2020): B&D One View – London Borough of Barking & Dagenham’ 14.

[137] SA Mathieson, ‘Barking and Dagenham: Using Data and Technology to Improve People’s Lives’ (Socitm inform 2019) <https://media.socitm.net/wp-content/uploads/2021/09/10090551/Socitm-Inform-report-Barking-and-Dagenham-1.pdf> accessed 2 December 2022.

[138] UKAuthority, ‘Barking & Dagenham Uses Data to Manage Bookies’ (18 May 2017). <https://www.ukauthority.com/articles/barking-dagenham-uses-data-to-manage-bookies/> accessed 2 December 2022; ‘Betting Machine Stakes Cut to £2’ (BBC News, 17 May 2018). <https://www.bbc.com/news/business-44148285> accessed 2 December 2022.

[139] London Borough of Barking & Dagenham, ‘B&D One View: Frequently Asked Questions’ 7.

[140] London Borough of Barking & Dagenham, Xantura and EY, ‘Data Ethics Workbook (February 2020): B&D One View – London Borough of Barking & Dagenham’ 3.

[141] London Borough of Barking & Dagenham, ‘B&D One View: Frequently Asked Questions’ 7.

[142] See, for example, HM Government, Working Together to Safeguard Children 2023: A guide to multi-agency working to help, protect and promote the welfare of children (December 2023).

[143] Shared Intelligence, ‘Using Predictive Analytics in Local Public Services’ (Local Government Association, 5 November 2020) <https://www.local.gov.uk/publications/using-predictive-analytics-local-public-services> accessed 5 April 2023.

[144] ICO, ‘When Do We Need to Do a DPIA?’ (17 October 2022) <https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/data-protection-impact-assessments-dpias/when-do-we-need-to-do-a-dpia/> accessed 16 December 2022.

[145] Central Digital & Data Office, ‘Data Ethics Framework’ (GOV.UK, 16 September 2020) <https://www.gov.uk/government/publications/data-ethics-framework> accessed 23 May 2023.

[146] London Borough of Barking & Dagenham, Xantura and EY, ‘Data Ethics Workbook (February 2020): B&D One View – London Borough of Barking & Dagenham’ 14.

[147] London Borough of Barking & Dagenham, ‘Delivering Value and Making Change Happen’ (internal document) 28.

[148] London Borough of Barking & Dagenham, ‘One View: Build Closure Report: Children’s’ (16 December 2019) 16.

[149] Ibid 17.

[150] ICO and the Alan Turing Institute, ‘Explaining Decisions Made with AI’ (ICO 2022) <https://ico.org.uk/for-organisations/guide-to-data-protection/key-dp-themes/explaining-decisions-made-with-artificial-intelligence/> accessed 26 January 2023.

[151] CDDO and CDEI, ‘Algorithmic Transparency Recording Standard: Guidance for Public Sector Bodies’ (GOV.UK, 5 January 2023) <https://www.gov.uk/government/publications/guidance-for-organisations-using-the-algorithmic-transparency-recording-standard/algorithmic-transparency-recording-standard-guidance-for-public-sector-bodies> accessed 9 February 2023.

[152] ICO and the Alan Turing Institute, ‘Explaining Decisions Made with AI’ (ICO 2022) <https://ico.org.uk/for-organisations/guide-to-data-protection/key-dp-themes/explaining-decisions-made-with-artificial-intelligence/> accessed 26 January 2023.

[153] CDDO and CDEI, ‘Algorithmic Transparency Recording Standard: Guidance for Public Sector Bodies’ (GOV.UK, 5 January 2023) <https://www.gov.uk/government/publications/guidance-for-organisations-using-the-algorithmic-transparency-recording-standard/algorithmic-transparency-recording-standard-guidance-for-public-sector-bodies> accessed 9 February 2023.

[154] Xantura, ‘Why Council Debt Collection Has to Change Now’ (23 October 2020) <https://xantura.com/why-council-debt-collection-has-to-change-now/> accessed 23 January 2023.

[155] 41:52 in Mutual Ventures, ‘Harnessing the Power of Data to Transform Children’s Services, 8th September 2020’ <https://www.youtube.com/watch?v=inFq2lzcbfc> accessed 23 January 2023.

[156] Ibid (39:34).

[157] London Borough of Barking & Dagenham, Xantura and EY, ‘Data Ethics Workbook (February 2020): B&D One View – London Borough of Barking & Dagenham’ 8.

[158] Mutual Ventures and Xantura, ‘Questions Submitted during the Harnessing the Power of Data to Transform Children’s Services Webinar’ 4.

[159] Other actors have described the use of Xantura’s software as ‘profiling’: see e.g. Ed Sheridan, ‘Town Hall Drops Pilot Programme Profiling Families without Their Knowledge’ (Hackney Citizen, 30 October 2019) <https://www.hackneycitizen.co.uk/2019/10/30/town-hall-drops-pilot-programme-profiling-families-without-their-knowledge/> accessed 24 January 2023.

[160] London Borough of Barking & Dagenham, ‘Barking and Dagenham One View DPIA v3.0’ 29. Emphasis in original.

[161] For example, council staff have publicly described Xantura as enabling them to undertake ‘rapid, ethical and proportionate [data] processing’: see Toni Sekinah, ‘How One London Council’s Digital Investment Enabled a Resilient Response to the COVID-19 Crisis’ (Diginomica, 28 October 2020) <https://diginomica.com/how-one-london-councils-digital-investment-enabled-resilient-response-covid-19-crisis> accessed 24 January 2023.

[162] London Borough of Barking & Dagenham, ‘General Privacy Notice’ <https://www.lbbd.gov.uk/council-and-democracy/privacy-notices/general-privacy-notice> accessed 25 January 2023.

[163] Mutual Ventures and Xantura, ‘Questions Submitted during the Harnessing the Power of Data to Transform Children’s Services Webinar’ 3.

[164] CIPFA, ‘How to Maintain High Ethical Standards and Fight Corruption in the Public Sector – 20 November’ <https://www.youtube.com/watch?v=FkgivOraz60> accessed 25 January 2023.

[165] 2:02:28 in London Borough of Barking & Dagenham Overview and Scrutiny Committee, ‘(Virtual) London Borough of Barking and Dagenham Overview and Scrutiny Committee’ (2021) <https://auditelsystems.mediasite.com/Mediasite/Play/f04197fd21f447e1ac6ac225dde15efd1d> accessed 25 January 2023.

[166] London Borough of Barking & Dagenham, ‘Barking and Dagenham One View DPIA v3.0’ 9. The charter was subsequently published as an ‘Information Ethics and Transparency Charter’ in February 2021: London Borough of Barking & Dagenham, ‘Information Ethics and Transparency Charter’ <https://modgov.lbbd.gov.uk/internet/documents/s143287/Appendix%202-%20Information%20Ethics%20and%20Transparency%20Charter.pdf> accessed 25 January 2023.

[167] London Borough of Barking & Dagenham, ‘Barking and Dagenham One View DPIA v3.0’ 24.

[168] London Borough of Barking & Dagenham, ‘B&D One View: Frequently Asked Questions’ 7–8.

[169] Mutual Ventures and Xantura, ‘Questions Submitted during the Harnessing the Power of Data to Transform Children’s Services Webinar’ 4.

[170] London Borough of Barking & Dagenham, Xantura and EY, ‘Data Ethics Workbook (February 2020): B&D One View – London Borough of Barking & Dagenham’ 12.

[171] London Borough of Barking & Dagenham, ‘B&D One View: Frequently Asked Questions’ 4.

[172] London Borough of Barking & Dagenham, ‘Synthetic Case Summary’ (internal document).

[173] The Data Ethics Framework and its accompanying workbook were not mentioned by interviewees directly involved in delivering services to residents – either as documents that they themselves were aware of or as tools that the Council was using. The workbook appears to have been used most intensively during the development of OneView in 2019. The various user guides provided to frontline staff in Children’s Care and Support do not refer to the framework or the workbook.

[174] This document is 145 pages long: ICO and the Alan Turing Institute, ‘Explaining Decisions Made with AI’ (ICO 2022) <https://ico.org.uk/for-organisations/guide-to-data-protection/key-dp-themes/explaining-decisions-made-with-artificial-intelligence/> accessed 26 January 2023.

[175] Laurel Eckhouse and others, ‘Layers of Bias: A Unified Approach for Understanding Problems With Risk Assessment’ (2018) 46 Criminal Justice and Behavior 16 <https://doi.org/10.1177/0093854818811379> accessed 1 December 2018.

[176] For example, by police at protests, as documented by Big Brother Watch, ‘Face Off: The Lawless Growth of Facial Recognition in UK Policing’ (2018) <https://bigbrotherwatch.org.uk/campaigns/stop-facial-recognition/report/> accessed 24 May 2023.

[177] Vicky Clayton and others, ‘Machine Learning in Children’s Services: Technical Report’ (2020) 9.

[178] Sebastian Pfotenhauer and others, ‘The Politics of Scaling’ (2022) 52 Social Studies of Science 3.

[179] Alan France and David Utting, ‘The Paradigm of “Risk and Protection-Focused Prevention” and Its Impact on Services for Children and Families’ (2005) 19 Children & Society 77.

[180] Ibid 82.

[181] Ada Lovelace Institute, AI Now Institute and Open Government Partnership, ‘Algorithmic Accountability for the Public Sector’ (2021) 13 <https://www.opengovpartnership.org/documents/algorithmic-accountability-public-sector/>.

[182] Ada Lovelace Institute, Participatory data stewardship: A framework for involving people in the use of data (2021) 48–60 <https://www.adalovelaceinstitute.org/report/participatory-data-stewardship/> accessed 30 June 2023.

[183] Vicky Clayton and others, ‘Machine Learning in Children’s Services: Does It Work?’ (What Works for Children’s Social Care 2020) 5.

[184] Matthew J Salganik and others, ‘Measuring the Predictability of Life Outcomes with a Scientific Mass Collaboration’ (2020) 117 Proceedings of the National Academy of Sciences 8398 <https://www.pnas.org/content/early/2020/03/24/1915006117> accessed 8 April 2020.

[185] Micah Altman, Alexandra Wood and Effy Vayena, ‘A Harm-Reduction Framework for Algorithmic Fairness’ (2018) 16 IEEE Security & Privacy 34.

[186] Data Justice Lab, ‘Data Harm Record’ (2020) <https://datajusticelab.org/data-harm-record/> accessed 30 June 2023.

[187] Michael Katell and others, ‘Toward Situated Interventions for Algorithmic Equity: Lessons from the Field’ (Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, ACM, 2020) <http://dl.acm.org/doi/10.1145/3351095.3372874> accessed 28 January 2020.

[188] See for example Reema Patel, Octavia Reeve and Andrew Strait, ‘How Does Structural Racism Impact on Data and AI?’ (5 May 2021) <https://www.adalovelaceinstitute.org/blog/structural-racism-impact-data-ai/> accessed 11 November 2021; Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (New York University Press 2018); Catherine D’Ignazio and Lauren F Klein, Data Feminism (The MIT Press 2020); David Leslie and others, ‘Ethics Review of Machine Learning in Children’s Social Care’ (What Works for Children’s Social Care 2020) <https://whatworks-csc.org.uk/wp-content/uploads/WWCSC_Ethics_of_Machine_Learning_in_CSC_Jan2020.pdf> accessed 2 February 2020.

[189] Doug Pyper, ‘The Public Sector Equality Duty and Equality Impact Assessments’ (House of Commons Library, 8 July 2020) 24 <https://researchbriefings.files.parliament.uk/documents/SN06591/SN06591.pdf> accessed 30 June 2023.

[190] Ibid 11.

[191] R (Bridges) v South Wales Police [2020] EWCA Civ 1058 [199].

[192] Equality and Human Rights Commission, ‘Artificial Intelligence in Public Services’ (1 September 2022) <https://www.equalityhumanrights.com/en/advice-and-guidance/artificial-intelligence-public-services> accessed 2 February 2023.

[193] Equality and Human Rights Commission, ‘Artificial Intelligence: Checklist for Public Bodies in England’ (1 September 2022) <https://www.equalityhumanrights.com/en/advice-and-guidance/artificial-intelligence-checklist-public-bodies-england> accessed 2 February 2023.

[194] Equality and Human Rights Commission, ‘Human Rights: Human Lives: A Guide to the Human Rights Act for Public Authorities’ (2014) 53–4. <https://www.equalityhumanrights.com/en/file/5921/download?token=YHsvvBFw> accessed 25 September 2023.

[195] Ibid 55.

[196] Ibid.

[197] ICO, ‘What Is Special Category Data?’ (17 October 2022) <https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/special-category-data/what-is-special-category-data/> accessed 1 February 2023.

[198] Ibid.

[199] Ibid.

[200] Ibid.

[201] ICO, ‘What Are the Rules on Special Category Data?’ (17 October 2022) <https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/special-category-data/what-are-the-rules-on-special-category-data/> accessed 2 February 2023.

[202] Ibid.

[203] Centre for Data Ethics and Innovation, ‘Enabling Responsible Access to Demographic Data to Make AI Systems Fairer’ (2023) <https://www.gov.uk/government/publications/enabling-responsible-access-to-demographic-data-to-make-ai-systems-fairer/report-enabling-responsible-access-to-demographic-data-to-make-ai-systems-fairer> accessed 14 June 2023.

[204] Equality and Human Rights Commission, ‘The Public Sector Equality Duty and Data Protection’ (2021) <https://www.equalityhumanrights.com/en/publication-download/public-sector-equality-duty-and-data-protection> accessed 2 February 2023.

[205] ICO, ‘What about Fairness, Bias and Discrimination?’ (19 May 2023) <https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/how-do-we-ensure-fairness-in-ai/what-about-fairness-bias-and-discrimination/> accessed 19 June 2023.

[206] Deborah G Johnson and Jameson M Wetmore, ‘STS and Ethics: Implications for Engineering Ethics’ in Edward J Hackett and others (eds), The Handbook of Science and Technology Studies (3rd edn, The MIT Press 2008) 574.

[207] Andrew D Selbst and others, ‘Fairness and Abstraction in Sociotechnical Systems’ (Proceedings of the Conference on Fairness, Accountability, and Transparency, ACM, 2019) 62 <http://doi.acm.org/10.1145/3287560.3287598> accessed 29 January 2019.

[208] ICO and the Alan Turing Institute, ‘Explaining Decisions Made with AI’ (ICO 2022) 81–82 <https://ico.org.uk/for-organisations/guide-to-data-protection/key-dp-themes/explaining-decisions-made-with-artificial-intelligence/> accessed 26 January 2023.

[209] J Nathan Matias, ‘Humans and Algorithms Work Together — So Study Them Together’ (2023) 617 Nature 248.

[210] Zara Rahman and Julia Keseru, ‘Predictive Analytics for Children: An Assessment of Ethical Considerations, Risks, and Benefits’ (UNICEF Office of Research 2021) 36, 40–44 <https://www.unicef-irc.org/publications/1275-predictive-analytics-for-children-an-assessment-of-ethical-considerations-risks-and-benefits.html> accessed 30 June 2023.

[211] Alon Jacovi and others, ‘Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI’ (Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, ACM, 2021) 628–630 <https://doi.org/10.1145/3442188.3445923> accessed 6 March 2021.

[212] Vicky Clayton and others, ‘Machine Learning in Children’s Services: Does It Work?’ (What Works for Children’s Social Care 2020) 24.

[213] Joanna Redden and others, ‘Automating Public Services: Learning from Cancelled Systems’ (Carnegie UK 2022) 11 <https://www.carnegieuktrust.org.uk/publications/automating-public-services-learning-from-cancelled-systems/> accessed 30 June 2023.

[214] Ibid 55.

[215] Anna Jobin, Marcello Ienca and Effy Vayena, ‘Artificial Intelligence: The Global Landscape of Ethics Guidelines’ (2019) arXiv:1906.11668 [cs.CY] <http://arxiv.org/abs/1906.11668> accessed 28 June 2019.

[216] UN Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, ‘Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression to the General Assembly on Artificial Intelligence Technologies and Implications for the Information Environment’ (2018); UN Special Rapporteur on Extreme Poverty and Human Rights, ‘Report on Digital Technology, Social Protection and Human Rights’ (2019). <https://www.ohchr.org/EN/Issues/Poverty/Pages/DigitalTechnology.aspx> accessed 2 March 2021.

[217] Ronan Hamon and others, ‘Impossible Explanations? Beyond Explainable AI in the GDPR from a COVID-19 Use Case Scenario’ (Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, ACM, 2021) 550 <https://doi.org/10.1145/3442188.3445917> accessed 6 March 2021.

[218] Taina Bucher, If … Then: Algorithmic Power and Politics (Oxford University Press 2018) 44.

[219] Umang Bhatt and others, ‘Explainable Machine Learning in Deployment’ (Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 2020).

[220] Lorna McGregor, Daragh Murray and Vivian Ng, ‘International Human Rights Law as a Framework for Algorithmic Accountability’ (2019) 68 International & Comparative Law Quarterly 309, 319.

[221] David Leslie, ‘Understanding Artificial Intelligence Ethics and Safety: A Guide for the Responsible Design and Implementation of AI Systems in the Public Sector’ (The Alan Turing Institute 2019) 12 <https://zenodo.org/record/3240529> accessed 13 January 2020.

[222] ICO and the Alan Turing Institute, ‘Explaining Decisions Made with AI’ (ICO 2022) <https://ico.org.uk/for-organisations/guide-to-data-protection/key-dp-themes/explaining-decisions-made-with-artificial-intelligence/> accessed 26 January 2023.

[223] Ibid 40.

[224] Ibid 49–51.

[225] CDDO and CDEI, ‘Algorithmic Transparency Recording Standard: Guidance for Public Sector Bodies’ (GOV.UK, 5 January 2023) <https://www.gov.uk/government/publications/guidance-for-organisations-using-the-algorithmic-transparency-recording-standard/algorithmic-transparency-recording-standard-guidance-for-public-sector-bodies> accessed 9 February 2023.

[226] Ada Lovelace Institute, Who cares what the public think? (2022) 19 <https://www.adalovelaceinstitute.org/evidence-review/public-attitudes-data-regulation/> accessed 30 June 2023.

[227] Ada Lovelace Institute, AI Now Institute and Open Government Partnership, ‘Algorithmic Accountability for the Public Sector’ (2021) <https://www.opengovpartnership.org/documents/algorithmic-accountability-public-sector/>.

[228] Many of these refer specifically to ‘artificial intelligence’, which is a form of data-driven technology.

[229] AlgorithmWatch, ‘AI Ethics Guidelines Global Inventory’ (2019) <https://algorithmwatch.org/en/ai-ethics-guidelines-global-inventory/> accessed 26 January 2023.

[230] Luciano Floridi and others, ‘AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations’ (2018) 28 Minds and Machines 689.

[231] UNESCO, ‘Recommendation on the Ethics of Artificial Intelligence’ (SHS/BIO/REC-AIETHICS/2021).

[232] IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, ‘Ethically Aligned Design: First Edition’ (Institute of Electrical and Electronics Engineers 2019) <https://ethicsinaction.ieee.org/> accessed 18 August 2019.

[233] EthicalOS, ‘Risk Mitigation Checklist’ (2018) <https://ethicalos.org/wp-content/uploads/2018/08/EthicalOS_Check-List_080618.pdf> accessed 23 March 2020.

[234] Kate Dodgson and others, ‘A Framework for the Ethical Use of Advanced Data Science Methods in the Humanitarian Sector’ (Data Science & Ethics Group 2020) <https://5f2cd2ba-741c-4b29-ae47-00a8291b1d3c.filesusr.com/ugd/d1cf5c_6af8feb771194453817d62c92cee2a21.pdf> accessed 29 April 2020.

[235] David Leslie and others, ‘Ethics Review of Machine Learning in Children’s Social Care’ (What Works for Children’s Social Care 2020) <https://whatworks-csc.org.uk/wp-content/uploads/WWCSC_Ethics_of_Machine_Learning_in_CSC_Jan2020.pdf> accessed 2 February 2020.

[236] Anna Jobin, Marcello Ienca and Effy Vayena, ‘Artificial Intelligence: The Global Landscape of Ethics Guidelines’ (2019) arXiv:1906.11668 [cs.CY] <http://arxiv.org/abs/1906.11668> accessed 28 June 2019.

[237] Sanna J Ali and others, ‘Walking the Walk of AI Ethics: Organizational Challenges and the Individualization of Risk among Ethics Entrepreneurs’ (ACM Conference on Fairness, Accountability, and Transparency, 2023) <http://arxiv.org/abs/2305.09573> accessed 16 June 2023; Jessica Morley and others, ‘Operationalising AI Ethics: Barriers, Enablers and next Steps’ (2023) 38 AI & Society 411.

[238] Shannon Vallor, Irina Raicu and Brian Green, ‘Technology and Engineering Practice: Ethical Lenses to Look Through’ (Markkula Center for Applied Ethics, 13 July 2020) <https://www.scu.edu/ethics-in-technology-practice/ethical-lenses/> accessed 16 June 2023.

[239] Anne L Washington and Rachel Kuo, ‘Whose Side Are Ethics Codes on? Power, Responsibility and the Social Good’ (Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, ACM, 2020) <http://dl.acm.org/doi/10.1145/3351095.3372844> accessed 28 January 2020.

[240] Alison B Powell and others, ‘Addressing Ethical Gaps in “Technology for Good”: Foregrounding Care and Capabilities’ (2022) 9(2) Big Data & Society.

[241] Theresa Züger and Hadi Asghari, ‘AI for the Public. How Public Interest Theory Shifts the Discourse on AI’ (2022) 38 AI & SOCIETY 815 <https://doi.org/10.1007/s00146-022-01480-5> accessed 26 September 2022.

[242] Catherine D’Ignazio and Lauren F Klein, Data Feminism (The MIT Press 2020); David Leslie and others, ‘Ethics Review of Machine Learning in Children’s Social Care’ (What Works for Children’s Social Care 2020) 60 <https://whatworks-csc.org.uk/wp-content/uploads/WWCSC_Ethics_of_Machine_Learning_in_CSC_Jan2020.pdf> accessed 2 February 2020.

[243] ICO and The Alan Turing Institute, ‘Explaining Decisions Made with AI’ (ICO 2022) <https://ico.org.uk/for-organisations/guide-to-data-protection/key-dp-themes/explaining-decisions-made-with-artificial-intelligence/> accessed 26 January 2023.

[244] Ada Lovelace Institute, Algorithmic impact assessment: A case study in healthcare (2022) <https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/> accessed 13 June 2023.

[245] Note: because relatively small numbers of individuals work in each service, this section does not detail the exact numbers of interviewees in each category to protect individuals’ anonymity.

[246] The list included interviews with individuals involved in implementing OneView, the Council’s predictive analytics system, including seven interviews with four EY employees and three interviews with one Xantura employee.

[247] For more detail on the Council’s structure, see London Borough of Barking & Dagenham, ‘Council Structure’ <https://www.lbbd.gov.uk/Council-structure> accessed 17 December 2020.


Image credit: KevinAlexanderGeorge