

Checkpoints for vaccine passports

Requirements that governments and developers will need to deliver in order for any vaccine passport system to deliver societal benefit


Executive Summary

The rapid development and roll-out of vaccines to protect people from COVID-19 has prompted debate about digital ‘vaccine passports’. There is a confusing array of terms used to describe these tools, which are also called COVID-19 status certificates. We identify them through the common properties of linking health status (vaccine status and/or test results) with verification of identity, for the purpose of determining permissions, rights or freedoms (such as access to travel, leisure or work). The vaccine passports under debate primarily take a digital form.

Digital vaccine passports are novel technologies, built on uncertain and evolving science. By creating infrastructure for segregation and risk scoring at an individual level, and enabling third parties to access health information, they bring profound risks to individual rights and concepts of equity in society.

As the pandemic death toll rises globally, some countries are bringing down case numbers through rapid vaccination programmes, while others face substantial third or fourth waves of infection. The mitigating effects of vaccination have brought COVID vaccine passports into consideration for companies, states and countries.

Arguments offered in support of vaccine passports include that they could allow countries to reopen more safely, let those at lower risk of infection and transmission help to restart local economies, and allow people to reengage in social contact with reduced risk and anxiety.

Could a digital vaccine passport provide a progressive return to a normal life, for those who meet the criteria now, while vaccines are distributed in the coming months and years? Or might the local and global inequalities and risks outweigh the benefits and undermine societal notions of solidarity?

The current vaccine passport debate is complex. It encompasses a range of different proposed design choices, uses and contexts, and poses high-level and generalised trade-offs that are impossible to quantify given the current evidence base, or false choices that obstruct understanding (e.g. ‘saving lives vs privacy’). Meanwhile, policymakers supporting these strategies, and companies developing and marketing these technological solutions, make a compelling but simplistic pitch that these tools can help societies open up more safely and sooner.

This study disentangles those debates to identify the important issues, outstanding questions and tests that any government should consider in weighing whether to permit this type of tool to be used within society. It aims to support governments and developers to work through the necessary steps to examine the evidence available, understand the design choices and the societal impacts, and assess whether a roll-out of vaccine passports could navigate risks to play a socially beneficial role.

This report is the result of an international call for evidence, an expert deliberation, and months of monitoring the debate and development of COVID status certification and vaccine passport systems around the world. We have reviewed evidence and discussion on technical build, risks, concerns and opportunities put forward by governments, supranational bodies, collectives, companies, developers, experts and third-sector organisations. We are indebted to the many experts who brought their knowledge and evidence to this project (see full acknowledgements at the end of this report).

Responding to the policy environment, and the real-world decisions being made at pace, this study has, of necessity, prioritised speed over geographic completeness. In particular, we should caution that the evidence submitted is weighted towards the UK, European and North American contexts, and will currently be most useful to policymakers in these areas, and in future to policymakers facing similar conditions – increasing levels of vaccination and falling case numbers – while navigating what are likely to be long-term questions of managing outbreaks and variants.

Some factors will be relevant in any conditions for countries and states considering whether and how to use digital vaccine passports. These include the current evidence on infection and transmission of the virus following vaccination, and some of the technical design considerations and choices that any scheme will face.

A number of the issues – such as the standards governing technical development – will need to be considered at an international level, to ensure interoperability and mutual recognition between different countries. There are strong reasons why all countries should consider the potential global impacts of adoption of a vaccine passport scheme. Any national or regional use of vaccine passports that contributes to hoarding or ‘vaccine nationalism’ will produce extreme local manifestations of existing global inequalities – both in terms of health and economics – as the high rate of infection and deaths in India currently evidences. Prioritising national safety over global responsibility also risks prolonging the pandemic for everyone by leaving the door open to mutations that aren’t well controlled by existing vaccines.

Other requirements will be highly contextualised in each jurisdiction. The progress in accessing and administering vaccinations, local levels of uptake and reasons for vaccine hesitancy, legal regimes, and ethical and social considerations will weigh heavily on whether and how such schemes should go ahead. Even countries that seem to have superficially similar conditions may in fact differ on important and relevant aspects that will need local deliberation of what is justifiable and achievable practically, from the extent of existing digital infrastructure to public comfort with the use of technology, and attitudes towards increased visibility to the state or to private companies.

Incentives and overheads will look different as well. The structure of the economy – whether it is highly reliant on tourism for example, as well as the level of access to the internet and smartphones – will be important factors in calculating marginal costs and benefits of digital vaccine passports. And that local calculation will need to be dynamic: countries with minimal public health restrictions in place and low rates of COVID-19 face very different calculations in terms of benefits and costs to those in highly restrictive lockdowns with a high rate of COVID-19 in the community.

This report presents the key debates, evidence and common questions under six subject headings. These are further distilled in this summary into six requirements that governments and developers will need to deliver, to ensure any vaccine passport scheme builds from a secure scientific foundation, understands the full context of its specific sociotechnical system, and mitigates some of the biggest risks and harms through law and policy. In other words, a roadmap for a vaccine passport system that delivers societal benefit. These are:

  1. Scientific confidence in the impact on public health
  2. Clear, specific and delimited purpose
  3. Ethical consideration and clear legal guidance about permitted and restricted uses, and mechanisms to support rights and redress, and to tackle illegal use
  4. Sociotechnical system design, including operational infrastructure
  5. Public legitimacy
  6. Protection against future risks and mitigation strategies for global harms.

These requirements (with detailed recommendations below) set a series of high thresholds for vaccine passports being developed, deployed and implemented in a societally beneficial way. Building digital infrastructure in which different actors across society control rights or freedoms on the basis of individual health status, with the myriad potential benefits and harms that could arise from doing so, should face a high bar.

At this stage in the pandemic, there hasn’t been an opportunity for real-world models to work comprehensively through these challenging but necessary steps, and much of the debate has focused on a smaller subset of these requirements – in particular technical design and public acceptability. Despite the high thresholds, and given what is at stake and how much is still uncertain about the pathway of the pandemic, it is possible that the case can be made for vaccine passports to become a legitimate tool to manage COVID-19 at a domestic, national scale, as well as supporting safer international travel.

As evidence, explanation and clarification of a complex policy area, we hope this report helps all actors navigate the necessary decision-making prior to adoption and use of vaccine passports. By setting out the features to be delivered across the whole system, the benefits and risks to be weighed, and the harms to be mitigated, we hope to support governments to calculate whether they can be justified, or whether investment in vaccine passports might prove to be a technological distraction from the central goal to reopen societies safely and equitably: global vaccination.

Recommendations summary for governments and developers

1. Scientific confidence in the impact on public health

The timeframe of the pandemic means that – despite significant leaps forward in understanding that have led to more effective disease control and vaccine development – scientific knowledge is still developing about the effectiveness of the protection offered through the tests, vaccines or antibodies that most vaccine passport models rely on.

Most of the vaccines now available offer a high level of protection against serious illness from the currently dominant strains of the virus. It is still too early to know the level of protection offered by individual vaccines in terms of duration, generalisability, efficacy regarding mutations and protection against transmission.

This means that any vaccine passport system would need to be dynamic, taking into account the differing efficacy of each vaccine, known differences in efficacy against circulating variants, and the change in efficacy over time. A vaccine passport should not be seen as a ‘safe’ pass or a proxy for immunity, but rather as a lowering of risk that might be comparable to, or work in combination with, other public health measures.

Calculating an individual’s risk based on test results within a vaccine passport scheme avoids some of the problems associated with relying solely on vaccination, including access, take-up and coverage. A reliable negative test indicates that an individual is not currently infectious and therefore not a risk to others. However, this type of hybrid scheme requires widespread access to highly accurate and fast-turnaround tests, as well as scientific consensus on the window in which someone can be deemed low risk (most schemes use 24–72 hours).

Evidence of a negative test offers no ‘future’ protection after that window, making it less useful for travel to another city or entry to another country. Given that most point-of-care tests (tests that give a result at home) have a lower level of accuracy than tests administered in clinical settings, the practical overheads of reliance on testing may make this highly challenging for any routine or widespread use. If consistently accurate point-of-care tests become available, that might make testing a more viable route for a passport system, but it would also reduce the need for a digital record, as people could simply show the test at the point of access.
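To make the time-limited nature of test-based status concrete, here is a minimal sketch of a validity-window check. The window lengths and test-type names are illustrative assumptions drawn from the 24–72 hour range above, not recommendations from this report or any scheme.

```python
from datetime import datetime, timedelta

# Illustrative windows only, taken from the 24-72 hour range discussed
# above; any real scheme would set these based on scientific consensus.
VALIDITY_WINDOWS = {
    "pcr": timedelta(hours=72),
    "rapid_antigen": timedelta(hours=24),
}

def negative_test_still_valid(test_type: str, sampled_at: datetime,
                              now: datetime) -> bool:
    """Return True if a negative test is still within its low-risk window."""
    window = VALIDITY_WINDOWS.get(test_type)
    if window is None:
        return False  # unknown test types confer no status
    return now - sampled_at <= window
```

The point of the sketch is that status expires: a passport built on testing is a statement about a narrow time window, not a durable attribute of the holder.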

Almost all models of vaccine passport attempt to manage risk at an individual level rather than using collective and contextual measures: they class an individual as lower risk based on their vaccine or test status, rather than a more contextual risk of local infection numbers and R rate in a given area. Prioritising this narrow calculation above a more contextual one may undermine collective assessments of risk and safety, and reduce the likelihood of observing social distancing or mask wearing.

A further important dimension is how the use of a vaccine passport affects vaccine take-up by hesitant groups: it provides a clear incentive to disengaged or busy people, but could heighten anxiety among those who distrust the vaccine or the state, if it is seen as mandatory vaccination or surveillance by the back door.

Before progressing further with plans for vaccine passports:

Governments and public health experts should:


  1. Set scientific pre-conditions, including the level of reduced transmission from vaccination that would be acceptable to permit their use; and acceptable testing regimes (accuracy levels and timeline).
  2. Model and test behavioural impacts of different passport schemes (for example, in combination with or in place of social distancing). This should examine any ‘side effects’ of certification (such as a false sense of security, or impacts on vaccine hesitancy), as well as responses to changing conditions (for example, vaccines’ efficacy against new mutations). This should be modelled in general and in specific situations (such as the predicted health impact if used in place of quarantine at borders, or social distancing in restaurants), to inform their likely real-world impact on risk and transmission.
  3. Compare vaccine passport schemes to other public health measures and alternatives in terms of necessity, benefits, risks and costs – for example, offering different guidance to vaccinated and non-vaccinated populations without requiring certification; investing in public health measures; or greater incentives to test and self-isolate.
  4. Develop and test public communications about what certification should be understood to mean in terms of uncertainty and risk.
  5. Outline the permitted pathways for calculating what constitutes ‘lower risk’ individuals, to build into any ‘passport’ scheme, including: vaccine type; vaccination schedule (gaps between doses); test types (at home or professionally administered); natural immunity/antibody protection; and duration of reduced risk following vaccination, testing and infection (an illustrative encoding is sketched after this list).
  6. Outline public health infrastructure requirements for successful use of a passport scheme, which might include access to vaccine, vaccination rate, access to tests, testing accuracy, or testing turnaround.
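As a thought experiment only, such a pathway policy might be encoded as configuration that a passport system consumes. Every vaccine name, test type and duration below is invented for illustration and drawn from no real scheme.

```python
# Hypothetical policy configuration; all values are placeholders.
PERMITTED_PATHWAYS = {
    "vaccination": {
        "accepted_vaccines": ["vaccine_a", "vaccine_b"],  # placeholder names
        "min_days_since_final_dose": 14,
        "valid_for_days": 180,      # duration of 'lower risk' status
    },
    "negative_test": {
        "accepted_tests": ["pcr"],  # e.g. exclude self-administered tests
        "valid_for_hours": 72,
    },
    "recovery": {
        "requires": ["positive_pcr", "completed_isolation"],
        "valid_for_days": 90,
    },
}
```

Expressing the pathways as data rather than hard-coded logic would make it easier to keep a scheme dynamic, updating windows and accepted pathways as the evidence changes.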

Developers must:


  1. Recognise, understand and use the science underpinning these systems.
  2. Use evidence-based terminology to avoid incorrect or misleading understanding of their products. For example, many developers conflated the concept of ‘immunity’ with ‘vaccinated’ in materials shared with partners and governments, creating a false sense that these systems can prove if someone is immune.
  3. Follow government guidelines for permitted pathways to calculation of ‘lower risk’.
  4. Not release any digital vaccine passport tools for public use until there is scientific agreement about how they represent ‘lower risk’ (as above).

2. Clear, specific and delimited purpose

It will be much easier to weigh the benefits, risks and potential mitigations when considering specific use cases (visiting care homes, starting university, or international travel without quarantine, for example) rather than generalised uses.

Based on the health modelling, there may be greater justification for some use cases of digital vaccine passports than others, such as settings where individuals work face to face with vulnerable groups. Countries are already coming under pressure to create certificates for international travel to selected destinations, and this is likely to expand. There may also be some uses that should be prohibited as discriminatory (examples to consider include accessing essential services, public transport or voting) and exemptions that should be introduced for those unable to have a vaccine or regular testing.

Developing clear purposes and uses should be informed by public deliberation, law and ethics (see below), and mindful of risks that could be caused in different settings, which might include liability for businesses or insurance costs for individuals, barriers to employment, and stigma and discrimination.

Before progressing further with plans for vaccine passports:

Governments should:


  1. Specify the purpose of a vaccine passport and articulate the specific problems it seeks to solve.
  2. Weigh alternative options and existing infrastructure, policy or practice to consider whether any new system and its overheads are proportionate for specific use cases.
  3. Clearly define where use of certification will be permitted, and set out the scientific evidence on the impact of these systems.
  4. Clearly define where the use of certification will not be acceptable, and whether any population groups should be exempted (for example children, pregnant women or those with health conditions).
  5. Consult with representatives of workers and employers, and issue clear guidance on the use of vaccine passports in the workplace.
  6. Develop success measures and a model for evaluation.

Developers must:

  1. Articulate clear intended use cases and purposes for these systems, and anticipate unsupported uses. Some developers consulted for this study said they designed their systems as ‘use agnostic’, meaning they failed to articulate who the specific end users and affected parties would be. Not having clear use cases makes it challenging for developers to utilise best-practice privacy-by-design and ethics-by-design approaches when designing new technologies.
  2. Utilise design tools and processes that seek to identify the consequences and potential effects of these tools in different contexts. These may include scenario planning of different situations in which users might use these tools for unintended purposes; utilising design practices like consequence scanning to identify and mitigate potential harms; and employing ‘red teams’ to identify vulnerabilities by deliberately attacking the tools’ digital and physical security features. For the sake of their own products’ effectiveness, it is essential that developers work back from the worst-case scenario uses of their tools to make necessary changes to technical design features, partnership and business models, and use this process to inform impact evaluation and monitoring.

3. Ethical consideration and clear legal guidance about permitted and restricted uses, and mechanisms to support rights and redress and tackle illegal use

Interpretation and application of ethics and law will be specific to each region and jurisdiction, and – as described above – this report does not attempt to do justice to a fully international picture. There are of course some global agreements, in particular the Universal Declaration of Human Rights and its two covenants, that are universally applicable.
Based on the debates around ethical norms and social values we have been following in the UK, USA and Europe in particular, there are a number of areas of focus in terms of ethics and law.

Personal liberty has been a significant concern in the debate – in particular, whether vaccine passports might represent the least restrictive option for individual liberties while minimising harm to others. There are important legal tests, notably respect for a range of human rights such as the right to a private life, which must be considered where people are required to disclose personal information.

Wider concerns raised are around impacts on fairness, equality and non-discrimination, social stratification and stigma, at both a domestic and an international level. Specific concerns about harms to individuals or groups – through facilitating additional surveillance by governments or private companies, or blocking employment or access to essential services – will need to be addressed.

Legal and ethical issues should be weighed in advance of any roll-out, and adequate guidance, oversight and regulation will be required.

Before progressing any further with vaccine passports:

Governments should:

  1. Publish, and require the publication of, impact assessments – on issues including data protection, equality and human rights.
  2. Offer clarity on the current legality of any use, in particular relating to laws regarding employment, equalities, data protection, policing, migration and asylum, and health regulations.
  3. Create clear and specific laws, and develop guidelines for all potential user groups about the legality of use, mechanisms for enforcement and methods of legal redress for any vaccine passport scheme.
  4. Support relevant regulators to work cooperatively and pre-emptively.
  5. Make any changes via primary legislation, to ensure due process, proper scrutiny and public confidence.
  6. Develop suitable policy architecture around any vaccine passport scheme, to mitigate harms identified in impact assessments. That might require employment protection and financial support for those facing barriers to work on the basis of health status; mass rapid testing centres that can be flexed by need (for example, before major sports events); and guaranteed turnaround of results fast enough to be used in a passport scheme.

Developers must:

  1. Undertake human rights, equalities and data protection impact assessments of their systems, both prior to use and post-deployment, to measure their impact in different scenarios. These assessments can help clarify potential risks and harms of systems, and offer clear routes to mitigation. They should be made public and subject to scrutiny by an independent assessor.
  2. Consider the existing norms of social behaviour that these tools may change. Do these tools grant additional power to particular members of society at the cost of others? Do they open new potential for misuse? The misuse of data collected for contact tracing should act as a warning – contact tracing data from pubs being harvested and sold on to third parties is an example of unforeseen behaviours that these tools may enable. Mitigating these risks should be built into the sociotechnical design (see below).

4. Sociotechnical system design, including operational infrastructure to make a digital tool feasible

Designing a vaccine passport system requires much more than the technical design of an app, and includes consideration of wider societal systems alongside a detailed examination of how any scheme would operate in practice.

When it comes to technical design, there are a number of models being developed that have different attributes and security measures, and bring different risks into focus. There are commonalities – for example, QR codes are widely used, with varying degrees of security – but the models are too disparate and varied to summarise in detail here. With some models bringing together identity information and biometric information with health records, any scheme must incorporate the highest level of security.
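To illustrate one security property many QR-code designs aim for – offline verification with no call to a central server – here is a minimal sketch of a digitally signed credential, using the widely used Python cryptography package. The payload fields are invented for the example and do not reflect any particular scheme.

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuer side: sign a minimal status payload (field names are placeholders).
issuer_key = Ed25519PrivateKey.generate()
payload = json.dumps({"status": "vaccinated", "expires": "2021-09-01"}).encode()
signature = issuer_key.sign(payload)

# Verifier side: check the signature against the issuer's public key.
# Verification happens offline, and the verifier sees exactly what the
# payload contains - which is why data minimisation in the payload matters.
public_key = issuer_key.public_key()
try:
    public_key.verify(signature, payload)
    print("credential accepted")
except InvalidSignature:
    print("credential rejected")
```

In a real scheme the payload and signature would be encoded into the QR code and the issuer’s public key distributed to verifiers; whatever is put in the payload is readable by every verifier, which is where the data-minimisation principle below bites.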

Some risks can be minimised, to some extent, by following best-practice design principles, including data minimisation, openness, privacy by design, ethics by design and giving the user control over their data. Governments also need to be careful not to allow rapid deployment of COVID vaccine passport systems to lock in future decisions, including around the development of wider digital identity systems (see requirement on future risks).

When it comes to the ‘socio’ part of sociotechnical design, governments need to decide what role they ought to play, even if they choose not to design and implement a system themselves (many developers described their role as ‘creating the highway’ and look to governments to decide the ‘rules of the road’).

Governments (alone, or acting through regional or international governmental institutions) are the only actors that can consider the opportunities and risks (identified above) in the round, and will need to offer legal clarity as well as monitor impact and mitigate harms, so should not step back from this question. They will need to ensure that the operational and digital infrastructure is in place across the whole system, from jab or test through to job or border.

Governments will also need to consider costs – including opportunity costs, maintenance costs and burdens on business – and impacts on other aspects of public health, including vaccination programmes, other public health measures, and public trust in health services and vaccination.

Before progressing any further with vaccine passports:

Governments should:

  1. Outline their vision for any role vaccine passports should play in their COVID-19 strategy, whether they are developing their own systems or permitting others to develop and use passports.
  2. Outline a set of best-practice design principles any technical designs should embody – including data minimisation, openness, ethics by design and privacy by design – and conduct small-scale pilots before further deployment.
  3. Protect against digital discrimination, by creating a non-digital (paper) alternative.
  4. Be clear about how vaccine passports link to or expand existing data systems (in particular health records and identity).
  5. Clarify broader societal issues relating to the system, including the duration of any planned system, practical expectations of other actors in the system and technological requirements, aims, costs and the possible impacts on other parts of the public health system or economy, informed by public deliberation (see below).
  6. Incorporate policy measures to mitigate ethical and social risks or harms identified (see above).

Developers must:

  1. Consider how these applications will fit within wider societal systems, and what externalities their introduction may cause. While governments should articulate the rules of the road, developers must acknowledge the values and incentives that they bake into their design and security features, and how these can amplify or mitigate potential harmful uses of their technology. It is essential that developers work with local communities, regulators, businesses and civil society organisations to understand the risks introduced by their products, and test out how these systems are being used in practice, to understand their externalities. Failing to do so will not only risk causing further harm to already marginalised members of society, but lead to reputational damage and litigation or legal liability for developers.
  2. Proactively clarify with regulators the need for clear legal guidance on where these systems are appropriate prior to any roll-out or use of specific applications. If a lack of clear guidance from governments continues, firms, developers and their users may face legal liability for misuse or abuse.
  3. Ensure they develop their technology with privacy-by-design and ethics-by-design approaches. This should include data-minimisation strategies to reduce the amount of data stored and transferred; consequence scanning in the design phase; public engagement, in particular with marginalised communities, during design and implementation; and scanning for security threats across the whole system (from health actors to border control).
  4. Ensure their systems meet international interoperability standards being developed by the WHO.
  5. Work with governments and members of local communities to develop training materials for these systems.

5. Public legitimacy

Public confidence will be crucial to the success of a COVID vaccine passport system, and will be highly contextual to each locality. For many countries, there are sensitivities involved in building technical systems that require personal health data to be linked with identity or biometric data. These combine with challenges in the wider sociotechnical system, including financial and other burdens on society, businesses and individuals, to produce concerns about potential harms. A system that is seen as trusted and legitimate could bolster hopes that it might encourage vaccination and uptake of booster shots, or inspire more confidence in spaces that require vaccination or testing to enter.

Polling suggests public support for vaccine passports varies based on the particular details of proposed systems (including how they will establish status and in which settings), and on concerns about discrimination and inequality. Polling to date only scratches the surface of these new applications of technology, and deeper methods of public engagement will be needed to properly understand opinion, perceived benefits and risks, and the trade-offs the public is willing to make.

Before progressing any further with vaccine passports:

Governments should:

  1. Undertake rapid and ongoing public deliberation as a complement to, and not a replacement for, existing guidance, legislation and proper consideration of subjects mentioned above and throughout this report.
  2. Undertake public deliberation with groups who may have particular interests or concerns about such a system: for example, those who are unable to have the vaccine, those unable to open businesses due to risk, those who face over-surveillance from police or authorities, groups who have experienced discrimination or stigma, or those with particular sensitivities about the use of biometric identification systems. This would be in addition to assessing general public opinion.
  3. Engage key actors in the successful delivery of these systems (business owners, border control and public health experts, for example).

Developers must:

  1. Undertake meaningful consultation with potentially affected stakeholders, local communities and businesses to understand whether roll-outs of these systems are desired, and identify any risks or concerns. The negative reaction from parts of the hospitality industry in the UK should be a warning to developers who explicitly cite this use case as a primary reason for developing their system.1

6. Protection against future risks and mitigation strategies for global harms

If governments believe they have resolved all the preceding tensions and determined that a new system should be developed, they will also need to consider the longer-term effects of such a system and how it might shape future decisions or be used by future governments.

Risks to mitigate include the concern that emergency measures become a permanent feature of society. The introduction of vaccine passports has the potential to pave the way to normalising individualised health risk scoring, and could be open to scope creep post-pandemic, including more intrusive data collection or a wider sharing of health information.
Governments should consider the risk of infrastructure passing to future governments with different political agendas, and how tools introduced for pandemic containment could be repurposed against marginalised groups or for repressive purposes. More prosaically, there are maintenance and continuous development costs to consider, as well as path dependency for future decisions generated by emergency practices becoming normalised.

Equally pressing is how one national scheme affects the global response to COVID-19. Despite international coordination, there are significant inequalities of access to vaccines, resulting in extreme differences in local manifestations of the virus – both in terms of health and economics. A legitimate concern is that wealthier countries rolling out vaccine passports could exacerbate global inequalities by incentivising vaccine hoarding. For example, vaccine passport schemes could encourage well-vaccinated and contextually low-risk countries to prioritise retaining booster shots to allow their citizens to take international holidays, rather than incentivise global vaccination – which is the only definitive route to controlling the pandemic.

Before progressing any further:

Governments should:

  1. Be up front as to whether any systems are intended to be used long term, and design and consult accordingly.
  2. Establish clear, published criteria for the success of a system and for ongoing evaluation.
  3. Ensure legislation includes a time-limited period with sunset clauses or conditions under which use is restricted and any dataset deleted – and structures or guidance to support deletion where data has been integrated into work systems, for example.
  4. Ensure legislation includes purpose limitation, with clear guidance on application and enforcement, and include safeguards outlining uses which would be illegal.
  5. Work through international bodies like the WHO, GAVI and COVAX to seek international agreement on vaccine passports and mechanisms to counteract inequalities and promote vaccine sharing.

Developers must:

  1. Engage in scenario-planning exercises that think ahead to how these tools will be used after the pandemic. This should include consideration of how these tools will be used in other contexts, whether those uses are societally beneficial, and whether tools can be time-limited to mitigate potentially harmful uses.

Introduction

The question of whether and how to implement COVID status certification schemes, or ‘vaccine passports’, has become an important topic across the globe. These schemes would allow differential access to venues and services on the basis of verified health information relating to an individual’s COVID-19 risk, and would be used to control the spread of COVID-19.

There is a diversity of approaches being pursued across the world, for multiple purposes. Some countries and states are moving ahead unilaterally: Israel, Denmark and New York State are already rolling out COVID vaccine passports, and the United Kingdom is undertaking a review into whether to implement a passport system.2

For use in international travel and tourism, groups like the Commons Project and the International Air Transport Association are developing applications for vaccine passports; the European Union has set out its plans for a Digital Green Certificate to enable travel within the bloc; and the World Health Organisation is developing a digital version of its International Certificate of Vaccination or Prophylaxis for use with COVID-19.

In this report, the Ada Lovelace Institute aims to clarify the key considerations for any jurisdiction considering whether and how to implement digital vaccine passports to control the spread of COVID-19.

Most of the evidence we received came from or focused on the United Kingdom, Europe and North America, so our requirements for socially beneficial vaccine passport schemes are likely to be particularly relevant to liberal democracies.

Defining ‘vaccine passports’

Finding the right phrase to describe these new forms of digital certification is difficult. ‘Passports’ may be more helpful than ‘certificates’ in that they imply that an individual’s status means something in terms of what they can access, rather than simply recognising that an event (a vaccination) has taken place. But they can also be confusing given conversations are happening about both international travel and domestic uses.

When schemes based on an individual having recovered from COVID-19 were first discussed, they were known as ‘immunity passports’ or ‘immunity certificates’. But the term ‘immunity’ was problematic for at least two reasons: proof of recovery from the disease was an imperfect proxy at best for immunity, with evidence still emerging about how protected a recovered patient might be; and the term ‘immunity’ itself has different meanings in individual and collective contexts (whether it protects an individual and to what extent, and whether it protects those they come into contact with).

Many countries and schemes, e.g. Israel’s domestic scheme and the European Union’s proposed scheme for travel, refer to ‘green pass’ or ‘green certificate’. This focuses on the authorisation part of the scheme – like a traffic light – rather than the health information aspect.

Most recently, ‘vaccine passports’ or ‘vaccine certification’ have become common. As described above, a variety of tests are now being used as part of existing and proposed systems, so the term can be misleading: it suggests that only vaccination will provide an individual with access and other benefits. Acknowledging this complexity, we have chosen to use ‘vaccine passports’ as an imperfect umbrella term to encompass digital certification schemes that use one or more of vaccination record, test result or ‘natural immunity’.

For the purposes of this study, a digital vaccine passport as defined here consists of four component functions and purposes (sketched in code after the list):

  • health information (recording and communication of vaccine status or test result through e.g. a certificate)
  • identity information (which could be a biometric, a passport, or a health identity number)
  • verification (connection of a user identity to health information)
  • authorisation or permission (allowing or blocking actions based on the health and identity information).
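To make the four components concrete, here is a minimal sketch of how they might be represented in code. All type and field names are illustrative assumptions, not drawn from any real scheme.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class HealthInformation:
    """Health information: vaccine status and/or test result."""
    vaccine_status: Optional[str]  # placeholder, e.g. "two doses, 2021-04-01"
    test_result: Optional[str]     # placeholder, e.g. "negative PCR, 2021-05-01"

@dataclass
class Identity:
    """Identity information: a biometric, a passport or a health ID number."""
    identifier: str

@dataclass
class VaccinePassport:
    health: HealthInformation
    holder: Identity
    issuer_signature: bytes  # verification: binds the holder to the health info

def authorise(passport: VaccinePassport,
              policy: Callable[[HealthInformation], bool]) -> bool:
    """Authorisation or permission: allow or block an action based on the
    verified health information, according to a context-specific policy."""
    return policy(passport.health)
```

Even this toy structure shows how the components compound: the authorisation step only means something if the verification step genuinely binds the health information to the person presenting it.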

This definition extends the function and purpose beyond a digital vaccination record, which enables healthcare providers to know which vaccine doses to administer and when. Sharing this verified health information through a vaccine passport is intended to provide information about an individual’s COVID-19 risk, both to themselves and to others, and to assess that information to make decisions about access and movement.

Modelling individual risk will always require simplification of a messier underlying reality that involves missing or inaccurate information, and uncertainty in how to interpret the information available. The question is whether those proxies for risk, despite their flaws, can enable individuals and third parties to distinguish between individuals who are more or less at risk of being infected with and spreading COVID-19.

Most models currently focus on displaying binary attributes (yes/no) of some combination of four different types of risk-relevant COVID-19 information:

  • A status based on medical process, evidenced through:
    • vaccination records, including date, type and doses
    • proof of recovery from COVID-19, e.g. receiving a positive PCR test, completing the requisite period of self-isolation and being symptom free.
  • A status based on direct observation of correlates of risk, evidenced through:
    • negative virus test results
    • antibody test results.

Other schemes might provide a more granular or ‘live’ assessment of risk by incorporating information such as background infection rates, demographic characteristics of users, or users’ underlying health conditions. These schemes are not covered in this report, although many of the points below can also relate to models that provide a stratified assessment of risk and subsequently more differentiated access.

However we choose to identify them, vaccine passports must be considered as part of a wider sociotechnical system – something that goes beyond the data and software that form the technical application (or ‘app’) itself, and includes: data; software; hardware and infrastructure; people, skills and capabilities; organisations; and formal and informal institutions.3

Identifying these components highlights how any successful system needs to consider not just the technical design questions within the app itself, but how it interacts with wider complex systems. Vaccine passports are part of extensive societal systems, like a public-health system that includes test, trace and isolate services, behavioural guidance on mask wearing and social distancing, or a wider biometrics and digital ID ecosystem.

How any sociotechnical system should be designed, what use cases are appropriate, what legal concerns need to be considered and clarified, what ethical tensions are most relevant, what publics deem acceptable and legitimate, and what future risks any system runs, are all questions that will need to be resolved within the particular context policymakers and developers are operating in.

A brief history of health-based, differential restrictions and vaccine certification

Discussions of vaccine certification are not unique to COVID-19. They have been around for as long as vaccines themselves – such as smallpox in pre-independence India.4 The idea of ‘immunoprivilege’ – that citizens identified as having immunity against certain diseases would enjoy greater rights and privileges – also has a long history, such as the status of survivors of yellow fever in the nineteenth-century United States.5

Yellow fever is the most commonly referenced example of existing vaccine certification for a specific disease. The International Certificate of Vaccination or Prophylaxis (ICVP), also known as the Carte Jaune or Yellow Card, was created by the World Health Organisation as a measure to prevent the cross-border spread of infectious diseases.6

Although it dates back in some form to the 1930s, it has been part of the International Health Regulations since 1969 (and was most recently updated in 2005). The regulations remove barriers to entry for anyone who has been vaccinated against the disease. Even when travelling from a country where yellow fever was endemic, showing a Yellow Card would mean someone could not be prevented from entering a country because of that disease.4

There are some important differences between yellow fever and COVID-19: yellow fever vaccines are highly effective and long lasting, while COVID-19 vaccines are still being developed and there is not yet evidence to show how long they are effective for. Transmission is also different: yellow fever spreads via vectors (infected mosquitoes) rather than directly from person to person, which is why there are no global outbreaks of yellow fever and it is easier to control the disease.8

As of May 2021, yellow fever is the only disease that is expressly listed in the International Health Regulations, meaning that countries can require proof of vaccination from travellers as a condition of entry. But there have been others, including smallpox (removed after the disease was eradicated), and cholera and typhus, both removed when it was decided vaccination against them was not enough to stop outbreaks around the world. The certificate has, historically, been paper-based, but there had been proposals and advocacy to digitise the system even before COVID-19.9

A brief history of COVID status certification

At the start of the pandemic, a number of countries demonstrated interest in some form of ‘immunity passport’ based on natural immunity and antibodies after infection with COVID-19 to restore pre-pandemic freedoms (including Germany and the UK, and a pilot in Estonia), but a lack of evidence about the protection acquired through natural immunity meant few schemes were used in real-world scenarios.10

In April 2020, the World Health Organisation (WHO) put out a statement saying there was ‘not enough evidence about the effectiveness of antibody-mediated immunity to guarantee the accuracy of an ‘immunity passport’ or “risk-free certificate”’, and that ‘the use of such certificates may therefore increase the risks of continued transmission’.11

The approval and roll-out of effective vaccines re-energised the idea of restoring personal freedoms and societal mobility based on COVID vaccine passports. Israel implemented a domestic ‘Green Pass’ in February 2021,12 the European Commission published plans for a Digital Green Certificate in March 2021,13 and Denmark began using a domestic ‘Coronapas’ in April 2021.14

The WHO has shifted its stance by announcing plans to develop a digitally enhanced International Certificate of Vaccination and has established the Smart Vaccination Certificate consortium with Estonia. However, as of April 2021, it remains of the view that it ‘would not like to see the vaccination passport as a requirement for entry or exit because we are not certain at this stage that the vaccine prevents transmission’.15

IBM has launched Digital Health Pass,16 integrated with Salesforce’s employee management platform Work.com,17 and has worked with New York State to launch Excelsior Pass.18 CommonPass, supported by the World Economic Forum, and the International Air Transport Association (IATA)’s Travel Pass are both being trialled by airlines.19

The Linux Foundation Public Health’s COVID-19 Credentials Initiative and the Vaccination Credential Initiative, which includes Microsoft and Oracle, are pushing for open interoperable standards.20 A marketplace of smaller, private actors has also emerged offering bespoke solutions and infrastructures.21

In the UK, the Government initially appeared reluctant, saying it had ’no plans’ to introduce a scheme, and that such a scheme would be ’discriminatory’.22 Other ministers left the door open to digital passporting schemes when circumstances changed,23 and the Government appeared to be keeping its options open by funding a number of startups piloting similar technology, tendering for an electronic system for citizens to show a negative COVID-19 test, and reportedly instructing officials to draw up draft options for vaccine certificates for international travel.24

As part of its roadmap out of lockdown in February, the Government announced a review into the possible use of certification.25 This was followed by a two-week consultation and an update in April announcing trials of domestic COVID status certification for mass gatherings, theatres, nightclubs and other indoor entertainment venues.26

For a comprehensive overview of international developments, see the Ada Lovelace Institute’s international monitor of vaccine passports and COVID status apps.

The hopes for vaccine passports

There are intuitive attractions to the idea of a COVID vaccine passport scheme, and particularly in the hope that a better balance could be found between economic activity and community safety, by allowing a more fine-grained and targeted set of restrictions than sweeping measures of national lockdowns. Such hopes are particularly located in the prospect of a silver bullet that may help return life to something resembling normal, after more than a year of social anxiety and economic damage.

A number of arguments have been put forward for the usefulness of COVID vaccine passports, including:

  • Public health: Those who are certified as unable to transmit the virus are allowed to take part in activities that would normally present a risk of transmission. Being able to take part in such activities, see family and friends and visit hospitality and entertainment venues will have a positive effect on wellbeing and mental health.
  • Vaccine uptake: The use of certification to provide those who have been vaccinated with greater access to society could incentivise vaccination among those who are able to be safely immunised.
  • Personal liberty: Enhancing the freedoms of those who have a passport to do things that would otherwise be restricted due to COVID-19 (always noting that granting permissions for some will, in relative terms, increase the loss of liberty experienced by others). This could have a particularly profound benefit for those facing extreme harm and isolation due to the virus, for example those suffering domestic abuse, or in care homes and unable to see relatives.
  • Economic benefits: Supporting industries struggling in lockdown (and the wider economy) by enabling phased opening, for example in entertainment, leisure and hospitality.
  • International travel: A passport scheme would allow people to travel for business and pleasure, with economic benefits (particularly for the tourism industry) and social advantages (reuniting families or holidays).

Science and public health

The foundation of any COVID status certificate or ‘vaccine passport’ is that it allows stratification of people by COVID-19 risk and therefore allows a more fine-grained approach to preserving public health, keeping the community safer with fewer restrictions. Vaccine passports allow only those who pose an acceptably lower risk to others to take part in activities that would normally present a risk of transmission, e.g. working in care homes, travelling abroad, or entering venues and events such as pubs, restaurants, music festivals or sporting fixtures.

Therefore, the first question to ask of a COVID vaccine passport system is whether an individual’s status – for example, that they have been vaccinated – conveys meaningful information about the risk they pose to others. Does the scientific evidence base we have on COVID-19 vaccines, antibodies and viral testing support making that link, and if so, how certain should we be about an individual’s risk based on those proxies?

The development and deployment of a significant number of viable vaccines in just over a year is a remarkable scientific achievement. Tests have also rapidly improved in quality and quantity, and scientific understanding of COVID-19 infection and transmission has improved greatly since the beginning of the pandemic. In spite of these innovations, the novelty of the disease unfortunately means the answers to significant questions are still uncertain.

Vaccination and immunity

Our knowledge of COVID-19 vaccine efficacy against the virus’s different strains, and of immunity following an infection, continues to evolve. Key questions about vaccines include:

  • What is the effect of vaccines on those vaccinated?
  • What is the effect of vaccines on the spread of the disease to others?
  • What is the efficacy of vaccines against different emerging variants?
  • What is the efficacy of vaccines over time?

Our expert deliberative panel expressed concern about developing any system of COVID vaccine passport based on proof of vaccination while so much is still unknown – as systems could be built on particular assumptions that would then change. Any system that was developed would have to be flexible enough to deal with emerging evidence.

One certainty is that no vaccine is currently entirely effective for all people. Although evidence is encouraging that the current COVID-19 vaccines offer strong protection against serious illness, vaccination status does not offer conclusive proof that someone vaccinated cannot become ill.

The evidence is even more emergent on the effect of vaccines on the transmission of COVID-19 from one person to another. Any public health argument in favour of introducing vaccine passports relies on evidence that someone being vaccinated would protect others, but this remains unclear.27

A vaccine can provide different types of immunity:

  • Non-sterilising immunity, where a vaccinated individual is protected from the effects of the disease but can still become infected and transmit it (and may have an asymptomatic case where previously they would have displayed symptoms).28
  • Sterilising immunity, where a vaccinated person does not get ill themselves and cannot transmit the disease.

Experts in our deliberation identified a ‘false dilemma’ in discussions about the efficacy of these different types of immunity: even vaccination that confers ‘non-sterilising’ immunity should still reduce the spread of the disease across a population, as infected individuals will have weaker forms of it and fewer ‘virions’ (infectious virus particles) to spread. Emerging evidence suggests that ‘viral load’ is lower in vaccinated individuals, which may have some effect on transmission, and one study (in Scotland) found the risk of infection was reduced by 30% for household members living with a vaccinated individual, but much remains unknown.29

An issue raised in the deliberation was that focusing on individual proof of vaccination might underemphasise the collective nature of the challenge. Vaccination programmes aim at (and work through) a population effect: that when enough people have some level of protection, whether through vaccination or recovery from infection, the whole population is protected through reaching herd immunity. Even following vaccination, the UK Government’s Scientific Advisory Group for Emergencies offers caution: ‘Even when a significant proportion of the population has been vaccinated lifting NPIs [non-pharmaceutical interventions, like social distancing] will increase infections and there is a likelihood of epidemic resurgence (third wave) if restrictions are relaxed such that R is allowed to increase to above 1 (high confidence)’. This pattern of vaccination and infection may be occurring in Chile, where high vaccination rates have been followed by a surge in cases.30

Different vaccines have different levels of efficacy when it comes to protecting both the person receiving the vaccination and anyone they come into contact with. This is partly because the vaccines are based on different underlying technologies.

As of May 2021, 12 different vaccines are approved or in use around the world, utilising messenger ribonucleic acid (mRNA), viral vectors, inactivated coronavirus, and virus-like proteins:31

Vaccines approved for use, May 2021

Different levels of efficacy will also be partly due to different individuals responding differently to the same vaccine – the same vaccine may be effective in protecting one recipient and less so in protecting another.

The efficacy of the vaccines may change with different variants of the disease. There are concerns that some vaccines, for example the current Oxford-AstraZeneca vaccine, may be less effective against the so-called South African variant.32 There will continue to be mutations in COVID-19, such as the E484K mutation found in the Brazilian, South African and Kent strains of the disease (an ‘escape mutation’, which can make it easier for a virus to slip through the body’s defences) and the E484Q and L452R mutations present in many cases in India.33 Such mutations make understanding of vaccination effects on individual transmission a moving target, as vaccines must be assessed against a changing background of dominant strains within the population.

Booster vaccinations against variants may help manage the issue of strains. It is possible these may be necessary, as the efficacy of vaccines against any strain may change over time; the WHO has said it is ‘too early to know the duration of protection of COVID-19 vaccines’.34 With the disease only just over a year old and the vaccines having been deployed only in the last few months, it will be some time before conclusive evidence is available on this.

Any vaccine passport system would need to be dynamic – taking into account the differing efficacy of different vaccines, known differences in efficacy against certain variants and the change in efficacy over time – as well as representing the effect of the vaccine on the individual carrying a vaccine passport.

There are also questions about any lasting immunity acquired by those recovering from COVID-19. The WHO has noted that while ‘most people’ who recover from COVID-19 develop some ‘period of protection’, ‘we’re still learning how strong this protection is, and how long it lasts’.35

Inclusion of testing

A number of COVID vaccine passport schemes in development (and the UK Government’s review into what it calls COVID status certification) may allow a combination of three characteristics to be recorded and used in addition to vaccination: recovery from COVID-19, testing negative for COVID-19, or testing positive for protective antibodies against COVID-19.

We can group these characteristics into statuses based on medical process, and those based on medical observation.

Status based on medical process includes vaccination status and proof of recovery from COVID-19. In both cases, a particular event – recovering from an infection or having a vaccination – that might have some impact on an individual’s immunity is taken as a proxy for them posing less risk. As described above, the potential efficacy of this must be understood in the context of what remains unknown about an individual’s ability to spread the disease, their own immunity and the change in their immunity over time.

Status based on medical observation – or direct observation of results correlating to risk – includes two forms of testing: a negative test result for the virus, or a positive test result for antibodies that can offer protection against COVID-19.36 Incorporating robust tests might provide a better, though very time-limited, measure of risk (the biggest challenges to this would be practical and operational). Status based on test results would also avoid the need for building a larger technical infrastructure, particularly one involving digital identity records. But current testing mechanisms do have drawbacks.

There are two main kinds of diagnostic tests that could be used for negative virus test certification:

  1. Molecular testing, which includes the widely used polymerase chain reaction (PCR) tests, detects the virus’s genetic material. These tests are generally highly accurate at detecting negative results (usually higher than 90%), but their exact predictive value depends on the background rate of COVID-19 infection,37 and on the point in the infection at which the test is taken.38 (A worked sketch after this list illustrates how predictive value shifts with the background rate.) These tests often detect the presence of coronavirus for more than a week after an individual stops being infectious. They also need to be processed in a lab – during which time an individual may have become infected and infectious.
  2. Antigen testing, which includes the rapid lateral flow tests used in the UK Government’s mass-testing programmes, detects specific proteins from the virus. If someone tests positive, the result is generally accurate – but as these types of test only detect high viral loads, positive cases can be missed (a ‘false negative’), particularly when self-administered. Certificates based on antigen tests are therefore likely to have a high degree of inaccuracy – tests might be useful for screening out and denying entry (a ‘red light’) rather than granting it (a ‘green light’) at a specific point in time. They are unlikely to be useful for any kind of durable negative certification.
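
To make the point about background rates concrete, the following is a minimal sketch (in Python) of how a test’s negative predictive value – the probability that someone with a negative result is actually uninfected – falls as prevalence rises. The sensitivity and specificity figures are hypothetical placeholders, not measurements of any particular test.

```python
# Illustrative only: how a test's negative predictive value (NPV)
# depends on the background infection rate (prevalence).
# Sensitivity/specificity values are hypothetical placeholders.

def negative_predictive_value(sensitivity: float, specificity: float,
                              prevalence: float) -> float:
    """P(not infected | negative result), via Bayes' theorem."""
    true_negatives = specificity * (1 - prevalence)    # healthy, test negative
    false_negatives = (1 - sensitivity) * prevalence   # infected, test negative
    return true_negatives / (true_negatives + false_negatives)

for prevalence in (0.001, 0.01, 0.05, 0.20):
    npv = negative_predictive_value(sensitivity=0.90, specificity=0.99,
                                    prevalence=prevalence)
    print(f"prevalence {prevalence:6.1%}: NPV {npv:.2%}")
```

With these illustrative figures, a negative result is almost conclusive when prevalence is 0.1%, but at 20% prevalence roughly two to three in every 100 negative certificates would belong to someone who is in fact infected.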

Antibody tests, meanwhile, confirm that an individual has previously had the virus. There are two sources of variability from these tests. First, people may have a variable antibody response when they are infected with COVID-19 – while most people infected with SARS-CoV-2 display an antibody response between 10 and 21 days after being infected, detection in mild cases can take longer, and in a small number of cases antibodies are not detected at all.39 Second, the tests themselves are not completely accurate, and the accuracy of different tests varies.40

It also remains unclear how an individual antibody test result should be interpreted. The European Centre for Disease Prevention and Control advises that it is currently unknown, as of February 2021, whether an antibody response in a given infected person confers protective immunity, what level of antibodies is needed for this to occur, how this might vary from person to person, or what impact new variants have on the protection existing antibodies confer.41 The longevity of the antibody response is also still uncertain, but it is known that antibodies to other coronaviruses wane over time.42

Questions remain as to how viable rapid and highly accurate testing is, particularly tests that can be completed outside a lab setting. Although a testing regime allowing entry to venues could avoid a number of the challenges associated with using vaccination status (extensive technical infrastructure, access to health data, possible discrimination against certain groups), it also presents practical and logistical challenges – from administering such tests for access to a sporting event or hospitality venue, to the feasibility of regularly testing children – as well as the uncertainty around the accuracy of the tests themselves.

Risk and uncertainty

At a time when uncertainty – about vaccine efficacy, when life will return to ‘normal’ and much else besides – is endemic, it is natural that politicians, policymakers and the public alike are grasping for certainty. There may be a danger in seeing COVID vaccine passports as a silver bullet returning us quickly to normality, with passports suggesting false binaries (yes/no, safe/unsafe, can access/cannot access) and false certainty, at a time when governments need to be communicating uncertainty with humility and encouraging the public to consider evidence-based risk. Our expert panel raised concerns that the UK Government’s claim to be ‘led by the science’ brought disadvantages, encouraging a simplistic view of science as infallible and squeezing out space for nuance and debate.

Conveying a proper sense of uncertainty and risk will be important as individuals make decisions about their own health that may also have an impact on collective public health. For example, if I have been vaccinated, but know there is a chance the vaccine may not be fully effective, how does that change how I assess the risk of engaging in certain behaviours? What information will I need to assess my risk of spreading the disease to others? And does a venue that admits me understand that a passport may provide a false sense of certainty that I do not have, or cannot easily spread, the disease?

Any reliance on proof that the process of vaccination has been completed will also require careful consideration about the actual change in risk as a result of that system: experts raised the risk that use of passports could increase the spread of the disease, as individuals who believe themselves to be completely protected engage in riskier behaviour. A review of the limited evidence so far suggests vaccine passports could reduce other protective behaviours.43

While vaccine passports could make people more confident in some areas, for example by providing reassurance to vulnerable people who have been isolating, they could also slow down the return to normality by suggesting to some that their fellow citizens are a permanent threat. Creating categories of ‘safe’ and ‘unsafe’ could keep risk salient in people’s minds even once the risk is reduced (for example, to a risk closer to that of flu: dangerous but not overwhelmingly so), and so could be counterproductive to reopening and restarting society and the economy.

Recommendations and key concerns

If a government wants to roll out its own COVID vaccine passport system, or permit others to do so, there are some significant risks it needs to consider and mitigate from the perspective of public health.

The first is that vaccine passport schemes could undermine public health by treating a collective problem as an individual one. Vaccine passport apps could potentially undermine other public health interventions and suggest a binary certainty (passport holders are safe; those without are risky) that does not adequately reflect a more nuanced and collective understanding of risk posed and faced during the pandemic. It may be counterproductive or harmful to encourage risk scoring at an individual level when risk is more contextual and collective – it will be national and international herd immunity that will offer ultimate protection. Passporting might foster a false sense of security in either the passported person or others, and increase rather than decrease risky behaviours.35

The second is the opportunity cost of focusing on COVID vaccine passport schemes at the expense of other interventions. Particularly for those countries with rapid vaccination regimes, there may be a comparatively narrow window where there is scientific confidence about the impact of vaccines on transmission and enough of a vaccinated population that it is worth segregating rights and freedoms. Once there is population-level herd immunity or COVID-19 becomes endemic with comparable risks to flu, it will not make sense to differentiate and a vaccine passport scheme would be unnecessary.

COVID vaccine passport schemes bring political, financial and human capital costs that must be weighed against any benefits. They might crowd out more important policies to reopen society more quickly for everyone, such as vaccine roll-out, test, trace and isolate schemes, and other public health measures. Focusing on vaccine passports may give the public a false sense of certainty that other measures are not required, and lead governments to ignore other interventions that may be crucial.

If a government does want to move forward, it should:


Set scientific preconditions. To move forward, governments should have a better understanding of vaccine efficacy and transmission, durability and generalisability, and evidence that use of vaccine passports would lead to:

  • reduced transmission risk by vaccinated people – this is likely to involve issues of risk appetite, as the risk of transmission may be reduced but will probably not be nil.
  • low ‘side effects’ – that passporting won’t foster a false sense of security in either the passported person or others, which might lead to an increase in risky behaviours (not following required public health measures), with a net harmful effect. This should be tested, where possible, against the benefits of other public health measures.


Communicate clearly what certification means. Whether governments choose to issue some kind of COVID status certification, sanction private companies to do so or ban discrimination on the basis of certification altogether, individuals will make judgements based on the health information underlying potential schemes – including in informal settings such as gatherings with friends or dating.


Governments must clearly communicate the differences between different types of certification, the probabilistic rather than binary implications of each, and the relative risks individuals face as a result.


To support effective communication, governments – regardless of whether they themselves intend to roll out any certification scheme – should undertake further quantitative and qualitative research into the effect of different framings and phrasings on public understanding of risk, to determine how best to communicate the efficacy of each kind of certification.


Purpose


It is important that governments state the purpose and intended effect of any COVID vaccine passport scheme, to give clarity both to members of the public as to why the scheme is being introduced and to businesses and others who will need to implement any scheme and meet legal requirements in frameworks like data protection.

It is hard to model, assess or evaluate vaccine passports at a general level, so governments will need to state the purpose of any system, what it will be used for and, crucially, what will not be included in any such system, i.e. if particular groups will be exempt, or if particular settings will be off-limits.

Use cases

In debates, particular use cases have focused on international travel, indoor entertainment venues and employment.

International travel

Some organisations, like the Tony Blair Institute, have argued that the way to allow people to travel internationally again will be for travellers to show their current COVID-19 status – either proof of vaccination or testing status.45 Already, many countries require proof of vaccination, proof of recovery or negative COVID-19 test results as a condition of entry. Much of the industry focus for vaccine passports has been on airports and international travel.

International travel already has existing norms around restricting entry to places at specific checkpoints, based on information contained in passports, and the infrastructure to support such a system. Further, passports are already linked to biometrics and sometimes to digital databases, as with the USA’s ESTA (Electronic System for Travel Authorization).

In these circumstances, countries will have an obligation to provide their citizens with proof of vaccination in order to allow them to travel to countries that require it. Once a system is in place to allow proof of vaccination for travel to some countries, the marginal cost for further countries to require proof lowers, and a normalised precedent is set by other travellers. It is easy to see international COVID vaccine passport schemes coming into place even if initially only a small number of countries strongly support them.

The WHO maintains that it does not recommend proof of COVID-19 vaccination as a condition of departure or entry for international travel.46 However, the WHO is consulting on ‘Interim guidance for developing a Smart Vaccination Certificate’.47 The question of COVID vaccine passport systems for international travel now seems to centre on standard-setting, ensuring equity and establishing the duration of any scheme, rather than on whether such schemes should exist at all.

Indoor entertainment venues

Indoor entertainment venues such as theatres, cinemas, concert venues and indoor sports arenas all have similar characteristics, with large groups of people coming together and remaining seated or standing in close proximity for hours. This makes them both higher-risk and discretionary activities, and many countries have focused on them as an opportunity to allow opening, or to reassure customers in attending.

Examining the use case of opening theatres only to those with some form of COVID status certification highlights how many of the logistical issues might play out in a particular context. First, there will be other activities related to the theatre trip – particularly using public transport to reach the venue, or meeting in a pub beforehand. One of the UK Government’s scientific advisory bodies considered these may pose a higher transmission risk than the activity itself.48

Second, there will be practical and logistical challenges at the theatre. Because tickets are sold through secondary sellers as well as by the venue, it is likely that status could only be checked at the theatre on arrival. Any certification system would need to be available to all visitors, including international ones. If tests at the venue could also be used to permit entry, there would be logistical challenges (for example, where would the tests be administered, and by whom?) that could make the cost prohibitive for theatres.

The increasing role many theatres and arts organisations play in their community could also suffer. Disparities in vaccine uptake, particularly between communities of different ethnicities, could mean COVID vaccine passports are counterproductive to theatre’s goals of inclusivity and acting as a shared public space. According to one producer, ‘the application of vaccine passports for audiences are likely to fundamentally alter a relationship with its local community.’49

Others in the arts,50 sport and hospitality acknowledge these challenges but believe they can be overcome. In the UK, a number of leading sports venues and events – including Wimbledon (tennis), Silverstone (motor racing), the England and Wales Cricket Board and the main football and rugby leagues – have welcomed the Government’s review and would welcome early guidelines to support planning.51

Employment (and health and safety)

Employment-related use cases discussed in the media include proposals that frontline workers, particularly in health and social care, would have to be vaccinated to work in certain settings (especially in care homes). Other employers – such as plumbing firm Pimlico Plumbers in the UK – have suggested they may only take on new staff who have been vaccinated.52 Staff may feel more comfortable returning to work, knowing that colleagues have been vaccinated. It is therefore an important use case for governments to address (and one they may have to grapple with themselves, given that they are also employers).

The situation will vary from jurisdiction to jurisdiction. In the UK, the Health and Safety at Work Act (1974) requires employers to take care of their employees and ensure they are safe at work. Given that, employers might think it prudent to ask themselves whether vaccination could play a role in that process.

The ‘hierarchy of controls’ applies in workplace settings in the UK, and may also be a helpful guide for other jurisdictions.53 Controls at the top of the hierarchy are most effective in protecting against a risk and should be prioritised:

  • Elimination: Can the employer eliminate the risk by removing a work activity or hazard altogether? This is not currently possible in the case of COVID-19. Vaccination and even testing could not guarantee this, given the still-emerging scientific evidence on vaccine impact on transmission, and possible false negatives in testing.
  • Substitution: Can the hazard be replaced with something less hazardous? Working from home rather than at the place of work would count as a substitution.
  • Engineering controls: This refers to using equipment to help control the hazards, such as ventilation and screens.
  • Administrative controls: This involves implementing procedures to control the hazards – with COVID-19, these might include lines on the floor, one-way systems around the workplace and social distancing.
  • Personal protective equipment (PPE): This is the last line of defence, to be relied on only if measures at all other levels have been tried and found insufficient. Even if one argued that a vaccine counted as PPE, it would only be a last line of defence, and no substitute for employers taking other actions first.

In most settings, it is likely to be difficult for an employer to argue that vaccination could be a primary control in ensuring the safety of most workplaces. Other measures, such as social distancing, better ventilation and allowing employees to work from home, are higher up the hierarchy and likely to deliver some benefits.

There may be some workplace settings where different considerations might apply – for example, in healthcare. The UK Government has suggested that care home staff might be required by law to have a COVID-19 vaccination, and is consulting on the issue.54 Many have cited hepatitis B vaccination as a precedent. However, this is not legally required in the way many people have understood – it is a recommendation of the Green Book on immunisation that many health providers have considered proportionate and therefore require their staff to have as part of their health and safety policies.55 This will vary across workplaces: if an employer carried out a risk assessment that found that employees had to have a vaccination, proportionality would depend on the quality of the risk assessment.56 There may be other examples of measures being considered proportionate in some work settings but not in others – for example, regularly testing staff working on a film or television production might be sensible, given that any outbreak would shut the production down at huge cost, but not in an office, where other measures can be taken.

What would happen if an employer tried to implement a ‘no jab, no job’ policy, where someone could not work without a vaccine? The UK’s workplace expert body, ACAS (the Advisory, Conciliation and Arbitration Service), recommends that employers should:

  • not impose any such decision, but discuss it with staff and unions (where applicable)
  • support staff to get the vaccine, rather than attempting to force them to do so
  • put any policy in writing and ensure it is in line with existing organisation policies (for example, disciplinary and grievance policies), and probably do so after receiving legal advice.57

Discussions with employees should also surface any other concerns. These may include scope creep: employees might worry that employers will want further information – for example, why an employee cannot receive a vaccine – which might require disclosing personal information (pregnancy, for example) or other personal data (such as venues an employee had checked into). Once an employer has invested in a system, there may be concerns about what else they might want to use it for – there are concerns about growing workplace surveillance in general,58 especially given the changes the pandemic has made to working patterns. There may also be concerns that if an employer could require vaccination, they could also require (for example) that employees return physically to the office rather than being able to work from home.

If any certification system is more than temporary, other concerns include new forms of discrimination opening up – what if an employee cannot have the vaccine, is therefore banned from business travel, and is passed over for promotion opportunities as a result?

A ‘no jab, no job’ employer could face the risk of legal action in the UK, particularly on discrimination grounds – because not everyone is able to choose to have the vaccine. The UK Government’s equalities body, the Equality and Human Rights Commission, has suggested such a policy may not be possible.59 Creating a COVID vaccine passport that was used to relax other health and safety measures could also pose rights concerns, particularly for staff in high-contact, face-to-face services such as hospitality or education.60 If evidence reveals that COVID vaccine passport schemes have a limited impact in controlling the spread of the virus, those who have become infected as a result of vaccine passport use, and then developed serious or even fatal illness, may have had their right to life (Article 2 ECHR) or right to respect for private and family life (Article 8 ECHR) violated.

The European Court of Human Rights has previously ruled that if a government knowingly failed to take measures to protect workers from workplace hazards, there would be a violation of the right to life (if the worker died from the hazard) or the right to respect for private and family life (if the worker developed a serious disease).61 In this case, the workplace hazard would be the risk of infection from other members of staff and their customers. Of course, if the certification scheme demonstrably improved the safety of staff compared to existing COVID-19 mitigation measures, there is the possibility of a reversed scenario, where government and employers have an obligation to introduce such schemes to protect their employees’ right to life and right to respect for private and family life.

All this underlines the importance of having clear scientific evidence about the impact of vaccinations on an individual’s risk to themselves and their risk of transmission to others, before schemes are implemented. This would allow concerns to be properly weighted, legal clarification to be given, and risks to be clearly communicated. It also underlines the need for employers to be given legal clarity and guidance from governments on what they can and should (and cannot and should not) do. Otherwise, the burden of decision and implementation will fall on many workplaces already stretched by the pandemic, and leave employees relying on the decisions made by their employers.

Exemptions and exceptions

It is also important to consider what use cases are undesirable and unacceptable and thus should be explicitly prohibited by governments.

Places

Some places are essential to an individual’s participation in society. For example, many countries judged supermarkets so essential that they remained open even during the tightest lockdown restrictions. Essential venues may include but are not limited to:

  • supermarkets and other essential retail, e.g. pharmacies or home repair
  • medical establishments, e.g. GPs, hospitals, other clinics
  • the justice system, including courts and police stations.

Public support for certification in these and similar settings tends to be lower than for what might be considered more ‘discretionary’ activities, such as international travel, sporting events, gyms and entertainment, and hospitality venues (see Public legitimacy). But there are trade-offs to be made when considering these venues, too, such as mental health and economic benefits.

People

As well as particular places, there may be particular groups of people who could be considered for exemptions, with medical or other reasons making it difficult or impossible for them to be vaccinated. Recommendations are changing for some vaccines, but currently these might include, but are not limited to:

  • pregnant women
  • children
  • the immunocompromised
  • those with learning disabilities who are unable to be vaccinated or tested regularly.

In Israel, children under the age of one were excluded from the vaccine passport scheme, but those between the ages of one and sixteen were unable to access the Green Pass system via vaccination and could only use it if they could provide proof of recovery from COVID-19.62 In contrast, the Danish Coronapas system, which does provide a testing alternative for those who are not yet vaccinated, has chosen to exempt children under 15 from the scheme.63

Recommendations and key concerns


Governments need to define clearly where the use of COVID vaccine passport schemes will be acceptable and the purpose behind introducing any such scheme. They should set out the scientific evidence as to the impact of schemes in different settings. They should also consider whether existing processes and structures could be adapted, and if not, explain clearly why a new system is required.


They should also consult with representatives of workers and employers and issue clear guidance on the use of COVID vaccine passports in the workplace, to reduce the burden on employers to make these difficult decisions and ensure that workers are not at the mercy of poor decisions by individual employers.


Governments should also define where the use of certification will never be acceptable, such as to access essential services, and what exemptions will be permitted, for example for those who are unable to be vaccinated.


Law, rights and ethics

The introduction of any vaccine passport system inevitably intersects with a wide variety of legal concerns, including equality and discrimination, data protection, employment, health and safety, and wider human rights laws. Any scheme will also have to make clear trade-offs between ethical and societal commitments, and this will be complicated by intersections between legal concerns and broader ethical and societal concerns. These are likely to manifest in the domain of rights; on questions of individual liberty, societal equity and fairness; risks of new forms of stratification and discrimination, both within societies and across borders; and new geopolitical tensions.

In this chapter we examine these legal, ethical and rights concerns in context.

Legal systems are inherently specific to their jurisdictions. There is some commonality across legal regimes, arising from shared histories, international agreements, and from many jurisdictions’ responses to similar issues over time. For example, the International Bill of Human Rights and its constituent parts – the Universal Declaration of Human Rights (UDHR), the International Covenant on Civil and Political Rights (ICCPR) and the International Covenant on Economic, Social and Cultural Rights (ICESCR) – form an international framework that informs and underpins the legal protection of human rights in jurisdictions around the world.64

As described above, much of the evidence compiled in this report reflects laws operating in the UK and Europe. Comparing the legal dimensions of certification schemes across jurisdictions is beyond the scope of this report but, given the international alignment on human rights, some analysis may be transferable to jurisdictions not directly considered here.

Similarly, the view below represents a broadly Western set of ethical and social values. The findings may be useful to other jurisdictions, recognising that alternative conditions and cultures may represent substantially different concerns, or take universal issues and interpret or weight them differently.

Further, counterfactual possibilities are an important consideration in ethical analysis of COVID status certification systems. These systems will only represent one policy intervention in a full complement of public health, economic and social policy that governments can make to mitigate the effects of the pandemic. The feasible alternatives to COVID vaccine passports that are under consideration by governments – for example, whether to continue full lockdowns, implement slower general reopening or propose a full reopening against different background risks from COVID-19 – are therefore important in any analysis of their ethics, in evaluating the marginal economic, societal and health benefits and harms.65

Principal areas of debate have focused on personal liberty, privacy and other human rights, fairness, equality and non-discrimination, societal stratification and international equity.

Personal liberty

Over the last year, civil liberties have been restricted in the form of lockdowns and other public health restrictions. During a pandemic, this is justified by the fact that an infected person can cause harm and death to others. For COVID-19 in particular, widespread transmission in communities and high rates of transmission without symptoms mean that an individual’s risk to others is difficult to determine, and therefore universal restrictions are justified to prevent harm to others.35

Some bioethicists have argued that there are strong ethical arguments in favour of COVID status certification systems that use antibody tests and/or proof of vaccination.67 They argue that these systems represent the least restrictive option for individual liberties, without causing additional harm to others, when compared with other pandemic responses such as lockdowns: those who can demonstrate that they are highly unlikely to spread COVID-19 no longer pose a risk to others’ right to life, and so it is unjustified to restrict their civil liberties.

The argument centres on an individual being able to prove, through vaccination or antibodies, that they do not pose a substantial risk to others, in order to lift restrictions on that individual’s liberty. This argument does not necessarily require vaccination or natural immunity to COVID-19 to be perfect: we commonly accept a level of risk in our everyday lives – infectious diseases like flu, for example, are considered a tolerable risk, to be managed without additional restrictions.

This argument requires a vaccine or natural immunity to reduce risk to an acceptable level to remove the justification for restrictions. The strength of this argument therefore turns on what level of risk is acceptable for a given society, the impact vaccinations and antibodies have on transmission and therefore risk to others, and the degree of certainty we are willing to accept in the evidence on transmission.

If all those conditions can be met, then COVID status certification is argued to represent a ‘Pareto improvement’ on lockdown measures – an improvement for some people without anyone else’s situation worsening – i.e. it expands the number of people who can exercise their personal liberty without infringing on the liberties of others or increasing the risk of harm to others.

Any COVID status certification scheme should also ensure it does not arbitrarily interfere with individual human rights, in particular the right to respect for private life, the rights to freedom of assembly and movement and the right to work.

State-sanctioned systems which require the collection and disclosure of personal information fall within the scope of the right to privacy guaranteed by provisions such as Article 8 of the ECHR and implemented in national laws, e.g. in the UK, under the Human Rights Act 1998.68 Vaccine passport systems which rest on the generation, collation and dissemination of sensitive personal health information, and which may also permit monitoring of individuals’ movements by a range of actors, will be permissible when they are in pursuit of legitimate aims that justify interference with the right, including ‘the protection of health’ and ‘the economic well-being of the country’. However, even if these aims are clearly being pursued, any interference with this right must satisfy the cumulative tests of legality, necessity and proportionality:69

  • The legality test requires that COVID status certification schemes interfering with the right to respect for private life must have a basis in domestic law and be compatible with the rule of law.
  • The necessity test demands that the measures adopted address a pressing social need.
  • The proportionality test requires that the measures taken by public authorities are proportionate to the legitimate aims pursued and entail the least restrictive viable solution.

COVID status certification schemes may well be able to meet these tests, given the scale of physical and mental harms caused by the COVID-19 pandemic, directly and indirectly, and the economic damage that has resulted. However, again, decision-makers will need to demonstrate they have sufficient scientific evidence to justify the necessity and proportionality of these schemes. Further, the requirement of proportionality necessitates transparently weighing these schemes against alternatives, such as greater investment in test, trace and isolate schemes (e.g. additional support payments and sick pay), and considering the marginal protection to health, benefit to economic wellbeing, and restrictiveness of certification schemes.

Other human rights, including the right to work, and the freedoms of assembly and movement, may also be engaged by vaccine passport systems, and restrictions on those rights must similarly be justified in accordance with the tests for permissible limitations. The implications of vaccine passports for the right to freedom of assembly deserve particular scrutiny, in light of the protests that have occurred since the start of the pandemic, and the responses to protests such as Black Lives Matter in summer 2020. During moments of exceptional societal upheaval, peaceful assembly and protest remain critical tools for ensuring justice and demanding democratic accountability. Although the protection of public health constitutes a legitimate purpose to limit the exercise of such rights, there is a legitimate concern that restrictions on assembly and protest may be disproportionately applied in the name of pandemic prevention.70 Consideration should be given to the potential for misuse of a vaccine passport system by a government with ulterior motives, or its repurposing in future by subsequent administrations.

Fairness

Arguments for and against vaccine passports centre around fairness: some have argued that until everyone has access to an effective vaccine, any system requiring a passport for entry or service will be unfair.71 Responses to this have suggested introducing proof of vaccination requirements only once vaccines are widely available, and exempting those who are not eligible to be vaccinated from the need to prove their vaccination status. (Note that, epidemiologically speaking, a system would cease to be useful once herd immunity had reached a level sufficient to protect against transmission.)72

Others have argued that while it is true that COVID status certification is ‘unfair’ in the sense that only some people will be able to access it, that differential access is not arbitrary and is instead based on a genuine reduction in risk associated with those individuals who have been certified.73 There is therefore a legitimate reason to afford them different treatment.

It is further argued that pandemics are necessarily unfair and responses to them, such as lockdowns, have differential effects even if the same rule is applied to all. Some can work from home in secure jobs, while others lose their jobs and businesses, and those providing healthcare and essential services are required to expose themselves to risk. This, it is argued, is unfair under another view of fairness. The debate is given further complexity by introducing choices between different kinds of unfairness and questioning whether that unfairness has a legitimate underpinning.

Some argue that the benefits of COVID status certification schemes could also spill over to those not eligible. For example, greater economic activity would allow the continued existence of hospitality, leisure and cultural venues that might otherwise have been forced to close, and would preserve them for others to access once they become eligible for certification or once restrictions are lifted for all.

On the other hand, certification schemes may exacerbate inequalities between those who might be free to return to work or seek certain kinds of employment, and those uncertified who cannot. Existing distrust of the state, identity infrastructure and vaccines could put some groups at a particular disadvantage. Globally, access to digital technology, forms of identification, tests and vaccines is already unequal, and COVID status certification schemes may unintentionally mirror and reinforce existing inequalities without wider programmes for addressing health inequalities.

Many therefore argue that COVID status certification schemes must be accompanied by a redistribution of the resources and benefits they create, for example by providing additional support to ease the costs to those still facing restrictions, to maximise the fairness and equity of any scheme.74

Equality and non-discrimination

COVID status certification systems discriminate on the basis of COVID-19 risk by design. The relevant legal question is therefore whether the law protects against this kind of discrimination, either directly or indirectly, and if so, whether that discrimination is proportionate (and therefore permissible) in pursuit of other legitimate aims.

Article 1 of the Universal Declaration of Human Rights (UDHR) recognises that ‘all human beings are born free and equal in dignity and rights’. International treaties on human rights such as the ECHR operationalise the right to equality by establishing guarantees against discrimination (Article 14 ECHR).75

In the UK, the Equality Act 2010 provides a single legal framework for the protection of equality and the right to non-discrimination. Relevant to issues of COVID status certification are protections against discrimination on the basis of:76

  • age
  • disability
  • pregnancy and maternity
  • religion or belief
  • race.

For example, a vaccination requirement allowing differential access could be challenged on grounds of indirect discrimination on the basis of age, at least until all adults have had a fair opportunity to have a coronavirus vaccination. UK Government policy prioritises vaccination primarily on the basis of age, meaning that a vaccination requirement would systematically disadvantage younger members of the population. Similar legal concerns around discrimination are likely to arise in other countries with age-based vaccination prioritisation.

Even once all eligible adults have been offered a vaccine, those groups where vaccination is not recommended may still be able to claim that a vaccination requirement is discriminatory under the Equality Act 2010.

Others might be able to claim discrimination on the basis of religion or belief that requires vaccine refusal. Faith leaders across many major organised religions have endorsed COVID-19 vaccination,77 but this will not cover religious communities with different beliefs or interpretations of religious texts, whose members may legitimately claim that their religious convictions require vaccine refusal and therefore argue that vaccine requirements constitute discrimination.

Finally, vaccination hesitancy has been shown to correlate with ethnic background in some communities,78 due to distrust of the state arising from longstanding, evidenced practices of racism and injustice.79 Requiring vaccination may therefore compound existing discrimination. This indirect discrimination is apparently one of the concerns raised with the UK Government by its equalities watchdog, the Equality and Human Rights Commission.80

These concerns are relevant to both private- and government-provided systems. The Government may also have human rights obligations to prevent discrimination by private providers, even if the discrimination is not directly imposed by the state and instead the state simply fails to ‘protect individuals from such discrimination performed by private entities.’60

Some of these potential forms of discrimination would be ameliorated once there is widespread access to vaccination and if evidence emerges that vaccination is appropriate for groups currently advised against it for medical reasons. However, some discrimination will be present in any scheme based on vaccination requirements. The question for any scheme reliant on vaccine certification then becomes: if discrimination can be established on any of these grounds, is this discrimination ‘a proportionate means of achieving a legitimate aim’ under the provisions of the Equality Act 2010?82

Many of these discrimination concerns can potentially be avoided if appropriate alternatives to vaccination certification are available, for example by exempting certain groups or through providing a negative viral test alternative.

Some schemes could prove discriminatory against minority ethnic communities and women with darker skin tones in particular because of the way they verify identity.75 It has been suggested that some COVID vaccine passport schemes could use facial recognition to verify an individual’s identity.84 Research demonstrates that commonly used commercial facial recognition products do not accurately identify Black and Asian faces, especially when trying to recognise women with darker skin types.85 This could also lead to unlawful discrimination on grounds of race, if the products are inaccurate and there are not alternative ways to verify identity.

Societal stratification

Some bioethicists have highlighted that marginalised groups as a whole may face more scrutiny, as the creation of new checkpoints to access services and spaces may perpetuate disproportionate policing.86

Labelling people on the basis of their COVID-19 status would also create a new categorisation by which society could be stratified, i.e. the ‘immunoprivileged’ and the ‘immunodeprived’, potentially creating the circumstances for novel forms of discrimination.35 This could happen informally without any certification schemes, as individuals already have access to and can share their own vaccination status, but certification schemes could increase the salience of those distinctions and amplify them by creating social situations that can only be accessed by those in possession of ‘immunoprivilege’.

This kind of immunological stratification is not without precedent. In nineteenth-century New Orleans, repeated waves of yellow fever generated a hierarchy of ‘immunocapital’ where those who survived became ‘acclimated citizens’ whose immunity conferred social, economic and political power, and ‘unacclimated strangers’ – generally those who had recently migrated to the area – were treated as an underclass. This stratification also helped to entrench existing ethnic and socioeconomic inequality.88

International equity and stratification

There are many low-income countries that do not currently have the economic capacity to acquire all the doses needed to immunise their whole population. Even with the support of COVAX – an international scheme designed to improve access to vaccines – many countries will only be able to vaccinate their most vulnerable citizens in the near future. Furthermore, there are stark inequalities in access to cold chains and transportation, as well as in capacity to administer vaccines.89

Adding to these health inequalities, people from such countries are disproportionately likely to have their freedom of movement restricted if an international vaccine passport scheme is put in place. This will particularly affect stateless people, undocumented migrants, refugees (whether internationally or internally displaced) and similar groups who lack, or even fear, formal connections to governmental public health bodies.

Citizens of these low-income countries may already be discriminated against. As Dr Btihaj Ajana puts it, ‘the amalgamation of borders, passports, and biometric technologies [that] has been instrumental in creating a dual regime of circulation and an international class differentiation through which some nations can move around and access services with ease while others are excluded and made to endure an “excess of documentation and securitisation”.’90

For example, health practitioners and researchers from low-income countries already struggle to conduct research, share their work at conferences and undertake consultancy work in high-income countries, because of existing difficulties obtaining visas and meeting entry requirements. International COVID vaccine passports could worsen this imbalance, making diversity and inclusion an even more difficult task in the field, and side-lining the valuable expertise of academics in low-income countries.91

It is easy to see how similar problems could arise in other fields and industries, meaning that COVID vaccine passports could add another layer of discrimination to this existing system and have consequences beyond the official end of the pandemic. (We return to the future risks these systems pose in a later chapter).

The structure of the global economy may push countries whose citizens might be excluded by international COVID vaccine passport schemes into supporting their development. Many low-income countries are dependent on tourism, and are thus incentivised to support schemes in order to restart the flow of visitors. These differential incentives also play out within supranational bodies like the European Union, where the main supporters of the EU Digital Green Certificate have been countries like Greece and Spain, which are more reliant on tourism than their northern neighbours.

None of this is to condemn countries for responding to those incentives. For countries reliant on tourism, and especially lower-income ones with comparatively younger populations and fewer economic alternatives, taking on the risks of virus transmission and discrimination may be worth it for the net economic and wider health benefits. But the analysis of how their decisions are shaped and constrained by existing global inequities is informative.

There is already pressure on governments to acquire vaccine supplies, which in turn triggers a form of ‘vaccine nationalism’ – where richer countries are able to buy up supplies of vaccines while poorer ones cannot. Tying movement to vaccine certification could entrench existing global inequalities, making international cooperation on any schemes even more important. International friction is especially unhelpful when vaccination is, ultimately, a global public good. Any individual country’s fate is tied to reaching international herd immunity, as we are already seeing with new strains emerging. In the present moment we are seeing tensions play out as calls are made for countries to donate the vaccines they have acquired to India as it faces a growing crisis,92 and debates intensify about temporarily suspending vaccine patents.93

Oversight and regulation

Enforcement of existing legal protections will be carried out principally by the courts and through litigation. However, regulators and independent bodies with relevant remits, through the enforcement of existing regulation and issuance of context-specific guidance, will also have a role in legal accountability and oversight of COVID status certification systems, both before they are implemented and during any roll-out. Many use cases will also necessarily cut across multiple remits, as workplace schemes might engage data protection, contract law, equalities, and workplace health and safety concerns.

Regulators like the United Kingdom’s Information Commissioner’s Office have said they would approach a detailed COVID status certification scheme proposal in the same way they would approach any other initiative by government.94 International forums of data protection and privacy authorities have also begun to issue pre-emptive guidance on certification systems.95

Relevant regulators and independent bodies may include:

  • data protection authorities
  • national human rights institutions
  • occupational health and safety regulators
  • medical products regulators
  • centres for disease control and prevention, and other public health bodies.

Certain types of domestic law can be changed in certain countries, and international law contains derogation clauses for specific purposes. However, governments should be on guard not to needlessly tear down Chesterton’s Fence.96 If governments want to change a law or make a special carve-out for status certification schemes, they should know why the laws preventing it were enacted in the first place, and be able to explain clearly why legal changes are necessary and proportionate, acknowledging potential unintended consequences.

Recommendations and key concerns


  • Governments must act urgently to create clear and specific guidelines and law around any uses of COVID status certification, mechanisms for enforcement and methods of legal redress. Given the sensitive nature of these systems, private actors will need legal clarity whether or not legal changes are enacted. Contextual guidance should be issued with interpretations of existing law, even if legislators don’t change anything. Regulators and independent bodies with relevant remits should take pre-emptive action to clarify the regulation and guidance they oversee, and take proactive steps to ensure enforcement where possible.
  • Regulators should work cooperatively, acknowledging that many use cases will necessarily cut across multiple remits, and that a clear division of responsibilities is therefore essential so that poor practice doesn’t fall through the cracks. Working together to provide maximum clarity in a fast-moving area will ensure that regulators do not issue contradictory guidance.
  • If there are tensions between different obligations, regulators should work together to resolve those rather than passing the burden on to businesses and individuals. If combinations of obligations make a specific system unworkable, regulators should also be empowered to flag that to government, businesses and the public, and pass responsibility on to democratically elected bodies to untangle those contradictions in a public forum.
  • Those responsible for rolling out any certification schemes should be required to publish impact assessments, including Data Protection Impact Assessments and Equality and Human Rights Impact Assessments, which outline what protections are being put in place to reduce risks and mitigate harms.
  • Any legal changes should be made via primary legislation to ensure proper scrutiny and debate, rather than emergency regulations introduced at hours’ or days’ notice.97 If a COVID certification scheme is to be temporary, legislation should include clear sunset clauses and be accompanied by explanations as to how the system will be dismantled.

Sociotechnical design and operational infrastructure

Designing any technical system requires comprehensive thinking about the human or societal as well as technological elements of a system, and how humans and technology interact as part of that system. For example, a car is a piece of technology – a machine made of an engine, wheels, materials and electronic systems – but its operation also involves a driver, the rules of the road, traffic safety laws and planning decisions that allow roads to be built (and much more).

Thinking about a digital vaccine passport system requires doing the technical design ‘right’, and there are many factors that contribute to that empirical judgement. There is currently no single or dominant model for these technologies, and different attributes bring distinct design options and incorporated risks into focus. New infrastructure and databases may be required, depending on existing capacity in the national context.

With some models bringing together identity information, biometrics information, health records and contact tracing data, technical design must incorporate the highest security. Some risks can be minimised to some extent by following best-practice design principles, including data minimisation, openness, privacy by design, ethics by design, giving the user control over their data, and adopting the highest standards of governance, transparency, accountability and adherence with data protection law.

But successful design and delivery will involve thinking about much more than the technical design of an app – it should involve detailed consideration of how a technical solution would fit into a broader societal context, including the full range of public health interventions. For example, it might be theoretically possible to build an app that in itself protects the privacy of the user and helps them access particular rights and freedoms, but that nonetheless causes wider societal harms through increasing stigma or new opportunities for oversurveillance of minority groups.

Whatever we call the applications themselves, COVID vaccine passports are part of a wider sociotechnical system. That is, they are part of a wider COVID status certification system that goes beyond the data and software that form the technical application itself, including:98

  • Data such as the vaccination records, identity proxies, health and location data of individuals.
  • Software such as apps, verification systems, interoperability middleware, biometric systems, testing systems, databases, linkages across multiple databases and multiple jurisdictions, encryption systems.
  • Hardware and infrastructure such as verification kiosks or scanners, servers and cloud storage, mobile phones, linkages to testing and vaccination procedures.
  • People, skills and capabilities such as skilled operators, medical experts and their expertise, compliant individuals and populations, regulators, enforcement services such as border control and the police, IT professionals, standards bodies, infrastructure firms, services firms, marketing and public information, democratic engagement and deliberation, legal professionals.
  • Organisations such as governments, global governance organisations, firms, lobby groups, unions.
  • Formal and informal institutions such as laws, regulations, standards and enforcement mechanisms, accountability structures.

At another level, these COVID vaccine passport systems are part of wider societal systems. For example, they are one part of a wider public health system, where consideration needs to be given to how they interact with other interventions and mitigation measures, for example their behavioural impacts on mask wearing and social distancing, or the diversion of attention and resources away from other parts of the vaccination programme or from test, trace and isolate schemes.

If introduced, vaccine passports would also be part of a wider emerging system of digital identification and the roll-out of biometrics into everyday life around the world. In this context, they need to be considered in relation to how their implementation might accelerate the development and implementation of these schemes without sufficient public engagement or response to public concerns, and the risks that accompany embedding technologies that are hard to roll back into everyday life.

Finally, they will require practical and operational overheads to work – whether that’s scanners to read QR codes at venues, additional staff at the door to check passports, access to wifi at vaccination centres, or adequate testing capacity so that test results can be turned around
quickly enough to be of practical use.

In a multipurpose system and in the face of such complexity – that everything is connected to everything else, and that any intervention will have uncertain and unpredictable outcomes – it might be tempting to assume evaluation of any individual intervention will be almost impossible.3 Instead, those considering implementing or condoning these systems, and governments in particular, must investigate the nature and the strengths of these connections, gather empirical evidence, and then assess whether that evidence justifies policy action
while being transparent about the uncertainties involved.

We will look at technical and sociotechnical design in turn, and form recommendations and key concerns in response to both technical design and the context of the wider societal system.

Technical design

There are currently several options for the technical design and roll-out of vaccine passports, and this makes decision-making particularly difficult. Where the debate about contact tracing apps focused on two very different models – decentralised systems (where data stayed on individuals’ phones) and centralised systems (involving central servers) – there is no equivalent binary choice in the vaccine passport debate. What is emerging is a range of proposed solutions and divergent approaches to delivery (see our international monitor for specific models under development around the world).100

Vaccine passport taxonomies

Any vaccine passport system will have the following common components:

  1. health information (recording and communication of vaccine status or test result through e.g. a certificate)
  2. identity information (which could be a biometric, a passport, or a health identity number)
  3. verification (connection of a user identity to health information)
  4. authorisation or permission (allowing or blocking actions based on the health and identity information).

That brings into focus a number of distinct roles operating within the system, including (see the sketch after this list):

  • the issuer of the credential – for example, the authority that holds the health data and could confirm that a vaccine or test had been administered (the NHS in the UK)
  • the holder of that information – for example, an individual with the credential on their phone
  • the verifier of all the necessary information – for example, a venue checking that the correct credential applied to the individual in front of them
  • technical providers – for example, the developer of a particular vaccine passport app.
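
To make the taxonomy concrete, the sketch below (in Python) shows one way the four components and the issuer/holder/verifier roles fit together. It is a schematic illustration only, not a description of any real scheme, and all names and types are hypothetical:

    # Schematic sketch of the four common components (health information,
    # identity information, verification, authorisation) and how the
    # verifier uses them. Purely illustrative; all names are hypothetical.
    from dataclasses import dataclass
    from enum import Enum

    class HealthStatus(Enum):              # 1. health information
        VACCINATED = "vaccinated"
        NEGATIVE_TEST = "negative_test"
        EXEMPT = "exempt"
        NONE = "none"

    @dataclass(frozen=True)
    class Identity:                        # 2. identity information
        proxy: str                         # e.g. a passport number, health ID or biometric reference

    @dataclass(frozen=True)
    class Credential:                      # created by the issuer, carried by the holder
        identity: Identity
        status: HealthStatus

    def verify(credential: Credential, person_present: Identity) -> bool:
        # 3. verification: the verifier (e.g. a venue) checks that the
        # credential belongs to the individual in front of them.
        return credential.identity == person_present

    def authorise(credential: Credential) -> bool:
        # 4. authorisation: allow or block an action based on health information.
        return credential.status is not HealthStatus.NONE

A venue, acting as verifier, would grant entry only if both checks pass; the technical provider supplies the software in which such checks run.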

Each component of the vaccine passport system could be digital or non-digital; the table below sets out, for example, what an entirely non-digital system would involve.

[Table: digital and non-digital components of a vaccine passport system]

Digital versus non-digital systems

Most of the discussion about COVID-19 vaccine passports, in the UK and elsewhere, has focused on apps delivered through smartphones. While digital passports are the focus of this report, it is necessary to consider how digital and non-digital (analogue) systems compare.

An analogue (non-digital, or paper) system may have some advantages:

  • It does not require an extensive technical infrastructure.
  • It does not require the verifier (e.g. a venue) to store sensitive personal data.
  • It can be implemented quickly.
  • It is less permanent, and therefore less vulnerable to scope creep.

But it is also an imperfect system in many ways:

  • An identity document or vaccine card contains more sensitive information than is needed for the purpose of access (e.g. a passport number or address).
  • A sizeable minority of any population may not possess these documents.
  • Paper-based identity documents, and in particular vaccine cards, can be fraudulently copied, or ‘faked’.

Apps have some advantages over analogue mechanisms, and potentially provide:

  • a simple yes/no result without sharing extensive personal details (in contrast with sharing all the information in a passport, driving licence or medical record, for example)
  • a clearer audit trail as to when and where an individual has had to verify their
    COVID-19 status
  • the ability to update details, as more becomes known about the lasting
    efficacy of vaccines
  • greater security and protection against fraud.

Technical infrastructure can exacerbate the significant risks of surveillance and scope creep (see chapters on ethics and future risks). Equitable access is another major concern: arguments have been made that a digital-only system would bring substantial disadvantages, primarily around digital exclusion, even in countries with extensive access to technology infrastructure.

Internet and smartphone access and use varies between and within countries.101
A recent Ada Lovelace Institute report, which considered some of the digital and data divides in the United Kingdom, showed that a fifth of respondents did not have a smartphone, 14% did not have broadband, and the most clinically vulnerable were less likely to have either.102 By comparison, in India only 38% of people report having a smartphone or using the internet occasionally, with big differences by age, education level and income.103

Health information and identity data

Schemes will be technically distinct across different countries, depending on a number of factors, including the extent to which health records are digital, whether health systems have existing central databases or are fragmented across providers, whether countries have digital identity infrastructure, or whether digital apps already exist in health systems. In Denmark, for example, the government has worked closely with the private vendor Netcompany, and the app operates in the context of an existing digital identity system. Other countries, like the UK, have a centralised health system but no digital identity system, and so must grapple with different routes to providing identity – none of which will be perfect.

Most systems relying on identity verification will be likely to require ‘anchor’ documents such as a passport or driving licence to be used somewhere in the process, but that won’t enable access for all individuals: in 2015, one in four people eligible to vote in England and Wales were estimated to lack either a passport or driving licence, with certain groups, such as young people, even more excluded.104 Ensuring complete registration and access are challenges that exist already in all health systems, and these are often linked to age and class inequalities in access, both in physical and digital health systems, and more pronounced in many low- and middle-income countries.98

Depending on design and country context, schemes will have different implications for data infrastructure. Some call back to existing databases (checking with existing medical records or checking acceptable QR codes, for example). Others create a digital credential or token that might be stored on the user’s phone. Vaccine passport schemes might require the creation of new databases, which include biometrics records. Each of these poses different risks and benefits, depending on the wider systems they interface with.

In the UK, the preferred route to implementing vaccine passports seems to be building the functionality into the existing NHS app (not to be confused with the NHS contact tracing app or GP apps). This app is regulated by the Medicines and Healthcare products Regulatory
Agency (MHRA) to hold digital health records, and to act as an interface between patients and health services to book appointments and manage prescriptions. A strength of this approach is that it develops an existing infrastructure, rather than building a new one, which already operates to high-level data security standards (see data security section).

Building the tool under the auspices of the NHS brings built-in trust, but it also raises the stakes: if something did go wrong, or the tool was perceived as not in keeping with NHS values, it could have an impact on wider trust in the NHS. It will also have to deal with coverage issues: currently the NHS app is available only in England, rather than across the UK, and has only two million verified users.106

Verification

A critical element of a passport scheme is verification: how the relying or verifying party, e.g. the venue or the airline, can check that the credential that confirms an individual has been vaccinated or tested actually belongs to that individual.89

Many applications being developed rely on QR codes, which are issued as digital or printed cards when an individual is vaccinated, and can be scanned by a venue. Some of these systems would produce a binary (yes/no) response to indicate whether a person could or could not enter, without revealing what method (vaccination, test or exemption) allowed them to do so. Others might be more specific, such as the Danish Coronapas that shows how much time remains in the 24–72 hour window provided by a negative test result.108

Apps have different security protocols: some providers stress that, under their systems, the ‘digital ceremony’ of verification takes place only between the individual and the venue, with no databases having to be called – the cryptography within the apps is enough. Others say there would be a record of the code in the cloud or on a blockchain to verify it was genuine, but this would be separated from any personal data stored on-device.
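
As an illustration of such a ‘digital ceremony’, the following minimal sketch (using the Python cryptography library, with entirely hypothetical data) shows how an issuer-signed credential could be checked offline by a venue, yielding only a yes/no answer; real schemes would add key distribution, rotation, revocation and standardised encodings:

    # Minimal sketch of offline verification: the issuer signs a data-minimised
    # credential; a venue verifies the signature locally with the issuer's
    # public key, with no call back to a central database.
    import json
    from datetime import datetime, timedelta, timezone

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Issuer side (e.g. a health authority): sign a minimal payload.
    issuer_key = Ed25519PrivateKey.generate()
    credential = json.dumps({
        "subject": "holder-pseudonym-1234",   # hypothetical pseudonymous identifier
        "status": "permitted",                # a binary result, not the reason for it
        "expires": (datetime.now(timezone.utc) + timedelta(hours=48)).isoformat(),
    }).encode()
    signature = issuer_key.sign(credential)

    # Verifier side (e.g. a venue): check signature and expiry entirely offline.
    def verify_offline(credential, signature, issuer_public_key):
        try:
            issuer_public_key.verify(signature, credential)   # raises if forged
        except InvalidSignature:
            return False
        payload = json.loads(credential)
        still_valid = datetime.fromisoformat(payload["expires"]) > datetime.now(timezone.utc)
        return payload["status"] == "permitted" and still_valid

    print(verify_offline(credential, signature, issuer_key.public_key()))  # True

Note that the verifier learns only a yes/no answer and a validity window (echoing the 24–72 hour test window described above), and no health details.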

Israel’s Green Pass app has a QR code that can be scanned, while also providing a physical alternative (Green Pass plus ID document to verify identity). Denmark’s Coronapas system – which includes an August 2021 sunset clause, except for tourism and travel109 – allows citizens to sign into the app with their existing national digital ID (and use the photo from their passport) and display a QR code based on tests, antibody tests or vaccination.

Others are using more complex technologies to verify identity. The Mvine/iProov project funded by the Innovate UK research agency to be trialled in the UK, for example, makes use of facial recognition technology: once an individual has been vaccinated, a medical professional takes a picture and issues the individual with a digital certificate (including a QR code). The certificate number and biometric face scan are stored online by iProov; although the facial biometric is not available to third-parties, this storage could still raise privacy risks. A venue would scan the QR code and the holder’s face, to verify that they were the person to whom the credential belonged. Anyone without a smartphone could have a card with a QR code and still have their face scanned as verification.110

Using facial recognition data in the passport could get around the issue of having no state digital identity system, provided a trusted medical professional was able to link the health credential to the facial biometrics, but it brings other challenges. In the past, facial recognition has been shown to be less accurate for certain demographics – in particular women and people from minority ethnic backgrounds – which could amplify discrimination and reduce inclusivity.

Conflating COVID vaccine passports with another controversial technology could undermine public trust and confidence – many people are uncomfortable with biometric data about their faces being gathered by private companies or government, and are concerned about how
such data is governed. The Ada Lovelace Institute’s Citizen Biometrics Council recently called for better standards and stronger regulation of biometric data.111

Authorisation or permission

The point of verification by a venue of an individual’s identity may also create practical challenges. It might be a simple, non-digital process – a human examining a digital health record that displays a green tick and a photo, for example, and then waving the individual through. Or it might have a further technical component, such as scanning a QR code or further biometric verification, requiring infrastructure that brings additional security risks needing further consideration. For example, would venues be required to keep an audit of everyone they have allowed to enter – with the related privacy and practical implications of storing a great deal of personal data – or would the fact that they have followed a (more minimal) process be sufficient?112

Regardless of how the scheme is delivered, any vaccine passport system should be compliant with data protection law, adopt best-practice design principles, offer high data security, be clear about how it links to or expands existing state data systems (in particular digital identity), and offer a non-digital route. We go into each of these aspects in more detail below.

Data protection and health data

Any vaccine passport system will involve secure access to an individual’s health data, which in many regions will be subject to particular conditions under data protection laws.

In the UK, data protection is guaranteed by the Data Protection Act 2018, which enshrines the EU GDPR and which, in the short term at least, is likely to remain aligned with it.113 (GDPR – the General Data Protection Regulation – was introduced across Europe in 2018 and aims to standardise the approach to privacy and data protection. It has also provided a model for other countries, such as Brazil.)

Health data – such as the results of COVID-19 tests and vaccination records – constitutes sensitive data under Article 9 of the GDPR, meaning the collection and further use of that data needs to be justified with one of the exemptions in Article 9-2.75 One of these exemptions is the necessity ‘for reasons of public interest in the area of public health, such as protecting against serious cross-border threats to health’.

Any use of personal data for public health reasons should be necessary – that is, targeted and proportional to achieving the desired purpose – and be of benefit to the wider public and society, rather than just individual health. One evidence submission we received suggested this means that governments and developers will need to demonstrate that a vaccine passport will have a meaningful impact on public health.60 European authorities have also underlined that any arrangements justified by the current public health emergency should not continue afterwards.116

Even if such a justification can be established, Article 9-2(i) of the GDPR requires adequate and specific measures to safeguard the rights and freedoms of individuals to be put in place even when pursuing public health interests.75 Given that COVID vaccine passport systems will contain sensitive personal information, app providers will need to comply with the principles of lawfulness, fairness and transparency, purpose limitation, data minimisation, accuracy, storage limitation, integrity, confidentiality and accountability, as outlined in Article 5 of the GDPR.35

Even if explicit consent or public health interests allow for the collection, storage and processing of test results and vaccination records, providers would still need to build data protection into the design of these technologies by default, under Article 25-1 of the GDPR. For example, providers need to take proactive technical and organisational measures to address potentially privacy-invasive situations, including the transfer of data to parties not covered by the GDPR, which might occur if services are offered by international private providers.119

Providers should also ensure individuals are informed as to how their data is being utilised, by whom and for what purpose, providing clear and accessible information, recognising the geographical, cultural and linguistic diversity of the societies they are providing this service to.120

Given this, the Data Protection Act will almost certainly require providers, public or private, to carry out a data protection impact assessment (DPIA). National data protection authorities, which in the UK context means the Information Commissioner’s Office (ICO), will have a duty to
monitor, investigate and enforce the application of these rules under Articles 57 and 58 of the GDPR.75

More broadly, the Global Privacy Assembly – an international body composed of information and privacy commissioners – has said that while the processing of health data to enable international travel may be justified on public health grounds, governments and other organisations should take heed of principles including:

  • Embedding ‘privacy by design and default’ into the design of any system, including conducting a ‘formal and comprehensive assessment’ of the privacy impact on individuals (see design principles below).
  • Ensuring personal data is used according to a clearly defined purpose, under relevant legal authority, and only where it is necessary and proportionate.
  • Protecting the data protection rights of individuals unable to use or access electronic devices, or to access vaccines, and considering alternatives to prevent them suffering discrimination.
  • Informing individuals as to how their data is being used.
  • Collecting only the minimum health information from individuals ‘necessary for their contribution to protection of public health’.
  • Building sunset clauses into the design of such schemes, ‘foreseeing permanent deletion of such data or databases, recognising that the routine processing of COVID-19 health information at borders may become unnecessary once the pandemic ends’.122

Design principles

Anyone developing a COVID certification scheme should consider, at all stages of development, a series of design principles that will help to minimise harms and the risk of unintended consequences, and maximise the chances of a system working and commanding public confidence. These principles may include, but are not limited to:

  • Data minimisation, the principle that only the personal data needed to fulfil a specified purpose should be held.123 This would suggest that, for the purposes of letting someone into a venue, the only relevant information is whether a person is permitted to enter or not, rather than fuller details of why (have had a vaccination, a negative test or are exempt, for example) or unnecessary personal information.
  • User control, the idea that the individual should have control of their data at all times and choose who to share it with.
  • Not unrelated is the idea that the credential should ‘act like paper’: as with a paper credential, there is no need for the system to ‘call back home’ and refer back to other databases.
  • Privacy-enhancing technology, operating to international privacy standards, should be used where possible to protect personal data, and developers should take a ‘data protection by design’ approach. All of this also points towards solutions that do not make use of other controversial technologies, such as facial recognition, or of verification processes that do not operate locally and securely on user-controlled devices.
  • Openness, not just in explaining to the public exactly how systems are operating (including key details like who is responsible and accountable, the legal protections and ethical standards being applied, and what data is being used and how), but in taking an open-source approach to code that will help keep it up to date and open to scrutiny.
  • Transparency about who is responsible and accountable, what legal protections are in place, what ethical standards are being applied, and what data is being used and how.
  • High standards of governance, accountability, the application of other principles and adherence with data protection law (including the GDPR in Europe) will be essential to protecting the individual, but also ensuring public trust – as the UK Information Commissioner has written, a failure in one scheme could lead to a loss of trust across all attempts to use data and digital technology to combat COVID-19.124 The ICO is clearly conscious about the issue of ‘scope creep’ – that data collected for one purpose could be used for others.125
  • Adherence to international standards, for the sake of interoperability and quality. Among those currently being utilised are W3C’s Verifiable Credentials (although the use of this standard has been critiqued on privacy grounds)126 and HL7’s Fast Healthcare Interoperability Resources (FHIR) for sharing healthcare data.127 (See the sketch after this list.)
  • Piloting proposed solutions at small scale, with full details of any such trials made public, and thorough evaluation and iterative improvements before rolling out any schemes on a larger scale.
  • Undertaking consequence scanning128 to explore what potential use cases are desirable or undesirable, and making design choices accordingly.
  • Analysing plans from a security perspective. Key questions include how many potential security threats are created by implementing these infrastructures, what new power the system gives to different actors (venue owners, etc.) and how that power could be misused, whether these new powers contravene existing norms, whether they raise a risk of unequal treatment in society, and how these risks can be mitigated.
  • Engaging members of the public – particularly those from marginalised communities – in the design and piloting of these systems. (See the Public legitimacy chapter for more detail.)
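
As an indication of what a credential following the W3C Verifiable Credentials standard mentioned above looks like in practice, the sketch below shows the general shape as a Python dictionary. The field values are hypothetical, and the vaccination-specific type and claim are assumed extensions rather than part of the core standard:

    # Sketch of the shape of a W3C Verifiable Credential for vaccination status.
    # 'VaccinationCredential' and 'vaccinationStatus' are assumed, schema-specific
    # extensions; other field values are illustrative.
    verifiable_credential = {
        "@context": ["https://www.w3.org/2018/credentials/v1"],
        "type": ["VerifiableCredential", "VaccinationCredential"],
        "issuer": "did:example:health-authority",
        "issuanceDate": "2021-04-01T09:00:00Z",
        "credentialSubject": {
            "id": "did:example:holder-1234",
            "vaccinationStatus": "fully-vaccinated",
        },
        # The embedded proof lets a verifier check authenticity offline,
        # consistent with the 'act like paper' principle above.
        "proof": {
            "type": "Ed25519Signature2018",
            "created": "2021-04-01T09:00:00Z",
            "verificationMethod": "did:example:health-authority#key-1",
            "jws": "eyJhbGciOiJFZERTQSJ9..<signature-omitted>",
        },
    }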

Data security

Some COVID status certification services would require robust security, particularly if they bring together sensitive information. Higher technical security may pose a trade-off with accessibility, which will need to be weighed carefully. For example, the NHS offers three levels of identity verification (see the sketch after this list):129

  • Low level, where a user has verified ownership of an email address and mobile phone number but has not proven who they are or provided any other details.
  • Medium level (P5), where additional information (date of birth, NHS number, name, postcode) has been checked against patient records on the NHS Personal Demographics Service. This allows users to access services like contacting their GP but not to access health records (and so is unlikely to be sufficient for sharing vaccination or testing data).
  • High level (P9), which requires a full identity verification process including comparison between the user and photo ID, either at a GP surgery or submitting a photo of their ID and a short recording of their face.
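
To illustrate why only the highest level would suffice for sharing vaccination data, a hypothetical policy check might look like the sketch below; the permissions mapping is our own simplification for exposition, not NHS documentation:

    # Illustrative mapping from NHS login identity-proofing level to the
    # services a user could reach. A simplification, not NHS policy.
    PERMISSIONS = {
        "low": set(),                                    # email and phone verified only
        "P5": {"contact_gp"},                            # demographics checked
        "P9": {"contact_gp", "view_health_records"},     # photo ID verified
    }

    def can_share_vaccination_record(level: str) -> bool:
        # Vaccination status sits in the health record, so only the highest
        # verification level (P9) would be sufficient to share it.
        return "view_health_records" in PERMISSIONS.get(level, set())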

Any process requiring access to personal health data should use the high-security level. But verification of photo ID may exclude vulnerable people or add a burden to GP services, which would need additional resources to verify patient identity.130 Alternatives would need to be provided for non-digital access, given that a mobile phone is required for an NHS login, and for groups without an NHS number, including foreign tourists.

The alternative to countries building their own solutions tied to state infrastructure is for third-party apps, run by private providers, to be given permission to access health records. In the UK there are already private companies that are regulated to store health records and act as an interface between the public and NHS services. Particular consideration needs to be given to exactly how this would work in relation to COVID vaccine passports, what standards providers would need to meet in accessing and using this extremely sensitive data, and how accountability might be assured. In addition, given the high levels of verification necessary, there must be due consideration of whether and how such a standard could be met by private providers. (Also see the security and fraud section below.)

Developing digital identity infrastructure

It is essential to be clear whether digital vaccine passports will create new infrastructure or expand existing infrastructure, in particular as regards digital identity.

In the UK there have been at least two decades of debate about digital identity (the UK currently does not have a single digital identity system), and reaching consensus about identity verification has been challenging. In March 2021, the UK Government confirmed the end of the Verify scheme (although it has been given a final short extension),131 long criticised for failing to meet the expectations of users or in terms of the number of government services using it.132, 133, 134 In September 2020, the Government published its response to a consultation on digital identity; in February 2021, it published a draft framework for digital identity. Any organisations currently developing vaccine passport systems in the UK will need to ensure that they fit within this framework.

In India, which has an existing identity system called Aadhaar, the roll-out of a contact tracing app has been used to populate other databases linked to Aadhaar, without further scrutiny and amid claims that it violates purpose limitation (the idea that data collected for one purpose cannot be used for others without a user’s consent). Concerns have been raised in countries such as Argentina and Kenya that existing digital identity systems lack transparency and oversight.89

Governments that do not currently use digital identity systems should ensure they do not rush into them because of vaccine certification without due thought, debate and deliberation to explore the potential benefits (greater interoperability of identity, joined-up services, etc.) as
well as the practical and privacy concerns. Creating new infrastructure that is primarily designed to meet the needs of the pandemic might restrict future choices.

Non-digital route

To be inclusive, any technical vaccine passport system will need an analogue or paper-based alternative to protect against exclusion. This will bring its own risks, in particular relating to fraud and exclusion (see below). A non-digital route need not be an entirely separate system: for example, one of the pilot projects funded by the UK Government, involving app developers Mvine and iProov, reports that its combination of a printed QR code and facial verification allows people without smartphones to be part of the system.136

As discussed above, the technical design of a digital vaccine passport is part of the wider sociotechnical system. This means that even if the technical build is done in a way that (for example) minimises the sharing of personal data and enhances privacy, this will not eradicate all harms. The act of certification discriminates between different groups of people – that will be the case whatever the technical design.137 It is therefore critical to treat the sociotechnical design as at least as important as the technical design.

Sociotechnical design

We now turn to questions of what the wider system around any technical implementation will need to look like, and what will need to be considered in the creation of such systems.

The role of government

The first question asked in relation to domestic vaccine certification systems is often who will provide them: government itself (as in Israel) or other actors, including private companies (many of whom are developing solutions) and non-profit foundations (such as Linux Foundation Public Health). However, the important question to start with is: what will
government’s role be?

Governments have the ability to consider the whole sociotechnical system, including any mitigations against harm that might be required, in ways that other actors cannot, and as such have an essential role to play. Some countries are already rolling out their own schemes: Israel’s Green Pass, for example, is issued by the state. But governments that decide not to roll out their own scheme, while permitting others to build them, are still taking a decision that carries responsibilities. In many nations governments will be the only legitimate standard setters, and in countries with national health systems they will be responsible for administering vaccinations and certifying that they took place.

Even if governments opted to prohibit the use of vaccine certification – something our expert deliberation felt would be difficult – informal uses are possible, so even here governments should play a role in public communication or guidance. If they do not, key public policy questions around discrimination and ethics will effectively be outsourced to private
companies. In most countries, private companies are likely to have some involvement even in state-run schemes. The question then is not whether government has responsibilities relating to vaccination certification, but what those responsibilities are.

There may be advantages to a system being the responsibility of government. Governments may already own key parts of the infrastructure that could be used. Many countries have existing ID systems, which can help with identity verification. In the UK, the NHS is responsible for administering the vaccine, and it has been suggested the existing NHS app (not the contact tracing app) could be modified to allow citizens to access their vaccination records. Adapting existing systems may remove the need to build entirely new ones, saving time and cost, and reducing risks like scope creep and path dependency.

On the other hand, adapting existing systems to accommodate vaccine passports brings risks. If existing systems, especially identity ones, are flawed, existing problems may become further entrenched. In the UK, the NHS enjoys higher public trust than most institutions, and higher trust on data than any other organisation,138 but this could be damaged if expectations for vaccine passports were not met, for example through continued outbreaks of COVID-19 (if people falsely assume vaccination or testing will stop all transmission).

Existing apps for citizens may exclude certain groups. The NHS app, which is reliant on registration with the NHS in England, may not cover all eligible UK citizens and would also not work for many individuals visiting or resident in the UK. This could prevent those who have been vaccinated by, and are registered with, foreign healthcare providers from accessing domestic leisure venues during a holiday, or exclude undocumented migrants, asylum seekers and refugees who were not able to be vaccinated in their home country from access to systems in the UK. In Israel, many foreign students, diplomats, asylum seekers and other non-citizens were excluded from the Green Pass system for weeks after the scheme launched, despite having been vaccinated in Israel.139

Allowing private companies to develop solutions could encourage competition and innovation and provide users with a choice, as no solution is likely to work perfectly in all settings.140 There are risks to relying on a single system (including security risks),141 and a competitive market could help push out untrustworthy players.

Our expert deliberation raised concerns about market-led approaches:

  • That a market-led system could be dominated by big players who were not experts in the field, even leading to a monopoly or monopsony.
  • That risk might be heightened by only certain technology companies being big enough to adapt any system to rapidly changing scientific evidence (for example, on transmission).
  • That the rush to dominate the market could lead to vital discussions of equality and ethics being missed, leave insufficient time for user research and evaluation, and bring insufficient engagement with health authorities.
  • That there is uncertainty and a lack of transparency about the business model for any private sector solution, and that data acquired through provision of the app (even if anonymised) may be monetised by private providers.

Other risks include that allowing different systems to be developed could fragment a public policy problem into a series of private problems that would be harder to govern; that private companies would have less of an incentive to think about the wider societal context and possible harms unless government had put standards and rules in place; and that multiple solutions may not be interoperable, which would lead to some being recognised in some settings (e.g. by some venues or restaurant chains) but not by others.

Whether apps are supported and developed by government or other, private providers, there are some facts that should be made public clearly, including who is responsible and accountable, what legal protections are in place, what ethical standards are being applied, and
what data is being used and how.

Duration of a COVID vaccine passport system

Another important consideration will be the duration for which any system is operational. If a system is intended to be a temporary response, to avoid prolonging lockdowns and to ease other public health restrictions, its lifecycle would depend to a significant degree on the background rate
of COVID-19, the speed of vaccination within a jurisdiction, and the subsequent impact of health measures on the risk posed by COVID-19.

Some countries have moved quickly in vaccinating their population. As of 12 April 2021, Israel had provided more than 60% of its population with at least one dose of a vaccine, the UK nearly half its population and the US more than a third.142 The percentage of the population that is fully vaccinated in Israel is over 50%, the US over 20% and the UK over 25%.143

A vaccine passport scheme may have some utility when a sizeable minority of the population has had two doses, but before a nation has achieved herd immunity. It may have less utility when only a very small percentage of the population is vaccinated (existing lockdowns would be likely to continue, and there may not be enough economic incentive for businesses to reopen), or when a large percentage of the population has been vaccinated (herd immunity will have some effect).

The speed at which Israel, the US and the UK are vaccinating their populations, for example, suggests that there may be only a very limited window in which vaccine passports could be of any use, and there would still be strong scientific reasons (listed above) and other societal reasons (explored through the rest of this report) not to introduce them.

Mass vaccination would likely bring the risk to society of COVID-19 down to the level of other illnesses already circulating in society, such as seasonal flu. In the UK, the average number of annual deaths from the flu was around 15,000 from 2014/15 to 2018/19,144 but there is no expectation of a passport or testing regime for the flu. Our expert deliberation panel assumed a COVID-19 passport system might have some appeal in the transition from a pandemic to steadier conditions – when, as with the flu, the disease was endemic but vaccination, herd immunity and better treatment had made it less deadly – but then questioned how far it would be possible to switch off a temporary, transition measure once it was in place.

The UK prime minister has suggested that a third wave of COVID-19 could yet ‘wash up on our shores’.145 Would vaccine passports offer any support against such waves globally? Following mass vaccination, the hope is that any future waves would have a more tolerable impact on health, perhaps comparable to annual flu seasons, unless the virus mutated into a variant against which existing vaccines are not effective. It is not clear how passports would offer significant public health benefit in a situation of low transmission and high population immunity.

The potential scenario of a vaccine-resistant mutation complicates the role of a passport. Those who had previously been considered lower risk would no longer be, and if people behaved as though they were protected because they had a passport, that could potentially accelerate the spread of the disease. On the other hand, if only one vaccine (Pfizer, for
example) was ineffective against a new variant, vaccine passports could be used to allow a subset of the population to continue movement, or government guidance could pivot to a system that was reliant on testing rather than vaccinations.

The end of the COVID vaccine passport lifecycle occurs when it is deemed no longer necessary – but what criteria would need to be met for it to be turned off, and under whose authority? Possible end points could include cases falling below a certain level (though consideration would need to be given as to whether some trigger – an increase in cases, or the
emergence of particular variants – would require them to be switched back on), or the WHO declaring an end to the pandemic. Some have argued that a benefit of passports would be encouraging boosters, which might indicate more long-term use.

Denmark’s plans for its Coronapas contain an August 2021 ’sunset clause’ (other than for tourism and travel), with decisions about any continued scope and use to be informed by the experiences of its domestic use.146 Our expert panel were sceptical about the ease of turning a system off once implemented, and worried about scope creep. Others have argued for disease surveillance systems remaining in place and becoming part of normal health infrastructure, to protect against future pandemics.147

Opportunity costs and overheads

The opportunity cost of focusing on COVID vaccine passports

There will be opportunity costs to focusing on COVID vaccine passports rather than other interventions. Certification schemes will involve political, financial and human capital costs that a government will need to weigh against their benefits. These costs and benefits should not be
considered in isolation. Given that governments have finite resources and attention, focusing on certification schemes should be reviewed in comparison to the costs and benefits of further investment in alternative public health measures intended to lift restrictions, such as investing in greater vaccine supply and roll-out or attempting to improve test, trace and isolate schemes.

As we have seen, there will be a discrete time period where there is scientific confidence about the impact of vaccines on transmission and a large enough vaccinated population to warrant segregating rights and freedoms, before population-level herd immunity, or endemic and low-risk COVID-19 makes vaccine passports unnecessary. In some countries, like the UK, this window might be very narrow.

This window will vary from country to country, and affect the relative balance of costs versus benefits, which will depend on the intended duration of any COVID status certification. High up-front infrastructure and business costs and significant opportunity costs would need to
be weighed in the decision to set up a temporary scheme. Schemes intended to have a long duration also need to be mindful of ongoing costs of maintenance and any costs borne continuously by users, for example in acquiring tests.

Maintenance

Any vaccine passport system will require maintenance, repair and updating in order to remain functional and continue to serve its intended purpose as conditions change around it. The question of who is responsible for maintaining these systems, and the costs associated with continued upkeep, should be factored into any cost-benefit analysis of the viability of these systems.

If a vaccine passport system is intended to be temporary, then its obsolescence should be designed in from the start. Legislation and plans should contain sunset clauses, and the costs of closing the system down should be factored into budget planning. Care should also be taken not to develop other systems that are reliant on it.
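
To illustrate what designing in obsolescence could mean in practice, the sketch below hard-codes a sunset check into the service itself. The date echoes Denmark’s announced August 2021 sunset but is illustrative, and the mechanism is an assumption rather than a description of any existing scheme:

    # Sketch of a sunset clause enforced by the service itself: after the
    # sunset date the system refuses to operate, making wind-down a
    # designed-in behaviour rather than a policy afterthought.
    from datetime import date

    SUNSET_DATE = date(2021, 8, 1)   # illustrative, cf. Denmark's Coronapas

    def service_active(today: date) -> bool:
        return today < SUNSET_DATE

    def issue_credential(today: date) -> None:
        if not service_active(today):
            raise RuntimeError("Scheme has passed its sunset date; permanent data deletion applies")
        # ... issue the credential as normal ...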

Designing in obsolescence may be relatively novel in software development, but not in other fields: nuclear power stations are designed with maintenance, the end of their lifespan and decommissioning in mind. If governments and other providers have not thought about how to close a technical system down, it implies either that they believe it will not be a temporary measure or that they have not given the issue sufficient consideration, both of which may be damaging to public confidence.

If a system is to be more than temporary, then maintenance and upgrade costs will need to be planned in. The prospect of ‘technical debt’ – the idea that limited systems built in haste will require future development spending – is also higher if governments and other providers rush to build systems in weeks or months rather than thinking longer term.

Financial burden on businesses

Businesses that require vaccinations for customers or employees will need systems and additional resources for reviewing vaccine passports, which could create a financial burden for firms already struggling with depleted reserves as they try to reopen. In certain contexts, like health and social care, there may be existing systems in place that have tracked and verified vaccinations, but many firms in other sectors are likely to be starting from the ground up: having to procure new systems, train staff and employ ‘security’ staff to administer their use.

There are other possible costs businesses will need to consider. For example, it is unclear what liabilities a venue would face if customers became infected with COVID-19 despite using vaccine passports, if the scheme allowed the venue to (say) reduce the space between theatre seats or between restaurant tables.

There may be related risks for businesses in terms of reputational damage, should such a situation occur. For example, if an outbreak were traced back to a cinema using the scheme to remove mask-wearing and spacing requirements, that cinema might be seen, fairly or unfairly, as a riskier venue.

Costs to users

While almost all countries have chosen to make vaccinations freely available to all as they become eligible, schemes that rely on testing could impose additional costs on users of the system. The more widespread a scheme is, the more burdensome any repeat costs could
become on those who must rely on testing that is not freely available.

In the UK, testing is widely and freely available for most people, and the Government has a service that allows citizens to request free lateral flow tests. But, even in the UK context, testing companies are charging customers for PCR tests required for international travel.148

Interaction with the wider public health system

Effect on vaccine uptake

One possible public health reason for introducing a COVID vaccine passport system would be to encourage uptake of COVID-19 vaccines, in order to reach herd immunity faster. This calculation will be specific to different countries, as rates of vaccine hesitancy vary greatly and the strength of incentivisation may also vary substantially. In England it is not clear there would be much additional benefit from further incentivising vaccination through a vaccine passport system, as more than 95% of people aged 60 and over have already been vaccinated with a first dose,149 and nearly 90% of unvaccinated adults say they would take a vaccine if available.150

Some preliminary studies show a mixed picture as to whether vaccine passports would incentivise people to get vaccinated;151 further evidence and investigation will be necessary for any given local context.152 There may be a downside risk that certification could reduce trust and increase vaccine hesitancy if the scheme is seen as introducing mandatory vaccination by the back door.153 This may be particularly acute in some minority ethnic communities that have been oversurveilled historically, leading to a further deterioration in trust.154 This is an area where further research is needed.

Placing an additional burden on the public health system

As well as raising opportunity costs in relation to the wider vaccination effort, these systems could place a direct administrative burden on vaccine programmes, and on healthcare staff administering vaccinations and handling medical records, who are already overstretched by the additional workloads imposed by the pandemic.155 While some digital systems may be able to reuse existing vaccination records with minimal additional work on the part of frontline health staff, non-digital solutions and obtaining proof of exemption (and authorising some digital schemes) could place additional strain on general practitioners and family doctors, worsening other health outcomes, unless there are easy and clearly signposted alternative routes or additional resources are made available to general practitioners.

This may be particularly acute in countries still developing digital infrastructure. In their evidence, Access Now give the hypothetical example of a vaccination drive in a village in India. The administrator of the vaccine is required not only to vaccinate the people there, but to authenticate their identity, create their unique identity on the government’s platform and log their vaccination status. If the internet goes down, vaccinations are halted until it is restored: the lack of technological infrastructure means people are left unvaccinated that day, despite people and vaccines being present. Similar cases, in the distribution of rations and other social benefits, have been recorded in India.89

Setting interoperable global standards

Standards are important for complex technological systems to function properly. In a globalised world, standards act as an important process for establishing shared rules, practices, languages, design principles and procedures. They allow a diversity of actors, taking a multiplicity of approaches in a local context, to maintain coherence for individuals interacting with a technology, to work together to avoid duplication of effort, and, as far as possible, to avoid a lack of interoperability between different systems in different places.98 COVID vaccine passport schemes will require interoperable standards, particularly in the context of international travel and border control, and especially if governments allow private actors to develop a diversity of certification applications.

Who is responsible for setting standards

Designing and setting standards is not a neutral process.35 Given the impact they have, standards will often be contested by different countries and interest groups, as they can codify and project particular world views. Standard-setting is not a one-off process, as standards require maintenance and iteration to remain useful and consistent. The process of setting new standards can sometimes be remote from those on the receiving end of novel technologies. The development of COVID vaccine passport systems will need inclusive processes for the creation and maintenance of standards.35

What they should include

As discussed in the Science and public health chapter, there are a number of possible pieces of COVID-19 risk-relevant health information that could be included in a COVID vaccine passport scheme. Decisions will need to be made about:35

  • the risk factors within the system that will be represented in models
  • how to measure or approximate the values of these variables/factors
  • where to define the boundaries of the system, and how to assign confidence to data and components inside and outside these boundaries.

Those responsible for standard setting in COVID vaccine passport systems will need to decide which tests, vaccinations and dosing regimens will be accepted within a specific, and often geographically contained, certification system.

In particular, many high-income countries have primarily relied on vaccines developed in the United States and Western Europe, and have not approved vaccines developed in Russia and China.161 Travellers from low- and medium-income countries, who have primarily relied on Russian and Chinese vaccines, could be denied access to countries recognising only European or North American vaccines or be required to undertake self-isolation or even (costly) hotel quarantining to access those countries. It could also lead to domestic discrimination against migrants from low- and medium-income countries, if access to venues and services is conditional on vaccines used in those high-income countries and migrants are vaccinated with ‘invalid’ vaccines.

Security and fraud

Any digital vaccine passport scheme that successfully restricts and permits access to certain rights and freedoms will inevitably prompt attempts to defraud it. The greater the differentials in access, the stronger the incentive will be. Steps will need to be taken to ensure any vaccine passport scheme is not vulnerable to fraud or accusations of fraud. The Global Privacy Assembly, a global forum for privacy and data protection authorities, emphasises that the cyber security risk of any digital COVID vaccine passport system or app must be fully assessed, taking full account of the risks that can emerge from different actors in a global threat context.162

Within the first week of Israel’s Green Pass scheme, it was reported that a black market for forged passes had emerged on the messaging app Telegram163 and subsequently that the police were looking for hundreds of individuals who had bought counterfeit certificates.164 In February 2021, Europol issued an Early Warning Notification on illicit sales of false negative COVID-19 test certificates, citing multiple incidents across the continent and saying that, as long as travel restrictions remained in place, ‘it is highly likely that production and sales of fake test certificates will prevail. Given the widespread technological means available, in the form of high-quality printers and different software, fraudsters are able to produce high-quality counterfeit, forged or fake documents.’165

Counterfeit vaccine passports could undermine the public health rationale for certification by allowing those at a potentially high risk of transmission to engage illegitimately in riskier activities, creating a situation similar to there being no certification at all. It could even be worse: those unaware of counterfeit vaccine passports might make inaccurately low risk assessments of situations and not use other, more informal mitigations (such as social distancing). Widespread counterfeits could also undermine public confidence in vaccine passports, if individuals no longer trust any other individual’s certification to be valid and become more suspicious of others’ claims to be vaccinated, recovered or otherwise at a relatively lower risk to themselves and others.

Recommendations and key concerns

Anyone developing a COVID vaccine passport scheme should:

  • consider a series of design principles at all stages of developing a system – that will help to minimise harms and the risk of unintended consequences, and maximise the chances of a system working and commanding public confidence – and conduct small-scale pilots before further deployment
  • protect against digital discrimination by creating a non-digital (paper)
    alternative
  • be clear about how vaccine passports link to or expand existing state data systems (in particular health records and identity).

If a government does want to move ahead with a COVID vaccine passport scheme, it should:

Clarify its own role. Whether it is building its own system, permitting others to do so, or attempting to prohibit such systems altogether, government will have a part to play. This may involve formulating design principles, such as those set out below, and ensuring they are met. It should also involve international discussions about interoperability across borders.


Be clear about the relationship between a COVID vaccine passport scheme and wider plans for digital identity. If governments want to make a case for wider digital identity schemes, then they should have those discussions with the public on their own terms. Conflating digital identity systems with emergency plans for COVID vaccine passport systems could damage public confidence in both technical applications. Governments should also be careful that consideration of any COVID vaccine passport schemes does not force them into longer-lasting decisions about digital identity systems.


Design systems that are as accessible as possible. This will include ensuring testing is free at the point of use: ideally free, or at the very minimum at-cost, in private applications (although governments would do well to subsidise private testing, if such schemes are allowed to go ahead, as many already do with workplace testing).

In short, governments should provide clarity on:

  • The role they will play in any system – whether taking ownership of a system themselves, or regulating others. Only governments can take a holistic view of the opportunities and potential harms of a system to the society they govern.
  • How long a system should endure.
  • What the opportunity costs are of focusing on vaccine passports at the expense of other interventions.
  • The impact of vaccine passport schemes on other elements of the public
    health system, including vaccine uptake and vaccine distribution.
  • The practical expectations on others involved in making a system work, such
    as businesses.
  • Standard-setting: governments need to be clear about which tests, vaccinations and dosing regimens will be accepted for domestic usage, and provide unambiguous criteria for inclusion and exclusion based on reliable consideration of the available scientific evidence and the background context of infection rates and variants present in their jurisdiction. The simplest solution is to make the list of accepted vaccinations coterminous with those approved for use by the jurisdiction’s relevant medicines regulators, e.g. the MHRA in the UK, the EMA in the European Union or the FDA in the USA.
  • The risk of vaccine nationalism in the contexts of border control and domestic access for migrants, especially in the medium to long term. At a minimum, countries should aim to minimise these potential oversights by operating a mutual recognition scheme that allows vaccines approved by any ‘trusted’ medicines regulator and/or listed on the WHO’s Emergency Use Listing to be included within a vaccine passport scheme, or at least not excluded on the basis of lack of domestic approval (see the sketch after this list). Not only would mutual recognition and permissive approval enhance individual fairness, it would also reduce the risk of entrenching existing international inequalities, and the risk of geopolitical divides being worsened in the long term by inconsistent requirements and the systemisation of ‘vaccine worlds’.
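
A minimal sketch of such a mutual recognition rule follows, in Python. The regulator names are real, but the approval sets are illustrative placeholders, not authoritative records of any regulator’s decisions:

    # Sketch of mutual recognition: accept a vaccine if any trusted regulator,
    # or the WHO Emergency Use Listing, has approved it. The sets below are
    # illustrative placeholders, not real regulatory records.
    TRUSTED_APPROVALS = {
        "MHRA": {"Vaccine A", "Vaccine B"},
        "EMA": {"Vaccine A", "Vaccine C"},
        "FDA": {"Vaccine B", "Vaccine C"},
        "WHO_EUL": {"Vaccine A", "Vaccine B", "Vaccine C", "Vaccine D"},
    }

    def recognised(vaccine: str) -> bool:
        # Permissive approval: recognition by any trusted authority suffices,
        # so no one is excluded solely for lack of domestic approval.
        return any(vaccine in approved for approved in TRUSTED_APPROVALS.values())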

Finally, they should incorporate policy measures to mitigate ethical and societal risks or harms identified above.

Public legitimacy


Another consideration for any COVID vaccine passport scheme is its perceived legitimacy. Illegitimate systems are undesirable both because they lack a sufficient political justification and because an illegitimate system will be likely to face significant resistance to its implementation. Legitimacy is a contested concept, and different attributes will be required for a system to be legitimate in different cultures and under different moral and political philosophies. Here, we are concerned with legitimacy in democratic political systems.

In part, legitimacy in democratic political systems can come from following due process. This includes debate by representatives in a legislature and subsequent legislation, or ensuring proportionality and respect for human rights in accordance with existing legal and constitutional frameworks, as we have discussed in previous chapters. However, another important source of legitimacy in democratic political systems is the consent of citizens and public support for particular measures. This means understanding what the public are willing to endorse, and continuously involving the public at each stage of development.

Polling

One approach to public legitimacy of vaccine passports would be through surveys and polls. Polls conducted in the UK suggest that public support for COVID vaccine passports varies depending on the availability of vaccinations, the particular use cases, and the providers of
certification:

  • An academic study of UK public opinion during March–April 2020, the height of the first wave of COVID-19 in the UK, found most people did not object to immunity passports (introduced as ‘imply[ing] that you are now immune and therefore unable to spread the virus to other people’) and 60% of people wanted one (to varying degrees), although 20% thought them unfair and opposed them completely.166
  • Polling by Deltapoll in January and February 2021 found support for restrictions at an international level. At a domestic level, January polling found narrow support (42–39%) for vaccinated people being allowed to do things (meeting friends, eating in restaurants, using public transport) that others could not.167 Support had risen 12 points by the end of February, although passports and certificates were not explicitly mentioned.
  • Polling published by YouGov in March 2021 found support for a vaccine passport system, but with greater opposition in younger age groups, varying levels of support for different use cases (from 72% in favour of use at care homes, to 31% at supermarkets) and opposition to private companies being allowed to develop their own systems. Support was higher for passports once everyone had been offered a vaccine, compared to during vaccine rollout – which, as discussed above, is when the scientific case for using them is weaker.168 Somewhat contradicting the general support for certification is a separate YouGov poll from early March. This found that 79% of respondents thought those vaccinated should still be subject to the same COVID-19 restrictions as others, until most people had been vaccinated.169
  • Ipsos MORI polling in March 2021 found support for ‘vaccine passports’ was highest for international travel (78%) or visiting relatives in a care home (78%) or hospital (74%), but also high for theatres and indoor concerts (68%), visiting pubs and restaurants (62%) and using public transport (58%, though 25% were opposed). Nonetheless, one in five of those polled thought the ethical and legal concerns outweighed any potential benefits to the economy, with the young and ethnic minorities more concerned.170
  • Research conducted by Ipsos MORI at the end of March 2021, for King’s College London and the University of Bristol, found 39% of those polled thought unvaccinated people would face discrimination (28% did not), with 44% worried that vaccine passports would be sold on the black market. Half of those polled didn’t think passports would have a negative impact on personal freedoms, though a quarter thought they would reduce civil liberties. Just over a fifth of people thought passports would be used for surveillance by the Government, while more than two fifths did not, but concern was much higher among minority ethnic groups.171
  • A survey by De Montfort University, Leicester, found 70% agreed with the need for vaccine passports to travel internationally, but only 34% agreed with such a need for pubgoers or diners (compared to 45% against).172
  • Cultural sector consultancy Indigo found around two-thirds of people would be comfortable with passports or testing to attend live events (with a fifth and close to a third uncomfortable, respectively), but that 60% of people would be uncomfortable if this meant that other public health measures or restrictions inside the venue were dropped.173
  • Polling for the Serco Institute found broad support for passports across different settings, assuming there were ‘appropriate protections and exemptions for people who are precluded from taking the vaccine due to medical conditions’.174

The Ada Lovelace Institute’s own polling, with the Health Foundation, found more than half (55%) of those polled thought a vaccine passport scheme would be likely to lead to marginalised groups being discriminated against. 48% of people from minority ethnic backgrounds and 39% of people in the lowest income bracket (£0–£19,000) were concerned that a vaccine passport scheme would lead to them being discriminated against. While twice as many respondents (45%) disagreed with a ban on vaccine passports compared to those agreeing there should be a ban (22%), a third of respondents (33%) were undecided.

Taken together, these polls point to a lack of societal consensus on the way forward for vaccine passport schemes. Publics in the United States and in France show similar divisions.175

Deeper engagement

Surveys and polls are a powerful tool for measuring mass trends in attitudes, establishing broad baselines in opinion, or understanding what proportion of the public agree with particular statements. The information they provide helps us to understand the pulse of a population’s attitudes. But these methods fail to give a comprehensive understanding of people’s perspectives on complex topics, such as the ethical and societal challenges of COVID vaccine passports and related digital technologies, and risk boiling these complex issues down into statements that can be answered with ‘yes’ or ‘no’, ‘strongly disagree’ or ‘not sure’. Framing questions as simply about vaccine certification schemes also risks focusing on one possible measure rather than taking a holistic view of other measures that governments could deploy.

If governments want a deeper understanding of what the public thinks about these issues, and of the trade-offs people might be willing to make, they need to provide a space for this through more deliberative means.

Citizens’ juries and councils enable detailed understanding of people’s perspectives on complex topic areas. For example, the Ada Lovelace Institute has recently undertaken a year-long Citizens’ Biometrics Council to understand public preferences on the use and governance of biometrics technologies.176 Focus groups and engagement workshops can better capture the nuance in people’s opinions, and create rich data to analyse and describe in reports and recommendations. Qualitative and deliberative methods complement the population-level insights provided by polling by offering greater detail on why people hold certain opinions, what values or information inform those views, and what they would advise when informed.

This will be particularly important given the access to government decision-makers that other groups – lobbyists for particular industries, private companies building vaccine passport solutions – may have already had.3 In the UK, lobbying and corruption are currently near the top of the news agenda: given the importance of public trust to making government plans for lifting lockdown work, and to deploying new technology, it is vital that governments understand the position of different publics and retain their trust.

Recommendations and key concerns

 

We recommend undertaking rapid and ongoing online public deliberation, designed to be iterative across different points of the ‘development’ cycle of COVID vaccine passports: starting before any decision has been taken to implement such a scheme, and continuing to engage diverse publics through the design and implementation of any scheme if and as it develops.

 

Key groups to involve (beyond nationally representative panels of the population) include any groups disproportionately affected by the pandemic to date, and ‘non-users’ who could be excluded from a system, including those who were unable to have a vaccine. Governments should use existing community networks to reach people where they are located.

 

Public engagement to understand what trade-offs the public would be willing to make should be seen as a complement to, and not a replacement for, existing guidance and legislation. It should consider COVID countermeasures in the round (not just COVID vaccine passports) and should be clear about what is and what is not up for public debate.

 

Public engagement is important at all stages of development:

  • Deliberation should be undertaken before any decision on implementation is made, covering the ethical trade-offs the public is willing to make and whether they think such a scheme is acceptable at all.
  • If deliberation establishes that such a scheme is acceptable, or a decision has already been taken to implement a scheme, then public deliberation should be undertaken on the basis of a clear proposal, to stress-test the scheme and to ask which implementation of vaccine passports would be most likely to engender benefit and generate least risk or harm to all members and groups in society.
  • If a scheme is implemented, then governments should continue to engage with the public to assess the impact of the technologies on particular groups within society, reflect on the experiences of individuals using the scheme in practice, and to inform and guide decision-making about whether such a scheme should continue, how it should be brought to an end or how it should be extended. Deliberation should include future risks and global consequences.
  • All stages are important, but even if deliberation is not possible at one stage, it can still be implemented at other stages.

Future risks and global consequences

The focus of most discussions of COVID vaccine passport schemes (including previous chapters of this report) has been on the immediate and near-term questions of practicality, legality, ethics and acceptability of systems being developed right now, looking at opportunities and concerns over the next year or two. These discussions centre on schemes being launched within months, including the scheme already operational in Israel and the European Union scheme planned for launch before summer 2021, and their operation over that period, as mass vaccination campaigns roll out around the world.

Even if a country is able to establish that all design questions have been answered, that the societal, legal and ethical tensions have been resolved, that there is no way of adapting existing systems and that a new system needs to be built, the long-term effects of building such systems and how they could shape the future must be considered. In particular, consideration should be given to whether these systems will:

  • Become a permanent fixture of future pandemics?
  • Be expanded to cover a wider and more granular set of health statuses, outside the pandemic context?
  • Change norms and expectations about the bounds of sharing personal health data?
  • Create wider path dependencies that shape the adoption of other technologies in future?

These future risks are also vital to any public engagement. The public may not foresee all the unintended consequences, and may discount effects on their future selves and future generations, especially with the prospect of escaping a cycle of lockdowns faster. States, with longer time horizons and broader duties to all their citizens, need to consider the future risks alongside the immediate pressures on their publics, and encourage their publics to do so through deliberative and open engagement.

Permanent emergency solutions

Once time, resources and political capital have been invested in their construction, it is unlikely that these systems and their underlying infrastructure will be rolled back once the crisis that initially justified their creation has passed. There are arguments for maintaining such systems: for example, the Tony Blair Institute suggests, in its case for digital health passports, that ‘Designed properly and integrating testing status, a health passport would also help us manage the virus and prepare for new strains and future pandemics.’178

It is likely that SARS-CoV-2 (the virus that causes COVID-19) will become endemic, like seasonal flu and other infectious disease-causing pathogens (or better contained, like measles, or even eliminated), at which point it will no longer require the emergency and intrusive measures justified by its present transmissibility and fatality. Accepting this as a reasonable scientific expectation for the near future raises concerns about the longevity of emergency apparatus, and that such infrastructure – once built – will not be stripped back.

In response, it has been suggested that sunset clauses should be built into any COVID vaccine passport scheme, with primary legislation clearly setting out the date or conditions by which a scheme will come to an end, and procedures designed into the system to allow that to happen, e.g. a process for the permanent deletion of any data, databases or apps that comprise the technical system.162

Clauses could be included in any use of emergency powers or particular legislation setting out government powers during the COVID-19 pandemic, and include time horizons like the end of a particular year or the end of the crisis according to set criteria (a declaration by the WHO, or cases of infection at a certain level for a specified time period). The clause could also include any process by which a scheme may be explicitly reapproved and continued. In the UK, it has been suggested that a majority vote in both Houses of Parliament could be required to continue any system.180 The Danish Government’s plan for the use of its Coronapas includes an August 2021 ‘sunset clause’ for the use of the app other than for tourism and travel, with discussions about the experience of using the system in May and June 2021, to decide on its continued scope and use.181
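
One way to make a sunset clause operative rather than merely aspirational is to embed the statutory cut-off in the system’s own logic, so that verification stops and wind-down begins automatically. The sketch below is purely illustrative: the date, names and wind-down steps are hypothetical stand-ins for whatever the legislation would actually specify.

    from datetime import date
    from typing import Optional

    # Hypothetical statutory sunset date, as primary legislation might fix it.
    SUNSET_DATE = date(2021, 8, 1)

    def scheme_is_active(today: Optional[date] = None) -> bool:
        """Return False once the statutory sunset date has been reached."""
        return (today or date.today()) < SUNSET_DATE

    def decommission() -> None:
        """Placeholder for legislated wind-down steps: permanent deletion of
        databases, revocation of signing keys, retirement of verifier apps."""
        print("Sunset reached: deleting records and revoking credentials.")

    def verify(credential: dict) -> bool:
        """Refuse all verification once the scheme has reached its sunset date."""
        if not scheme_is_active():
            decommission()
            return False
        # ... normal signature and status checks would go here ...
        return credential.get("status") == "valid"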

These will not always be enough to guarantee the system does not become a permanent fixture. Take, for example, the European Union’s Digital Green Certificate.95 In one way, it is clearly a time-limited proposal with a clear end point, albeit with quite a high bar: ‘the Digital Green Certificate system will be suspended once the World Health Organization (WHO) declares the end of the international public health emergency caused by COVID-19.’ However, as a reminder of how these systems become a permanent fixture of life, the European Commission notes immediately afterwards that ‘Similarly, if the WHO declares a new international public health emergency caused by COVID-19, a variant of it, or a similar infectious disease, the system could be reactivated.’

This creates a kind of path dependency: once this system is built, it becomes a tool for future emergencies, including any future outbreak of COVID-19 or other respiratory pandemics. This in itself does not pose too many additional concerns, beyond those raised in previous chapters. If it can be justified in our current emergency circumstances, there are good reasons to think it could be justified in similar future emergencies, and a pre-existing system could allow it to be spun up much faster. But many of these systems are being discussed as if they are one-off temporary solutions. If the plan from the start is for them to form the basis for future respiratory pandemic preparedness, they should be honestly presented to the public in these terms. They will also require ongoing investment and maintenance.

Scope creep

There is another version of this path dependency: the purpose and design of the system may expand beyond the narrow focus on an emergency response to become business as usual. The digital nature of the system particularly lends itself to iteration, gradual expansion and ‘scope creep’. Some forms of expanded functionality might be in keeping with a public health purpose, for example, collecting data for disease surveillance and epidemiological research for COVID-19, and perhaps integrating symptom-tracking systems with vaccination status.

Other forms may be more sweeping. Health statuses such as physical and mental health records and genetic-test results could also be incorporated to provide more sophisticated risk scoring, or even inclusion and exclusion on the basis of health risks beyond COVID-19, moving from COVID status certification to health status certification.3 medConfidential suggests a provocative thought experiment: any solution under consideration should be tested against whether we would accept the same system of health-information verification and differential access for a mental health condition or for HIV.184

Some have pointed to the history of biometric technologies as an analogous example of scope creep, with the initial uses of biometrics limited to exceptional circumstances, such as detention centres and crime investigations, before gradually expanding into everyday tasks,
such as unlocking our phones or logging into our bank accounts. Technologies that seemed intrusive when introduced become commonplace step by step, first by their use in extremes and then each use setting a precedent for the next.185 That is not to say that the gradual
expansion of biometrics is inherently problematic – they are clearly useful in many applications – but often technologies are developed and rolled out before there is sufficient engagement with the public about what use cases they find acceptable and what criteria for effectiveness
and governance they would set.

Similarly, the continued use and expansion of a COVID vaccine passport system could possibly be justified if the tensions in previous chapters are resolved and COVID-19 remains a long-term danger or we deem the systems useful enough to be repurposed for other health concerns. The key concern is that conversations and public engagement need to happen at each stage of continued use and expansion. Each use needs to be evaluated on its own terms at the time of deployment, informed by lessons learned from the previous operation of any similar systems, and driven by informed decisions rather than allowed to continue through software updates without transparency or accountability to citizens.

There is a risk that these important conversations about continued use may not happen, or may lose salience, when the immediate danger has passed and citizens have to focus their minds on rebuilding post-pandemic.

Others have suggested the system could be expanded beyond the health context, such as for identity verification for other purposes and generalised surveillance.186 However, the greatest impact of developing COVID vaccine passport systems may not be that the core of the system
is directly expanded into a permanent form of digital identity. Rather, the implementation of the system might set precedents and norms that influence and accelerate the creation of other systems for identification and surveillance.

Wider path dependencies

Just as path dependencies in terms of existing infrastructure, legal mechanisms and social and ethical norms will shape any adoption of COVID vaccine passport systems, so will those systems shape the paths available to decision-makers at future junctures.

Decisions made today may have implications for many years to come. For example, if we put in place widespread facial recognition systems to verify identity under these schemes, will we then re-evaluate the appropriateness of using facial recognition for other purposes e.g. age verification in hospitality venues? Or will we be locked into a path, once the capital has been invested, of installing and ironing out the operational issues in these systems? In this scenario, venues find themselves with a very different cost-benefit calculation than they did before the pandemic.

Comparisons were drawn during our expert deliberation to post-9/11 security infrastructure at airports, and the once limited but now essentially mandatory Aadhaar identity system in India. There was pessimism about the likelihood of COVID vaccine passport technologies being ‘switched off’ once the crisis has passed, and about their tendency to create path dependency: ‘Once a road is built, good luck not using it,’ as one participant in our expert deliberation put it. This might be a particular issue if the status of other health conditions were to be added.

Continuous development

If we recognise that these technologies are not intended to disappear once the immediate danger has passed, then we must think of these technologies as perpetually unfinished. This is especially true of the software aspects, which will require constant updates to remain
compatible and consistent with other software systems, legislation and standards.

Therefore, ethical evaluations of COVID status certification systems will require acknowledgment of uncertainty, risk and the inherent unfinished nature of the technology. Where significant uncertainty exists, some suggest that decision-makers can learn from precautionary and anticipatory approaches in sustainable development and other fields.3

Wider information flows and changing expectations

Even if the scope of statuses and purposes in the systems themselves remains limited, concerns were raised during our expert deliberation about how information in the system might be used more broadly than intended.

Even with the most privacy-preserving technology, health data could come into contact with different actors, including those in healthcare settings, employers, clients, police, pubs and insurance companies, who may have different levels of experience and trustworthiness in handling personal data. Private companies that offer COVID vaccine passports may also have commercial incentives to monetise any personal data they collect. Both risk data being shared with third parties and being repurposed in future for uses the individual did not consent to. This concern is likely to be less significant if high standards of privacy-preserving design are followed in the design phase, and if data protection law is adequately enforced.
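
One concrete form of privacy-preserving design is strict data minimisation: the credential carries only the single claim a verifier needs, so there is nothing richer to leak or monetise. The sketch below is illustrative only. A real scheme would use public-key signatures so that verifiers cannot forge credentials; this toy version uses an HMAC shared secret for brevity, and the field names are hypothetical.

    import hashlib
    import hmac
    import json
    from datetime import date

    # Stand-in for a signing key held by the health authority.
    ISSUER_KEY = b"issuer-secret"

    def issue_credential(status_ok: bool, expiry: str) -> dict:
        """Sign only the minimal claim: a boolean status and an expiry date."""
        payload = json.dumps({"status_ok": status_ok, "expiry": expiry},
                             sort_keys=True)
        tag = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
        return {"payload": payload, "tag": tag}

    def verify_credential(cred: dict, today: date) -> bool:
        """The verifier learns a single bit: valid or not. No diagnosis, test
        type or vaccine brand is disclosed, because none is in the credential."""
        expected = hmac.new(ISSUER_KEY, cred["payload"].encode(),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, cred["tag"]):
            return False
        claims = json.loads(cred["payload"])
        # ISO-format dates compare correctly as strings.
        return claims["status_ok"] and today.isoformat() <= claims["expiry"]

    # Example: a venue sees only True or False.
    print(verify_credential(issue_credential(True, "2021-12-31"), date.today()))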

Finally, the implementation and existence of a system of health data-sharing in exchange for differential access to services could change social norms about the acceptable circumstances for health data-sharing in future, particularly if the system has any durability beyond the immediate emergency circumstances.35 This is not to prejudge what those changes will be – an ineffective and mismanaged system could damage public trust in digital identity systems and health data-sharing, while an apparently successful one might embed those ideas as a normal part of daily life. Either way, it will have an effect on the social norms and ethical reality in which we evaluate the system retrospectively, for good or ill, and it will shape the attitudes we take into future systems with similar properties.

Recommendations and key concerns

 

The current uncertainty, ongoing social anxiety and economic cost of the pandemic make the technical fix of a novel tool and emergency infrastructure seem attractive, but the starting point should be identifying specific problems and looking at whether and how these could be addressed through existing practices and laws.

 

If these systems are intended to be used in the long term, then governments should be upfront about that intention and undertake design, legal and ethical assessment, and public deliberation, on that basis, rather than pretending they are building a temporary system.

 

This should include – in primary legislation, where possible – details of:

  • Sunset clauses, including clear procedures for deciding whether to continue schemes, and details of legislative oversight and further public deliberation.
  • Commitments not to engage in ‘scope creep’; any expansion to the system should undergo its own separate assessment, with all the criteria outlined in other sections.
  • Proper investment of resources to ensure systems are properly maintained during use and don’t break down, and so exclude people or otherwise unexpectedly fail.
  • Clear, published criteria, established by governments and other providers, for evaluating the success of a system at achieving its stated purpose, and any side effects or externalities caused by the creation of these systems. This might include epidemiological modelling, as far as is possible, of the system’s effect on COVID-19 spread within society, and economic evaluation of the additional marginal benefit provided by the system. Any such evaluation should be continuous, with regular public reviews and updates.

Conclusion

In this report the Ada Lovelace Institute sets out detailed recommendations under six requirements for policymakers, developers and designers to work through, to determine whether a roll-out of vaccine passports could navigate risks to play a socially beneficial role.

The six requirements for policymakers, developers and designers are:

  1. Scientific confidence in the impact on public health
  2. Clear, specific and delimited purpose
  3. Ethical consideration and clear legal guidance about permitted and restricted uses, and mechanisms to support rights and redress and tackle illegal use
  4. Sociotechnical system design, including operational infrastructure
  5. Public legitimacy
  6. Protection against future risks and mitigation strategies for global harms.

This report draws on a wide range of evidence:

  • Evidence submitted as part of an open call during January and
    February 2021, which can be found on the Ada Lovelace Institute
    website.
  • A rapid deliberation by an expert panel, summarised in the February 2021 report What place should COVID-19 vaccine passports have in society? The deliberation was chaired by Professor Sir Jonathan Montgomery with leading experts in immunology, epidemiology, public health, law, philosophy, digital identity and engineering.
  • A series of public events on the history and uses of vaccine passports, their possible economic and epidemiological impact, their ethical implications, and the socio-technical challenges of building a vaccine passport system.
  • An international monitor of the development and use of vaccine passport schemes globally.
  • Desk research and targeted interviews with experts and developers.

The report concludes that building digital infrastructure that enables different actors across society to control rights or freedoms on the basis of individual health status – with all the potential benefits and harms that could arise from doing so – should:

  1. Face a high bar: to build from a secure scientific foundation, with understanding of the full context of the sociotechnical system, and mitigate some of the biggest risks through law and policy.
  2. Not prove a technological distraction from the only definitive route to reopening societies safely and equitably: global vaccination.

At the current point in the pandemic response, there hasn’t been enough time for real-world models to work comprehensively through these challenging but necessary steps, and much of the debate has focused on a smaller subset of these requirements – in particular technical design and public acceptability.

Despite the high thresholds, and given what is at stake and how much is still uncertain about the pathway of the pandemic, it is possible that the case can be made for vaccine passports to become a legitimate tool to manage COVID-19 at a domestic, national scale, as well as supporting safer international travel.

As the pandemic response continues around the globe, evidence will continue to emerge, and more detail will come into the public domain about possible models and pilot schemes. We hope the structures developed here remain valuable for decision-makers in industry and government, and support efforts to ensure that – if vaccine passports are developed and deployed – that happens in a way that supports a just, equitable society.

Acknowledgements

We are indebted to the many experts and organisations who contributed evidence, spoke at events and briefings, demonstrated tools, and took part in the expert deliberation. We’d especially like to thank Professor Sir Jonathan Montgomery for chairing the expert deliberation, and Gavin Freeguard, who has made substantial contributions to the delivery of this project as a consultant.

This project has been supported by the European AI Fund, a collaborative initiative of the Network of European Foundations (NEF). The sole responsibility for the project lies with the organiser(s) and the content may not necessarily reflect the positions of European AI Fund,
NEF or European AI Fund’s Partner Foundations.

Participants in the expert deliberation:

Sir Jonathan Montgomery (chair) is Professor of Health Care Law at University College London and Chair of Oxford University Hospitals NHSFT. He was previously Chair of the Nuffield Council on Bioethics and Chair of the Health Research Authority.

Professor Danny Altmann is Professor of Immunology at Imperial College London, where he heads a lab at the Hammersmith Hospital Campus. He was previously Editor-in-Chief of the British Society for Immunology’s ‘Immunology’ journal and is an Associate Editor at ‘Vaccine’ and at ‘Frontiers in Immunology.’

Professor Dave Archard is Emeritus Professor of Philosophy at Queen’s University Belfast. He is also Chair of the Nuffield Council on Bioethics, a member of the Clinical Ethics Committee at Great Ormond Street Hospital and Honorary Vice-President of the Society for Applied Philosophy.

Dr Ana Beduschi is an Associate Professor of Law at Exeter University. She currently leads the UKRI ESRC-funded project on COVID-19: Human Rights Implications of Digital Certificates for Health Status Verification.

Professor Sanjoy Bhattacharya is Professor in the History of Medicine, Director of the Centre for Global Health Histories and Director of the WHO Collaborating Centre for Global Health Histories at the University of York.

Dr Sarah Chan is a Chancellor’s Fellow and Reader in Bioethics at the Usher Institute, University of Edinburgh. She is also Deputy Director of the Mason Institute for Medicine, Life Sciences and Law, an Associate Director of the Centre for Biomedicine, Self and Society and a member of the Genomics England Ethics Advisory Committee.

Dr Tracey Chantler is Assistant Professor of Public Health Evaluation & Medical Anthropology at the London School of Hygiene and Tropical Medicine. She is also a member of the Immunisation Health Protection Research Unit, a collaborative research group involving Public Health England and LSHTM.

Professor Robert Dingwall is Professor of Sociology at Nottingham Trent University. He is also a Fellow of the Academy of Social Sciences and a member of the Faculty of Public Health. He sits on several government advisory committees, including NERVTAG (New and Emerging Respiratory Virus Threats Advisory Group) and the JCVI (Joint Committee on Vaccination and Immunisation) sub-committee on Covid-19.

Professor Amy Fairchild is Dean and Professor at the College of Public Health, Ohio State University. She is also Co-Director of the World Health Organization Collaborating Center for Bioethics at Columbia’s Center for the History and Ethics of Public Health.

Dr Matteo Galizzi is Associate Professor of Behavioural Science at the London School of Economics. He is also Co-Director of LSE Behavioural Lab and coordinates the Behavioural Experiments in Health Network and the Data Linking Initiative in Behavioural Science.

Professor Michael Parker is Director of the Wellcome Centre for Ethics and Humanities and Director of the Ethox Centre at the University of Oxford. He is also a member of the Government’s Scientific Advisory Group for Emergencies, the Chair of the Genomics England Ethics Advisory Committee and a non-executive director of Genomics England.

Dr Sobia Raza is a Senior Fellow at the Health Foundation within the Data Analytics team. She is also an Associate and previous Head of Science at the PHG Foundation.

Dr Peter Taylor is Director of Research at the Institute of Development Studies. He was previously the Director of Strategic Development at the International Development Research Centre.

Dr Carmela Troncoso is Assistant Professor, Security and Privacy Engineering Lab at the École Polytechnique Fédérale de Lausanne. She was a leading researcher on DP-3T and is also a member of the Swiss National COVID-19 Science Task Force’s expert group on Digital Epidemiology.

Dr Edgar Whitley is Associate Professor of Information Systems at the London School of Economics. He is co-chair of the UK Cabinet Office Privacy and Consumer Advisory Group and was the research coordinator of the LSE Identity Project on the UK’s proposals to introduce biometric identity cards.

Dr James Wilson is Professor of Philosophy and Co-Director of the Health Humanities Centre at University College London. He is also an Associate Editor of Public Health Ethics and a member of the National Data Guardian’s Panel and Steering Group.

The following individuals and organisations responded to our open call for evidence:

  • Access Now
  • Ally Smith
  • Dr Baobao Zhang, Cornell University
  • BLOK BioScience International
  • Dr Btihaj Ajana, King’s College London
  • Consult Hyperion
  • The COVID-19 Credentials Initiative, Linux Foundation Public Health
  • Professor Derek McAuley, Professor Richard Hyde and Dr Jiahong
    Chen, Horizon Digital Economy Research Institute, University of
    Nottingham
  • Dr Dinesh V Gunasekeran, National University of Singapore
  • eHealthVisa
  • The Electronic Frontiers Foundation
  • James Edwards
  • Professor Julian Savulescu and Dr Rebecca Brown, Oxford Uehiro
    Centre for Practical Ethics, University of Oxford
  • Marcia Fletcher
  • medConfidential
  • The PathCheck Foundation
  • Patrick Gracey, Patrick Gracey Productions Ltd
  • Robert Seddon
  • SICPA
  • Susan Mayhew
  • techUK
  • The Tony Blair Institute for Global Change
  • The UK Pandemic Ethics Accelerator
  • Yoti
  • ZAKA
  • Zebra Technologies.

This report was authored by Elliot Jones, Imogen Parker and Gavin Freeguard.


Preferred citation: Ada Lovelace Institute. (2021). Checkpoints for vaccine passports. Available at: https://www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports/

  1. Hancock, A. and Steer, G. (2021) ‘Johnson backtracks on vaccine “passport for pubs” after backlash’, Financial Times, 25 March 2021. Available at: https://www.ft.com/content/aa5e8372-8cec-4b82-96d8-0019f2f24998 (Accessed: 5 April 2021).
  2. Ada Lovelace Institute (2021) International monitor: vaccine passports and COVID status apps. Available at: https://www.adalovelaceinstitute.org/project/international-monitor-vaccine-passports-covid-status-apps/ (Accessed: 5 April 2021)
  3. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  4. Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021).
  5. Olivarius, K. (2020) ‘The Dangerous History of Immunoprivilege’, The New York Times. 12 April 2020. Available at: https://www.nytimes.com/2020/04/12/opinion/coronavirus-immunity-passports.html (Accessed: 6 April 2021).
  6. World Health Organization (ed.) (2016) International health regulations (2005). Third edition. Geneva, Switzerland: World Health Organization.
  7. Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021).
  8. Ada Lovelace Institute (2021) The epidemiological and economic impact of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=KRUmM-_Jjk4 (Accessed: 7 April 2021).
  9. Wilson, K., Atkinson, K. M. and Bell, C. P. (2016) ‘Travel Vaccines Enter the Digital Age: Creating a Virtual Immunization Record’, The American Journal of Tropical Medicine and Hygiene, 94(3), pp. 485–488. doi: 10.4269/ajtmh.15-0510
  10. Kobie, N. (2020) ‘Plans for coronavirus immunity passports should worry us all’, Wired UK, 8 June 2020. Available at: https://www.wired.co.uk/article/uk-immunity-passports-coronavirus (Accessed: 10 February 2021); Miller, J. (2020) ‘Armed with Roche antibody test, Germany faces immunity passport dilemma’, Reuters, 4 May 2020. Available at: https://www.reuters.com/article/health-coronavirusgermany-antibodies-idUSL1N2CM0WB (Accessed: 10 February 2021); Rayner, G. and Bodkin, H. (2020) ‘Government considering “health certificates” if proof of immunity established by new antibody test’, The Telegraph, 14 May 2020. Available at: https://www.telegraph.co.uk/politics/2020/05/14/government-considering-health-certificates-proof-immunity-established/ (Accessed: 10 February 2021).
  11. World Health Organisation (2020) “Immunity passports” in the context of COVID-19. Scientific Brief. 24 April 2020. Available at: https://www.who.int/news-room/commentaries/detail/immunity-passports-in-the-context-of-covid-19 (Accessed: 10 February 2021).
  12. Ministry of Health (2021) What is a Green Pass? Available at: https://corona.health.gov.il/en/directives/green-pass-info/ (Accessed:
    6 April 2021).
  13. European Commission (2021) Coronavirus: Commission proposes a Digital Green Certificate, European Commission – European Commission. Available at: https://ec.europa.eu/commission/presscorner/detail/en/IP_21_1181 (Accessed: 6 April 2021).
  14. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021)
  15. World Health Organisation (2020) Estonia and WHO to jointly develop digital vaccine certificate to strengthen COVAX. Available at: https://www.who.int/news-room/feature-stories/detail/estonia-and-who-to-jointly-develop-digital-vaccine-certificate-to-strengthen-covax (Accessed: 6 April 2021). World Health Organisation (2020) World Health Organization open call for nomination of experts to contribute to the Smart Vaccination Certificate technical specifications and standards. Available at: https://www.who.int/news-room/articles-detail/world-health-organization-open-call-for-nomination-of-experts-to-contribute-to-the-smart-vaccination-certificate-technical-specifications-and-standards-application-deadline-14-december-2020 (Accessed: 6 April 2021). Reuters (2021), WHO does not back vaccination passports for now – spokeswoman. Available at: https://www.reuters.com/article/us-health-coronavirus-who-vaccines-idUKKBN2BT158 (Accessed: 13 April 2021)
  16. IBM (2021) Digital Health Pass – Overview. Available at: https://www.ibm.com/products/digital-health-pass (Accessed: 6 April 2021).
  17. Watson Health (2020) ‘IBM and Salesforce join forces to help deliver verifiable vaccine and health passes’, Watson Health Perspectives. Available at: https://www.ibm.com/blogs/watson-health/partnership-with-salesforce-verifiable-health-pass/(Accessed: 6 April 2021).
  18. New York State (2021) Excelsior Pass. Available at: https://covid19vaccine.health.ny.gov/excelsior-pass (Accessed: 6 April 2021).
  19. CommonPass (2021) CommonPass. Available at: https://commonpass.org (Accessed: 7 April 2021) IATA (2021). IATA Travel Pass Initiative. Available at: https://www.iata.org/en/programs/passenger/travel-pass/ (Accessed: 7 April 2021).
  20. COVID-19 Credentials Initiative (2021). COVID-19 Credentials Initiative. Available at: https://www.covidcreds.org/ (Accessed: 7 April 2021). VCI (2021). Available at: https://vci.org/ (Accessed: 7 April 2021).
  21. myGP (2020) ‘“myGP” to launch England’s first digital COVID-19 vaccination verification feature for smartphones.’ myGP. 9 December 2020. Available at: https://www.mygp.com/mygp-to-launch-englands-first-digital-covid-19-vaccination-verificationfeature-for-smartphones/ (Accessed: 7 April 2021). iProov (2021) Covid-19 Passport from iProov and Mvine Moves Into Trial Phase.
    Available at: https://www.iproov.com/press/uk-covid19-passport-moves-into-trial-phase (Accessed: 7 April 2021).
  22. BBC News (2020) ‘Covid-19: No plans for “vaccine passport” – Michael Gove’, BBC News. 1 December 2020. Available at: https://www.bbc.com/news/uk-55143484 (Accessed: 7 April 2021). BBC News (2021) ‘Covid: Minister rules out vaccine passports in UK’, BBC News. 7 February 2021. Available at: https://www.bbc.com/news/55970801 (Accessed: 7 April 2021).
  23. Sheridan, D. (2021) ‘Vaccine passports to enter shops, pubs and events “under consideration”’, The Telegraph, 14 February 2021.
    Available at: https://www.telegraph.co.uk/news/2021/02/14/vaccine-passports-enter-shops-pubs-events-consideration/ (Accessed:
    7 April 2021). Zeffman, H. and Dathan, M. (2021) ‘Boris Johnson sees Covid vaccine passport app as route to freedom’, The Times, 11 February 2021. Available at: https://www.thetimes.co.uk/article/boris-johnson-sees-covid-vaccine-passport-app-as-route-tofreedom-rt07g63xn (Accessed: 7 April 2021)
  24. Boland, H. (2021) ‘Government funds eight vaccine passport schemes despite “no plans” for rollout’, The Telegraph, 24 January 2021. Available at: https://www.telegraph.co.uk/technology/2021/01/24/government-funds-eight-vaccine-passport-schemes-despiteno-plans/ (Accessed: 7 April 2021). Department of Health and Social Care (2020), Covid-19 Certification/Passport MVP. Available at: https://www.contractsfinder.service.gov.uk/notice/bf6eef14-6345-429a-a4e7-df68a39bd135 (Accessed: 13 April 2021). Hymas, C. and Diver, T. (2021) ‘Vaccine certificates being developed to unlock international travel’, The Telegraph, 12 February 2021. Available at: https://www.telegraph.co.uk/politics/2021/02/12/government-develop-COVID-vaccine-certificates-travel-abroad/ (Accessed: 7 April 2021)
  25. Cabinet Office (2021) COVID-19 Response – Spring 2021, GOV.UK. Available at: https://www.gov.uk/government/publications/COVID19-response-spring-2021/COVID-19-response-spring-2021 (Accessed: 7 April 2021)
  26. Cabinet Office (2021) Roadmap Reviews: Update. Available at: https://www.gov.uk/government/publications/COVID-19-responsespring-2021-reviews-terms-of-reference/roadmap-reviews-update.
  27. Scientific Advisory Group for Emergencies (2021) ‘SAGE 79 minutes: Coronavirus (COVID-19) response, 4 February 2021’, GOV.UK. 22 February 2021, Available at: https://www.gov.uk/government/publications/sage-79-minutes-coronavirus-covid-19-response-4-february-2021 (Accessed: 6 April 2021).
  28. Ada Lovelace Institute (2021) The epidemiological and economic impact of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=KRUmM-_Jjk4 (Accessed: 7 April 2021)
  29. European Centre for Disease Prevention and Control (2021) Risk of SARS-CoV-2 transmission from newly-infected individuals with documented previous infection or vaccination. Available at: https://www.ecdc.europa.eu/en/publications-data/sars-cov-2-transmission-newly-infected-individuals-previous-infection (Accessed: 13 April 2021). Science News (2021) Moderna and Pfizer COVID-19 vaccines may block infection as well as disease. Available at: https://www.sciencenews.org/article/coronavirus-covidvaccine-moderna-pfizer-transmission-disease (Accessed: 13 April 2021)
  30. Bonnefoy, P. and Londoño, E. (2021) ‘Despite Chile’s Speedy COVID-19 Vaccination Drive, Cases Soar’, The New York Times, 30 March 2021. Available at: https://www.nytimes.com/2021/03/30/world/americas/chile-vaccination-cases-surge.html (Accessed: 6 April 2021)
  31. Zimmer, C., Corum, J. and Wee, S.-L. (no date) ‘Coronavirus Vaccine Tracker’, The New York Times. Available at: https://www.nytimes.com/interactive/2020/science/coronavirus-vaccine-tracker.html (Accessed: 21 April 2021). Parker et al. (2021) An interactive website tracking COVID-19 vaccine development. Available at: https://vac-lshtm.shinyapps.io/ncov_vaccine_landscape/ (Accessed: 21 April 2021)
  32. BBC News (2021) ‘COVID: Oxford jab offers less S Africa variant protection’, BBC News. 7 February 2021. Available at: https://www.bbc.com/news/uk-55967767 (Accessed: 6 April 2021).
  33. Wise, J. (2021) ‘COVID-19: The E484K mutation and the risks it poses’, The BMJ, p. n359. doi: 10.1136/bmj.n359. Sample, I. (2021) ‘What do we know about the Indian coronavirus variant?’, The Guardian, 19 April 2021. Available at: https://www.theguardian.com/world/2021/apr/19/what-do-we-know-about-the-indian-coronavirus-variant (Accessed: 22 April)
  34. World Health Organisation (2021) Coronavirus disease (COVID-19): Vaccines. Available at: https://www.who.int/news-room/q-a-detail/coronavirus-disease-(COVID-19)-vaccines (Accessed: 6 April 2021)
  35. ibid.
  36. The Royal Society provides a different categorisation, between measures demonstrating the subject is not infectious (PCR and Lateral Flow tests) and those suggesting the subject is immune and so will not become infectious (antibody tests and vaccination). Edgar Whitley, a member of our expert deliberative panel, distinguishes between ‘red light’ measures which say a person is potentially infectious and should self isolate, and ‘green light’ ones, which say a person tests negative and is not infectious.
  37. Asai, T. (2020) ‘COVID-19: accurate interpretation of diagnostic tests—a statistical point of view’, Journal of Anesthesia. doi: 10.1007/s00540-020-02875-8.
  38. Kucirka, L. M. et al. (2020) ‘Variation in False-Negative Rate of Reverse Transcriptase Polymerase Chain Reaction–Based SARS CoV-2 Tests by Time Since Exposure’, Annals of Internal Medicine. doi: 10.7326/M2
  39. European Centre for Disease Prevention and Control (2021) Immune responses and immunity to SARS-CoV-2, European Centre for Disease Prevention and Control. Available at: https://www.ecdc.europa.eu/en/COVID-19/latest-evidence/immune-responses (Accessed: 10 February 2020).
  40. Ainsworth, M. et al. (2020) ‘Performance characteristics of five immunoassays for SARS-CoV-2: a head-to-head benchmark comparison’, The Lancet Infectious Diseases, 20(12), pp. 1390–1400. doi: 10.1016/S1473-3099(20)30634-4.
  41. European Centre for Disease Prevention and Control (2021) Immune responses and immunity to SARS-CoV-2, European Centre for Disease Prevention and Control. Available at: https://www.ecdc.europa.eu/en/COVID-19/latest-evidence/immune-responses (Accessed: 10 February 2020).
  42. Kellam, P. and Barclay, W. 2020 (no date) ‘The dynamics of humoral immune responses following SARS-CoV-2 infection and the potential for reinfection’, Journal of General Virology, 101(8), pp. 791–797. doi: 10.1099/jgv.0.001439.
  43. Drury. J., et al. (2021) Behavioural responses to Covid-19 health certification: A rapid review. 9 April 2021. Available at https://www.medrxiv.org/content/10.1101/2021.04.07.21255072v1 (Accessed: 13 April 2021)
  44. ibid.
  45. Brianna Miller, Ryan Wain, and George Alderman (2021) ‘Introducing a Global COVID Travel Pass to Get the World Moving Again’, Tony Blair Institute for Global Change. Available at: https://institute.global/policy/introducing-global-COVID-travel-pass-get-world-moving-again (Accessed: 6 April 2021).
  46. World Health Organisation (2021) Interim position paper: considerations regarding proof of COVID-19 vaccination for international travellers. Available at: https://www.who.int/news-room/articles-detail/interim-position-paper-considerations-regarding-proof-of-COVID-19-vaccination-for-international-travellers (Accessed: 6 April 2021).
  47. World Health Organisation (2021) Call for public comments: Interim guidance for developing a Smart Vaccination Certificate – Release Candidate 1. Available at: https://www.who.int/news-room/articles-detail/call-for-public-comments-interim-guidance-for-developing-a-smart-vaccination-certificate-release-candidate-1 (Accessed: 6 April 2021).
  48. SPI-M-O (2020) Consensus statement on events and gatherings, 19 August 2020. Available at: https://www.gov.uk/government/publications/spi-m-o-consensus-statement-on-events-and-gatherings-19-august-2020 (Accessed: 13 April 2021)
  49. Patrick Gracey, Response to Ada Lovelace Institute call for evidence.
  50. Walker, P. (2021) ‘UK arts figures call for Covid certificates to revive industry’, The Guardian. 23 April 2021. Available at: http://www.theguardian.com/culture/2021/apr/23/uk-arts-figures-covid-certificates-revive-industry-letter (Accessed: 5 May 2021).
  51. Silverstone (2021), Summer sporting events support Covid certification, 9 April 2021. Available at: https://www.silverstone.co.uk/news/summer-sporting-events-support-covid-certification-review (Accessed: 22 April 2021).
  52. BBC News (2021) ‘Pimlico Plumbers to make workers get vaccinations’. BBC News. Available at: https://www.bbc.co.uk/news/business-55654229 (Accessed: 13 April 2021).
  53. Leadership and Worker Engagement Forum (2021) ‘Management of risk when planning work: The right priorities’, Leadership and worker involvement toolkit, p. 1. Available at: https://www.hse.gov.uk/construction/lwit/assets/downloads/hierarchy-risk-controls.pdf.
  54. Department of Health and Social Care (2021) ‘Consultation launched on staff COVID-19 vaccines in care homes with older adult residents’. GOV.UK. Available at: https://www.gov.uk/government/news/consultation-launched-on-staff-covid-19-vaccines-in-care-homes-with-older-adult-residents (Accessed: 14 April 2021)
  55. Full Fact (2021) Is there a precedent for mandatory vaccines for care home workers? Available at: https://fullfact.org/health/mandatory-vaccine-care-home-hepatitis-b/ (Accessed: 6 April 2021).
  56. House of Commons Work and Pensions Committee. (2021) Oral evidence: Health and Safety Executive HC 39. 17 March 2021. Available at: https://committees.parliament.uk/oralevidence/1910/pdf/ (Accessed: 6 April 2021). Q178
  57. Acas (2021) Getting the coronavirus (COVID-19) vaccine for work. [online] Available at: https://www.acas.org.uk/working-safely-coronavirus/getting-the-coronavirus-vaccine-for-work (Accessed: 6 April 2021).
  58. Pakes, A. (2020) ‘Workplace digital monitoring and surveillance: what are my rights?’, Prospect. Available at: https://prospect.org.uk/news/workplace-digital-monitoring-and-surveillance-what-are-my-rights/ (Accessed: 6 April 2021).
  59. Allegretti. A., and Booth. R., (2021) ‘Covid-status certificate scheme could be unlawful discrimination, says EHRC’. The Guardian. 14 April 2021. Available at: https://www.theguardian.com/world/2021/apr/14/covid-status-certificates-may-cause-unlawful-discrimination-warns-ehrc (Accessed: 14 April 2021).
  60. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  61. European Court of Human Rights (2014) Case of Brincat and Others v. Malta. Available at: http://hudoc.echr.coe.int/eng?i=001-145790 (Accessed: 6 April 2021).
  62. Ministry of Health (2021) What is a Green Pass? Available at: https://corona.health.gov.il/en/directives/green-pass-info/ (Accessed: 6 April 2021). Ministry of Health (2021) Traffic Light App for Businesses. Available at: https://corona.health.gov.il/en/directives/biz-ramzor-app/ (Accessed: 8 April 2021).
  63. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021)
  64. Beduschi, A. (2020) Digital Health Passports for COVID-19: Data Privacy and Human Rights Law. University of Exeter. Available at: https://socialsciences.exeter.ac.uk/media/universityofexeter/collegeofsocialsciencesandinternationalstudies/lawimages/research/Policy_brief_-_Digital_Health_Passports_COVID-19_-_Beduschi.pdf (Accessed: 6 April 2021).
  65. Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence
  66. ibid.
  67. Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence.
  68. Beduschi, A. (2020)
  69. European Court of Human Rights. (2020) Guide on Article 8 of the European Convention on Human Rights. Available at: https://www.echr.coe.int/documents/guide_art_8_eng.pdf (Accessed: 6 April 2021).
  70. Access Now, Response to Ada Lovelace Institute call for evidence
  71. Privacy International (2020) “Anytime and anywhere”: Vaccination passports, immunity certificates, and the permanent pandemic. Available at: http://privacyinternational.org/long-read/4350/anytime-and-anywhere-vaccination-passports-immunity-certificates-and-permanent (Accessed: 26 April 2021).
  72. Douglas, T. (2021) ‘Cross Post: Vaccine Passports: Four Ethical Objections, and Replies’. Practical Ethics. Available at: http://blog.practicalethics.ox.ac.uk/2021/03/cross-post-vaccine-passports-four-ethical-objections-and-replies/ (Accessed: 8 April 2021).
  73. Brown, R. C. H. et al. (2020) ‘Passport to freedom? Immunity passports for COVID-19’, Journal of Medical Ethics, 46(10), pp. 652–659. doi: 10.1136/medethics-2020-106365.
  74. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence; Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence
  75. Beduschi, A. (2020).
  76. Black, I. and Forsberg, L. (2021) ‘Inoculate to Imbibe? On the Pub Landlord Who Requires You to be Vaccinated against COVID’. Practical Ethics. Available at: http://blog.practicalethics.ox.ac.uk/2021/03/inoculate-to-imbibe/ (Accessed: 6 April 2021).
  77. Hindu Council UK (2021) Supporting Nationwide Vaccination Programme. 19 January 2021. Available at: http://www.hinducounciluk.org/2021/01/19/supporting-nationwide-vaccination-programme/ (Accessed: 6 April 2021); Ladaria Ferrer. L., and Giacomo Morandi. G. (2020) ‘Note on the morality of using some anti-COVID-19 vaccines’. Vatican. Available at: https://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_con_cfaith_doc_20201221_nota-vaccini-antiCOVID_en.html (Accessed: 6 April 2021); Sadakat Kadri (2021) ‘For Muslims wary of the COVID vaccine: there’s every religious reason not to be’. The Guardian. 8 February 2021. Available at: http://www.theguardian.com/commentisfree/2021/feb/18/muslims-wary-COVID-vaccine-religious-reason (Accessed: 6 April 2021).

The final report from Ada's Citizens' Biometrics Council, established to bring public voice into the debate on technologies that collect and process biometric data. It sets out the process undertaken by the Council, the themes that emerged from its discussions and its recommendations to government, policymakers and technology developers.


Final report of the Citizens' Biometrics Council

Recommendations and findings of a public deliberation on biometrics technology, policy and governance

Summary

Biometric technologies, from facial recognition to digital fingerprinting, have proliferated through society in recent years. Applied in an increasing number of contexts, the benefits they offer are counterbalanced by numerous ethical and societal concerns.

In 2019, the Ada Lovelace Institute called for a moratorium on facial recognition, arguing for a halt on its use until the societal, ethical and legal conditions for the responsible use of emerging biometric technologies were established.

Since then, a range of actors, from the commercial and political to the legal and academic, have continued to contribute to the debate around biometrics. But a crucial stakeholder group is yet to be consulted: the public.

Throughout 2020 the Ada Lovelace Institute established the Citizens’ Biometrics Council to deliberate on the use of biometric technologies, bringing much-needed public perspectives to this debate.

Across 60 hours of in-person and online workshops, the Council considered a range of arguments and evidence about technologies such as facial recognition, voice recognition, digital fingerprinting and more.

The Council comprised a diverse group of 50 members of the public, recruited to reflect different social, economic and political attitudes, as well as different perspectives on data and technology.

They heard from experts – including police strategists, technology developers, regulators, campaigners, tech ethicists and more – and debated the opportunities and risks posed by these powerful technologies.

The Council’s goal was to bring a range of people’s voices to the debate on biometrics and build deeper understanding of their concerns, expectations and red lines.

To conclude their deliberations, the Citizens’ Biometrics Council developed a set of recommendations to address the question: what is or isn’t OK when it comes to the use of biometric technologies?

These recommendations cluster around three issues:

  1. Developing more comprehensive legislation and regulation for biometric technologies.
  2. Establishing an independent, authoritative body to provide robust oversight.
  3. Ensuring minimum standards for the design and deployment of biometric technologies.

In this report, we share the Council’s recommendations in full, explore their deliberations and describe next steps for policy and practice.

‘Trust is the one word that sticks in my mind throughout the whole process of biometrics discussions.’


About this report

The Ada Lovelace Institute's 2019 call for a moratorium on biometric technologies like facial recognition was followed by a survey of public attitudes towards facial recognition, published in the report Beyond Face Value.[footnote]Kind, C. (2019) Biometrics and facial recognition technology – where next?, Ada Lovelace Institute. Available at: www.adalovelaceinstitute.org/blog/biometrics-and-facial-recognition-technology-where-next (Accessed: 23 February 2021)[/footnote] [footnote]Ada Lovelace Institute (2019) Beyond face value: public attitudes to facial recognition technology. Available at: www.adalovelaceinstitute.org/report/beyond-face-value-public-attitudes-to-facial-recognition-technology (Accessed: 23 February 2021).[/footnote] The survey showed not only that the majority of the UK public wanted greater limitations on the use of facial recognition, but also that a deeper understanding of public perspectives was needed to inform what would be considered socially acceptable for these technologies.

Following Beyond Face Value, the Ada Lovelace Institute began work to establish the Citizens’ Biometrics Council, to create space to better understand public perspectives and bring their voice to debates about biometrics.

Concurrent to the Council, the Ada Lovelace Institute also commissioned an independent legal review of the governance of biometric data in the UK, led by Matthew Ryder QC.[footnote]Ada Lovelace Institute (2019) Ada Lovelace Institute announces independent review of the governance of biometric data. Available at: www.adalovelaceinstitute.org/news/independent-review-governance-of-biometric-data (Accessed: 23 February 2021).[/footnote] The legal review and the Citizens' Biometrics Council have led independent but parallel enquiries, and offer different types of evidence that are essential for contributing to the trustworthy and trusted use of biometrics.

Where the Citizens’ Biometrics Council offers public perspectives on the conditions for proportionate and responsible biometrics, the legal review will provide detailed analysis of the current state of the law concerning the governance of biometric data, and recommendations for legislative changes required to provide greater oversight of the technology.

This report describes the Citizens’ Biometrics Council. It outlines the background to the current landscape around biometrics; details the methodology used to deliver the Council; lists the Council’s recommendations; analyses the core themes that emerged during their deliberations; and describes three topics that the recommendations cluster around, highlighting the direction that policymakers and practitioners should take to respond to the Councils’ deliberations.

Timeline of the Citizens' Biometrics Council: July 2019 – November 2020

How to read this report…

The Council’s recommendations are statements generated by the Council members as they concluded their deliberations, and give direct voice to their perspectives. The recommendations are the key findings from the Council.

…if you’re a policymaker, researcher or regulator thinking about biometric technologies:

  • The methodology describes how the project was designed to generate robust and relevant findings.
  • The conclusion describes three areas where the Council's recommendations converge, and practical next steps for policy, governance and technology development. The findings of the legal review, to be published in spring 2021, will provide detailed analysis and recommendations that build on these areas and more.

…if you’re a developer or designer building biometric technologies:

  • The findings provide detail of the core themes that emerged during the Council’s deliberations. These are crucial for understanding what responsible practices and technology design should look like: they are a guide for building better biometric technology.

…if you’re procuring or deploying biometrics:

  • The background describes the current landscape around biometric technologies; where they have been deployed, the societal challenges they raise and the controversy that surrounds them. It demonstrates why public voice is needed to shape better use of biometrics.
  • The findings and the conclusion describe the core themes of the Council’s deliberations and options for policy and practice, which should be considered a guide for the responsible use of biometric technologies.

A note on quotes

Throughout this report, any text in quotation marks represents quotes from Council members’ deliberations drawn from the transcripts of the workshops, unless otherwise attributed.

Some quotes have been edited to improve readability, for example by removing repetition or filler words used as Council members articulated their thoughts. There have been no additions, word replacements or other edits that would change the meaning or sentiment of Council members’ statements.

All the quotes have been included to amplify the voices of the Council members, and demonstrate the richness of their perspectives.

What are biometrics?

Throughout this report, and across the Council’s workshops, the terms ‘biometric technologies’, ‘biometrics’ and ‘biometric data’ refer to a range of technologies and systems which use digital devices, data science and artificial intelligence (AI) to identify people through data about their biological characteristics.
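
Concretely, such systems typically reduce a biological characteristic to a numerical template and compare templates against a similarity threshold. The sketch below is a minimal illustration of that comparison step only; the template values and the 0.8 threshold are invented for this example and are not drawn from any system discussed in this report.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Measure how closely two biometric templates align (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(enrolled: list[float], probe: list[float], threshold: float = 0.8) -> bool:
    """Accept the probe as the enrolled person if similarity clears the threshold."""
    return cosine_similarity(enrolled, probe) >= threshold

# Hypothetical templates: in a real system these would be high-dimensional
# vectors produced by a face, voice or fingerprint model.
enrolled_template = [0.12, 0.87, 0.33, 0.45]
probe_template = [0.10, 0.85, 0.35, 0.44]
print(verify(enrolled_template, probe_template))  # True for these made-up values
```

In real deployments the templates are high-dimensional vectors produced by trained models, and the choice of threshold trades off false matches against false non-matches – a trade-off that recurs in the accuracy discussions later in this report.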

During the Citizens’ Biometrics Council discussions, we put forward an explanation of biometrics for members to consider (and question), using a version of this infographic:

Biometrics explainer infographic

Background: a biometrics backlash?

Recent years have seen a snowball of developments in relation to biometric technologies. Digital fingerprinting found prominence in the consumer mainstream when Apple introduced ‘Touch ID’ to its smartphones in 2013. Customers who use telephone banking have become familiar with using their voice as their password. From 2016, South Wales Police and London’s Metropolitan Police began trials deploying automated facial recognition in public places in the UK.[footnote]See: Metropolitan Police Service (no date) Update on facial recognition. Available at: www.met.police.uk/advice/advice-and-information/facial-recognition/live-facial-recognition; South Wales Police (no date) What is AFR?. Available at: http://afr.south-wales.police.uk/what-is-afr (Accessed: 23 February 2021).[/footnote]

In 2020, with the arrival of the COVID-19 pandemic, digital tools using facial and other biometric data found new prominence verifying people’s identities in an increasingly contactless and online world. In Russia, facial recognition systems have been used to enforce COVID-19 lockdown restrictions, and in Singapore they have been adopted for online access to government services.[footnote]Dixon, R. (2020) ‘In Russia, facial surveillance and threat of prison being used to make coronavirus quarantines stick’, Washington Post. Available at: www.washingtonpost.com/world/europe/in-russia-facial-surveillance-and-risk-of-jail-seek-to-make-coronavirus-quarantines-stick/2020/03/24/a590c7e8-6dbf-11ea-a156-0048b62cdb51_story.html (Accessed: 17 November 2020).[/footnote] [footnote]MacDonald, T. (2020) ‘Singapore in world first for facial verification’, BBC News. Available at: www.bbc.co.uk/news/business-54266602 (Accessed: 17 November 2020).[/footnote] In the UK, facial recognition has been suggested for the verification of a person’s immunity or vaccination status,[footnote]Onfido (2020) The role of Digital Identity in Immunity Passports, written evidence submission. Available at: https://committees.parliament.uk/downloadfile/?url=%2Fwrittenevidence%2F2537%2Fdocuments%2F5286%3Fconvertiblefileformat%3Dpdf&slug=c190014pdf (Accessed: 17 November 2020)[/footnote] and law enforcement agencies in the US have continued to deploy facial recognition algorithms, including to retroactively identify violent protestors.[footnote]Vincent, J. (2020) ‘NYPD used facial recognition to track down Black Lives Matter activist,’ The Verge. Available at: www.theverge.com/2020/8/18/21373316/nypd-facial-recognition-black-lives-matter-activist-derrick-ingram (Accessed: 23 February 2021).[/footnote]

The benefits and opportunities posed by biometric technologies include their ability to support effective law enforcement, ensure public safety and verify identities securely and virtually. Biometric technologies have been used in policing for decades, through DNA and fingerprint matching, and are widely deployed in other settings where safe and reliable identification of individuals is required, such as building or device security and at international borders. Emerging biometric technologies, such as automated facial recognition, have current and potential applications in improving services and online safety, and in tackling serious crime.

But the contention that biometric systems are deployed in the public interest is counterbalanced by a range of societal and ethical concerns. These concerns are driving a growing controversy around the use of biometric technologies and increasing resistance towards them, particularly towards automated facial recognition.

In the UK, the Court of Appeal ruled that South Wales Police’s use of automated facial recognition was unlawful in response to a case brought by Ed Bridges and civil rights group, Liberty.[footnote]Ryder M. and Jones J. (2020) ‘Facial recognition technology needs proper regulation’ – Court of Appeal, Ada Lovelace Institute. Available at: www.adalovelaceinstitute.org/facial-recognition-technology-needs-proper-regulation-court-of-appeal (Accessed: 17 November 2020).[/footnote] Journalists around the world have questioned the role of facial recognition in the treatment of Uighur Muslims in China.[footnote]Mozur, P. (2019) ‘One Month, 500,000 Face Scans: How China Is Using A.I. to Profile a Minority’, The New York Times, 14 April. Available at: www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html (Accessed: 23 February 2021).[/footnote] In the USA, Portland became the country’s fourth city to ban uses of facial recognition.[footnote] Brandom, R. (2020) ‘Portland, Maine has voted to ban facial recognition’, The Verge. Available at: www.theverge.com/2020/11/4/21536892/portland-maine-facial-recognition-ban-passed-surveillance (Accessed: 4 November 2020)[/footnote] Facial recognition company Clearview AI made controversial headlines when it was revealed it was scraping images from social media for its algorithms.[footnote]Hill, K. (2020) ‘The Secretive Company That Might End Privacy as We Know It’, The New York Times, 18 January. Available at: www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html (Accessed: 18 November 2020).[/footnote] And three major technology firms – IBM, Amazon and Microsoft – all announced they would stop or limit the use of their facial recognition systems by police forces in the wake of the Black Lives Matter protests.[footnote]Heilweil, R. (2020) ‘Big tech companies back away from selling facial recognition to police. That’s progress.’ Vox. Available at: www.vox.com/recode/2020/6/10/21287194/amazon-microsoft-ibm-facial-recognition-moratorium-police (Accessed: 17 November 2020).[/footnote]

Campaigners and legal scholars have articulated the powerful ways that biometric technologies can subject citizens to undue surveillance, infringing on people’s privacy, civil liberties and data rights. Researchers have demonstrated how many of the market-leading and widely deployed facial recognition algorithms contain biases which reduce their accuracy for ethnic minorities and women, particularly Black women.[footnote]Buolamwini J., Gebru T. (2018) ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.’ Proceedings of Machine Learning Research 81:1–15, Conference on Fairness, Accountability, and Transparency.[/footnote] [footnote]Leslie, D. (2020) Understanding bias in facial recognition technologies. Zenodo. doi: 10.5281/zenodo.4050457[/footnote] When used in contexts already characterised by structural injustice, these factors could compound and amplify the institutional racism and other biased outcomes that already persist.[footnote]Chowdhury, A. (2020) ‘Unmasking Facial Recognition: An exploration of the racial bias implications of facial recognition surveillance in the United Kingdom.’ WebRoots Democracy. Available at: https://webrootsdemocracy.org/unmasking-facial-recognition (Accessed: 18 March 2021)[/footnote]
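
Findings like Gender Shades rest on a simple evaluation practice: measuring accuracy separately for each demographic group rather than reporting a single aggregate figure, which can mask large disparities. Below is a minimal, hypothetical sketch of such a disaggregated audit; the group labels and records are invented for illustration and do not reproduce any study cited here.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute match accuracy separately for each demographic group.

    Each record is (group, predicted_match, true_match). An aggregate accuracy
    figure can hide large gaps that only appear once results are disaggregated.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        correct[group] += int(predicted == actual)
    return {g: round(correct[g] / total[g], 3) for g in total}

# Invented audit data, for illustration only.
records = [
    ("group_a", True, True), ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", False, False), ("group_b", True, True),
]
print(accuracy_by_group(records))  # {'group_a': 1.0, 'group_b': 0.667}
```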

In the UK, the Information Commissioner’s Office (ICO), former Biometrics Commissioner and former Surveillance Camera Commissioner have all argued that the law related to biometric technologies is no longer fit for purpose.[footnote]Information Commissioner’s Office (2019) The use of live facial recognition technology by law enforcement in public places. Available at: https://ico.org.uk/media/about-the-ico/documents/2616184/live-frt-law-enforcement-opinion-20191031.pdf (Accessed: 27 November 2020); Wiles, P. (2020) Biometrics Commissioner’s address to the Westminster Forum: 5 May 2020, GOV.UK. Available at: www.gov.uk/government/speeches/biometrics-commissioners-address-to-the-westminster-forum-5-may-2020 (Accessed: 17 November 2020); Porter, T. (2020) ‘Facing the Camera: Good practice and guidance’. Surveillance Camera Commissioner. Available at: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/940386/6.7024_SCC_Facial_recognition_report_v3_WEB.pdf.[/footnote] In August 2020, the Court of Appeal of England and Wales concluded that there were ‘fundamental deficiencies’ in the legal framework surrounding the police use of facial recognition.[footnote]Ryder M., Jones J. (2020) ‘Facial recognition technology needs proper regulation’, Ada Lovelace Institute. Available at: www.adalovelaceinstitute.org/blog/facial-recognition-technology-needs-proper-regulation (Accessed: 18 March 2021).[/footnote] An editorial in the world’s leading science journal, Nature, argues that biometrics needs an ‘ethical reckoning’, calling for researchers, funders and institutions working in the fields of computer science and artificial intelligence to respond to ‘the ethical challenges of biometrics’.[footnote]Nature editorial (2020) ‘Facial-recognition research needs an ethical reckoning’, Nature, 587(7834), pp. 330–330. doi: 10.1038/d41586-020-03256-7.[/footnote]

There are efforts to address these gaps: researchers have developed frameworks to support audits of facial recognition systems, some technology developers are committed to demonstrating responsible uses of biometrics, and arguments for a US Federal Office for facial recognition have been put forward.[footnote]Ho, D. E. et al. (2020) ‘Evaluating Facial Recognition Technology: A Protocol for Performance Assessment in New Domains’, Stanford Institute for Human-Centered Artificial Intelligence. Available at: https://hai.stanford.edu/sites/default/files/2020-11/HAI_FacialRecognitionWhitePaper_Nov20.pdf (Accessed: 18 March 2021).[/footnote] [footnote]See: Safe Face Pledge. Available at: www.safefacepledge.org (Accessed: 23 February 2021)[/footnote] [footnote]Learned-Miller, E. et al. (2020) ‘Facial recognition technologies in the wild’. Algorithmic Justice League.[/footnote] The former Surveillance Camera Commissioner has also issued guidance for the use of facial recognition by UK police forces.[footnote]Porter, T. (2020) ‘Facing the Camera: Good practice and guidance’. Surveillance Camera Commissioner. Available at: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/940386/6.7024_SCC_Facial_recognition_report_v3_WEB.pdf (Accessed: 18 March 2021)[/footnote]

However, to date little public debate has taken place about what legal and ethical checks and balances are needed – particularly in the UK – and the lack of adequate regulation and oversight leaves potential for misguided use of biometrics at best, and misuse at worst. While many stakeholders with commercial, legal or research interests in biometric technologies have contributed to debates about how biometric technologies can be deployed in the public interest, a crucial stakeholder group is yet to be consulted: the public.

What constitutes trustworthy, responsible, proportionate use of biometric technologies is one of the most complex and urgent questions facing our society today. Addressing this question requires a range of inputs, from legal inquiry and ethical analysis to political scrutiny. But it cannot be addressed without public input.

The Ada Lovelace Institute convened the Citizens’ Biometrics Council to bring perspectives of informed members of the public to debates about biometric technologies. We believe the Council’s recommendations are a crucial component in responding to the increasingly ubiquitous role biometric technologies appear set to play in our world.

‘Public interest tests [relating to the use of biometrics] ought to be informed by the sentiment of the public, but that sentiment is not best read from simple public opinion surveys, although methodologically more sophisticated work may have a part to play.

For citizens to reach an informed view they need to be informed by a public debate – the sentiment of the public should be formed based on such evidence and reasoning.’

Paul Wiles, Biometrics Commissioner 2016–2020.[footnote] Wiles, P. (2020).[/footnote]

Methodology

The Citizens’ Biometrics Council ran from February to October 2020, in-person and online. It involved 50 members of the public who took part in 60 hours of deliberative workshops. During the workshops, they considered evidence about biometric technologies, heard from experts from a range of backgrounds, and participated in facilitated discussion.

The Ada Lovelace Institute conceived of and designed the Citizens’ Biometrics Council to address the following aim: to give an understanding of an informed public’s expectations, conditions for trustworthiness and red lines when it comes to the use of biometric technologies and data.

Throughout the process, all members of the Citizens’ Biometrics Council became informed on the topic, and considered the information and their task with thought and scrutiny. They concluded by developing a set of recommendations that respond to the urgent need for public voice on the use of biometric technologies.

Deliberative approaches such as those used in the Council enable detailed understanding of people’s perspectives on complex topic areas. Valuable in their own right, they also complement quantitative methods such as the survey behind our Beyond Face Value report. A survey can offer population-level insights on attitudes, while qualitative and deliberative methods, such as those used with the Council, offer insight on why people hold certain opinions, what values or information inform those views, and what they would advise when informed.

Recruiting for the Council

We recruited Council members to include a broad and diverse range of perspectives while maintaining a manageable number of participants that could engage meaningfully in rich, facilitated discussions. We initially sought to recruit 60 participants to meet this aim within the bounds of our capacity. Mini-publics such as the Citizens’ Biometrics Council can never be statistically representative of the wider population, nor should they aim to be. Instead, they should reflect the diversity of views within a population.[footnote]Steel, D. et al. (2020) ‘Rethinking Representation and Diversity in Deliberative Minipublics’, Journal of Deliberative Democracy, 16(1), pp. 46–57. doi: 10.16997/jdd.398.[/footnote]

To achieve this, we used a purposive approach to recruitment, inviting participants via a market research recruitment agency, with selection criteria designed to ensure a diverse range of perspectives on the Council and to account for the disproportionate and biased impacts of biometric technologies on underrepresented and marginalised groups. (A minimal sketch of how recruitment against such quotas can be checked follows the list below.)

We recruited participants against the following selection factors:

  • gender
  • age
  • ethnicity
  • disability
  • life stage
  • current working status and type
  • socio-economic background
  • urban or rural place of residence
  • attitudes to the use of data.
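
As a concrete illustration of recruiting against selection factors like these, the sketch below checks a partially recruited panel against quota targets for one factor. The factor categories, target numbers and panel data are hypothetical, invented purely for illustration; they are not the actual quotas used for the Council.

```python
from collections import Counter

def quota_gaps(panel, targets):
    """Compare the recruited panel against quota targets for one selection factor.

    Returns how many more participants are still needed in each category;
    negative values mean the category is over-recruited.
    """
    counts = Counter(panel)
    return {category: want - counts.get(category, 0) for category, want in targets.items()}

# Hypothetical quota for an 'age band' factor in a 60-person panel.
age_targets = {"18-29": 15, "30-44": 15, "45-59": 15, "60+": 15}
recruited_ages = ["18-29"] * 12 + ["30-44"] * 16 + ["45-59"] * 15 + ["60+"] * 13
print(quota_gaps(recruited_ages, age_targets))
# {'18-29': 3, '30-44': -1, '45-59': 0, '60+': 2}
```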

Council members were recruited from both the Bristol and Manchester areas, creating two groups of 30 participants who came together to participate in workshops. We chose these locations to avoid a London-centric bias, and because they offer diverse populations while enabling participants to travel easily and meet face-to-face.

We paid participants incentives at industry best-practice rates for each workshop they attended, to remunerate them for their time and contributions to the Council.

Due to COVID-19, some participants had to withdraw from the project, and we additionally recruited some participants to ensure we maintained diversity against our criteria. Ultimately, the Council consisted of 50 people who participated in the majority of workshops and contributed to the development of the Council’s recommendations.

The Council workshops

The Council’s deliberations were designed around a series of three weekend-long workshops. These workshops were planned to take place between 10:00 and 16:00, Saturday and Sunday across six weekends in February, March and April 2020 (so that each Council group in Bristol and Manchester took part in three workshop weekends).

Each workshop involved a combination of:

  • Considering a balanced range of information and evidence about biometric technologies and the challenges they pose. Evidence was drawn from: news articles, academic research, research carried out by the Ada Lovelace Institute, public information provided by technology companies, policy papers and other literature.
    Where necessary, researchers at the Ada Lovelace Institute and facilitators at Hopkins Van Mil summarised the evidence or made it more accessible.
  • Hearing from, and posing questions to, expert speakers who represented technology developers, organisations deploying biometrics, civil rights advocates and campaigners, academic researchers, government bodies and regulators. (See appendix for a list of speakers).
  • Engaging in facilitated discussion and deliberation with other Council members and expert speakers to address questions and develop recommendations.
The workshop structure

Although the two groups met separately, they took part in the same workshop structure, and there were no notable differences in the topics and themes discussed between the groups. This report reflects the perspectives of Council members from across both groups.

Community Voices workshops

The project was designed specifically to involve and amplify the voices of people from minority ethnic groups, members of the LGBTQI+ community and people with disabilities. Existing research, and our 2019 survey Beyond Face Value, showed that these groups are often disproportionately impacted by biometric technologies and face unique challenges in response to them, but are too often underrepresented in debates about technology.

In addition to the Citizens’ Biometrics Council workshops, we convened one Community Voices group for each of the above groups, including between seven and twelve members in each, recruited via community groups and charities. The Community Voices groups met once before the main Council workshops began, and again during the reporting phase after the Council’s workshops ended, for around two hours each time. The Community Voices workshops aimed to ensure these groups’ perspectives were embedded in the Council’s deliberations by:

  • informing the design process by considering what topics and concerns the groups felt the Council should consider
  • focusing on how to address the experiences of marginalised groups and the disproportionate impacts of biometric technologies
  • reviewing the Council’s findings and recommendations to feed back on how to amplify the perspectives of marginalised groups
  • ensuring that the entire process is informed by, and appropriately weighted to consider, the views of minority ethnic groups, members of the LGBTQI+ community and people with disabilities.

At least one participant from each group also participated in the Council’s workshops. Members of the Community Voices workshops, as well as all members of the Council, were engaged through an intersectional approach that encouraged them to speak from their own pluralistic experience, rather than represent ‘the view’ of one particular demographic.

The discussions these groups had are reflected throughout this report as part of the overall project, as well as in other reports by the Ada Lovelace Institute.[footnote]See: Patel R., Peppin A. (2020) ‘Making visible the invisible: what public engagement uncovers about privilege and power in data systems’. Ada Lovelace Institute. Available at: www.adalovelaceinstitute.org/blog/public-engagement-uncovers-privilege-and-power-in-data-systems (Accessed: 8 January 2021); Ada Lovelace Institute (2020) No green lights, no red lines. Available at: www.adalovelaceinstitute.org/wp-content/uploads/2020/07/No-green-lights-no-red-lines-final.pdf (Accessed: 8 January 2021)[/footnote] In particular, bias, discrimination and inequality became core themes throughout the Council’s deliberations, strengthened and enriched by the contributions of the Community Voices groups.

COVID-19: disruption and going online

Planning for the project began in September 2019, long before the COVID-19 pandemic arrived. This meant the project was designed to take place in-person, and was adapted to work online following the implementation of lockdown restrictions.

The Citizens’ Biometrics Council was midway through its workshops in March 2020 when the UK began to witness a rise in Coronavirus cases, and the UK Government implemented lockdown restrictions. At this time, the Manchester group had completed their first weekend of workshops, and the Bristol group had completed its second weekend.

We immediately postponed the process, aiming to reconvene in Autumn 2020. Initially, we had hoped to reconvene the Council in-person, but as the world rapidly adapted to online working, and as it became clear that meeting in large groups would continue to be unsafe until the arrival of a vaccine, we explored approaches to bringing the Council together online.

In the intervening months, many public engagement organisations and researchers began to iterate and develop tools and methods for conducting deliberative public engagement workshops in online environments.[footnote]Hughes, T. (2020) ‘Digital tools for participation: Where to start?’, Involve. involve.org.uk. Available at: www.involve.org.uk/resources/blog/opinion/digital-tools-participation-where-start (Accessed: 23 February 2021)[/footnote] [footnote]Mckeon, A. (2020) ‘Moving Online’, Traverse. Available at: https://traverse.ltd/moving-online (Accessed: 23 February 2021).[/footnote] We drew from these to adapt the remaining workshops to work online, via Zoom, as well as establishing an online forum where we could continue to share some materials, create ‘homework’ tasks and keep in contact with participants.

The Manchester group resumed their deliberation in September, completing their second workshop online across evenings and weekends. Both Manchester and Bristol groups then conducted their final ‘weekend’ via a series of online workshops in October 2020.

The online workshops were one-and-a-half to two hours long, following developing best practice on suitable lengths for a comfortable and productive online session. Moving online brought some challenges: for example, participants couldn’t enjoy the creative benefits of being in the same room, nor could they work together to craft and explore ideas on paper.

However, the online workshops had no travel requirements, and some participants found it easier to fit them into their schedule. Online workshops also offered different ways of working, such as using breakout rooms or chat messaging to capture spontaneous thoughts, and some participants felt more comfortable contributing from their own home environment. It also meant we could engage a broader range of expert speakers, who could easily participate for short sessions without needing to give additional time for travel.

Moving the format online required considerable thought in redesign, but ultimately presented a different way of conducting the workshops. The end result was a robust and rigorous deliberation, which produced an insightful set of recommendations. The benefits and challenges of online engagement will continue to be better understood as public dialogue, engagement and deliberation projects carry on while social distancing remains. Online participation is likely to become a common method that practitioners opt to use even after the ability to meet in person returns; there will be times when the qualities of online participation lend themselves to a particular topic or project.

Project delivery, oversight group and evaluation

The design and delivery of the Citizens’ Biometrics Council was guided by an oversight group consisting of experts in: biometric technology, technology industry practices and policies, public attitudes towards technology, and responsible and trustworthy data use and technology (see appendix). The group gave advice on the topics and evidence discussed by the Council, the issues the Council should address, and on ensuring the process was balanced and robust. Some oversight group members also acted as expert speakers to the Council, and shared feedback on reporting.

The Council was delivered in partnership with public engagement specialists, Hopkins Van Mil (HVM). The Ada Lovelace Institute conceived the Council and developed its overall design and objectives, and commissioned HVM to act as a delivery partner, responsible for participant recruitment, project logistics and administration, workshop design and facilitation, and transcription. The Ada Lovelace Institute was responsible for researching materials, speakers and content, supporting workshop design, project management, analysis and reporting.

The project was also independently evaluated by Ursus Consulting, who observed workshops and planning meetings, and gathered feedback from participants, expert witnesses and other stakeholders. The evaluation aims to offer insight into the project, to help understand the process’s strengths, limitations and impact. The evaluation findings will be reported separately.

The Council’s recommendations

The Citizens’ Biometrics Council developed a set of recommendations in response to the question: What is or isn’t ok when it comes to the use of biometric technologies?

These recommendations were developed at the end of the Council’s 60 hours of deliberative workshops. In their final workshop, the Council members were asked to reflect on all the perspectives, evidence and topics they had considered throughout their deliberations, and develop recommendations for addressing the challenges raised by biometric technologies.

Rather than seek agreement from the entire Council on a small list of recommendations, these statements were developed through several smaller facilitated discussion groups to ensure each Council member had the space to reflect and contribute, and to ensure we captured the entire range of their ideas. Their recommendations should therefore not be seen as consensus, but instead a range of conclusions.

We present their recommendation statements here in full and in the Council members’ own words.[footnote]We have made minor grammar and phrase edits for readability.[/footnote]

With feedback from a subset of Council members and the project oversight group, the Ada Lovelace Institute developed categories that group the recommendations according to where they overlap or converge around common themes.

The order of the recommendations corresponds to the order in which they were collated from Council members’ workshop groups, and does not represent order of preference or hierarchy.

Independent oversight body, legislation and regulation

  • Legislation should be created to define the boundaries of what is or isn’t ok in the use of biometrics, and there should be a legal body which holds people accountable for breach.
  • An independent body should bring governance and oversight together. There are too many bodies currently all trying to do different things in this space. The independent body should have some ability to decide what’s ok and what’s not, through a licensing process that considers permission to collect certain data, why they are using the data, how it is stored, and that it won’t be shared with other companies. There should be recompense when companies don’t do the right thing, and the body must have some teeth (e.g. the Financial Conduct Authority).
  • There needs to be an independent body overseeing the development, implementation and legislation of biometric technologies and it needs to have all major players involved to create safe practices.
  • Until legislation is put in place and laws are set these biometric technologies shouldn’t be rolled out on a large scale.
  • Strong legislation (e.g. ‘The Biometrics Act 2020’) should be created and kept up to date (reviewed annually). It should include punitive measures – not just fines, i.e., someone could go to prison. All the data must be transparent, and able to be reviewed by the public – it must be published. There needs to be a framework for opt in/opt out. There needs to be human accountability built into the system. As such we want to see a ‘Biometrics Officer’ in every company that’s going to deal with the Ethics Committee (see recommendation 6) and be accountable.
  • We recommend establishing an Ethics Committee which sets out the ethical and moral framework for assessing all uses of biometric technologies including commercial use and advertising. Biometric data shouldn’t be sold on by companies. The committee should have representatives from across society on it. Committee findings must be published.
  • Legislation should be developed with a diversity of perspectives and should have ‘real teeth’ to enforce penalties for breach of the law, e.g. the penalty for breach should be greater than the benefit of selling data to a third party. To ensure this occurs, neither business nor government should take the lead: legislation should be co-developed with an independent panel/group, including members of the public.
  • A continually evolving framework of governance that includes a register for the use of biometric technologies, overseen by a broad, representative group of individuals (including the public).
  • Governance standards need to be futureproofed – regular reviews written into new legislation to take into account new technology as it changes over time. Accurate now and reviewed to allow for adaptations to be kept current.

Data management

  • Data collection, storage and handling, and length of storage are all important areas for consideration. Biometric information should be destroyed once a data subject leaves an organisation/company; e.g. only held for as long as a person uses the gym/bank. Specific details could be broadly broken down into three categories:
    • financial/private sector
    • regulation for police
    • general productivity (social media/mobile uses/going to the gym).
  • Increase data security to minimise chances of biometric data being stolen.
  • Improving data security is CRUCIAL before the usage of biometrics becomes even more widespread and mainstream, to reduce the risk of biometric data (that can’t be changed, e.g. a retinal scan) being stolen.
  • Commercial use: private companies shouldn’t share data between themselves (e.g. Asda sharing with your gym: why?) to prevent them forming a bigger picture of you.

Proportionality across different contexts

  • It is not ok when biometrics are used for social control. We aren’t fully comfortable with immunity passports and linking biometric data to wider (government) control of our health history or status.
  • Mixed views on its use in crime prevention. Use in crowds – CCTV outside a railway station – seems ok, but at an individual level (body-worn cams) it disproportionately affects Black and ethnic minorities, and that’s not ok.
  • National security use needs proper definition: use of biometrics is warranted, and may involve holding data on us – we need to accept that, so some compromises may be necessary.

Bias, discrimination and accuracy

  • Increase accuracy in biometric technologies to 99% for police uses and at least 95% in other uses, to build trust and fairness into the technology. Diversity in software development should be highly encouraged. Increase data security to minimise chances of biometric data being stolen.
  • Technologies should not be deployed if they are going to be inaccurate – they need to be accurate at the outset. Without this people will lose faith in the tech and its use. Trial it more thoroughly.
  • Technology needs to be 100% accurate (concern about damage to individuals if it isn’t).
  • We need to prevent bias, discrimination and ensure it is inclusive for everyone.
  • At an individual level (body-worn cams) biometrics disproportionately affect Black and ethnic minorities and that’s not ok. Technologies are not up to scratch with people that have darker skin tones. 1) Remove all racial bias first – fix the technologies. 2) Then they can be taken for review to an Ethics Committee.
  • Representative algorithms should be developed in biometric technologies to enhance accuracy and trust in the tech as much as possible. More representative datasets, and also a more diverse group of software developers, for example.

Consent and opt out

  • The sharing of biometric data should be restricted to certain circumstances, e.g. health/national security. In order to ask for consumer consent in other circumstances, an app/company/body needs to have permission from a verified, legal, independent body.
  • With respect to private-sector use, consumers need to be able to opt in to biometrics being used. We need to provide consent. A different approach is needed for the public sector, where there is a need for red lines.
  • Ideally there would be a practical and fair opt-out system for people who don’t want their biometrics used, with the possible exception of health/national security in certain contexts.
  • It’s not ok to use biometric technologies where informed consent is not at the heart of its design.
  • There must be opt in consent which is clear and easy to give, there cannot be assumed consent. We need to know what happens with our data: clear explanations.

Transparency

  • We need to be confident that biometrics are being used properly. This involves accurate tech, public information and education, and more openness about how it is being used.
  • It needs to be clear to every individual/citizen what information is held, for how long and in simple language. There needs to be education (for people using, developing it etc.) and we need to prevent bias, discrimination and ensure it is inclusive for everyone.
  • All the data must be transparent, and able to be reviewed by the public – it must be published.
  • Biometric technologies are ok as long as we know they’re being used, and there is a method personally available to you to investigate their use.

Findings: the Council’s deliberations

Through the deliberative process the Council members became better informed about biometric technologies: how they work, where they are used, and the ethical implications, controversy and resistance arising from their deployment.

Council members were given space to weigh the complexities of biometric technologies, and consider what might be needed to ensure their use is responsible and to protect people from their irresponsible use. The Council’s recommendations are a product of their informed deliberations, and reflect the breadth and depth of their enquiry.

Here we provide an analysis of the core themes the Council considered throughout their deliberation, to describe how the Council reached its conclusions and offer deeper understanding of the members’ concerns and perspectives.[footnote] For an example of approaches to thematic analysis, see: Attride-Stirling, J. (2001) ‘Thematic networks: an analytic tool for qualitative research’, Qualitative research, 1(3), pp. 385–405.[/footnote] This analysis does not supersede the Council’s recommendations, but instead offers additional understanding of their perspectives.

The following is the Ada Lovelace Institute’s interpretation, and should not be considered a definitive representation of the Council’s perspectives. That is presented only through their quoted words and their recommendations.

Purpose, justification and proportionality

A primary theme in the Council’s deliberations was the purpose for which a biometric technology is deployed and who benefits from its use. Council members recognised the reasons motivating the deployment of biometric technologies, and considered varied scenarios where they may be used, such as to support policing and public safety, or enable identity verification and age estimation in online or socially distanced shops.

Many of these uses, like online identification or unlocking smartphones, were considered ‘uncontroversial’, and Council members understood and often agreed with or supported aims to improve public safety and security. But the Council also acknowledged the pluralistic nature of biometric technologies, in which they may simultaneously pose both benefits and risks, as the following quote from a Council member illustrates:

‘Using it [biometric technology] for self-identification, for example to get your money out of the bank, is pretty uncontroversial. It’s when other people can use it to identify you in the street, for example the police using it for surveillance, that has another range of issues.’

Privacy and surveillance

The Council considered seriously issues of over-surveillance and infringements on people’s liberties and privacy. As well as references to ‘big brother’ and ‘police states’, Council members raised concerns about how other countries, both historically and in recent years, have oppressed people and diminished their privacy through surveillance. The phrase ‘who watches the watchers’ was raised more than once in their discussions.

Many Council members considered some loss of privacy through surveillance as a trade-off for living in a society which is kept safe from crime or other harms: ‘If it’s for national security reasons, and now COVID, then I’m not too bothered.’ But they also recognised that trade-offs must be balanced carefully, and some rights must never be infringed. They were interested to hear about mechanisms to limit over-surveillance and privacy infringement, such as requirements for police watchlists and immediate data-deletion. However, many Council members questioned the extent to which such mechanisms are currently used at the discretion of those deploying biometric technologies, and according to varying interpretations of existing law and regulation:

‘It’s in the interest of public safety, [but] to what lengths does the law permit the police to go to, to protect us, life, property? To what extent can they go?’

‘This line, “the use of surveillance camera systems always specifies purpose in pursuit of a legitimate aim”, which ties in with what [the expert speaker] said – that these people are only observed if they’re on the list. But you could have anything on the list. They’ve said you’re on the list, but what’s on the list? I could be observed because I went to the Extinction Rebellion protests in London.’

Another trade-off the Council recognised was that the use of a biometric technology often does not affect just one individual, but groups of people and often the whole of society. Many participants considered the tensions this raised when the impacts on, benefits for and rights of different people are in opposition. For some participants, the collective benefit or the ‘greater good’ was a priority:

‘There’s a fine balance between people’s rights and safety. Whenever the public safety of a group comes into question, that always overpowers others’ rights, because it’s obviously for the safety of the public.’

‘I know there’s a lot about individual rights: You can’t take my photo and I want this and I want that… But it’s not always about you, it’s also about everybody else around you.’

Council members were interested in ways to assist with navigating the tricky balancing of such competing interests. The question of ‘who benefits?’ emerged often, both explicitly and implicitly.

Who benefits?

When the interests of members of the public were the priority, using biometrics was often considered to be ‘more ok’ than when the interests of private actors were put first. This was particularly clear in the Council’s discussions after the lockdowns came into effect in the UK:

‘There has to be a genuine need for it, in my opinion. With COVID, I’ve not heard anybody object to track and trace when I’ve been out in public. You either scan it or you give your details, because people can see that it’s protecting the public.’

In addition to public safety and health, it was recognised that many biometric tools are used to offer better services. One example considered was a gym company that replaced its membership cards with a facial recognition system, leading one Council member to reflect that such systems make it ‘easier to check in, as there’s nothing worse than forgetting your code or your card to get into the gym. Whereas your face is always on you.’ Unlocking mobile phones with ‘face ID’, voice or fingerprint was regarded as a similarly useful tool, particularly for people who may have difficulty typing.

However, participants were concerned with uses of biometrics where private organisations, government or other actors gained benefits at the expense of the public, or where people’s rights were infringed. Some of these concerns centred around what happens to the data collected by biometrics systems.

‘It depends on the context of the company, doesn’t it? If it’s a private business wanting to sell and market stuff like we’ve mentioned before, no I wouldn’t be very pleased, but if it’s being done for a particular reason that you think is positive, then I wouldn’t mind my image being shared.’

‘With the gym, it’s just for sheer convenience. Don’t get me wrong, it’s handy but it’s not like it’s going to make the process that much quicker. It just feels like an extra layer that doesn’t make that much difference, but all of a sudden [the gym have] got all this very personal information, and gym companies aren’t renowned for their data security.’

‘What is it going to be used for? Obviously, if it’s for security, fine. But I think someone talked in the last webinar about how there’s a lot of data on our Tesco Clubcard and that is really more useful for people to hack into and use against us.’

In addition to being less comfortable about uses that don’t have a clear benefit for the public, some also considered that certain uses of biometrics were just simply inappropriate:

‘It just feels like a bit of overkill – it is there to prevent fraud and prevent crime, but I think there’s probably other measures that could be done, that don’t need to use biometric data.’

The Council also considered how uses of biometrics that seem more beneficial, or even benign, could act as gateways to rolling out more controversial uses with less resistance, as the ‘acceptance’ of biometric technologies would become normalised.

Many of these discussions reflected questions about where and when the use of biometrics is acceptable, how those uses are justified, and what mechanisms exist to ensure proportionality and prevent uses which stretch beyond those limits. Ultimately, these are questions of whether biometrics are needed or not, and who gets to decide.

The Council’s recommendations address these questions by calling for clearer consensus around what constitutes a proportionate use of biometrics, the prioritisation of public benefit over commercial or political gains, and diverse public representation in agreeing what acting in the public interest looks like for uses of biometrics.

Choice, trust and transparency

Consent

Even where biometric technologies may be considered proportionate and justified, Council members recognised that their use would still need to be trustworthy, responsible and accountable. Consent was a prevalent theme in relation to this, and Council members often referred to the importance of choice in how biometric data about them is collected and used, citing the other kinds of consent options made commonplace by the GDPR, like cookie notices:

‘I think allowing consumers to opt in is very important. If I have to opt in to accepting cookies for every webpage that I visit, I should certainly be able to opt in to having my face recognised.’

‘Have you seen the feature on Facebook where it says, “Your friend’s got a photo of you that you haven’t tagged yourself in?” So Facebook has a copy of my face, they’ve used biometrics for that. I think it’s mad that when I go on to whatever website, I have to opt in to cookies but I don’t have an option about whether to opt in to having my face shared. With biometrics, it’s much more important. If we can do it with cookies, we can do it with faces.’

During their deliberations about consent, Council members considered that opportunities to opt in to a use of biometric technology wouldn’t pose genuine choice or agency if opting out meant being denied access to a service or place, or being treated differently:

‘There’s an element of being made to feel uncomfortable on the opt out, so you’ve got to wait in a queue and “Oh, do you really want to do that?” So all of a sudden, we’re making people feel uncomfortable.’

‘[If you want to go to a restaurant during COVID], you either have to do the track and trace on your phone or you have to give written information. If you’re not prepared to do it, then you’re basically asked to leave. There is no choice there, but I think people accept that this is to try and keep us all safe and protect us.’

Council members also considered how offering consent in some circumstances posed practical and technical challenges, such as gathering consent in public places or assuming consent is given by virtue of participation:

‘One of the things that really bugs me is this notion of consent: in reality [other] people determine how we give that consent, like you go into a space and by being there you’ve consented to this, this and this. So, consent is nothing when it’s determined how you provide it.’

‘So, entering a supermarket, you go to a self-service checkout, are you consenting for them to capture your image?’

‘I think sometimes we do share information and quite naively don’t realise that we are giving consent with, say, for instance, social media, and I think then you’re getting into murky waters. Although, you could argue that yes, you’ve put it out there, you know it’s out there.’

Council members also acknowledged that consent could undermine the effectiveness of deploying biometric tools in certain contexts, such as policing, where enabling everyone to opt in or out would mean that those intending to break the law would ‘opt out of everything so the police just wouldn’t be able to track [them].’

Recognising both the importance of consent and the practical challenges related to it, some Council members considered that different levels of consent are needed for different circumstances.

Where public health and safety is the goal, consent could be obtained by broad public consensus or approval – such as seen in the measures introduced to tackle the pandemic. Here public debate would be needed to understand what is acceptable, and uses must still meet expectations for proportionality with sufficient checks, balances and oversight in place.

Where biometric technologies are used in other settings without such ‘high stakes’, such as age verification in shops, fraud prevention or membership systems, Council members considered explicit consent mechanisms and adequate opt-out options for individuals to be necessary.

Transparency

Council members often expressed the view that uses of biometrics must be transparent and accountable. This is necessary to ensure those uses are responsible, and to enable people to be sufficiently informed when consenting. Many Council members, however, felt that currently both accountability and transparency were lacking:

‘It all feels a bit secret. People are taking your picture, you don’t know why, you don’t know what they’re doing with it, you don’t know if the information’s correct or not, and there’s really nothing you can do about it.’

‘You get those terms and conditions, really lengthy terms and conditions, I think that’s not the way to go about it. Companies need to be more concise and open with what data they’re taking and how they’re going to use it.’

Improving transparency, for many Council members, requires going beyond the ‘10,000 pages of gobbledegook’ that constitute many data privacy notices or terms and conditions, to provide clear, accessible, intelligible information about how biometric technologies are used, what data is collected and why. There was a strong sense that current information about how and where biometric technologies are used is ‘woolly’, ‘unclear’ and in some cases perhaps even ‘deceptive’.

Council members expressed the need for the public to be provided with general information about biometric technologies, reflecting that greater digital literacy is needed to better equip people to navigate an increasingly digital world, and better understand how data is collected and used. For many, those deploying the technologies have a role to play in enabling transparent and accountable biometrics.

Transparency was considered as more than just good practice or a nice-to-have. It was considered a fundamental aspect of enabling people to feel they have more control over how their biometric data is used, how biometric technologies are deployed in society and how to hold those using them to account.

‘The most important thing is to be able to query it, challenge it. Because I don’t want to be misidentified. […] If we all know what’s going on, we can all be okay with it. If we don’t really know what’s going on, it just feels like Big Brother doesn’t it?’

Bias and accuracy

Accuracy

The Council considered a range of evidence about the accuracy of biometric technologies. This included information about how biometric technologies aim to improve accurate identification of individuals in contexts where humans will usually perform the task of identification – often inaccurately and inefficiently.

However, while Council members recognised that digital tools do not get distracted or tired, as human ID-checkers might, they also considered research about how many facial recognition systems are systematically less accurate for minority ethnic groups such as Black and Asian people. They also considered examples of where real-world conditions can mean biometric systems do not perform as well ‘in the wild’ as they do ‘in the lab’.

The accuracy of a biometric technology can be understood in a variety of ways. The Council heard from a number of experts who discussed technical aspects of how to measure and assess aspects like false positive or negative rates, or how thresholds for match probabilities can vary depending on the context. When discussing these aspects, Council members were interested in how technical accuracy can be improved, and considered that all efforts should be made to ensure accuracy of any biometric system.
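
As a purely illustrative sketch – not drawn from any system the Council examined, and using entirely synthetic similarity scores whose distributions and thresholds are arbitrary assumptions – the following Python snippet shows how moving a match threshold trades false positives against false negatives:

import random

random.seed(0)

# Hypothetical score distributions: genuine comparisons tend to score
# higher than impostor comparisons, but the two distributions overlap.
# The means and spreads here are invented for illustration only.
genuine_scores = [random.gauss(0.75, 0.10) for _ in range(10_000)]
impostor_scores = [random.gauss(0.45, 0.10) for _ in range(10_000)]

def error_rates(threshold):
    """Return (false positive rate, false negative rate) at a given threshold."""
    fpr = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    fnr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return fpr, fnr

for threshold in (0.50, 0.60, 0.70):
    fpr, fnr = error_rates(threshold)
    print(f"threshold={threshold:.2f}  false positives={fpr:.3f}  false negatives={fnr:.3f}")

In this toy setting, raising the threshold makes it harder for an impostor to be wrongly matched, but more likely that a genuine user is wrongly rejected – one reason why, as the experts explained, appropriate thresholds vary depending on the context.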

Council members were concerned that inaccuracy – in whatever form – means technologies can cause erroneous or harmful outcomes. Many shared personal experiences where they, or someone they knew, had suffered because of a technical error. Throughout all their discussions, the Council was concerned with how inaccuracies and errors can cause harms and damage trust, a perspective highlighted by the pandemic:

‘These technologies shouldn’t be deployed if they’re bound to be inaccurate or imprecise, because it affects people’s lives in so many different ways. If a technology’s going to be deployed out there – say, the track and trace app – the Government has to be sure that it will deliver. […] If it’s not going to deliver then there is no point because it will just bring a lot more confusion and people might not believe in it.’

It was clear from the discussions, and in the Council’s recommendations, that measures to minimise errors, and ensure people have the option to challenge outcomes and seek redress, would be fundamental to making biometric technologies acceptable. Or, in the words of one Council member, ‘What we really, really need to have is a way of challenging false results.’

As part of this, many Council members felt that ‘humans in the loop’ would be crucial to not only minimising errors, but also enabling recourse:

‘I think that the combination of human and technology is going to be safer, stronger, more resilient and robust, than either one or the other.’

‘You need people that are trained in errors. When things go wrong – because you can’t just say I’m sorry, I’ve got the wrong person – you need to actually explain what happened and be empathetic to the situation.’

Moreover, inaccurate technologies run the risk of damaging trust and confidence in the use of technology:

‘It has to be accurate. If we hear stories that things are failing, things aren’t working, then it’s going to lose confidence with the general public, isn’t it?’

While recognising that inaccurate technologies are a problem, some of the expert speakers posed a challenge to the Council: what if biometric technologies were completely accurate, meaning those using them for surveillance would have even more powerful tools at their disposal?

Here, Council members largely saw little difference between biometric technologies that are accurate and those that are prone to error. In their view, all biometric technologies pose risks and require safeguards.

Some Council members considered inaccuracy and error to be so concerning that biometric technologies cannot be deployed unless they are completely accurate, articulated powerfully by one member of a Community Voices group who said: ‘I don’t see the point in it being 70% accurate, 80%. It’s got to be 100%. That’s going to stop mistakes happening and further issues.’ Here, the link between accuracy and errors – or ‘mistakes and further issues’ – that negatively affect people is clear.

These sentiments around the need for accuracy and error minimisation are reflected throughout the recommendations made by the Council.

Identity, bias and discrimination

Much of the Council’s concern around accuracy was in response to the disproportionate impact biometric technologies have on marginalised groups.

These disproportionate impacts occur when the technologies deployed reflect and amplify biases that can exist in unrepresentative datasets, be baked into poorly designed algorithms, or be prevalent in institutional and social norms. Council members heard how datasets used to train many facial recognition algorithms, for example, do not contain diverse representations of the populations they are then used on, which can lead to those algorithms performing less accurately for minority ethnic groups.
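
By way of illustration only – the records below are entirely hypothetical – a simple disaggregated audit can surface such disparities, by computing the false non-match rate separately for each demographic group:

from collections import defaultdict

# Hypothetical labelled comparisons, invented for illustration: each record
# is (demographic group, genuinely the same person?, system said match?).
outcomes = [
    ("group_a", True, True), ("group_a", True, True), ("group_a", True, False),
    ("group_b", True, False), ("group_b", True, False), ("group_b", True, True),
    # ...in practice, many thousands of labelled comparisons per group
]

# group -> [false non-matches, genuine comparison trials]
errors = defaultdict(lambda: [0, 0])
for group, is_genuine, predicted_match in outcomes:
    if is_genuine:
        errors[group][1] += 1
        if not predicted_match:
            errors[group][0] += 1

for group, (misses, trials) in sorted(errors.items()):
    print(f"{group}: false non-match rate = {misses / trials:.2f} ({misses}/{trials})")

A markedly higher error rate for one group indicates the system performs less accurately for that group – the pattern reported in studies of facial recognition trained on unrepresentative datasets.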

When those inaccuracies are combined with existing discrimination or prejudice in society and institutions, biometric technologies may exacerbate, not ameliorate the harms. Some Council members shared and discussed personal stories about how they have experienced discrimination or negative experiences through biases reflected or amplified by technology:

‘There is a stigma attached to my ethnic background as a young Black male. Is that stigma going to be incorporated in the way technology is used? And do the people using the technologies hold that same stigma? It’s almost reinforcing the fact that people like me get stopped for no reason.’

‘My voice is soft; I have a sibilant ‘S’. I lisp slightly and this is often a way that people use to recognise my sexuality or to make an assumption about me. I’ve had that my whole life. Now, that makes me anxious about voice recognition technology, because I know that the average person in the street makes these assumptions about me, and I don’t want technology making that assumption about me as well.’

Council members recognised that these discriminatory experiences could be exacerbated by biometric technologies. They considered how biometric data has an ‘intimate and permanent nature’, relating to people’s physical bodies and intertwined with people’s experiences of their own identity. Not only does this heighten the sensitivity of the data – as is recognised by the inclusion of biometric data as a ‘special category’ in the GDPR – but it heightens the sensitivity of the impacts on people when biometric technologies cause discrimination.

For instance, for people who are transgender, ‘incorrect medical data about their gender and sex can put them in danger’. Council members felt that biometric technologies pose similar risks for transgender people when they do not account for a spectrum of gender identities, particularly in countries with weaker equality laws or more discriminatory attitudes. Council members were particularly concerned to hear about unethical research using facial recognition and other biometrics to attempt to identify people according to their sexuality or target them because of their gender:[footnote]For more on the ethical concerns surrounding research using biometrics, see: Noorden, R. V. (2020) ‘The ethical questions that haunt facial-recognition research’, Nature, 587(7834), pp. 354–358. doi: 10.1038/d41586-020-03187-3.[/footnote]

‘Biometric technologies are fundamentally about bodies – what we do with them and how we allow them to be used. Queer bodies are often stigmatised and there is still a huge historic association with sin and moral transgression.’

Another injustice the Council were concerned about was structural and institutional racism. Here, some Council members appreciated the potential for technologies – if built and used correctly – to reduce human biases, for example in the exercise of powers like stop and search:

‘We already have stop and search laws, which are very controversial. Certainly when they were introduced they were extremely unfair, and they have been abused by the police in a lot of ways. […] How does facial recognition make the existence and the abuse of that law any worse? In fact, are there ways in which it could make it better: is it going to be as biased in the same way that human beings are?’

However, the Council raised many concerns about how institutional racism could be compounded by biometric technologies, particularly when they are less accurate for ethnic minorities:

‘The system, and the information that goes in, is dependent on who is putting it in. If you’ve already got companies who have a racial bias, then the system is basically useless. Ultimately all you’re doing is transferring a human bias into a computer. Before those kinds of things are implemented and put out into communities, race prejudice and discrimination needs to be sorted out.’

‘It comes back to the trust, it’s coming down to who is owning these companies who are collecting the data. Are they racist? Are they this, are they that?’

On policing powers like stop and search in particular, Council members implicitly acknowledged how technologies aren’t used in isolation from a social and organisational structure, but are intertwined with it: [footnote]For more on how trust and technologies are embedded across socio-technical systems, see: Ada Lovelace Institute (2020) No green lights, no red lines. Available at: www.adalovelaceinstitute.org/wp-content/uploads/2020/07/No-green-lights-no-red-lines-final.pdf (Accessed: 8 January 2021).[/footnote]

‘For me, I think it’s about trust. Stop and search has been abused over the years and to add on top of that – to have technology that supports stop and search – it’s not going to make young Black males trust the police any more than they already do.’

For many Council members, whether or not biometric technologies would exacerbate or minimise discrimination and injustice depends on how they are designed, built and deployed. In addition to concerns about how social and institutional biases can be amplified through the use of technology, many Council members expressed concern about ‘one-size-fits-all’ approaches to technology design:

‘I suffer with a syndrome called Guillain-Barré syndrome. For me, the fingerprint on your phones, I never get right. It’s lucky I can put in my passcode because the fingerprint from my phone, it’s never the same. It always changes. I also, and others like me, can get Bell’s palsy, so facial recognition is a no-no as well.’

In these discussions, and throughout their deliberations, the Council considered the significant potential for biometric technologies to have disproportionate impacts on already marginalised communities. Accuracy, bias and discrimination are incredibly complex topics, and each can manifest and be understood in different ways depending on what biometric technology is used and how.

Council members recognised the motivations to use tools like facial and voice recognition to reduce bias or increase access. However, their deliberations and recommendations reflect that good motivations are not enough: they expect that biometric technologies must work for everyone, and must not unfairly disadvantage anyone.

Protections for people and data

Equalities and marginalised groups

Many of the expectations outlined in the Council’s recommendations advocate for the importance of standards and protections. It is not enough to call for better accuracy and the reduction of bias if each developer or deployer of biometrics chooses for themselves what constitutes ‘accurate’ or ‘unbiased’. The use of biometrics must adhere to widely agreed standards, not the values of any one group or organisation. One Council member expressed their disappointment that some technology companies cannot be relied on to demonstrate best practice:

‘I would have hoped it would have been these huge corporate companies that saw it as a problem. That it was that one Black employee, and she was the only one who realised it was an issue, I thought that was pretty sad and alarming.’

Many suggested that diverse datasets and developer teams should be the norm in an industry that develops these technologies, an idea which carried through in more than one of their recommendations.

However, some Council members, and particularly participants in the Community Voices workshops, acknowledged that sometimes standards aren’t enough. This was reflected in discussions of how, without strong oversight, issues for marginalised groups can be overlooked.

This was exemplified even in the Council’s deliberation itself, where the focus on some injustices was stronger than others. Members of the LGBTQI+ group highlighted that the discussion centred more on racial injustice than on the prejudice experienced by gay people or transgender people, for example. This may have been a consequence of the fact that much of the deliberation occurred while the Black Lives Matter protests were making headlines and very much on the minds of Council members. However, it may have also reflected a sense that:

‘Some people are uncomfortable talking about LGBT issues. This is just an observation really about how hard it can be to raise issues in some communities, or issues can be received in silence. This is often how discrimination starts/is perpetuated.’

The difficulty in ensuring marginalised communities’ perspectives are fully considered is highlighted by how, even in a process designed to specifically include those perspectives, ‘it was never going to be fully possible to give time to challenging the implicit internal, often unconscious biases’ that exist in society.

For many Council members, a diverse range of perspectives needs to be represented not just in the development of biometric technologies, but in the standards, governance and oversight relating to them. Moreover, for those communities most at risk from the harms these technologies may pose, standards and oversight are not enough if they are not backed by law: ‘Without that you aren’t safe.’

Data protection

Another concern where the Council had strong expectations for standards and protections was the management and governance of biometric data:

‘The problem is you and I don’t know where the data goes. That is the real issue, where the data goes. You stick your finger on some machine that reads it, but where does it go?’

In recent years, many people have become increasingly aware of, and knowledgeable about, how data about people is collected and used by organisations for a range of purposes. Council members discussed how they suspect many of these uses do not benefit the data subject, but instead support commercial incentives, often at the expense of the data subject.

‘You’ve got to remember, of all the systems you know about, the most valuable thing is the data. The technology isn’t valuable, it’s the data that is valuable.’

‘Who has the data, how good is it and who has access to it? Can I trust them?’

‘We have to assume as well that organisations do in fact sell, pass on and share information. So, we can’t just say, “Oh, these ones are okay and those ones need to be controlled.” They all need to be controlled.’

Council members also recognised the heightened sensitivity of biometric data, as it relates to unique and immutable characteristics, and is often used for high-stakes purposes like security and identification. Keeping biometric data secure was a serious concern for many:

‘It’s whether it’s safe. We have a history of data going awry, either maliciously or otherwise.’

‘If there was a data breach from the bank, if someone could have the raw data, my fingerprint, could they be able to replicate that electronically, and then utilise it on other websites? If it was to be hacked, would it still be safe?’

As with the protection of marginalised communities, Council members felt that the protection of biometric data should not be left to each organisation to determine for themselves, but instead would require standards and legislation. Though Council members recognised that many standards and laws for data protection already exist – the GDPR being the most prominent example – the recommendations reflect their discussions and expectations for stronger and more specific protections for biometric data.

‘I feel that it’s a double-edged sword. I think it’s got huge potential, but we really need to think about how we control it and who has access to the data.’

Understanding what is and isn’t ok

The Citizens’ Biometrics Council deliberations covered the breadth, depth and complexity of issues relating to biometrics. Aside from the major themes discussed above, the Council also considered topics like scope creep, the perceived inevitability of some biometric technologies, the power dynamics between governments and corporations and individual citizens, and how the increasing use of surveillance and identification technologies can influence or ‘nudge’ people’s behaviour, perhaps limiting their political participation or other liberties. There have also been themes, like trust and data protection, which have cut across many of the topics the Council discussed.

The Council members also considered and recognised the many benefits biometric technologies can bring, from improved services to better public safety. They saw why police forces and border security agencies were exploring the use of facial recognition, why banks are using voice recognition and other biometrics to tackle fraud, and why supermarkets are turning to biometrics to provide services like age-checking in an increasingly contactless society.

Ultimately though, the Council’s focus was on how to balance the opportunities of biometrics with the risks. Throughout their deliberations, the Citizens’ Biometrics Council recognised that this is a far from straightforward task. The solution, they felt, would require more than sweeping bans on the one hand, or incremental adjustments to existing governance and oversight on the other.

The interconnected nature of the themes the Council explored shows how complex and ‘wicked’ a problem biometrics pose.[footnote]Churchman, C. West (1967). ‘Wicked Problems’. Management Science. 14 (4): B-141–B-146. doi: 10.1287/mnsc.14.4.B141[/footnote] Addressing one issue requires balancing complex trade-offs that have consequences on other challenges. In the figure below, we outline how some of the core themes relate to one another.

The core themes raised through the Council’s deliberations show the way towards trustworthy, acceptable and responsible biometrics

Conclusion: addressing the Council’s recommendations

The Citizens’ Biometrics Council’s deliberations offer an in-depth understanding of what informed members of the public think makes the use of biometric data and technology responsible, trustworthy and proportionate. Their recommendations articulate their expectations, and what is required to enable acceptable uses and prevent unacceptable uses.

The Council’s recommendations range from very specific ideas to broad expectations. This is appropriate, as the group’s task was to express their informed opinions without being bound by any limitations.

Responding to such aspirational and broad recommendations poses practical and political challenges. The Ada Lovelace Institute identifies three clear clusters around which the Council’s recommendations centre, and which suggest the direction of travel that policymakers and practitioners must take to respond to the Council’s expectations:

  1. Developing more comprehensive legislation and regulation for biometric technologies.
  2. Establishing an independent, authoritative body to provide robust oversight.
  3. Ensuring minimum standards for the design and deployment of biometric technologies.

These three areas were presented to the oversight group, the Community Voices groups and some of the Council members. Their feedback contributed to developing possible approaches for policy and practice to ensure the Citizens’ Biometrics Council’s recommendations and expectations are addressed.

‘It’s remarkable really that everyone, without fail, that we’ve spoken to and heard from has said “This needs to be sorted.” We need a framework and some legislation to provide oversight.’

1. Legislation and regulation

The Council members articulated a clear expectation for more comprehensive legislation and regulation relating to biometrics in the UK. In their deliberations, they considered how current law has not ‘kept pace with’ advances in the technologies, creating grey areas for their lawful implementation, as well as gaps in the protections that safeguard people’s rights and prevent wider societal harms.

Through these recommendations, the Council expressed a clear desire for the UK Government to review and develop the governance of biometric technologies and data. One Council recommendation calls for primary legislation – ‘The Biometrics Act’ – while other expectations point towards secondary legislation, in the form of statutory codes of conduct or other rules created under existing acts such as the Data Protection Act 2018 or the Equality Act 2010.

Whatever form it takes, the Council’s recommendations articulate clear expectations for biometrics legislation and regulation:

  • The law must cover biometric technologies and data comprehensively, across all contexts where they are deployed, not just law enforcement.
  • Regulations must be designed with the input of a broad range of stakeholders, including members of the public and particularly those from marginalised groups.
  • The law must be able to keep pace with rapid developments in technology. This could be achieved through adopting ‘principles-based’ legislation similar to the GDPR, supported by more specific and updated guidance or regulation.

The Council’s recommendations for stronger regulation around biometric technology and data should be recognised by existing bodies that provide oversight of biometric data and technology, including the new Biometrics and Surveillance Camera Commissioner.[footnote]See: HM Government Public Appointments, Biometrics and Surveillance Camera Commissioner. Available at: https://publicappointments.cabinetoffice.gov.uk/appointment/biometrics-and-surveillance-camera-commissioner (Accessed: 27 November 2020).[/footnote]

This new office, which combines the remits of two existing commissioners, should have a clear mandate to promote the development of strong legislation around the use of biometrics, and not represent a weakening of regulation through the combined role.[footnote]Rowe, S. and Jones, J. (2020) ‘The Biometrics and Surveillance Camera Commissioner: streamlined or eroded oversight?’, Ada Lovelace Institute. Available at: www.adalovelaceinstitute.org/blog/biometrics-surveillance-camera-commissioner (Accessed: 12 January 2021).[/footnote]

The Council’s recommendations on the need for clearer legislation and regulation are echoed by the independent legal review of current UK governance of biometric data, commissioned by the Ada Lovelace Institute and due to report in 2021, as well as by our call for a moratorium on further deployments of facial recognition technology until adequate regulation exists.

2. Independent oversight authority

Many of the Council’s recommendations express the expectation for a single, independent and authoritative body to provide oversight of the use of biometric technologies in the UK.

These recommendations respond to the evidence the Council heard about the currently fragmented oversight landscape for biometrics in the UK, as various offices and regulators provide different aspects of oversight in a manner that produces both overlapping remits and gaps. Council members expect a much clearer single point of oversight.

Council members also reflected that legislation, codes of conduct and other governance mechanisms will not be effective without enforcement, and people may not feel sufficiently protected without a body with the remit, authority and capacity to ensure biometric technologies are used in line with the law and with public expectations.

Such a body would need to fulfil a range of characteristics to meet the Council’s expectations:

  • It must represent a diverse cross-section of stakeholders, drawing on not only a range of expertise and sectors – from technologists to ethicists – but also including mechanisms for public participation and the involvement of marginalised groups.
  • The body must have ‘teeth’ – the authority to hold actors to account through sanctions, fines or other mechanisms.
  • It must be independent from financial or political influences which prevent it from acting in the interests of the public.
  • The body should have the capacity to respond to complaints, carry out investigations, and the potential to perform ethical or legal reviews.
  • The body should also have a remit which covers all uses of biometric technologies across public and private sectors.

To meet all the Council’s expectations, particularly around having the required authority, powers and independence, the body would need to be appointed by Government or another public institution, but given an independent remit.

Establishing a new body with the express remit to maintain legal, practical and ethical scrutiny over deployments of biometric technology raises a range of practical and political challenges, as well as potentially adding more noise, not clarity, to the oversight of biometrics use in the UK.

A more pragmatic approach lies in giving an existing body within the biometrics governance landscape the single authority, remit and resource to offer comprehensive ethical scrutiny and oversight. Such an opportunity is potentially posed by the incoming appointment of a combined Biometrics and Surveillance Camera Commissioner. This combined role offers an opportunity to meet the Council’s expectations for a single point of oversight, if the new office is granted the appropriate powers and resource.

3. Minimum standards for the design and deployment of biometrics

Both legislation and regulation must ensure any biometric technologies are in line with the Council’s expectations for what is responsible, trustworthy and proportionate. This can be addressed by the development of standards that biometric technologies must meet before they can be deployed in public settings.

Much like the standards that assure the quality and safety of consumer goods, minimum standards for the design and deployment of biometric technologies would ensure that, where deployed, they are designed and operated in line with the Council’s recommendations. They would also prevent uses that fail to meet these standards, in effect prohibiting uses of biometrics that are not considered acceptable.

There are a range of considerations that standards for biometric technologies should cover to meet the Council’s expectations:

  • Biometric technologies must not create biased, discriminatory or unequal outcomes across the populations they affect.
  • Inaccuracies and errors must be minimised as much as possible prior to deployment, not iteratively reduced after a technology is used in public.
  • When used outside of public-sector settings, people must be offered mechanisms to consent to or opt into uses of biometric technologies, and be provided equal service or access if they choose not to.
  • In addition to compliance with the GDPR, standards for data protection and privacy, such as ISO 27001,[footnote]International Organization for Standardization (no date) ISO – ISO/IEC 27001 – Information security management. Available at: www.iso.org/isoiec-27001-information-security.html (Accessed: 11 December 2020).[/footnote] should be adopted as a minimum starting point for good practice in managing and governing biometric data.
  • Standard practices for transparency should make clear where and how any biometric technology is used, including accessible information such as what data is collected and how it’s used, how people can consent or opt out (where necessary), and how they can challenge outcomes. Information about how proportionality has been justified must also be open to scrutiny.

This is a far from exhaustive list, and the Council recognised that though informed, they themselves should not be the sole authors of any list of design and deployment standards for biometrics. Rather, responsibility for developing standards for biometric technology should sit with the same independent authority advocated for by the Citizens’ Biometrics Council.

The principles informing these standards should be informed by broader public debate, and the standards themselves should be subject to a public review or appeal mechanism. Ultimately, any standards for the design and deployment of biometric technologies should be developed alongside legislation and should involve the input of a broad range of stakeholders, representing legal, technical, policy and ethical expertise, as well as a diverse cross-section of the public.

Public voice in the debate about biometrics

Public debate remains sorely needed to ensure biometric technologies are used for societal good and their harms are minimised. The Citizens’ Biometrics Council is a crucial step towards bringing the voices and perspectives of informed members of the public to this debate.

The Council members have indicated a clear set of concerns and desires with regards to the use of biometrics, but among their key findings is that more work must be done to involve the public in the development of biometrics policy and responsible practice. Continued consultation with, and representation of, a diverse cross-section of society is fundamental to ensuring that biometric technologies are only deployed in a way that is trustworthy, responsible and acceptable.

As articulated by the Council through their recommendations, their deliberations should represent the start, not the end, of public involvement in the development of biometric technologies and policies.

‘If you put a frog into water, and you boil the water, it won’t jump out. The water’s boiling very slowly, and it doesn’t detect that. A concern I have is, what if that represents the general population? What happens if, in 20 years’ time, people don’t realise what’s happened until it’s too late?’

Appendix

We are grateful to a number of colleagues for their time, expertise and supportive contributions to the Citizens’ Biometrics Council.

Expert speakers:

  • Fieke Jansen Cardiff Data Justice Lab
  • Griff Ferris Big Brother Watch
  • Robin Pharoah Encounter Consulting
  • Julie Dawson Yoti
  • Ali Shah Information Commissioner’s Office
  • Peter Brown Information Commissioner’s Office
  • Zac Doffman Digital Barriers
  • Kenny Long Digital Barriers
  • Paul Wiles former Biometrics Commissioner
  • Tony Porter former Surveillance Camera Commissioner
  • Lindsey Chiswick Metropolitan Police Service
  • Rebecca Brown University of Oxford
  • Elliot Jones Ada Lovelace Institute
  • Tom McNeil West Midlands Police and Crime Commissioner’s Office

Oversight group:

  • Ali Shah Information Commissioner’s Office
  • Julie Dawson Yoti
  • Dr Jack Stilgoe UCL
  • Prof. Peter Fussey University of Essex
  • Lindsey Chiswick Metropolitan Police Service
  • Zara Rahman and Julia Keseru The Engine Room

Peer reviewers for this report:

  • Fieke Jansen Cardiff Data Justice Lab
  • Hetan Shah British Academy and Ada Lovelace Institute
  • Tom McNeil West Midlands Police and Crime Commissioner’s Office
  • Lindsey Chiswick Metropolitan Police Service
  • Dr Jack Stilgoe UCL
  • Ed Bridges Cardiff University

Hopkins Van Mil:

  • Henrietta Hopkins
  • Suzannah Kinsella
  • Grace Evans
  • Sophie Reid
  • Mike King
  • Hally Ingram
  • Kathleen Bailey

Ursus Consulting:

  • Anna MacGillivray.

This report was authored by Aidan Peppin, with contributions from Reema Patel and Imogen Parker.


Preferred citation: Ada Lovelace Institute. (2021). The Citizens’ Biometrics Council. Available at: https://www.adalovelaceinstitute.org/report/citizens-biometrics-council/

Image credit: Paul Wyatt

Diagrams by Soapbox

  1. Hancock, A. and Steer, G. (2021) ‘Johnson backtracks on vaccine “passport for pubs” after backlash’, Financial Times, 25 March 2021. Available at: https://www.ft.com/content/aa5e8372-8cec-4b82-96d8-0019f2f24998 (Accessed: 5 April 2021).
  2. Ada Lovelace Institute (2021) International monitor: vaccine passports and COVID status apps. Available at: https://www.
    adalovelaceinstitute.org/project/international-monitor-vaccine-passports-covid-status-apps/ (Accessed: 5 April 2021)
  3. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  4. Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021).
  5. Olivarius, K. (2020) ‘The Dangerous History of Immunoprivilege’, The New York Times. 12 April 2020. Available at: https://www.nytimes.com/2020/04/12/opinion/coronavirus-immunity-passports.html (Accessed: 6 April 2021).
  6. World Health Organization (ed.) (2016) International health regulations (2005). Third edition. Geneva, Switzerland: World Health Organization.
  7. Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021).
  8. Ada Lovelace Institute (2021) The epidemiological and economic impact of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=KRUmM-_Jjk4 (Accessed: 7 April 2021).
  9. Wilson, K., Atkinson, K. M. and Bell, C. P. (2016) ‘Travel Vaccines Enter the Digital Age: Creating a Virtual Immunization Record’, The American Journal of Tropical Medicine and Hygiene, 94(3), pp. 485–488. doi: 10.4269/ajtmh.15-0510
  10. Kobie, N. (2020) ‘Plans for coronavirus immunity passports should worry us all’, Wired UK, 8 June 202. Available at: https://www.wired.
    co.uk/article/uk-immunity-passports-coronavirus (Accessed: 10 February 2021); Miller, J. (2020) ‘Armed with Roche antibody test, Germany faces immunity passport dilemma’, Reuters, 4 May 2020. Available at: https://www.reuters.com/article/health-coronavirusgermany-antibodies-idUSL1N2CM0WB (Accessed: 10 February 2021); Rayner, G. and Bodkin, H. (2020) ‘Government considering “health certificates” if proof of immunity established by new antibody test’, The Telegraph, 14 May 2020. Available at: https:// www.telegraph.co.uk/politics/2020/05/14/government-considering-health-certificates-proof-immunity-established/ (Accessed: 10 February 2021).
  11. World Health Organisation (2020) “Immunity passports” in the context of COVID-19. Scientific Brief. 24 April 2020. Available at: https://www.who.int/news-room/commentaries/detail/immunity-passports-in-the-context-of-covid-19 (Accessed: 10 February 2021).
  12. Ministry of Health (2021) What is a Green Pass? Available at: https://corona.health.gov.il/en/directives/green-pass-info/ (Accessed:
    6 April 2021).
  13. European Commission (2021) Coronavirus: Commission proposes a Digital Green Certificate, European Commission – European Commission. Available at: https://ec.europa.eu/commission/presscorner/detail/en/IP_21_1181 (Accessed: 6 April 2021).
  14. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021)
  15. World Health Organisation (2020) Estonia and WHO to jointly develop digital vaccine certificate to strengthen COVAX. Available at: https://www.who.int/news-room/feature-stories/detail/estonia-and-who-to-jointly-develop-digital-vaccine-certificate-to-strengthen-covax (Accessed: 6 April 2021). World Health Organisation (2020) World Health Organization open call for nomination of experts to contribute to the Smart Vaccination Certificate technical specifications and standards. Available at: https://www.who.int/news-room/articles-detail/world-health-organization-open-call-for-nomination-of-experts-to-contribute-to-the-smart-vaccination-certificate-technical-specifications-and-standards-application-deadline-14-december-2020 (Accessed: 6 April 2021). Reuters (2021), WHO does not back vaccination passports for now – spokeswoman. Available at: https://www.reuters.com/article/us-health-coronavirus-who-vaccines-idUKKBN2BT158 (Accessed: 13 April 2021)
  16. IBM (2021) Digital Health Pass – Overview. Available at: https://www.ibm.com/products/digital-health-pass (Accessed: 6 April 2021).
  17. Watson Health (2020) ‘IBM and Salesforce join forces to help deliver verifiable vaccine and health passes’, Watson Health Perspectives. Available at: https://www.ibm.com/blogs/watson-health/partnership-with-salesforce-verifiable-health-pass/ (Accessed: 6 April 2021).
  18. New York State (2021) Excelsior Pass. Available at: https://covid19vaccine.health.ny.gov/excelsior-pass (Accessed: 6 April 2021).
  19. CommonPass (2021) CommonPass. Available at: https://commonpass.org (Accessed: 7 April 2021); IATA (2021) IATA Travel Pass Initiative. Available at: https://www.iata.org/en/programs/passenger/travel-pass/ (Accessed: 7 April 2021).
  20. COVID-19 Credentials Initiative (2021). COVID-19 Credentials Initiative. Available at: https://www.covidcreds.org/ (Accessed: 7 April 2021). VCI (2021). Available at: https://vci.org/ (Accessed: 7 April 2021).
  21. myGP (2020) ‘“myGP” to launch England’s first digital COVID-19 vaccination verification feature for smartphones.’ myGP. 9 December 2020. Available at: https://www.mygp.com/mygp-to-launch-englands-first-digital-covid-19-vaccination-verification-feature-for-smartphones/ (Accessed: 7 April 2021). iProov (2021) Covid-19 Passport from iProov and Mvine Moves Into Trial Phase. Available at: https://www.iproov.com/press/uk-covid19-passport-moves-into-trial-phase (Accessed: 7 April 2021).
  22. BBC News (2020) ‘Covid-19: No plans for “vaccine passport” – Michael Gove’, BBC News. 1 December 2020. Available at: https://www.bbc.com/news/uk-55143484 (Accessed: 7 April 2021). BBC News (2021) ‘Covid: Minister rules out vaccine passports in UK’, BBC News. 7 February 2021. Available at: https://www.bbc.com/news/55970801 (Accessed: 7 April 2021).
  23. Sheridan, D. (2021) ‘Vaccine passports to enter shops, pubs and events “under consideration”’, The Telegraph, 14 February 2021. Available at: https://www.telegraph.co.uk/news/2021/02/14/vaccine-passports-enter-shops-pubs-events-consideration/ (Accessed: 7 April 2021). Zeffman, H. and Dathan, M. (2021) ‘Boris Johnson sees Covid vaccine passport app as route to freedom’, The Times, 11 February 2021. Available at: https://www.thetimes.co.uk/article/boris-johnson-sees-covid-vaccine-passport-app-as-route-to-freedom-rt07g63xn (Accessed: 7 April 2021).
  24. Boland, H. (2021) ‘Government funds eight vaccine passport schemes despite “no plans” for rollout’, The Telegraph, 24 January 2021. Available at: https://www.telegraph.co.uk/technology/2021/01/24/government-funds-eight-vaccine-passport-schemes-despite-no-plans/ (Accessed: 7 April 2021). Department of Health and Social Care (2020) Covid-19 Certification/Passport MVP. Available at: https://www.contractsfinder.service.gov.uk/notice/bf6eef14-6345-429a-a4e7-df68a39bd135 (Accessed: 13 April 2021). Hymas, C. and Diver, T. (2021) ‘Vaccine certificates being developed to unlock international travel’, The Telegraph, 12 February 2021. Available at: https://www.telegraph.co.uk/politics/2021/02/12/government-develop-COVID-vaccine-certificates-travel-abroad/ (Accessed: 7 April 2021).
  25. Cabinet Office (2021) COVID-19 Response – Spring 2021, GOV.UK. Available at: https://www.gov.uk/government/publications/COVID-19-response-spring-2021/COVID-19-response-spring-2021 (Accessed: 7 April 2021).
  26. Cabinet Office (2021) Roadmap Reviews: Update. Available at: https://www.gov.uk/government/publications/COVID-19-response-spring-2021-reviews-terms-of-reference/roadmap-reviews-update.
  27. Scientific Advisory Group for Emergencies (2021) ‘SAGE 79 minutes: Coronavirus (COVID-19) response, 4 February 2021’, GOV.UK. 22 February 2021, Available at: https://www.gov.uk/government/publications/sage-79-minutes-coronavirus-covid-19-response-4-february-2021 (Accessed: 6 April 2021).
  28. Ada Lovelace Institute (2021) The epidemiological and economic impact of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=KRUmM-_Jjk4 (Accessed: 7 April 2021)
  29. European Centre for Disease Prevention and Control (2021) Risk of SARS-CoV-2 transmission from newly-infected individuals with documented previous infection or vaccination. Available at: https://www.ecdc.europa.eu/en/publications-data/sars-cov-2-transmission-newly-infected-individuals-previous-infection (Accessed: 13 April 2021). Science News (2021) Moderna and Pfizer COVID-19 vaccines may block infection as well as disease. Available at: https://www.sciencenews.org/article/coronavirus-covid-vaccine-moderna-pfizer-transmission-disease (Accessed: 13 April 2021).
  30. Bonnefoy, P. and Londoño, E. (2021) ‘Despite Chile’s Speedy COVID-19 Vaccination Drive, Cases Soar’, The New York Times, 30 March 2021. Available at: https://www.nytimes.com/2021/03/30/world/americas/chile-vaccination-cases-surge.html (Accessed: 6 April 2021)
  31. Zimmer, C., Corum, J. and Wee, S.-L. (no date) ‘Coronavirus Vaccine Tracker’, The New York Times. Available at: https://www.nytimes.com/interactive/2020/science/coronavirus-vaccine-tracker.html (Accessed: 21 April 2021). Parker et al. (2021) An interactive website tracking COVID-19 vaccine development. Available at: https://vac-lshtm.shinyapps.io/ncov_vaccine_landscape/ (Accessed: 21 April 2021)
  32. BBC News (2021) ‘COVID: Oxford jab offers less S Africa variant protection’, BBC News. 7 February 2021. Available at: https://www.bbc.com/news/uk-55967767 (Accessed: 6 April 2021).
  33. Wise, J. (2021) ‘COVID-19: The E484K mutation and the risks it poses’, The BMJ, p. n359. doi: 10.1136/bmj.n359. Sample, I. (2021) ‘What do we know about the Indian coronavirus variant?’, The Guardian, 19 April 2021. Available at: https://www.theguardian.com/world/2021/apr/19/what-do-we-know-about-the-indian-coronavirus-variant (Accessed: 22 April 2021).
  34. World Health Organisation (2021) Coronavirus disease (COVID-19): Vaccines. Available at: https://www.who.int/news-room/q-a-detail/coronavirus-disease-(COVID-19)-vaccines (Accessed: 6 April 2021)
  35. ibid.
  36. The Royal Society provides a different categorisation, between measures demonstrating the subject is not infectious (PCR and Lateral Flow tests) and those suggesting the subject is immune and so will not become infectious (antibody tests and vaccination). Edgar Whitley, a member of our expert deliberative panel, distinguishes between ‘red light’ measures, which say a person is potentially infectious and should self-isolate, and ‘green light’ ones, which say a person tests negative and is not infectious.
  37. Asai, T. (2020) ‘COVID-19: accurate interpretation of diagnostic tests—a statistical point of view’, Journal of Anesthesia. doi: 10.1007/s00540-020-02875-8.
  38. Kucirka, L. M. et al. (2020) ‘Variation in False-Negative Rate of Reverse Transcriptase Polymerase Chain Reaction–Based SARS-CoV-2 Tests by Time Since Exposure’, Annals of Internal Medicine. doi: 10.7326/M20-1495.
  39. European Centre for Disease Prevention and Control (2021) Immune responses and immunity to SARS-CoV-2, European Centre for Disease Prevention and Control. Available at: https://www.ecdc.europa.eu/en/COVID-19/latest-evidence/immune-responses (Accessed: 10 February 2021).
  40. Ainsworth, M. et al. (2020) ‘Performance characteristics of five immunoassays for SARS-CoV-2: a head-to-head benchmark comparison’, The Lancet Infectious Diseases, 20(12), pp. 1390–1400. doi: 10.1016/S1473-3099(20)30634-4.
  41. European Centre for Disease Prevention and Control (2021) Immune responses and immunity to SARS-CoV-2, European Centre for Disease Prevention and Control. Available at: https://www.ecdc.europa.eu/en/COVID-19/latest-evidence/immune-responses (Accessed: 10 February 2021).
  42. Kellam, P. and Barclay, W. (2020) ‘The dynamics of humoral immune responses following SARS-CoV-2 infection and the potential for reinfection’, Journal of General Virology, 101(8), pp. 791–797. doi: 10.1099/jgv.0.001439.
  43. Drury, J. et al. (2021) Behavioural responses to Covid-19 health certification: A rapid review. 9 April 2021. Available at: https://www.medrxiv.org/content/10.1101/2021.04.07.21255072v1 (Accessed: 13 April 2021).
  44. ibid.
  45. Miller, B., Wain, R. and Alderman, G. (2021) ‘Introducing a Global COVID Travel Pass to Get the World Moving Again’, Tony Blair Institute for Global Change. Available at: https://institute.global/policy/introducing-global-COVID-travel-pass-get-world-moving-again (Accessed: 6 April 2021).
  46. World Health Organisation (2021) Interim position paper: considerations regarding proof of COVID-19 vaccination for international travellers. Available at: https://www.who.int/news-room/articles-detail/interim-position-paper-considerations-regarding-proof-of-COVID-19-vaccination-for-international-travellers (Accessed: 6 April 2021).
  47. World Health Organisation (2021) Call for public comments: Interim guidance for developing a Smart Vaccination Certificate – Release Candidate 1. Available at: https://www.who.int/news-room/articles-detail/call-for-public-comments-interim-guidance-for-developing-a-smart-vaccination-certificate-release-candidate-1 (Accessed: 6 April 2021).
  48. SPI-M-O (2020) Consensus statement on events and gatherings, 19 August 2020. Available at: https://www.gov.uk/government/publications/spi-m-o-consensus-statement-on-events-and-gatherings-19-august-2020 (Accessed: 13 April 2021)
  49. Patrick Gracey, Response to Ada Lovelace Institute call for evidence.
  50. Walker, P. (2021) ‘UK arts figures call for Covid certificates to revive industry’, The Guardian. 23 April 2021. Available at: http://www.theguardian.com/culture/2021/apr/23/uk-arts-figures-covid-certificates-revive-industry-letter (Accessed: 5 May 2021).
  51. Silverstone (2021), Summer sporting events support Covid certification, 9 April 2021. Available at: https://www.silverstone.co.uk/news/summer-sporting-events-support-covid-certification-review (Accessed: 22 April 2021).
  52. BBC News (2021) ‘Pimlico Plumbers to make workers get vaccinations’. BBC News. Available at: https://www.bbc.co.uk/news/business-55654229 (Accessed: 13 April 2021).
  53. Leadership and Worker Engagement Forum (2021) ‘Management of risk when planning work: The right priorities’, Leadership and worker involvement toolkit, p. 1. Available at: https://www.hse.gov.uk/construction/lwit/assets/downloads/hierarchy-risk-controls.pdf.
  54. Department of Health and Social Care (2021) ‘Consultation launched on staff COVID-19 vaccines in care homes with older adult residents’. GOV.UK. Available at: https://www.gov.uk/government/news/consultation-launched-on-staff-covid-19-vaccines-in-care-homes-with-older-adult-residents (Accessed: 14 April 2021)
  55. Full Fact (2021) Is there a precedent for mandatory vaccines for care home workers? Available at: https://fullfact.org/health/mandatory-vaccine-care-home-hepatitis-b/ (Accessed: 6 April 2021).
  56. House of Commons Work and Pensions Committee (2021) Oral evidence: Health and Safety Executive HC 39, 17 March 2021, Q178. Available at: https://committees.parliament.uk/oralevidence/1910/pdf/ (Accessed: 6 April 2021).
  57. Acas (2021) Getting the coronavirus (COVID-19) vaccine for work. [online] Available at: https://www.acas.org.uk/working-safely-coronavirus/getting-the-coronavirus-vaccine-for-work (Accessed: 6 April 2021).
  58. Pakes, A. (2020) ‘Workplace digital monitoring and surveillance: what are my rights?’, Prospect. Available at: https://prospect.org.uk/news/workplace-digital-monitoring-and-surveillance-what-are-my-rights/ (Accessed: 6 April 2021).
  59. Allegretti, A. and Booth, R. (2021) ‘Covid-status certificate scheme could be unlawful discrimination, says EHRC’. The Guardian. 14 April 2021. Available at: https://www.theguardian.com/world/2021/apr/14/covid-status-certificates-may-cause-unlawful-discrimination-warns-ehrc (Accessed: 14 April 2021).
  60. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  61. European Court of Human Rights (2014) Case of Brincat and Others v. Malta. Available at: http://hudoc.echr.coe.int/eng?i=001-145790 (Accessed: 6 April 2021).
  62. Ministry of Health (2021) What is a Green Pass? Available at: https://corona.health.gov.il/en/directives/green-pass-info/ (Accessed: 6 April 2021). Ministry of Health (2021) Traffic Light App for Businesses. Available at: https://corona.health.gov.il/en/directives/biz-ramzor-app/ (Accessed: 8 April 2021).
  63. Prime Minister’s Office (2021) Rammeaftale om plan for genåbning af Danmark [Framework agreement on a plan for the reopening of Denmark]. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  64. Beduschi, A. (2020) Digital Health Passports for COVID-19: Data Privacy and Human Rights Law. University of Exeter. Available at: https://socialsciences.exeter.ac.uk/media/universityofexeter/collegeofsocialsciencesandinternationalstudies/lawimages/research/Policy_brief_-_Digital_Health_Passports_COVID-19_-_Beduschi.pdf (Accessed: 6 April 2021).
  65. Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence.
  66. ibid.
  67. Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence.
  68. Beduschi, A. (2020).
  69. European Court of Human Rights. (2020) Guide on Article 8 of the European Convention on Human Rights. Available at: https://www.echr.coe.int/documents/guide_art_8_eng.pdf (Accessed: 6 April 2021).
  70. Access Now, Response to Ada Lovelace Institute call for evidence.
  71. Privacy International (2020) “Anytime and anywhere”: Vaccination passports, immunity certificates, and the permanent pandemic. Available at: http://privacyinternational.org/long-read/4350/anytime-and-anywhere-vaccination-passports-immunity-certificates-and-permanent (Accessed: 26 April 2021).
  72. Douglas, T. (2021) ‘Cross Post: Vaccine Passports: Four Ethical Objections, and Replies’. Practical Ethics. Available at: http://blog.practicalethics.ox.ac.uk/2021/03/cross-post-vaccine-passports-four-ethical-objections-and-replies/ (Accessed: 8 April 2021).
  73. Brown, R. C. H. et al. (2020) ‘Passport to freedom? Immunity passports for COVID-19’, Journal of Medical Ethics, 46(10), pp. 652–659. doi: 10.1136/medethics-2020-106365.
  74. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence; Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence.
  75. Beduschi, A. (2020).
  76. Black, I. and Forsberg, L. (2021) ‘Inoculate to Imbibe? On the Pub Landlord Who Requires You to be Vaccinated against COVID’. Practical Ethics. Available at: http://blog.practicalethics.ox.ac.uk/2021/03/inoculate-to-imbibe/ (Accessed: 6 April 2021).
  77. Hindu Council UK (2021) Supporting Nationwide Vaccination Programme. 19 January 2021. Available at: http://www.hinducounciluk.org/2021/01/19/supporting-nationwide-vaccination-programme/ (Accessed: 6 April 2021); Ladaria Ferrer, L. and Morandi, G. (2020) ‘Note on the morality of using some anti-COVID-19 vaccines’. Vatican. Available at: https://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_con_cfaith_doc_20201221_nota-vaccini-antiCOVID_en.html (Accessed: 6 April 2021); Kadri, S. (2021) ‘For Muslims wary of the COVID vaccine: there’s every religious reason not to be’. The Guardian. 18 February 2021. Available at: http://www.theguardian.com/commentisfree/2021/feb/18/muslims-wary-COVID-vaccine-religious-reason (Accessed: 6 April 2021).
  78. Office for National Statistics (2021) Coronavirus and vaccination rates in people aged 50 years and over by socio-demographic characteristic, England: 8 December 2020 to 12 April 2021. 6 May 2021. Available from the Office for National Statistics website (ons.gov.uk).
  79. Schraer, R. (2021) ‘Covid: Black leaders fear racist past feeds mistrust in vaccine’. BBC News. 6 May 2021. Available at: https://www.bbc.co.uk/news/health-56813982 (Accessed: 7 May 2021).
  80. Allegretti, A. and Booth, R. (2021).
  81. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  82. Black, I. and Forsberg, L. (2021).
  83. Beduschi, A. (2020).
  84. Thomas, N. (2021) ‘Vaccine passports: path back to normality or problem in the making?’, Reuters, 5 February 2021. Available at: https://www.reuters.com/article/us-health-coronavirus-britain-vaccine-pa-idUSKBN2A4134 (Accessed: 6 April 2021).
  85. Buolamwini, J. and Gebru, T. (2018) ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’, in Conference on Fairness, Accountability and Transparency. PMLR, pp. 77–91. Available at: http://proceedings.mlr.press/v81/buolamwini18a.html (Accessed: 6 April 2021).
  86. Kofler, N. and Baylis, F. (2020) ‘Ten reasons why immunity passports are a bad idea’, Nature, 581(7809), pp. 379–381. doi: 10.1038/d41586-020-01451-0.
  87. ibid.
  88. Olivarius, K. (2019) ‘Immunity, Capital, and Power in Antebellum New Orleans’, The American Historical Review, 124(2), pp. 425–455. doi: 10.1093/ahr/rhz176.
  89. Access Now, Response to Ada Lovelace Institute call for evidence.
  90. Dr Btihaj Ajana, Response to Ada Lovelace Institute call for evidence.
  91. Pai, M. (2021) ‘How Vaccine Passports Will Worsen Inequities In Global Health’, Nature Portfolio Microbiology Community. Available at: http://naturemicrobiologycommunity.nature.com/posts/how-vaccine-passports-will-worsen-inequities-in-global-health (Accessed: 6 April 2021).
  92. Merrick, J. (2021) ‘New variants will “come back to haunt” the UK unless it helps tackle worldwide transmission’, iNews, 23 April 2021. Available at: https://inews.co.uk/news/politics/new-variants-will-come-back-to-haunt-the-uk-unless-it-helps-tackle-worldwide-transmission-971041 (Accessed: 5 May 2021).
  93. Kuchler, H. and Williams, A. (2021) ‘Vaccine makers say IP waiver could hand technology to China and Russia’, Financial Times, 25 April 2021. Available at: https://www.ft.com/content/fa1e0d22-71f2-401f-9971-fa27313570ab (Accessed: 5 May 2021).
  94. Digital, Culture, Media and Sport Committee Sub-Committee on Online Harms and Disinformation (2021). Oral evidence: Online harms and the ethics of data, HC 646. 26 January 2021. Available at: https://committees.parliament.uk/oralevidence/1586/html/ (Accessed: 9 April 2021).
  95. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  96. A principle holding that reforms should not be made until the reasoning behind the existing state of affairs is understood. It is inspired by a passage in G. K. Chesterton’s The Thing (1929), which argues that an intelligent reformer would not remove a fence without first finding out why it was put up.
  97. Pietropaoli, I. (2021) ‘Part 2: Getting Digital Health Passports Right? Legal, Ethical and Equality Considerations’. British Institute of International and Comparative Law. 1 April 2021. Available at: https://www.biicl.org/blog/23/part-2-getting-digital-health-passports-right-legal-ethical-and-equality-considerations (Accessed: 6 April 2021).
  98. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  99. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  100. Ada Lovelace Institute (2021) International monitor: vaccine passports and COVID status apps. Available at: https://www.adalovelaceinstitute.org/project/international-monitor-vaccine-passports-covid-status-apps/ (Accessed: 5 April 2021).
  101. Pew Research Center (2020) 8 charts on internet use around the world as countries grapple with COVID-19. Available at: https://www.pewresearch.org/fact-tank/2020/04/02/8-charts-on-internet-use-around-the-world-as-countries-grapple-with-covid-19/ (Accessed: 13 April 2021).
  102. Ada Lovelace Institute (2021) The data divide. Available at: https://www.adalovelaceinstitute.org/survey/data-divide/ (Accessed: 6 April 2021).
  103. Pew Research Center (2020).
  104. Electoral Commission (2015) Delivering and costing a proof of identity scheme for polling station voters in Great Britain. Available at: https://www.electoralcommission.org.uk/media/1825 (Accessed: 13 April 2021); Davies, C. (2021). ‘Number of young people with driving licence in Great Britain at lowest on record’, The Guardian. 5 April 2021. Available at: https://www.theguardian.com/money/2021/apr/05/number-of-young-people-with-driving-licence-in-great-britain-at-lowest-on-record (Accessed: 6 May 2021).
  105. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  106. NHS Digital. (2021) NHS e-Referral Service integrated into the NHS App to make managing referrals easier. Available at: https://digital.nhs.uk/news-and-events/latest-news/nhs-e-referral-service-integrated-into-the-nhs-app-to-make-managing-referrals-easier (Accessed: 28 April 2021).
  107. Access Now, Response to Ada Lovelace Institute call for evidence.
  108. For example, see: Mvine at Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021); evidence submitted to the Ada Lovelace Institute from Certus, IOTA, ZAKA, Tony Blair Institute for Global Change, SICPA, Yoti, Good Health Pass.
  109. Danish Government (2021) Rammeaftale om plan for genåbning af Danmark [Framework agreement on a plan for the reopening of Denmark]. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 13 April 2021).
  110. Danish Government (2021) Rammeaftale om plan for genåbning af Danmark [Framework agreement on a plan for the reopening of Denmark]. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 13 April 2021).
  111. Ada Lovelace Institute (2021) The Citizens’ Biometrics Council. Available at: https://www.adalovelaceinstitute.org/project/citizens-biometrics-council/ (Accessed: 13 April 2021)
  112. Whitley, E. (2021) ‘What must we consider if proof of Covid status is to help reopen the economy?’ LSE Department of Management blog. Available at: https://blogs.lse.ac.uk/management/2021/02/24/what-must-we-consider-if-proof-of-covid-status-is-to-help-reopen-the-economy/ (Accessed: 6 May 2021).
  113. Information Commissioner’s Office (2021) About the DPA 2018. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/introduction-to-data-protection/about-the-dpa-2018/ (Accessed: 6 April 2021).
  114. Beduschi, A. (2020).
  115. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  116. European Data Protection Board and European Data Protection Supervisor (2021), Joint Opinion 04/2021 on the Proposal for a Regulation of the European Parliament and of the Council on a framework for the issuance, verification and acceptance of interoperable certificates on vaccination, testing and recovery to facilitate free movement during the COVID-19 pandemic (Digital Green Certificate). Available at: https://edps.europa.eu/system/files/2021-04/21-03-31_edpb_edps_joint_opinion_digital_green_certificate_en_0.pdf (Accessed: 29 April 2021)
  117. Beduschi, A. (2020).
  118. ibid.
  119. Information Commissioner’s Office (2021) International transfers after the UK exit from the EU Implementation Period. ICO. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/international-transfers-after-uk-exit/ (Accessed: 5 May 2021).
  120. Global Privacy Assembly Executive Committee (2021).
  121. Beduschi, A. (2020).
  122. Global Privacy Assembly (2021) GPA Executive Committee joint statement on the use of health data for domestic or international travel purposes. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 13 April 2021).
  123. Information Commissioner’s Office (2021) Principle (c): Data minimisation. ICO. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/principles/data-minimisation/ (Accessed: 6 April 2021).
  124. Denham, E. (2021) ‘Blog: Data Protection law can help create public trust and confidence around COVID-status certification schemes’. ICO. Available at: https://ico.org.uk/about-the-ico/news-and-events/blog-data-protection-law-can-help-create-public-trust-and-confidence-around-COVID-status-certification-schemes/ (Accessed: 6 April 2021).
  125. Illmer, A. (2021) ‘Singapore reveals COVID privacy data available to police’, BBC News, 5 January 2021. Available at: https://www.bbc.com/news/world-asia-55541001 (Accessed: 6 April 2021). Gross, A. and Parker, G. (2020) Experts decry move to share COVID test and trace data with police, Financial Times. Available at: https://www.ft.com/content/d508d917-065c-448e-8232-416510592dd1 (Accessed: 6 April 2021).
  126. Halpin, H. (2020) ‘Vision: A Critique of Immunity Passports and W3C Decentralized Identifiers’, in van der Merwe, T., Mitchell, C., and Mehrnezhad, M. (eds) Security Standardisation Research. Cham: Springer International Publishing (Lecture Notes in Computer Science), pp. 148–168. doi: 10.1007/978-3-030-64357-7_7.
  127. FHIR (2019) HL7 FHIR Release 4. Available at: http://www.hl7.org/fhir/ (Accessed: 21 April 2021).
  128. Doteveryone (2019) Consequence scanning, an agile practice for responsible innovators. Available at: https://doteveryone.org.uk/project/consequence-scanning/ (Accessed: 21 April 2021)
  129. NHS Digital (2020) DCB3051 Identity Verification and Authentication Standard for Digital Health and Care Services. Available at: https://digital.nhs.uk/data-and-information/information-standards/information-standards-and-data-collections-including-extractions/publications-and-notifications/standards-and-collections/dcb3051-identity-verification-and-authentication-standard-for-digital-health-and-care-services (Accessed: 7 April 2021).
  130. Royal College of General Practitioners (2021) RCGP submission for the COVID-status Certification Review call for evidence. Available at: https://www.rcgp.org.uk/policy/rcgp-consultations/covid-status-certification-review.aspx (Accessed: 6 April 2021).
  131. Say, M. (2021) ‘Government gives Verify a stay of execution.’ UKAuthority. Available at: https://www.ukauthority.com/articles/government-gives-verify-a-stay-of-execution/ (Accessed: 5 May 2021).
  132. Cabinet Office and Lopez, J. (2021) ‘Julia Lopez speech to The Investing and Savings Alliance’. GOV.UK. Available at: https://www.gov.uk/government/speeches/julia-lopez-speech-to-the-investing-and-savings-alliance (Accessed: 6 April 2021).
  133. For more on digital identity during the pandemic see: Freeguard, G. and Shepheard, M. (2020) ‘Digital government during the coronavirus crisis’. Institute for Government. Available at: https://www.instituteforgovernment.org.uk/sites/default/files/publications/digital-government-coronavirus.pdf.
  134. Department for Digital, Culture, Media and Sport (2021) The UK digital identity and attributes trust framework, GOV.UK. Available at: https://www.gov.uk/government/publications/the-uk-digital-identity-and-attributes-trust-framework/the-uk-digital-identity-and-attributes-trust-framework (Accessed: 6 April 2021).
  135. Access Now, Response to Ada Lovelace Institute call for evidence.
  136. iProov (2021) Covid-19 Passport from iProov and Mvine Moves Into Trial Phase. Available at: https://www.iproov.com/press/uk-covid19-passport-moves-into-trial-phase (Accessed: 7 April 2021).
  137. Ada Lovelace Institute (2021) The socio-technical challenges of designing and building a vaccine passport system. Available at: https://www.youtube.com/watch?v=Md9CLWgdgO8&t=2s (Accessed: 7 April 2021).
  138. On general trust in institutions, polls include the Ipsos MORI Veracity Index. On trust in the use of data, see polling by the Royal Statistical Society (RSS) and the Open Data Institute (ODI).
  139. Sommer, A. K. (2021) ‘Some foreigners in Israel are finally able to obtain COVID vaccine pass’. Haaretz.com. Available at: https://www.haaretz.com/israel-news/.premium-some-foreigners-in-israel-are-finally-able-to-obtain-COVID-19-green-passport-1.9683026 (Accessed: 8 April 2021).
  140. Cabinet Office (2020) ‘Ventilator Challenge hailed a success as UK production finishes’. GOV.UK. Available at: https://www.gov.uk/government/news/ventilator-challenge-hailed-a-success-as-uk-production-finishes (Accessed: 6 April 2021).
  141. For example, evidence received from techUK and World Health Pass.
  142. Our World in Data (2021) Coronavirus (COVID-19) Vaccinations. Available at: https://ourworldindata.org/covid-vaccinations (Accessed: 13 April 2021)
  143. FT Visual and Data Journalism team (2021) Covid-19 vaccine tracker: the global race to vaccinate. Financial Times. Available at: https://ig.ft.com/coronavirus-vaccine-tracker/ (Accessed: 13 April 2021)
  144. Full Fact. (2020) How does the new coronavirus compare to influenza? Available at: https://fullfact.org/health/coronavirus-compare-influenza/ (Accessed: 6 April 2021).
  145. BBC News (2021) ‘Coronavirus: Third wave will “wash up on our shores”, warns Johnson’. BBC News. 22 March 2021. Available at: https://www.bbc.com/news/uk-politics-56486067 (Accessed: 6 April 2021).
  146. Prime Minister’s Office (2021) Rammeaftale om plan for genåbning af Danmark [Framework agreement on a plan for the reopening of Denmark]. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  147. Tony Blair Institute for Global Change (2021) The New Necessary: How We Future-Proof for the Next Pandemic. Available at: https://institute.global/policy/new-necessary-how-we-future-proof-next-pandemic (Accessed: 13 April 2021).
  148. Paton, G. (2021) ‘Cost of home Covid tests for travellers halved as companies accused of “profiteering”.’ The Times. 14 April 2021. Available at: https://www.thetimes.co.uk/article/cost-of-home-covid-tests-for-travellers-halved-as-companies-accused-of-profiteering-lh76wb585 (Accessed: 13 April 2021).
  149. Department of Health and Social Care (2021) ‘30 million people in UK receive first dose of coronavirus (COVID-19) vaccine’. GOV.UK. Available at: https://www.gov.uk/government/news/30-million-people-in-uk-receive-first-dose-of-coronavirus-COVID-19-vaccine (Accessed: 6 April 2021).
  150. Ipsos (2021) Global attitudes: COVID-19 vaccines. 9 February 2021. Available at: https://www.ipsos.com/en/global-attitudes-COVID-19-vaccine-january-2021 (Accessed: 6 April 2021).
  151. Reicher, S. and Drury, J. (2021) ‘How to lose friends and alienate people? On the problems of vaccine passports’, The BMJ, 1 April 2021. Available at: https://blogs.bmj.com/bmj/2021/04/01/how-to-lose-friends-and-alienate-people-on-the-problems-of-vaccine-passports/ (Accessed: 6 April 2021).
  152. Smith, M. (2021) ‘International study: How many people will take the COVID vaccine?’, YouGov, 15 January 2021. Available at: https://yougov.co.uk/topics/health/articles-reports/2021/01/15/international-study-how-many-people-will-take-covi (Accessed: 6 April 2021).
  153. Reicher, S. and Drury, J. (2021).
  154. Razai, M. S. et al. (2021) ‘COVID-19 vaccine hesitancy among ethnic minority groups’, The BMJ, 372, p. n513. doi: 10.1136/bmj.n513.
  155. Royal College of General Practitioners (2021) ‘RCGP submission for the COVID-status Certification Review call for evidence’. Available at: https://www.rcgp.org.uk/policy/rcgp-consultations/COVID-status-certification-review.aspx (Accessed: 6 April 2021).
  156. Access Now, Response to Ada Lovelace Institute call for evidence.
  157. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  158. ibid.
  159. ibid.
  160. ibid.
  161. Zimmer, C., Corum, J. and Wee, S.-L. (no date) ‘Coronavirus Vaccine Tracker’, The New York Times. Available at: https://www.nytimes.com/interactive/2020/science/coronavirus-vaccine-tracker.html (Accessed: 21 April 2021).
  162. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  163. Times of Israel Staff (2021) ‘Thousands reportedly attempt to obtain easily forged vaccinated certificate’. Times of Israel. 18 February 2021. Available at: https://www.timesofisrael.com/thousands-reportedly-attempt-to-obtain-easily-forged-vaccinated-certificate/ (Accessed: 6 April 2021).
  164. Senyor, E. (2021) ‘NIS 1,500 for Green Pass: Police arrest seller of illegal vaccine certificates’, ynetnews. 21 March 2021. Available at: https://www.ynetnews.com/article/Bk00wJ11B400 (Accessed: 6 April 2021).
  165. Europol (2021) ‘Early Warning Notification – The illicit sales of false negative COVID-19 test certificates’, Europol. 1 February 2021. Available at: https://www.europol.europa.eu/early-warning-notification-illicit-sales-of-false-negative-COVID-19-test-certificates (Accessed: 6 April 2021).
  166. Lewandowsky, S. et al. (2021) ‘Public acceptance of privacy-encroaching policies to address the COVID-19 pandemic in the United Kingdom’, PLOS ONE, 16(1), p. e0245740. doi: 10.1371/journal.pone.0245740.
  167. Deltapoll (2021) Political Trackers and Lockdown. Available at: http://www.deltapoll.co.uk/polls/political-trackers-and-lockdown (Accessed: 7 April 2021).
  168. Ibbetson, C. (2021) ‘Most Britons support a COVID-19 vaccine passport system’. YouGov. Available at: https://yougov.co.uk/topics/health/articles-reports/2021/03/05/britons-support-COVID-19-vaccine-passport-system (Accessed: 7 April 2021).
  169. YouGov (2021) Daily Question | 02/03/2021. Available at: https://yougov.co.uk/topics/health/survey-results/daily/2021/03/02/9355e/2 (Accessed: 7 April 2021).
  170. Ipsos MORI. (2021) Majority of Britons support vaccine passports but recognise concerns in new Ipsos MORI UK KnowledgePanel poll. Available at: https://www.ipsos.com/ipsos-mori/en-uk/majority-britons-support-vaccine-passports-recognise-concerns-new-ipsos-mori-uk-knowledgepanel-poll (Accessed: 9 April 2021).
  171. King’s College London. (2021) Covid vaccines: passports, blood clots and changing trust in government. Available at: https://www.kcl.ac.uk/news/covid-vaccines-passports-blood-clots-and-changing-trust-in-government (Accessed: 9 April 2021).
  172. De Montfort University. (2021). Study shows UK punters see no need for pub vaccine passports. Available at: https://www.dmu.ac.uk/about-dmu/news/2021/march/-study-shows-uk-punters-see-no-need-for-pub-vaccine-passports.aspx (Accessed: 7 April 2021).
  173. Indigo (2021) Vaccine Passports – What do audiences think? Available at: https://www.indigo-ltd.com/blog/vaccine-passports-what-do-audiences-think (Accessed: 7 April 2021).
  174. Serco Institute (2021) Vaccine Passports & UK Public Opinion. Available at: https://www.sercoinstitute.com/news/2021/vaccine-passports-uk-public-opinion (Accessed: 7 April 2021).
  175. Hall, M. and Studdert, D. (2021) ‘Reaching agreement on COVID-19 immunity “passports” will be difficult’, Brookings, 27 January 2021. Available at: https://www.brookings.edu/blog/usc-brookings-schaeffer-on-health-policy/2021/01/27/reaching-agreement-on-COVID-19-immunity-passports-will-be-difficult/ (Accessed: 7 April 2021). ELABE (2021) Les Français et l’épidémie de COVID-19 – Vague 33 [The French and the COVID-19 epidemic – Wave 33]. 3 March 2021. Available at: https://elabe.fr/epidemie-COVID-19-vague33/ (Accessed: 7 April 2021).
  176. Ada Lovelace Institute. (2021) The Citizens’ Biometrics Council. Available at: https://www.adalovelaceinstitute.org/report/citizens-biometrics-council/ (Accessed: 9 April 2021).
  177. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  178. Beacon, R. and Innes, K. (2021) The Case for Digital Health Passports. Tony Blair Institute for Global Change. Available at: https://institute.global/sites/default/files/inline-files/Tony%20Blair%20Institute%2C%20The%20Case%20for%20Digital%20Health%20Passports%2C%20February%202021_0_0.pdf (Accessed: 6 April 2021).
  179. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  180. Pietropaoli, I. (2021) Part 2: Getting Digital Health Passports Right? Legal, Ethical and Equality Considerations. Available at: https://www.biicl.org/blog/23/part-2-getting-digital-health-passports-right-legal-ethical-and-equality-considerations (Accessed: 6 April 2021).
  181. Prime Minister’s Office (2021) Rammeaftale om plan for genåbning af Danmark [Framework agreement on a plan for the reopening of Denmark]. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  182. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  183. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  184. medConfidential, Response to Ada Lovelace Institute call for evidence.
  185. Dr Btihaj Ajana, Response to Ada Lovelace Institute call for evidence.
  186. Nuffield Council on Bioethics (2020) Rapid policy briefing: COVID-19 antibody testing and ‘immunity certification’. Available at: https://www.nuffieldbioethics.org/assets/pdfs/Immunity-certificates-rapid-policy-briefing.pdf (Accessed: 6 April 2021).
  187. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  188. ibid.


  1. Hancock, A. and Steer, G. (2021) ‘Johnson backtracks on vaccine “passport for pubs” after backlash’, Financial Times, 25 March 2021. Available at: https://www.ft.com/content/aa5e8372-8cec-4b82-96d8-0019f2f24998 (Accessed: 5 April 2021).
  2. Ada Lovelace Institute (2021) International monitor: vaccine passports and COVID status apps. Available at: https://www.
    adalovelaceinstitute.org/project/international-monitor-vaccine-passports-covid-status-apps/ (Accessed: 5 April 2021)
  3. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  4. Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021).
  5. Olivarius, K. (2020) ‘The Dangerous History of Immunoprivilege’, The New York Times. 12 April 2020. Available at: https://www.nytimes.com/2020/04/12/opinion/coronavirus-immunity-passports.html (Accessed: 6 April 2021).
  6. World Health Organization (ed.) (2016) International health regulations (2005). Third edition. Geneva, Switzerland: World Health Organization.
  7. Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021).
  8. Ada Lovelace Institute (2021) The epidemiological and economic impact of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=KRUmM-_Jjk4 (Accessed: 7 April 2021).
  9. Wilson, K., Atkinson, K. M. and Bell, C. P. (2016) ‘Travel Vaccines Enter the Digital Age: Creating a Virtual Immunization Record’, The American Journal of Tropical Medicine and Hygiene, 94(3), pp. 485–488. doi: 10.4269/ajtmh.15-0510
  10. Kobie, N. (2020) ‘Plans for coronavirus immunity passports should worry us all’, Wired UK, 8 June 202. Available at: https://www.wired.
    co.uk/article/uk-immunity-passports-coronavirus (Accessed: 10 February 2021); Miller, J. (2020) ‘Armed with Roche antibody test, Germany faces immunity passport dilemma’, Reuters, 4 May 2020. Available at: https://www.reuters.com/article/health-coronavirusgermany-antibodies-idUSL1N2CM0WB (Accessed: 10 February 2021); Rayner, G. and Bodkin, H. (2020) ‘Government considering “health certificates” if proof of immunity established by new antibody test’, The Telegraph, 14 May 2020. Available at: https:// www.telegraph.co.uk/politics/2020/05/14/government-considering-health-certificates-proof-immunity-established/ (Accessed: 10 February 2021).
  11. World Health Organisation (2020) “Immunity passports” in the context of COVID-19. Scientific Brief. 24 April 2020. Available at: https://www.who.int/news-room/commentaries/detail/immunity-passports-in-the-context-of-covid-19 (Accessed: 10 February 2021).
  12. Ministry of Health (2021) What is a Green Pass? Available at: https://corona.health.gov.il/en/directives/green-pass-info/ (Accessed:
    6 April 2021).
  13. European Commission (2021) Coronavirus: Commission proposes a Digital Green Certificate, European Commission – European Commission. Available at: https://ec.europa.eu/commission/presscorner/detail/en/IP_21_1181 (Accessed: 6 April 2021).
  14. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021)
  15. World Health Organisation (2020) Estonia and WHO to jointly develop digital vaccine certificate to strengthen COVAX. Available at: https://www.who.int/news-room/feature-stories/detail/estonia-and-who-to-jointly-develop-digital-vaccine-certificate-to-strengthen-covax (Accessed: 6 April 2021). World Health Organisation (2020) World Health Organization open call for nomination of experts to contribute to the Smart Vaccination Certificate technical specifications and standards. Available at: https://www.who.int/news-room/articles-detail/world-health-organization-open-call-for-nomination-of-experts-to-contribute-to-the-smart-vaccination-certificate-technical-specifications-and-standards-application-deadline-14-december-2020 (Accessed: 6 April 2021). Reuters (2021), WHO does not back vaccination passports for now – spokeswoman. Available at: https://www.reuters.com/article/us-health-coronavirus-who-vaccines-idUKKBN2BT158 (Accessed: 13 April 2021)
  16. IBM (2021) Digital Health Pass – Overview. Available at: https://www.ibm.com/products/digital-health-pass (Accessed: 6 April 2021).
  17. Watson Health (2020) ‘IBM and Salesforce join forces to help deliver verifiable vaccine and health passes’, Watson Health Perspectives. Available at: https://www.ibm.com/blogs/watson-health/partnership-with-salesforce-verifiable-health-pass/(Accessed: 6 April 2021).
  18. New York State (2021) Excelsior Pass. Available at: https://covid19vaccine.health.ny.gov/excelsior-pass (Accessed: 6 April 2021).
  19. CommonPass (2021) CommonPass. Available at: https://commonpass.org (Accessed: 7 April 2021) IATA (2021). IATA Travel Pass Initiative. Available at: https://www.iata.org/en/programs/passenger/travel-pass/ (Accessed: 7 April 2021).
  20. COVID-19 Credentials Initiative (2021). COVID-19 Credentials Initiative. Available at: https://www.covidcreds.org/ (Accessed: 7 April 2021). VCI (2021). Available at: https://vci.org/ (Accessed: 7 April 2021).
  21. myGP (2020) ‘“myGP” to launch England’s first digital COVID-19 vaccination verification feature for smartphones.’ myGP. 9 December 2020. Available at: https://www.mygp.com/mygp-to-launch-englands-first-digital-covid-19-vaccination-verificationfeature-for-smartphones/ (Accessed: 7 April 2021). iProov (2021) Covid-19 Passport from iProov and Mvine Moves Into Trial Phase.
    Available at: https://www.iproov.com/press/uk-covid19-passport-moves-into-trial-phase (Accessed: 7 April 2021).
  22. BBC News (2020) ‘Covid-19: No plans for “vaccine passport” – Michael Gove’, BBC News. 1 December 2020. Available at: https://www.bbc.com/news/uk-55143484 (Accessed: 7 April 2021). BBC News (2021) ‘Covid: Minister rules out vaccine passports in UK’, BBC News. 7 February 2021. Available at: https://www.bbc.com/news/55970801 (Accessed: 7 April 2021).
  23. Sheridan, D. (2021) ‘Vaccine passports to enter shops, pubs and events “under consideration”’, The Telegraph, 14 February 2021.
    Available at: https://www.telegraph.co.uk/news/2021/02/14/vaccine-passports-enter-shops-pubs-events-consideration/ (Accessed:
    7 April 2021). Zeffman, H. and Dathan, M. (2021) ‘Boris Johnson sees Covid vaccine passport app as route to freedom’, The Times, 11 February 2021. Available at: https://www.thetimes.co.uk/article/boris-johnson-sees-covid-vaccine-passport-app-as-route-tofreedom-rt07g63xn (Accessed: 7 April 2021)
  24. Boland, H. (2021) ‘Government funds eight vaccine passport schemes despite “no plans” for rollout’, The Telegraph, 24 January 2021. Available at: https://www.telegraph.co.uk/technology/2021/01/24/government-funds-eight-vaccine-passport-schemes-despiteno-plans/ (Accessed: 7 April 2021). Department of Health and Social Care (2020), Covid-19 Certification/Passport MVP. Available at: https://www.contractsfinder.service.gov.uk/notice/bf6eef14-6345-429a-a4e7-df68a39bd135 (Accessed: 13 April 2021). Hymas, C. and Diver, T. (2021) ‘Vaccine certificates being developed to unlock international travel’, The Telegraph, 12 February 2021. Available at: https://www.telegraph.co.uk/politics/2021/02/12/government-develop-COVID-vaccine-certificates-travel-abroad/ (Accessed: 7 April 2021)
  25. Cabinet Office (2021) COVID-19 Response – Spring 2021, GOV.UK. Available at: https://www.gov.uk/government/publications/COVID19-response-spring-2021/COVID-19-response-spring-2021 (Accessed: 7 April 2021)
  26. Cabinet Office (2021) Roadmap Reviews: Update. Available at: https://www.gov.uk/government/publications/COVID-19-responsespring-2021-reviews-terms-of-reference/roadmap-reviews-update.
  27. Scientific Advisory Group for Emergencies (2021) ‘SAGE 79 minutes: Coronavirus (COVID-19) response, 4 February 2021’, GOV.UK. 22 February 2021, Available at: https://www.gov.uk/government/publications/sage-79-minutes-coronavirus-covid-19-response-4-february-2021 (Accessed: 6 April 2021).
  28. Ada Lovelace Institute (2021) The epidemiological and economic impact of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=KRUmM-_Jjk4 (Accessed: 7 April 2021)
  29. European Centre for Disease Prevention and Control (2021) Risk of SARS-CoV-2 transmission from newly-infected individuals with documented previous infection or vaccination. Available at: https://www.ecdc.europa.eu/en/publications-data/sars-cov-2-transmission-newly-infected-individuals-previous-infection (Accessed: 13 April 2021). Science News (2021) Moderna and Pfizer COVID-19 vaccines may block infection as well as disease. Available at: https://www.sciencenews.org/article/coronavirus-covidvaccine-moderna-pfizer-transmission-disease (Accessed: 13 April 2021)
  30. Bonnefoy, P. and Londoño, E. (2021) ‘Despite Chile’s Speedy COVID-19 Vaccination Drive, Cases Soar’, The New York Times, 30 March 2021. Available at: https://www.nytimes.com/2021/03/30/world/americas/chile-vaccination-cases-surge.html (Accessed: 6 April 2021)
  31. Zimmer, C., Corum, J. and Wee, S.-L. (no date) ‘Coronavirus Vaccine Tracker’, The New York Times. Available at: https://www.nytimes.com/interactive/2020/science/coronavirus-vaccine-tracker.html (Accessed: 21 April 2021). Parker et al. (2021) An interactive website tracking COVID-19 vaccine development. Available at: https://vac-lshtm.shinyapps.io/ncov_vaccine_landscape/ (Accessed: 21 April 2021)
  32. BBC News (2021) ‘COVID: Oxford jab offers less S Africa variant protection’, BBC News. 7 February 2021. Available at: https://www.bbc.com/news/uk-55967767 (Accessed: 6 April 2021).
  33. Wise, J. (2021) ‘COVID-19: The E484K mutation and the risks it poses’, The BMJ, p. n359. doi: 10.1136/bmj.n359. Sample, I. (2021) ‘What do we know about the Indian coronavirus variant?’, The Guardian, 19 April 2021. Available at: https://www.theguardian.com/world/2021/apr/19/what-do-we-know-about-the-indian-coronavirus-variant (Accessed: 22 April)
  34. World Health Organisation (2021) Coronavirus disease (COVID-19): Vaccines. Available at: https://www.who.int/news-room/q-a-detail/coronavirus-disease-(COVID-19)-vaccines (Accessed: 6 April 2021)
  35. ibid.
  36. The Royal Society provides a different categorisation, between measures demonstrating the subject is not infectious (PCR and Lateral Flow tests) and those suggesting the subject is immune and so will not become infectious (antibody tests and vaccination). Edgar Whitley, a member of our expert deliberative panel, distinguishes between ‘red light’ measures which say a person is potentially infectious and should self isolate, and ‘green light’ ones, which say a person tests negative and is not infectious.
  37. Asai, T. (2020) ‘COVID-19: accurate interpretation of diagnostic tests—a statistical point of view’, Journal of Anesthesia. doi: 10.1007/s00540-020-02875-8.
  38. Kucirka, L. M. et al. (2020) ‘Variation in False-Negative Rate of Reverse Transcriptase Polymerase Chain Reaction–Based SARS CoV-2 Tests by Time Since Exposure’, Annals of Internal Medicine. doi: 10.7326/M2
  39. European Centre for Disease Prevention and Control (2021) Immune responses and immunity to SARS-CoV-2, European Centre for Disease Prevention and Control. Available at: https://www.ecdc.europa.eu/en/COVID-19/latest-evidence/immune-responses (Accessed: 10 February 2020).
  40. Ainsworth, M. et al. (2020) ‘Performance characteristics of five immunoassays for SARS-CoV-2: a head-to-head benchmark comparison’, The Lancet Infectious Diseases, 20(12), pp. 1390–1400. doi: 10.1016/S1473-3099(20)30634-4.
  41. European Centre for Disease Prevention and Control (2021) Immune responses and immunity to SARS-CoV-2, European Centre for Disease Prevention and Control. Available at: https://www.ecdc.europa.eu/en/COVID-19/latest-evidence/immune-responses (Accessed: 10 February 2020).
  42. Kellam, P. and Barclay, W. 2020 (no date) ‘The dynamics of humoral immune responses following SARS-CoV-2 infection and the potential for reinfection’, Journal of General Virology, 101(8), pp. 791–797. doi: 10.1099/jgv.0.001439.
  43. Drury. J., et al. (2021) Behavioural responses to Covid-19 health certification: A rapid review. 9 April 2021. Available at https://www.medrxiv.org/content/10.1101/2021.04.07.21255072v1 (Accessed: 13 April 2021)
  44. ibid.
  45. Brianna Miller, Ryan Wain, and George Alderman (2021) ‘Introducing a Global COVID Travel Pass to Get the World Moving Again’, Tony Blair Institute for Global Change. Available at: https://institute.global/policy/introducing-global-COVID-travel-pass-get-world-moving-again (Accessed: 6 April 2021).
  46. World Health Organisation (2021) Interim position paper: considerations regarding proof of COVID-19 vaccination for international travellers. Available at: https://www.who.int/news-room/articles-detail/interim-position-paper-considerations-regarding-proof-of-COVID-19-vaccination-for-international-travellers (Accessed: 6 April 2021).
  47. World Health Organisation (2021) Call for public comments: Interim guidance for developing a Smart Vaccination Certificate – Release Candidate 1. Available at: https://www.who.int/news-room/articles-detail/call-for-public-comments-interim-guidance-for-developing-a-smart-vaccination-certificate-release-candidate-1 (Accessed: 6 April 2021).
  48. SPI-M-O (2020) Consensus statement on events and gatherings, 19 August 2020. Available at: https://www.gov.uk/government/publications/spi-m-o-consensus-statement-on-events-and-gatherings-19-august-2020 (Accessed: 13 April 2021)
  49. Patrick Gracey, Response to Ada Lovelace Institute call for evidence.
  50. Walker, P. (2021) ‘UK arts figures call for Covid certificates to revive industry’, The Guardian. 23 April 2021. Available at: http://www.theguardian.com/culture/2021/apr/23/uk-arts-figures-covid-certificates-revive-industry-letter (Accessed: 5 May 2021).
  51. Silverstone (2021), Summer sporting events support Covid certification, 9 April 2021. Available at: https://www.silverstone.co.uk/news/summer-sporting-events-support-covid-certification-review (Accessed: 22 April 2021).
  52. BBC News (2021) ‘Pimlico Plumbers to make workers get vaccinations’. BBC News. Available at: https://www.bbc.co.uk/news/business-55654229 (Accessed: 13 April 2021).
  53. Leadership and Worker Engagement Forum (2021) ‘Management of risk when planning work: The right priorities’, Leadership and worker involvement toolkit, p. 1. Available at: https://www.hse.gov.uk/construction/lwit/assets/downloads/hierarchy-risk-controls.pdf.
  54. Department of Health and Social Care (2021) ‘Consultation launched on staff COVID-19 vaccines in care homes with older adult residents’. GOV.UK. Available at: https://www.gov.uk/government/news/consultation-launched-on-staff-covid-19-vaccines-in-care-homes-with-older-adult-residents (Accessed: 14 April 2021)
  55. Full Fact (2021) Is there a precedent for mandatory vaccines for care home workers? Available at: https://fullfact.org/health/mandatory-vaccine-care-home-hepatitis-b/ (Accessed: 6 April 2021).
  56. House of Commons Work and Pensions Committee. (2021) Oral evidence: Health and Safety Executive HC 39. 17 March 2021. Available at: https://committees.parliament.uk/oralevidence/1910/pdf/ (Accessed: 6 April 2021). Q178
  57. Acas (2021) Getting the coronavirus (COVID-19) vaccine for work. Available at: https://www.acas.org.uk/working-safely-coronavirus/getting-the-coronavirus-vaccine-for-work (Accessed: 6 April 2021).
  58. Pakes, A. (2020) ‘Workplace digital monitoring and surveillance: what are my rights?’, Prospect. Available at: https://prospect.org.uk/news/workplace-digital-monitoring-and-surveillance-what-are-my-rights/ (Accessed: 6 April 2021).
  59. Allegretti, A. and Booth, R. (2021) ‘Covid-status certificate scheme could be unlawful discrimination, says EHRC’. The Guardian. 14 April 2021. Available at: https://www.theguardian.com/world/2021/apr/14/covid-status-certificates-may-cause-unlawful-discrimination-warns-ehrc (Accessed: 14 April 2021).
  60. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  61. European Court of Human Rights (2014) Case of Brincat and Others v. Malta. Available at: http://hudoc.echr.coe.int/eng?i=001-145790 (Accessed: 6 April 2021).
  62. Ministry of Health (2021) What is a Green Pass? Available at: https://corona.health.gov.il/en/directives/green-pass-info/ (Accessed: 6 April 2021). Ministry of Health (2021) Traffic Light App for Businesses. Available at: https://corona.health.gov.il/en/directives/biz-ramzor-app/ (Accessed: 8 April 2021).
  63. Prime Minister’s Office (2021) Rammeaftale om plan for genåbning af Danmark [Framework agreement on a plan for the reopening of Denmark]. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  64. Beduschi, A. (2020) Digital Health Passports for COVID-19: Data Privacy and Human Rights Law. University of Exeter. Available at: https://socialsciences.exeter.ac.uk/media/universityofexeter/collegeofsocialsciencesandinternationalstudies/lawimages/research/Policy_brief_-_Digital_Health_Passports_COVID-19_-_Beduschi.pdf (Accessed: 6 April 2021).
  65. Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence.
  66. ibid.
  67. Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence.
  68. Beduschi, A. (2020).
  69. European Court of Human Rights. (2020) Guide on Article 8 of the European Convention on Human Rights. Available at: https://www.echr.coe.int/documents/guide_art_8_eng.pdf (Accessed: 6 April 2021).
  70. Access Now, Response to Ada Lovelace Institute call for evidence.
  71. Privacy International (2020) “Anytime and anywhere”: Vaccination passports, immunity certificates, and the permanent pandemic. Available at: http://privacyinternational.org/long-read/4350/anytime-and-anywhere-vaccination-passports-immunity-certificates-and-permanent (Accessed: 26 April 2021).
  72. Douglas, T. (2021) ‘Cross Post: Vaccine Passports: Four Ethical Objections, and Replies’. Practical Ethics. Available at: http://blog.practicalethics.ox.ac.uk/2021/03/cross-post-vaccine-passports-four-ethical-objections-and-replies/ (Accessed: 8 April 2021).
  73. Brown, R. C. H. et al. (2020) ‘Passport to freedom? Immunity passports for COVID-19’, Journal of Medical Ethics, 46(10), pp. 652–659. doi: 10.1136/medethics-2020-106365.
  74. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence; Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence.
  75. Beduschi, A. (2020).
  76. Black, I. and Forsberg, L. (2021) ‘Inoculate to Imbibe? On the Pub Landlord Who Requires You to be Vaccinated against COVID’. Practical Ethics. Available at: http://blog.practicalethics.ox.ac.uk/2021/03/inoculate-to-imbibe/ (Accessed: 6 April 2021).
  77. Hindu Council UK (2021) Supporting Nationwide Vaccination Programme. 19 January 2021. Available at: http://www.hinducounciluk.org/2021/01/19/supporting-nationwide-vaccination-programme/ (Accessed: 6 April 2021); Ladaria Ferrer, L. and Morandi, G. (2020) ‘Note on the morality of using some anti-COVID-19 vaccines’. Vatican. Available at: https://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_con_cfaith_doc_20201221_nota-vaccini-antiCOVID_en.html (Accessed: 6 April 2021); Kadri, S. (2021) ‘For Muslims wary of the COVID vaccine: there’s every religious reason not to be’. The Guardian. 18 February 2021. Available at: http://www.theguardian.com/commentisfree/2021/feb/18/muslims-wary-COVID-vaccine-religious-reason (Accessed: 6 April 2021).
  78. Office for National Statistics (2021) Coronavirus and vaccination rates in people aged 50 years and over by socio-demographic characteristic, England: 8 December 2020 to 12 April 2021. 6 May 2021. Available at: ons.gov.uk.
  79. Schraer, R. (2021) ‘Covid: Black leaders fear racist past feeds mistrust in vaccine’. BBC News. 6 May 2021. Available at: https://www.bbc.co.uk/news/health-56813982 (Accessed: 7 May 2021).
  80. Allegretti, A. and Booth, R. (2021).
  81. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  82. Black, I. and Forsberg, L. (2021).
  83. Beduschi, A. (2020).
  84. Thomas, N. (2021) ‘Vaccine passports: path back to normality or problem in the making?’, Reuters, 5 February 2021. Available at: https://www.reuters.com/article/us-health-coronavirus-britain-vaccine-pa-idUSKBN2A4134 (Accessed: 6 April 2021).
  85. Buolamwini, J. and Gebru, T. (2018) ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’, in Conference on Fairness, Accountability and Transparency. PMLR, pp. 77–91. Available at: http://proceedings.mlr.press/v81/buolamwini18a.html (Accessed: 6 April 2021).
  86. Kofler, N. and Baylis, F. (2020) ‘Ten reasons why immunity passports are a bad idea’, Nature, 581(7809), pp. 379–381. doi: 10.1038/d41586-020-01451-0.
  87. ibid.
  88. Olivarius, K. (2019) ‘Immunity, Capital, and Power in Antebellum New Orleans’, The American Historical Review, 124(2), pp. 425–455. doi: 10.1093/ahr/rhz176.
  89. Access Now, Response to Ada Lovelace Institute call for evidence.
  90. Dr Btihaj Ajana, Response to Ada Lovelace Institute call for evidence.
  91. Pai, M. (2021) ‘How Vaccine Passports Will Worsen Inequities In Global Health’, Nature Portfolio Microbiology Community. Available at: http://naturemicrobiologycommunity.nature.com/posts/how-vaccine-passports-will-worsen-inequities-in-global-health (Accessed: 6 April 2021).
  92. Merrick, J. (2021) ‘New variants will “come back to haunt” the UK unless it helps tackle worldwide transmission’, iNews, 23 April 2021. Available at: https://inews.co.uk/news/politics/new-variants-will-come-back-to-haunt-the-uk-unless-it-helps-tackle-worldwide-transmission-971041 (Accessed: 5 May 2021).
  93. Kuchler, H. and Williams, A. (2021) ‘Vaccine makers say IP waiver could hand technology to China and Russia’, Financial Times, 25 April 2021. Available at: https://www.ft.com/content/fa1e0d22-71f2-401f-9971-fa27313570ab (Accessed: 5 May 2021).
  94. Digital, Culture, Media and Sport Committee Sub-Committee on Online Harms and Disinformation (2021). Oral evidence: Online harms and the ethics of data, HC 646. 26 January 2021. Available at: https://committees.parliament.uk/oralevidence/1586/html/ (Accessed: 9 April 2021).
  95. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  96. A principle that argues reforms should not be made until the reasoning behind the existing state of affairs is understood, inspired by a quote from G. K. Chesterton’s The Thing (1929), which argues that an intelligent reformer should not remove a fence until they know why it was put up in the first place.
  97. Pietropaoli, I. (2021) ‘Part 2: Getting Digital Health Passports Right? Legal, Ethical and Equality Considerations’. British Institute of International and Comparative Law. 1 April 2021. Available at: https://www.biicl.org/blog/23/part-2-getting-digital-health-passports-right-legal-ethical-and-equality-considerations (Accessed: 6 April 2021).
  98. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  99. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  100. Ada Lovelace Institute (2021) International monitor: vaccine passports and COVID status apps. Available at: https://www.adalovelaceinstitute.org/project/international-monitor-vaccine-passports-covid-status-apps/ (Accessed: 5 April 2021).
  101. Pew Research Center (2020) 8 charts on internet use around the world as countries grapple with COVID-19. Available at: https://www.pewresearch.org/fact-tank/2020/04/02/8-charts-on-internet-use-around-the-world-as-countries-grapple-with-covid-19/ (Accessed: 13 April 2021).
  102. Ada Lovelace Institute (2021) The data divide. Available at: https://www.adalovelaceinstitute.org/survey/data-divide/ (Accessed: 6 April 2021).
  103. Pew Research Center (2020).
  104. Electoral Commission (2015) Delivering and costing a proof of identity scheme for polling station voters in Great Britain. Available at: https://www.electoralcommission.org.uk/media/1825 (Accessed: 13 April 2021); Davies, C. (2021). ‘Number of young people with driving licence in Great Britain at lowest on record’, The Guardian. 5 April 2021. Available at: https://www.theguardian.com/money/2021/apr/05/number-of-young-people-with-driving-licence-in-great-britain-at-lowest-on-record (Accessed: 6 May 2021).
  105. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  106. NHS Digital. (2021) NHS e-Referral Service integrated into the NHS App to make managing referrals easier. Available at: https://digital.nhs.uk/news-and-events/latest-news/nhs-e-referral-service-integrated-into-the-nhs-app-to-make-managing-referrals-easier (Accessed: 28 April 2021).
  107. Access Now, Response to Ada Lovelace Institute call for evidence.
  108. For example, see: Mvine at Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021); evidence submitted to the Ada Lovelace Institute from Certus, IOTA, ZAKA, Tony Blair Institute for Global Change, SICPA, Yoti, Good Health Pass.
  109. Danish Government (2021) Rammeaftale om plan for genåbning af Danmark [Framework agreement on a plan for the reopening of Denmark]. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 13 April 2021).
  110. ibid.
  111. Ada Lovelace Institute (2021) The Citizens’ Biometrics Council. Available at: https://www.adalovelaceinstitute.org/project/citizens-biometrics-council/ (Accessed: 13 April 2021).
  112. Whitley, E. (2021) ‘What must we consider if proof of Covid status is to help reopen the economy?’ LSE Department of Management blog. Available at: https://blogs.lse.ac.uk/management/2021/02/24/what-must-we-consider-if-proof-of-covid-status-is-to-help-reopen-the-economy/ (Accessed: 6 May 2021).
  113. Information Commissioner’s Office (2021) About the DPA 2018. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/introduction-to-data-protection/about-the-dpa-2018/ (Accessed: 6 April 2021).
  114. Beduschi, A. (2020).
  115. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  116. European Data Protection Board and European Data Protection Supervisor (2021) Joint Opinion 04/2021 on the Proposal for a Regulation of the European Parliament and of the Council on a framework for the issuance, verification and acceptance of interoperable certificates on vaccination, testing and recovery to facilitate free movement during the COVID-19 pandemic (Digital Green Certificate). Available at: https://edps.europa.eu/system/files/2021-04/21-03-31_edpb_edps_joint_opinion_digital_green_certificate_en_0.pdf (Accessed: 29 April 2021).
  117. Beduschi, A. (2020).
  118. ibid.
  119. Information Commissioner’s Office (2021) International transfers after the UK exit from the EU Implementation Period. ICO. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/international-transfers-after-uk-exit/ (Accessed: 5 May 2021).
  120. Global Privacy Assembly Executive Committee (2021).
  121. Beduschi, A. (2020).
  122. Global Privacy Assembly (2021) GPA Executive Committee joint statement on the use of health data for domestic or international travel purposes. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 13 April 2021).
  123. Information Commissioner’s Office (2021) Principle (c): Data minimisation. ICO. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/principles/data-minimisation/ (Accessed: 6 April 2021).
  124. Denham, E. (2021) ‘Blog: Data Protection law can help create public trust and confidence around COVID-status certification schemes’. ICO. Available at: https://ico.org.uk/about-the-ico/news-and-events/blog-data-protection-law-can-help-create-public-trust-and-confidence-around-COVID-status-certification-schemes/ (Accessed: 6 April 2021).
  125. Illmer, A. (2021) ‘Singapore reveals COVID privacy data available to police’, BBC News, 5 January 2021. Available at: https://www.bbc.com/news/world-asia-55541001 (Accessed: 6 April 2021). Gross, A. and Parker, G. (2020) Experts decry move to share COVID test and trace data with police, Financial Times. Available at: https://www.ft.com/content/d508d917-065c-448e-8232-416510592dd1 (Accessed: 6 April 2021).
  126. Halpin, H. (2020) ‘Vision: A Critique of Immunity Passports and W3C Decentralized Identifiers’, in van der Merwe, T., Mitchell, C., and Mehrnezhad, M. (eds) Security Standardisation Research. Cham: Springer International Publishing (Lecture Notes in Computer Science), pp. 148–168. doi: 10.1007/978-3-030-64357-7_7.
  127. HL7 (2019) FHIR Release 4. Available at: http://www.hl7.org/fhir/ (Accessed: 21 April 2021).
  128. Doteveryone (2019) Consequence scanning, an agile practice for responsible innovators. Available at: https://doteveryone.org.uk/project/consequence-scanning/ (Accessed: 21 April 2021).
  129. NHS Digital (2020) DCB3051 Identity Verification and Authentication Standard for Digital Health and Care Services. Available at: https://digital.nhs.uk/data-and-information/information-standards/information-standards-and-data-collections-including-extractions/publications-and-notifications/standards-and-collections/dcb3051-identity-verification-and-authentication-standard-for-digital-health-and-care-services (Accessed: 7 April 2021).
  130. Royal College of General Practitioners (2021) RCGP submission for the COVID-status Certification Review call for evidence. Available at: https://www.rcgp.org.uk/policy/rcgp-consultations/covid-status-certification-review.aspx (Accessed: 6 April 2021).
  131. Say, M. (2021) ‘Government gives Verify a stay of execution.’ UKAuthority. Available at: https://www.ukauthority.com/articles/government-gives-verify-a-stay-of-execution/ (Accessed: 5 May 2021).
  132. Cabinet Office and Lopez, J. (2021) ‘Julia Lopez speech to The Investing and Savings Alliance’. GOV.UK. Available at: https://www.gov.uk/government/speeches/julia-lopez-speech-to-the-investing-and-savings-alliance (Accessed: 6 April 2021).
  133. For more on digital identity during the pandemic see: Freeguard, G. and Shepheard, M. (2020) ‘Digital government during the coronavirus crisis’. Institute for Government. Available at: https://www.instituteforgovernment.org.uk/sites/default/files/publications/digital-government-coronavirus.pdf.
  134. Department for Digital, Culture, Media and Sport (2021) The UK digital identity and attributes trust framework, GOV.UK. Available at: https://www.gov.uk/government/publications/the-uk-digital-identity-and-attributes-trust-framework/the-uk-digital-identity-and-attributes-trust-framework (Accessed: 6 April 2021).
  135. Access Now, Response to Ada Lovelace Institute call for evidence.
  136. iProov (2021) Covid-19 Passport from iProov and Mvine Moves Into Trial Phase. Available at: https://www.iproov.com/press/uk-covid19-passport-moves-into-trial-phase (Accessed: 7 April 2021).
  137. Ada Lovelace Institute (2021) The socio-technical challenges of designing and building a vaccine passport system. Available at: https://www.youtube.com/watch?v=Md9CLWgdgO8&t=2s (Accessed: 7 April 2021).
  138. On general trust, polls include Ipsos MORI Veracity Index. On data trust, see RSS and ODI polling.
  139. Sommer, A. K. (2021) ‘Some foreigners in Israel are finally able to obtain COVID vaccine pass’. Haaretz.com. Available at: https://www.haaretz.com/israel-news/.premium-some-foreigners-in-israel-are-finally-able-to-obtain-COVID-19-green-passport-1.9683026 (Accessed: 8 April 2021).
  140. Cabinet Office (2020) ‘Ventilator Challenge hailed a success as UK production finishes’. GOV.UK. Available at: https://www.gov.uk/government/news/ventilator-challenge-hailed-a-success-as-uk-production-finishes (Accessed: 6 April 2021).
  141. For example, evidence received from techUK and World Health Pass.
  142. Our World in Data (2021) Coronavirus (COVID-19) Vaccinations. Available at: https://ourworldindata.org/covid-vaccinations (Accessed: 13 April 2021).
  143. FT Visual and Data Journalism team (2021) Covid-19 vaccine tracker: the global race to vaccinate. Financial Times. Available at: https://ig.ft.com/coronavirus-vaccine-tracker/ (Accessed: 13 April 2021).
  144. Full Fact. (2020) How does the new coronavirus compare to influenza? Available at: https://fullfact.org/health/coronavirus-compare-influenza/ (Accessed: 6 April 2021).
  145. BBC News (2021) ‘Coronavirus: Third wave will “wash up on our shores”, warns Johnson’. BBC News. 22 March 2021. Available at: https://www.bbc.com/news/uk-politics-56486067 (Accessed: 6 April 2021).
  146. Prime Minister’s Office (2021) Rammeaftale om plan for genåbning af Danmark [Framework agreement on a plan for the reopening of Denmark]. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  147. Tony Blair Institute for Global Change (2021) The New Necessary: How We Future-Proof for the Next Pandemic. Available at: https://institute.global/policy/new-necessary-how-we-future-proof-next-pandemic (Accessed: 13 April 2021).
  148. Paton, G. (2021) ‘Cost of home Covid tests for travellers halved as companies accused of “profiteering”’. The Times. 14 April 2021. Available at: https://www.thetimes.co.uk/article/cost-of-home-covid-tests-for-travellers-halved-as-companies-accused-of-profiteering-lh76wb585 (Accessed: 13 April 2021).
  149. Department of Health & Social Care (2021) ‘30 million people in UK receive first dose of coronavirus (COVID-19) vaccine’. GOV.UK. Available at: https://www.gov.uk/government/news/30-million-people-in-uk-receive-first-dose-of-coronavirus-COVID-19-vaccine (Accessed: 6 April 2021).
  150. Ipsos (2021) Global attitudes: COVID-19 vaccines. 9 February 2021. Available at: https://www.ipsos.com/en/global-attitudes-COVID-19-vaccine-january-2021 (Accessed: 6 April 2021).
  151. Reicher, S. and Drury, J. (2021) ‘How to lose friends and alienate people? On the problems of vaccine passports’, The BMJ, 1 April 2021. Available at: https://blogs.bmj.com/bmj/2021/04/01/how-to-lose-friends-and-alienate-people-on-the-problems-of-vaccine-passports/ (Accessed: 6 April 2021).
  152. Smith, M. (2021) ‘International study: How many people will take the COVID vaccine?’, YouGov, 15 January 2021. Available at: https://yougov.co.uk/topics/health/articles-reports/2021/01/15/international-study-how-many-people-will-take-covi (Accessed: 6 April 2021).
  153. Reicher, S. and Drury, J. (2021).
  154. Razai, M. S. et al. (2021) ‘COVID-19 vaccine hesitancy among ethnic minority groups’, The BMJ, 372, p. n513. doi: 10.1136/bmj.n513.
  155. Royal College of General Practitioners (2021) RCGP submission for the COVID-status Certification Review call for evidence. Available at: https://www.rcgp.org.uk/policy/rcgp-consultations/COVID-status-certification-review.aspx (Accessed: 6 April 2021).
  156. Access Now, Response to Ada Lovelace Institute call for evidence.
  157. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  158. ibid.
  159. ibid.
  160. ibid.
  161. Zimmer, C., Corum, J. and Wee, S.-L. (no date) ‘Coronavirus Vaccine Tracker’, The New York Times. Available at: https://www.nytimes.com/interactive/2020/science/coronavirus-vaccine-tracker.html (Accessed: 21 April 2021).
  162. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  163. Times of Israel Staff (2021) ‘Thousands reportedly attempt to obtain easily forged vaccinated certificate’. Times of Israel. 18 February 2021. Available at: https://www.timesofisrael.com/thousands-reportedly-attempt-to-obtain-easily-forged-vaccinated-certificate/ (Accessed: 6 April 2021).
  164. Senyor, E. (2021) ‘NIS 1,500 for Green Pass: Police arrest seller of illegal vaccine certificates’, ynetnews. 21 March 2021. Available at: https://www.ynetnews.com/article/Bk00wJ11B400 (Accessed: 6 April 2021).
  165. Europol (2021) ‘Early Warning Notification – The illicit sales of false negative COVID-19 test certificates’, Europol. 1 February 2021. Available at: https://www.europol.europa.eu/early-warning-notification-illicit-sales-of-false-negative-COVID-19-test-certificates (Accessed: 6 April 2021).
  166. Lewandowsky, S. et al. (2021) ‘Public acceptance of privacy-encroaching policies to address the COVID-19 pandemic in the United Kingdom’, PLOS ONE, 16(1), p. e0245740. doi: 10.1371/journal.pone.0245740.
  167. Deltapoll (2021) Political Trackers and Lockdown. Available at: http://www.deltapoll.co.uk/polls/political-trackers-and-lockdown (Accessed: 7 April 2021).
  168. Ibbetson, C. (2021) ‘Most Britons support a COVID-19 vaccine passport system’. YouGov. Available at: https://yougov.co.uk/topics/health/articles-reports/2021/03/05/britons-support-COVID-19-vaccine-passport-system (Accessed: 7 April 2021).
  169. YouGov (2021) Daily Question, 2 March 2021. Available at: https://yougov.co.uk/topics/health/survey-results/daily/2021/03/02/9355e/2 (Accessed: 7 April 2021).
  170. Ipsos MORI. (2021) Majority of Britons support vaccine passports but recognise concerns in new Ipsos MORI UK KnowledgePanel poll. Available at: https://www.ipsos.com/ipsos-mori/en-uk/majority-britons-support-vaccine-passports-recognise-concerns-new-ipsos-mori-uk-knowledgepanel-poll (Accessed: 9 April 2021).
  171. King’s College London. (2021) Covid vaccines: passports, blood clots and changing trust in government. Available at: https://www.kcl.ac.uk/news/covid-vaccines-passports-blood-clots-and-changing-trust-in-government (Accessed: 9 April 2021).
  172. De Montfort University (2021) Study shows UK punters see no need for pub vaccine passports. Available at: https://www.dmu.ac.uk/about-dmu/news/2021/march/-study-shows-uk-punters-see-no-need-for-pub-vaccine-passports.aspx (Accessed: 7 April 2021).
  173. Indigo (2021) Vaccine Passports – What do audiences think? Available at: https://www.indigo-ltd.com/blog/vaccine-passports-what-do-audiences-think (Accessed: 7 April 2021).
  174. Serco Institute (2021) Vaccine Passports & UK Public Opinion. Available at: https://www.sercoinstitute.com/news/2021/vaccine-passports-uk-public-opinion (Accessed: 7 April 2021).
  175. Hall, M. and Studdert, D. (2021) ‘Reaching agreement on COVID-19 immunity “passports” will be difficult’, Brookings, 27 January 2021. Available at: https://www.brookings.edu/blog/usc-brookings-schaeffer-on-health-policy/2021/01/27/reaching-agreement-on-COVID-19-immunity-passports-will-be-difficult/ (Accessed: 7 April 2021). ELABE (2021) Les Français et l’épidémie de COVID-19 – Vague 33 [The French and the COVID-19 epidemic – Wave 33]. 3 March 2021. Available at: https://elabe.fr/epidemie-COVID-19-vague33/ (Accessed: 7 April 2021).
  176. Ada Lovelace Institute. (2021) The Citizens’ Biometrics Council. Available at: https://www.adalovelaceinstitute.org/report/citizens-biometrics-council/ (Accessed: 9 April 2021).
  177. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  178. Beacon, R. and Innes, K. (2021) The Case for Digital Health Passports. Tony Blair Institute for Global Change. Available at: https://institute.global/sites/default/files/inline-files/Tony%20Blair%20Institute%2C%20The%20Case%20for%20Digital%20Health%20Passports%2C%20February%202021_0_0.pdf (Accessed: 6 April 2021).
  179. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  180. Pietropaoli, I. (2021) Part 2: Getting Digital Health Passports Right? Legal, Ethical and Equality Considerations. Available at: https://www.biicl.org/blog/23/part-2-getting-digital-health-passports-right-legal-ethical-and-equality-considerations (Accessed: 6 April 2021).
  181. Prime Minister’s Office (2021) Rammeaftale om plan for genåbning af Danmark [Framework agreement on a plan for the reopening of Denmark]. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  182. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  183. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  184. medConfidential, Response to Ada Lovelace Institute call for evidence.
  185. Dr Btihaj Ajana, Response to Ada Lovelace Institute call for evidence.
  186. Nuffield Council on Bioethics (2020) Rapid policy briefing: COVID-19 antibody testing and ‘immunity certification’. Available at: https://www.nuffieldbioethics.org/assets/pdfs/Immunity-certificates-rapid-policy-briefing.pdf (Accessed: 6 April 2021).
  187. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  188. ibid.

  131. Say, M. (2021) ‘Government gives Verify a stay of execution.’ UKAuthority. Available at: https://www.ukauthority.com/articles/government-gives-verify-a-stay-of-execution/ (Accessed: 5 May 2021).
  132. Cabinet Office and Lopez. J., (2021) ‘Julia Lopez speech to The Investing and Savings Alliance’. GOV.UK. Available at: https://www.gov.uk/government/speeches/julia-lopez-speech-to-the-investing-and-savings-alliance (Accessed: 6 April 2021).
  133. For more on digital identity during the pandemic see: Freeguard, G. and Shepheard, M. (2020) ‘Digital government during the coronavirus crisis’. Institute for Government. Available at: https://www.instituteforgovernment.org.uk/sites/default/files/publications/digital-government-coronavirus.pdf.
  134. Department for Digital, Culture, Media and Sport (2021) The UK digital identity and attributes trust framework, GOV.UK. Available at: https://www.gov.uk/government/publications/the-uk-digital-identity-and-attributes-trust-framework/the-uk-digital-identity-and-attributes-trust-framework (Accessed: 6 April 2021).
  135. Access Now, Response to Ada Lovelace Institute call for evidence.
  136. iProov (2021) Covid-19 Passport from iProov and Mvine Moves Into Trial Phase. Available at: https://www.iproov.com/press/uk-covid19-passport-moves-into-trial-phase (Accessed: 7 April 2021).
  137. Ada Lovelace Institute (2021) The socio-technical challenges of designing and building a vaccine passport system. Available at: https://www.youtube.com/watch?v=Md9CLWgdgO8&t=2s (Accessed: 7 April 2021).
  138. On general trust, polls include Ipsos MORI Veracity Index. On data trust, see RSS and ODI polling.
  139. Sommer, A. K. (2021) ‘Some foreigners in Israel are finally able to obtain COVID vaccine pass’. Haaretz.com. Available at: https://www.haaretz.com/israel-news/.premium-some-foreigners-in-israel-are-finally-able-to-obtain-COVID-19-green-passport-1.9683026 (Accessed: 8 April 2021).
  140. Cabinet Office (2020) ‘Ventilator Challenge hailed a success as UK production finishes’. GOV.UK. Available at: https://www.gov.uk/government/news/ventilator-challenge-hailed-a-success-as-uk-production-finishes (Accessed: 6 April 2021).
  141. For example, evidence received from techUK and World Health Pass.
  142. Our World in Data (2021) Coronavirus (COVID-19) Vaccinations. Available at: https://ourworldindata.org/covid-vaccinations (Accessed: 13 April 2021)
  143. FT Visual and Data Journalism team (2021) Covid-19 vaccine tracker: the global race to vaccinate. Financial Times. Available at: https://ig.ft.com/coronavirus-vaccine-tracker/ (Accessed: 13 April 2021)
  144. Full Fact. (2020) How does the new coronavirus compare to influenza? Available at: https://fullfact.org/health/coronavirus-compare-influenza/ (Accessed: 6 April 2021).
  145. BBC News (2021) ‘Coronavirus: Third wave will “wash up on our shores”, warns Johnson’. BBC News. 22 March 2021. Available at: https://www.bbc.com/news/uk-politics-56486067 (Accessed: 6 April 2021).
  146. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  147. Tony Blair Institute for Global Change (2021) The New Necessary: How We Future-Proof for the Next Pandemic. Available at https://institute.global/policy/new-necessary-how-we-future-proof-next-pandemic (Accessed: 13 April 2021)
  148. Paton. G., (2021) ‘Cost of home Covid tests for travellers halved as companies accused of “profiteering”.’ The Times. 14 April 2021. Available at: https://www.thetimes.co.uk/article/cost-of-home-covid-tests-for-travellers-halved-as-companies-accused-of-profiteering-lh76wb585 (Accessed: 13 April 2021)
  149. Department of Health & Social Care (2021) ‘30 million people in UK receive first dose of coronavirus (COVID-19) vaccine’. GOV.UK. Available at: https://www.gov.uk/government/news/30-million-people-in-uk-receive-first-dose-of-coronavirus-COVID-19-vaccine (Accessed: 6 April 2021).
  150. Ipsos (2021) Global attitudes: COVID-19 vaccines. 9 February 2021. Available at: https://www.ipsos.com/en/global-attitudes-COVID-19-vaccine-january-2021 (Accessed: 6 April 2021).
  151. Reicher, S. and Drury, J. (2021) ‘How to lose friends and alienate people? On the problems of vaccine passports’, The BMJ, 1 April 2021. Available at: https://blogs.bmj.com/bmj/2021/04/01/how-to-lose-friends-and-alienate-people-on-the-problems-of-vaccine-passports/ (Accessed: 6 April 2021).
  152. Smith, M. (2021) ‘International study: How many people will take the COVID vaccine?’, YouGov, 15 January 2021. Available at: https://yougov.co.uk/topics/health/articles-reports/2021/01/15/international-study-how-many-people-will-take-covi (Accessed: 6 April 2021).
  153. Reicher, S. and Drury, J. (2021).
  154. Razai, M. S. et al. (2021) ‘COVID-19 vaccine hesitancy among ethnic minority groups’, The BMJ, 372, p. n513. doi: 10.1136/bmj.n513.
  155. Royal College of General Practitioners (2021) ‘RCGP submission for the COVID-status Certification Review call for evidence’., Royal College of General Practitioners. Available at: https://www.rcgp.org.uk/policy/rcgp-consultations/COVID-status-certification-review.aspx (Accessed: 6 April 2021).
  156. Access Now, Response to Ada Lovelace Institute call for evidence.
  157. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  158. ibid.
  159. ibid.
  160. ibid.
  161. Zimmer, C., Corum, J. and Wee, S.-L. (no date) ‘Coronavirus Vaccine Tracker’, The New York Times. Available at: https://www.nytimes.com/interactive/2020/science/coronavirus-vaccine-tracker.html (Accessed: 21 April 2021).
  162. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  163. Times of Israel Staff (2021) ‘Thousands reportedly attempt to obtain easily forged vaccinated certificate’. Times of Isreal. 18 February 2021. Available at: https://www.timesofisrael.com/thousands-reportedly-attempt-to-obtain-easily-forged-vaccinated-certificate/(Accessed: 6 April 2021).
  164. Senyor, E. (2021) ‘NIS 1,500 for Green Pass: Police arrest seller of illegal vaccine certificates’, ynetnews. 21 March 2021. Available at: https://www.ynetnews.com/article/Bk00wJ11B400 (Accessed: 6 April 2021).
  165. Europol (2021) ‘Early Warning Notification – The illicit sales of false negative COVID-19 test certificates’, Europol. 1 February 2021. Available at: https://www.europol.europa.eu/early-warning-notification-illicit-sales-of-false-negative-COVID-19-test-certificates (Accessed: 6 April 2021).
  166. Lewandowsky, S. et al. (2021) ‘Public acceptance of privacy-encroaching policies to address the COVID-19 pandemic in the United Kingdom’, PLOS ONE, 16(1), p. e0245740. doi: 10.1371/journal.pone.0245740.
  167. 165 Deltapoll (2021). Political Trackers and Lockdown. Available at: http://www.deltapoll.co.uk/polls/political-trackers-and-lockdown (Accessed: 7 April 2021).
  168. Ibbetson, C. (2021) ‘Most Britons support a COVID-19 vaccine passport system’. YouGov. Available at: https://yougov.co.uk/topics/health/articles-reports/2021/03/05/britons-support-COVID-19-vaccine-passport-system (Accessed: 7 April 2021).
  169. YouGov (2021). Daily Question | 02/03/2021 Available at: https://yougov.co.uk/topics/health/survey-results/daily/2021/03/02/9355e/2 (Accessed: 7 April 2021).
  170. Ipsos MORI. (2021) Majority of Britons support vaccine passports but recognise concerns in new Ipsos MORI UK KnowledgePanel poll. Available at: https://www.ipsos.com/ipsos-mori/en-uk/majority-britons-support-vaccine-passports-recognise-concerns-new-ipsos-mori-uk-knowledgepanel-poll (Accessed: 9 April 2021).
  171. King’s College London. (2021) Covid vaccines: passports, blood clots and changing trust in government. Available at: https://www.kcl.ac.uk/news/covid-vaccines-passports-blood-clots-and-changing-trust-in-government (Accessed: 9 April 2021).
  172. De Montfort University. (2021). Study shows UK punters see no need for pub vaccine passports. Available at: https://www.dmu.ac.uk/about-dmu/news/2021/march/-study-shows-uk-punters-see-no-need-for-pub-vaccine-passports.aspx (Accessed: 7 April 2021).
  173. Indigo (2021) Vaccine Passports – What do audiences think? Available at: https://www.indigo-ltd.com/blog/vaccine-passports-what-do-audiences-think (Accessed: 7 April 2021).
  174. Serco Institute (2021) Vaccine Passports & UK Public Opinion. Available at: https://www.sercoinstitute.com/news/2021/vaccine-passports-uk-public-opinion (Accessed: 7 April 2021).
  175. Studdert, M. H. and D. (2021) ‘Reaching agreement on COVID-19 immunity “passports” will be difficult’, Brookings, 27 January 2021. Available at: https://www.brookings.edu/blog/usc-brookings-schaeffer-on-health-policy/2021/01/27/reaching-agreement-on-COVID-19-immunity-passports-will-be-difficult/ (Accessed: 7 April 2021). ELABE (2021) Les Français et l’épidémie de COVID-19 – Vague 33. 3 March 2021. Available at: https://elabe.fr/epidemie-COVID-19-vague33/ (Accessed: 7 April 2021).
  176. Ada Lovelace Institute. (2021) The Citizens’ Biometrics Council. Available at: https://www.adalovelaceinstitute.org/report/citizens-biometrics-council/ (Accessed: 9 April 2021).
  177. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  178. Beacon, R. and Innes, K. (2021) The Case for Digital Health Passports. Tony Blair Institute for Global Change. Available at: https://institute.global/sites/default/files/inline-files/Tony%20Blair%20Institute%2C%20The%20Case%20for%20Digital%20Health%20Passports%2C%20February%202021_0_0.pdf (Accessed: 6 April 2021).
  179. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  180. Pietropaoli, I. (2021) Part 2: Getting Digital Health Passports Right? Legal, Ethical and Equality Considerations. Available at: https://www.biicl.org/blog/23/part-2-getting-digital-health-passports-right-legal-ethical-and-equality-considerations (Accessed: 6 April 2021).
  181. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  182. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  183. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  184. medConfidential, Response to Ada Lovelace Institute call for evidence
  185. Dr Btihaj Ajana, Response to Ada Lovelace Institute call for evidence
  186. Nuffield Council on Bioethics (2020) Rapid policy briefing: COVID-19 antibody testing and ‘immunity certification’. Available at: https://www.nuffieldbioethics.org/assets/pdfs/Immunity-certificates-rapid-policy-briefing.pdf (Accessed: 6 April 2021).
  187. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  188. ibid.


COVID-19 Data Explorer

This report is accompanied by the ‘COVID-19 Data Explorer’, a resource containing country-specific data on timelines, technologies and public response, which can be used to explore the legacy and implications of the rapid deployment of contact tracing apps and digital vaccine passports across the world.

Executive summary

The COVID-19 pandemic is the first global public health crisis of ‘the algorithmic age’.[1] In response, hundreds of new data-driven technologies have been developed to diagnose positive cases, identify vulnerable populations and conduct public health surveillance of individuals known to be infected.[2] Two of the most widely deployed are digital contact tracing apps and digital vaccine passports.

For many governments, policymakers and public health experts across the world, these technologies raised hopes through their potential to assist in the fight against the COVID-19 virus. At the same time, they provoked concerns about privacy, surveillance, equity and social control because of the sensitive social and public health surveillance data they use – or are seen as using.

An analysis of the evidence on how contact tracing apps and digital vaccine passports were deployed can provide valuable insights about the uses and impacts of technologies at the crossroads of public emergency, health and surveillance.

Analysis of their role in societies can shed light on the responsibilities of the technology industry and policymakers in building new technologies, and on the opinions and experiences of members of the public who are expected to use them to protect public health.

These technologies were rolled out rapidly at a time when countries were under significant pressure from the financial and societal costs of the pandemic. Public healthcare systems struggled to cope with the high numbers of patients, and pandemic restrictions such as lockdowns resulted in severe economic crises and challenges to education, welfare and wellbeing.

Governments and policymakers needed to make decisions and respond urgently, and they turned to new technologies as a tool to help control the spread of infection and support a return to ‘normal life’. This meant that – as well as guiding the development of technologies – they had an interest in convincing the public that they were useful and safe.

Technologies such as contact tracing apps and digital vaccine passports have significant societal implications: for them to be effective, people must consent to share their health data and personal information.

Members of the public were expected to use the technologies in their everyday lives and change their behaviour because of them – for example, proving their vaccination status to access workplaces, or staying at home after receiving a COVID-19 exposure alert.

Examining these technologies therefore helps to build an understanding of the public’s attitudes to consent in sharing their health information, as well as public confidence in and compliance with health technologies more broadly.

As COVID-19 technologies emerged, the Ada Lovelace Institute was one of the first research organisations to investigate their potential legislative, technical and societal implications. We reviewed the available evidence and made a wide range of policy and practice recommendations, focusing on effectiveness, public legitimacy, governance and potential impact on inequalities.

This report builds on this work: revisiting those early recommendations; assessing the evidence available now; and drawing out lessons for policymakers, technology developers, civil society and public health organisations. Research from academia and civil society into the technologies concentrates largely on specific country contexts.[3]

There are also international studies that provide country-specific information and synthesise cross-country evidence but focus primarily on one aspect of law and governance or public attitudes.[4], [5], [6] This body of research provides valuable insights into diverse policies and practices and unearths legislative and societal implications of these technologies at different stages of the pandemic.

Yet research that investigates COVID-19 technologies in relation to public health, societal inequalities and regulations simultaneously and at an international level remains limited. In addition, efforts to track the development of global policy and practice have slowed in line with the reduced use of these technologies in many countries.

However, it remains important to understand the benefits and potential harms of these technologies by considering legislative, technical and societal aspects simultaneously. Despite the limitations, presenting the evidence and identifying gaps can provide cross-cutting lessons for governments and policymakers, to inform policy and practice both now and in the future.

These lessons concern the wide range of technical, legislative and regulatory requirements needed to build public trust and cooperation, and to mitigate harms and risks when using technologies in public crises, and in health and social care provision.

Learning from the deployment of contact tracing apps and digital vaccine passports remains highly relevant. As the infrastructure remains in place in many countries (for example, authentication services, external data storage systems and security operations built within applications), the technologies are easy to reinstate or repurpose.

Some countries have already transformed them into new health data and digital identity systems – for example, the Aarogya Setu app in India. In addition, on 27 January 2023, the World Health Organization (WHO) stated: ‘While the world is in a better position than it was during the peak of the Omicron transmission one year ago, more than 170,000 COVID-19-related deaths have been reported globally within the last eight weeks’.[7]

And on 5 May 2023, the WHO confirmed that while COVID-19 no longer constitutes a public health emergency of international concern and the number of weekly reported deaths and hospitalisations has continued to decrease, it is concerned that ‘surveillance reporting to WHO has declined significantly, that there continues to be inequitable access to life-saving interventions, and that pandemic fatigue continues to grow’.[8]

In other words, the pandemic is far from over, and we need to pay attention to the place of these technologies in our societies now and in future pandemics.

This report synthesises the available evidence on a cross-section of 34 countries, exploring technical considerations and societal implications relating to the effectiveness, public legitimacy, inequalities and governance of COVID-19 technologies.

Evidence was gathered from a wide range of sources across different disciplines, including academic and grey literature, policy papers, the media and workshops with experts.

Existing research demonstrates that governments recognised the value of health, mobility, economic or other kinds of personal data in managing the COVID-19 pandemic and deployed a wide range of technologies to collect and share data.

However, given that the technologies were developed and deployed at pace, it was difficult for governments to adequately prepare to use them – and the data collected and shared through them – in their broader COVID-19 pandemic management.[9]

It is therefore unsurprising that governments did not clearly define how to measure the effectiveness and social impacts of COVID-19 technologies. This leaves us with important evidence gaps, making it harder to fully evaluate the effectiveness of the technologies and understand their impact on health and other forms of social inequalities.

We also highlight evidence gaps that indicate where evaluation and learning mechanisms fell short when technologies were used in response to COVID-19. We call on governments to consider these gaps and retrospectively evaluate the effectiveness and impact of COVID-19 technologies.

This will enable them to improve their evaluation and monitoring mechanisms when using technologies in future pandemics, public health, and health and social care provision.

The report’s findings should guide governments, policymakers and international organisations when deploying data-driven technologies in the context of public emergency, health and surveillance. They should also support civil society organisations and those advocating for technologies that support fundamental rights and protections, public health and public benefit.

‘COVID-19 technologies’ refers to data-driven technologies and AI tools that were built and deployed to support the COVID-19 pandemic response. Two of the most widely deployed are contact tracing apps and digital vaccine passports, and they are the main focus of this report. Both technologies aim to identify an individual’s risk to others and block or allow freedoms and restrictions accordingly. There are varying definitions of these technologies. In this report we define them through their common purposes and properties, as follows:

  • Contact tracing apps aim to measure an individual’s risk of becoming infected with COVID-19 and of transmitting the virus to others based on whether they have been in close proximity to a person known to be infected. If a positive COVID-19 test result is reported to the app (by the user or the health authorities), the app alerts other users who might have been in close proximity to the person known to be infected with COVID-19. App users who have received an alert are expected to get tested and/or self-isolate at home for a certain period of time.[10]
  • Digital vaccine passports show the identity of a person and their COVID-19 vaccine status or antigen test results. They are used to prove the level of risk an individual poses to others based on their COVID-19 test results, and proof of recovery or vaccine status. They function as a pass that blocks or allows access to spaces and activities (such as travelling, leisure or work); a simplified sketch of the kind of signed credential involved follows this list.[11]
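To make these definitions concrete, the sketch below shows, in Python, the kind of signed identity-plus-status credential that a digital vaccine passport carries, typically serialised into a QR code. It is a minimal illustration, not any real scheme: the field names and the shared-secret HMAC signature are assumptions made for brevity (production systems such as the EU Digital COVID Certificate use public-key signatures over CBOR payloads instead).

```python
import hashlib
import hmac
import json

# Illustrative signing key held by the issuing health authority.
# Real schemes use public-key signatures (e.g. ECDSA), not a shared secret.
ISSUER_KEY = b"hypothetical-issuer-key"

def issue_credential(name: str, date_of_birth: str, vaccine: str, doses: int) -> dict:
    """Bundle identity and vaccine status, then sign the payload.

    The signed payload is what the QR code on a digital vaccine
    passport typically carries in serialised form.
    """
    payload = {
        "name": name,
        "dob": date_of_birth,
        "vaccine": vaccine,
        "doses": doses,
    }
    encoded = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(ISSUER_KEY, encoded, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_credential(credential: dict) -> bool:
    """Check that the payload has not been altered since issuance."""
    encoded = json.dumps(credential["payload"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, encoded, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])

cred = issue_credential("A. Person", "1980-01-01", "vaccine-x", doses=2)
assert verify_credential(cred)  # venue-side check before granting entry
```

The structural point worth noting is that the verifier checks the issuer’s signature over the combined identity and health-status fields: it is that binding of a person to a claimed vaccine status that makes these systems both useful as a pass and sensitive as infrastructure.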

Cross-cutting findings

Despite the complex, conflicting and limited evidence available about contact tracing and digital vaccine passports, this report uses a wide range of available resources and identifies the cross-cutting findings summarised here under the four themes of effectiveness; public legitimacy; inequalities; and governance, regulation and accountability.

Effectiveness: Did COVID-19 technologies work?

  • Contact tracing apps and digital vaccine passports were – necessarily – rolled out quickly, without consideration of what evidence would be needed to demonstrate their effectiveness. There was insufficient consideration of, and no consensus on, how to define, monitor, evaluate or demonstrate their effectiveness and impacts.
  • There are indications of the effectiveness of some technologies, for example the NHS COVID-19 app (used in England and Wales); an illustrative back-of-envelope calculation of this kind of estimate follows this list. However, the limited evidence base makes it hard to evaluate their technical efficacy or epidemiological impact overall at an international level.
  • The technologies were not well integrated into broader public health systems and pandemic management strategies, and this reduced their effectiveness. However, the evidence on this is limited in most of the countries in our sample (with a few exceptions, for example Brazil and India), and we do not have clear evidence to compare COVID-19 technologies with non-digital interventions or to weigh up their relative benefits and harms.
  • The evidence is inadequate on whether COVID-19 technologies resulted in positive change in people’s health behaviours (for example, whether people self-isolated after receiving an alert from a contact tracing app), either when the technologies were first deployed or over time.
  • Similarly, it is not clear how the apps’ technical properties and the various policies and approaches impacted on public uptake of the apps or adherence to relevant guidelines (for example, self-isolation after receiving an alert from a contact tracing app).
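To illustrate the kind of estimate referred to above: published evaluations of exposure-notification apps typically approximate the number of infections averted by chaining together notification volumes, adherence rates and transmission parameters. The sketch below reproduces that style of back-of-envelope reasoning in Python; every number is an invented placeholder, not a real estimate for any country or app.

```python
# Back-of-envelope estimate of infections averted by exposure notifications.
# All figures are invented placeholders for illustration only.

notifications = 1_000_000     # exposure alerts sent by the app
secondary_attack_rate = 0.06  # share of notified contacts who were actually infected
adherence = 0.5               # share of notified users who quarantined as asked
onward_transmissions = 1.5    # expected onward infections per unquarantined case
quarantine_effect = 0.65      # fraction of onward transmission prevented by quarantine

infected_contacts = notifications * secondary_attack_rate
averted = infected_contacts * adherence * onward_transmissions * quarantine_effect
print(f"Estimated infections averted: {averted:,.0f}")  # 29,250 with these inputs
```

The fragility of such estimates is the point: almost every input (secondary attack rates, adherence, the effect of quarantine) was poorly measured during the pandemic, which is why the evidence base on effectiveness remains limited.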

Public legitimacy: Did people accept COVID-19 technologies?

  • Public legitimacy was key to ensuring the success of these technologies, affecting uptake and behaviour.
  • People were concerned about the use of digital vaccine passports to enforce restrictions on liberty and increased surveillance. People protested against them, and the restrictive policies they enabled, in more than half of the countries in our sample.
  • Public acceptance of contact tracing apps and digital vaccine passports depended on trust in their effectiveness, as well as trust in governments and institutions to safeguard civil rights and liberties. Individuals and communities who encounter structural inequalities are less likely to trust government institutions and the public health advice they offer. Not surprisingly, these groups were less likely than the general population to use these technologies.
  • The lack of targeted public communications resulted in poor understanding of the purpose and technical properties of COVID-19 technologies. This reduced public acceptance and social consensus around whether and how to use the technologies.

Inequalities: How did COVID-19 technologies affect inequalities?

  • Some social groups faced barriers to accessing, using or following the guidelines for contact tracing apps and digital vaccine passports, including unvaccinated people, people structurally excluded from sufficient digital access or skills, and people who could not self-isolate at home due to financial constraints. A small number of sample countries adopted policies and practices to mitigate the risk of widening existing inequalities. For example, the EU allowed paper-based Digital COVID Certificates for those with limited digital access and skills.
  • This raises the question of whether COVID-19 technologies widened health and other societal inequalities. In most of the sample countries, there is no clear evidence whether governments adopted effective interventions to help those who were less able to use or benefit from these technologies (for example, whether they provided financial support for those who could not self-isolate after receiving an exposure alert due to not being able to work from home).
  • Most sample countries requested proof of vaccination from inbound travellers before allowing unconditional entry (that is, without a quarantine or self-isolation period) at some stage of the pandemic. This amplified global inequalities by discriminating against the residents of countries that could not secure adequate vaccine supply or had low vaccine uptake – specifically, many African countries.

Governance, regulation and accountability: Were COVID-19 technologies governed well and with accountability?

  • Contact tracing apps and digital vaccine passports combine health information with social or surveillance data. As they limit rights (for example, by blocking access to travel or entrance to a venue for people who do not have a digital vaccine passport), their use must be proportional. This means striking a balance between limitations of rights, potential harms and the intended purpose. To achieve this, it is essential that these tools are governed by robust legislation, regulation and oversight mechanisms, and that there are clear ‘sunset mechanisms’ in place to determine when they no longer need to be used.
  • Most countries in our sample governed these technologies in line with pre-existing legislative frameworks, which were not always comprehensive. Only a few countries enacted robust regulations and oversight mechanisms specifically governing contact tracing apps and digital vaccine passports, including the UK, EU member states, Taiwan and South Korea.
  • The lack of robust data governance frameworks, regulation and oversight mechanisms led to a lack of clarity about who was accountable for misuse or poor performance of COVID-19 technologies. Not surprisingly, there were incidents of data leaks, technical errors and data being reused for other purposes. For example, contact tracing app data was used in police investigations in Singapore and Germany, and sold to third parties for commercial purposes in the USA.[12]
  • Many governments relied on private technology companies to develop and deploy these technologies, demonstrating and reinforcing the industry’s influence and the power located in digital infrastructure.

Lessons

These findings present clear lessons for governments and policymakers deciding how to use contact tracing apps and digital vaccine passports in the future. These lessons may also apply more generally to the development and deployment of any new data-driven technologies and approaches.

Effectiveness

To build evidence on the effectiveness of contact tracing apps and digital vaccine passports:

  • Support research and learning efforts which review the impact of these technologies on people’s health behaviours.
  • Weigh up the technologies’ benefits and harms by considering their role within the broader COVID-19 response and comparing them with non-digital interventions (for example, manual contact tracing).
  • Understand the varying impacts of apps’ different technical properties, and of implementation policies and approaches, on people’s acceptance and experiences of these technologies in specific socio-cultural contexts and across geographic locations.
  • Use this impact evaluation to help set standards and strategies for the future use of these technologies in public crises.

To ensure the effective use of technology in future pandemics:

  • Invest in research and evaluation from the start, and implement a clear evaluation framework to build evidence during deployment that supports understanding of the role that technologies play in broader pandemic health strategies.
  • Define criteria for effectiveness using a human-centred approach that goes beyond technical efficacy and builds an understanding of people’s experiences.
  • Establish how to measure and monitor effectiveness by working closely with public health experts and communities, and set targets accordingly.
  • Carry out robust impact assessments and evaluation.

Public legitimacy

To improve public acceptance:

  • Build public trust by publishing guidance and enacting clear law that sets out permitted and restricted uses, mechanisms to support rights (for example, the right to privacy), and how legal issues will be tackled and redress enabled (for example, data leakage, or collected data being used for reasons other than health).
  • Effectively communicate the purpose of using technology in public crises, including the technical infrastructure and legislative framework for specific technologies, to address public hesitancy and build social consensus.

Inequalities

To avoid entrenching and exacerbating societal inequalities:

  • Create monitoring mechanisms that specifically address the impact of technology on inequalities. Monitor the impact on public health behaviours, particularly in relation to social groups who are more likely to encounter health and other forms of social inequalities.
  • Use the impact evidence to identify marginalised and disadvantaged communities and to establish strong public health services, interventions and social policies to support them.

To avoid creating or reinforcing global inequalities and tensions:

  • Harmonise global, national and regional regulatory tools and mechanisms to address global inequalities and tensions.

Governance and accountability

To ensure that individual rights and freedoms are protected:

  • Establish strong data governance frameworks and ensure regulatory bodies and clear sunset mechanisms are in place.
  • Create specific guidelines and laws to ensure technology developers follow privacy-by-design and ethics-by-design principles, and that effective monitoring and evaluation frameworks and sunset mechanisms are in place for the deployment of technologies (a minimal sketch of these two safeguards follows this list).
  • Build clear evidence about the effectiveness of new technologies to make sure that their use is proportionate to their intended results.
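As a minimal sketch of how privacy-by-design and sunset mechanisms can be expressed in code, assuming a hypothetical venue-side checker: the verifier returns only the yes/no answer the venue needs, never the underlying health record, and refuses to operate at all once a legislated end date has passed. The date, dose threshold and field name below are illustrative assumptions, not drawn from any real scheme.

```python
from datetime import date

# Hypothetical statutory end date after which the scheme must stop operating.
SUNSET_DATE = date(2022, 4, 1)

def check_entry(credential: dict, today: date) -> bool:
    """Data-minimising check: expose only a boolean, not the health record.

    Raises once the legislated sunset date has passed, so the
    infrastructure cannot quietly outlive its legal basis.
    """
    if today > SUNSET_DATE:
        raise RuntimeError("Scheme has reached its sunset date; checks disabled.")
    # The venue learns only whether the entry criterion is met --
    # not the person's vaccine type, dates or wider medical history.
    return credential.get("doses", 0) >= 2

# Example: a venue check performed while the scheme is still in force.
print(check_entry({"doses": 2}, date(2021, 8, 1)))  # True
```

Building the sunset into the software itself, rather than leaving it to policy alone, is one way to make the proportionality of such schemes enforceable in practice.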

To reverse the growing power imbalance between governments and the technology industry:

  • Develop the public sector’s technical literacy and ability to create technical infrastructure. This does not mean that the private sector should be excluded from developing technologies related to public health, but it is crucial that technical infrastructure and governance are effectively co-designed by government, civil society and private industry.

Effectiveness, public legitimacy, inequalities and accountability have varying definitions across disciplines. In this report we define them as follows:

 

Effectiveness: We define the effectiveness of contact tracing apps and digital vaccine passports in terms of the extent to which they positively affect public health, that is, result in decreasing the rate of transmission. We use a non-technocentric approach, distinguishing technical efficacy from effectiveness. Technical efficacy refers to a technology’s ability to perform a technical task (for example, a digital vaccine passport’s ability to generate a QR code to share data).
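A short example makes the distinction concrete. The snippet below succeeds in the technical-efficacy sense – it reliably renders a credential payload as a QR code – while telling us nothing about effectiveness, that is, whether scanning such codes at venue doors actually reduces transmission. It assumes the third-party Python packages qrcode and Pillow are installed.

```python
import json
import qrcode  # third-party: pip install qrcode pillow

payload = json.dumps({"name": "A. Person", "doses": 2})

# Technical efficacy: the app can reliably render the payload as a QR code.
img = qrcode.make(payload)
img.save("vaccine-pass.png")

# Effectiveness is a separate, empirical question: does presenting this
# code at venue doors reduce transmission? No amount of working code
# answers that -- it requires public health evaluation.
```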

 

Public legitimacy: We define this in terms of public acceptance of using contact tracing apps and digital vaccine passports. We also focus specifically on marginalised and disadvantaged communities, whose opinions and experiences might differ from the dominant dispositions.

 

Inequalities: We investigate inequalities both within and across countries. We look at whether COVID-19 technologies create new or reinforce existing health and other types of societal inequalities for disadvantaged and vulnerable groups (for example, people who could not use COVID-19 technologies due to inadequate digital access and skills). We also examine their impact on global inequalities by focusing on inequalities of resources, opportunities and power between countries and regions (for example, around access to vaccine supply).

 

Accountability: We use this to refer to the regulation, institutions and mechanisms that hold governments and officials accountable for preserving civil rights and freedoms.

Introduction

The COVID-19 pandemic is the first global epidemic of ‘the algorithmic age’.[13] In response, hundreds of new technologies have been developed, to diagnose patients, identify vulnerable populations and conduct surveillance of individuals known to be infected.[14] Data and artificial intelligence (AI) have therefore played a key role in how policymakers and international and national health authorities have responded to the pandemic.

Digital contact tracing apps and digital vaccine passports, which are the focus of this report, are two of the most widely deployed new technologies. Although versions of contact tracing apps had previously been deployed in some countries, such as in Sierra Leone as part of the Ebola response, for most countries across the world this was their first experience of such technologies.[15]

These technologies differ from pre-existing state surveillance tools, such as CCTV, and from other types of technologies deployed in the context of the COVID-19 pandemic, such as machine learning algorithms that profile the risk of incoming travellers or predict infected patients at high risk of developing severe symptoms.[16]

To be effective, contact tracing apps and digital vaccine passports require public acceptance and cooperation, as individuals need to consent to share their health and other types of personal information and change their behaviour, for example, by showing evidence of health status to enter a venue via a digital vaccine passport, or by staying at home on receiving an exposure notification from a contact tracing app.[17]

These technologies are therefore at the crossroads of public emergency, health and surveillance and so have significant societal implications.

The emergence of contact tracing apps and digital vaccine passports resulted in public anxiety and resistance related to their effectiveness, legitimacy and proportionality, as well as concern about the implications for informed consent, privacy, surveillance, equality, discrimination and the role of technology in broader public health management.

These technologies were therefore high stakes: they were perceived as necessary but high-risk measures for dealing with the pandemic.

As the technologies brought together a range of highly sensitive data, they were a test of the extent of the public’s willingness to share sensitive personal data and to accept limits on freedoms and rights.

The technologies were developed and deployed to save lives, but in practice they both enabled and limited people’s individual freedoms, by scoring the risk they posed to others based on their health status, location or mobility data.

Despite the risks and sensitivities, due to the challenging conditions of the pandemic, they were created and implemented quickly, and without a clear consensus on how they should be designed, governed and regulated.

Countries adopted different approaches, and – while there are some commonalities across countries and dominant infrastructures – the technical choices, policies and practices were neither unified nor consistent. Frequent changes were made even at a regional level.

It was particularly challenging for countries with weaker technological infrastructures, financial capabilities or legislative frameworks to develop and deploy COVID-19 technologies. Even in countries with relatively comprehensive regulation, these technologies caused fresh concerns for human rights and civil liberties, as they intensified ‘top-down institutional data extraction’ across the world.[18]

Many critics correctly anticipated that such technologies would normalise surveillance via state ownership of sensitive data in a way that would persist beyond the pandemic.

This creates a complex picture, made more challenging by incomplete evidence on how the technologies were developed, used and governed – and, most importantly, on their impact on people, health, healthcare provision and society. It is therefore important to monitor their development, understand their impact and consider what legacy they might have as well as the lessons we can learn for the future.

A range of studies focus on aspects of contact tracing apps and digital vaccine passports at different stages of the pandemic. The Ada Lovelace Institute has monitored the evolution of these technologies over the last three years. However, compared with more traditional health technologies or policy interventions, there is a lack of in-depth research into them or evaluation of their effectiveness.

As the infrastructure is still in place in most countries, these technologies can easily be re-used or transformed into new technologies for new purposes. Therefore, these are live questions with tangible effects on people and societies.

By synthesising evidence from a cross-section of 34 countries, this report identifies cross-cutting issues and challenges, and considers what lessons we should learn from the deployment of COVID-19 technologies as examples of new and powerful technologies that have been embedded across society.

Scope and rationale of this report

In the first two years of the pandemic, from early 2020, the Ada Lovelace Institute conducted extensive research first on contact tracing apps and then on digital vaccine passports. This research focused on the technical considerations and societal implications of these new technologies and included public attitudes research, expert deliberations, workshops, webinars and evidence reviews.

To conduct this research, we engaged multidisciplinary experts from the fields of behavioural science, bioethics, ethics, development studies, immunology, law, public health and sociology. As well as analysing the technical efficacy of the technologies, this work created a holistic picture of their legal, societal and public health implications.

We published nine reports based on our research, and two international monitors, which tracked policy and practice developments related to digital vaccine passports and contact tracing apps.

In this work, we acknowledged the potential of new data-driven technologies in the fight against COVID-19. However, we also identified the risks of rapid decision-making by governments and policymakers.

In most cases, there was not sufficient time or adequate research to consider and address the wide range of societal, political, legal and ethical risks. This led to significant challenges, related to effectiveness, public legitimacy, inequalities, and governance and accountability.

Risks and challenges of COVID-19 technologies contained in the Ada Lovelace Institute’s previous publications

When contact tracing apps and digital vaccine passports first emerged, we argued that governments and policymakers should pay attention to a wide range of risks and challenges when deploying these technologies.

From early 2020, the Ada Lovelace Institute – through reports, trackers and monitors – identified and warned about the risks of these technologies.[19]

The risks we identified and highlighted can be summarised as:

Effectiveness

  • Lack of resources to monitor effectiveness and impact. Impact monitoring and evaluation strategies were not developed, making it difficult to assess the effectiveness of the technologies. Digital vaccine passports and contact tracing apps were new technologies, developed and deployed at pace, so there was not enough time or resource to establish effective strategies and monitoring mechanisms to investigate their impacts on public health.
  • Undermining public health by treating a collective problem (public health) as an individual one (personal safety). This placed the emphasis on individualised risks or requirements, and greater health surveillance at an individual level. For example, digital vaccine passports categorise an individual as lower risk based on their vaccine or test status, rather than focusing on the more contextual risk of local infection in a specific area.
  • An increase in higher-risk behaviours due to the technologies fostering a false sense of security. Experts highlighted that COVID-19 technologies could create a false sense of security and discourage people from adhering to other protection measures that reduce the risk of transmission, for example, wearing a mask.[20]

Public legitimacy

  • Harming public trust in health data-driven technologies if they were not governed properly or were used for reasons other than health (for example, surveillance). Damaged public trust could make it difficult for governments to roll out new data-driven approaches and technologies to deal with public crises and in general.

Inequalities

  • Creating new forms of stratification and discrimination (for example, discrimination against unvaccinated people or those unable to access accepted vaccines or tests) or amplifying existing societal inequalities (for example, digital exclusion or poor access to healthcare).
  • Amplifying existing global inequalities and geopolitical tensions, particularly in the case of inequitable access to vaccines on a global level. Digital vaccine passport schemes required proof of vaccination for international travel or access to domestic activities (for example, entering a venue for a concert) across the world. This created the risk of a global race for vaccine supply, leaving many low- and middle-income countries scrambling for access.

Governance and accountability

  • Facilitating restrictions on individual liberty and increased surveillance. Members of the public were expected to use these powerful and potentially invasive technologies that collected and stored their personal data. These tools could therefore be used for surveillance, invading privacy or controlling individuals’ activities and mobility in general.
  • Repurposing individuals’ data for reasons other than health, for example, tracking dissidents’ activities, selling data to third parties for commercial purposes, etc.
  • Uncertainty and lack of transparency about private sector involvement and the risks of concentrating power and enabling long-term digital infrastructure that is reliant on private actors.[21]

Our reports made several recommendations for policymakers about how to mitigate these risks and challenges. As well as detailed recommendations for each technology, our cross-cutting recommendations covered the lifecycle of development and implementation.

Recommendations for policymakers made in previous Ada Lovelace Institute reports (2020–2022)

Effectiveness

  • Demonstrate the effectiveness of these technologies within the broader public health ecosystem, publishing modelling and testing; considering uptake and adherence to guidelines around these technologies (for example, reporting a positive COVID-19 test result, self-isolating on receiving an exposure notification or getting vaccinated); and publicly setting success criteria and outcomes and identifying risks and harms, particularly for vulnerable groups.

Public legitimacy

  • Build public trust through clear public communications and transparency. These communications should consider ethical considerations; establish clear legal guidance about permitted and restricted uses and mechanisms to support rights; and demonstrate how to tackle legal issues and enable redress (for example, by making a formal complaint in the case of a privacy breach).

Inequalities

  • Proactively address the needs of, and risks in relation to, vulnerable groups.
  • Work with international bodies to seek cross-border agreements and mechanisms to counteract the creation or amplification of global inequalities.

Governance and accountability

  • Ensure data protection by design to prevent data breaches or misuse.
  • Develop legislation with clear, specific and delimited purposes, and ensure clear sunset clauses for the technologies, and the legislation governing them.[22]

The focus of this research

The Ada Lovelace Institute’s original research in 2020 and 2021 focused on the conditions and principles required to safely deploy and monitor COVID-19 technologies.

By early 2022 many countries had deployed these technologies. Therefore, we shifted our focus and began investigating whether the risks and challenges we identified had materialised and, if so, what could be done differently in deploying technologies in the future.

As identified above, contact tracing apps and digital vaccine passports were deployed without consistent research and monitoring mechanisms. This contributed to a limited evidence base and meant that we needed to use a broad range of resources and research methods to develop this report (see Methodology).

Academic and grey literature provided valuable insights. This was supplemented by media and civil society coverage, for example of the repurposing of data collected through the contact tracing app Luca in Germany or the blocking of protests through the Health Code app in China.[23]

The evidence in this report includes qualitative and quantitative data related to the uses and impacts of COVID-19 technologies drawn from policy trackers, the media, policy papers, research papers and workshops convened with experts between January 2022 and December 2022.

To accompany the report, we have created the ‘COVID-19 Data Explorer: Policies, Practices and Technology’[24] to enable civil society organisations, researchers, journalists and members of the public to access the body of data.

The COVID-19 Data Explorer supports the discovery and exploration of policies and practices relating to digital vaccine passports and contact tracing apps across the world. The data on timelines, technologies and public response demonstrates the legacy and implications of their rapid deployment.

By using a wide range of resources, reviewing the existing evidence and identifying evidence gaps, we draw important cross-cutting lessons to inform policy and practice.

We synthesise the available evidence from a sample of 34 countries, with the aim of taking a macro view and identifying cross-cutting issues at an international level. The report contributes to the growing body of research on COVID-19 technologies, improving how we understand, investigate and build data-driven technologies for public good.

The evidence sources include:

  • the Ada Lovelace Institute’s previous work on contact tracing apps and digital vaccine passports in the first two years of the pandemic
  • academic and grey literature on digital vaccine passports, contact tracing apps and COVID-19 pandemic management, focusing on the 34 countries in our sample
  • government websites and policy papers
  • a workshop delivered by the Ada Lovelace Institute with cross-country experts, focusing on the effectiveness of contact tracing apps in Europe
  • papers submitted in response to the Ada Lovelace Institute’s international call for evidence on the effectiveness of digital vaccine passports and contact tracing apps
  • news media coverage of digital vaccine passports, contact tracing apps and pandemic management in the 34 countries in our sample.

See Methodology for more information on methods, sampling and resources.

Ada Lovelace Institute publications on COVID-19 technologies from 2020 to 2023[25]

  • Exit through the App Store? (April 2020): A rapid evidence review of the technical considerations and societal implications of using technology to transition from the first COVID-19 lockdown.
  • Confidence in a crisis? (August 2020): Findings of a public online deliberation project on attitudes to the use of COVID-19 technologies to transition out of lockdown.
  • Provisos for a contact tracing app (May 2020): A report that highlights the milestones that would have to be met by the UK Government to ensure the safety, equity and transparency of digital contact tracing apps.
  • COVID-19 digital contact tracing tracker (July 2020): A resource for monitoring the development, uptake and efficacy of global attempts to use smartphones and other digital devices for contact tracing.
  • No green lights, no red lines (November 2020): A report that explores the public perspectives on COVID-19 technologies and draws lessons to assist governments and policymakers when deploying data-driven technologies in the context of the pandemic.
  • What place should COVID-19 vaccine passports have in society? (February 2021): Findings from an expert deliberation on the potential roll-out of digital vaccine passports.
  • Public attitudes to COVID-19, technology and inequality (March 2021): A tracker summarising studies and projects that offer insights into people’s attitudes to and perspectives on COVID-19, technology and inequality.
  • The data divide (March 2021): Public attitudes research in partnership with the Health Foundation to explore the impacts of data-driven technologies and systems on inequalities in the context of the pandemic.
  • Checkpoints for vaccine passports (May 2021): A report on the requirements that governments and developers need to meet for any vaccine passport system to deliver societal benefit.
  • International COVID-19 monitor (June 2021): A policy and practice tracker that summarises developments concerning digital vaccine passports and COVID-19 status apps.
  • The rule of trust (July 2022): Principles identified by citizens’ juries to ensure that data-driven technologies are implemented in ways that the public can trust and have confidence in.

List of countries in our sample:

  1. Argentina (ARG)
  2. Australia (AUS)
  3. Brazil (BRA)
  4. Botswana (BWA)
  5. Canada (CAN)
  6. China (CHN)
  7. Germany (DEU)
  8. Egypt (EGY)
  9. Estonia (EST)
  10. Ethiopia (ETH)
  11. Finland (FIN)
  12. France (FRA)
  13. United Kingdom (GBR)
  14. Greece (GRC)
  15. India (IND)
  16. Israel (ISR)
  17. Italy (ITA)
  18. Jamaica (JAM)
  19. Kyrgyzstan (KGZ)
  20. South Korea (KOR)
  21. Morocco (MAR)
  22. Mexico (MEX)
  23. Nigeria (NGA)
  24. New Zealand (NZL)
  25. Romania (ROU)
  26. Russia (RUS)
  27. Saudi Arabia (SAU)
  28. Singapore (SGP)
  29. Tunisia (TUN)
  30. Türkiye (TUR)
  31. Taiwan (TWN)
  32. United States of America (USA)
  33. South Africa (ZAF)
  34. Zimbabwe (ZWE)

Contact tracing apps

Emergence

Contact tracing is an established disease control measure. Public health experts help patients recall everyone they have come into close contact with during the timeframe in which they may have been infectious. Contact tracing teams then inform exposed individuals that they are at risk of infection and provide them with guidance and information.[26]

In the early phase of the pandemic, the idea of building on this practice by digitising contact tracing quickly became prominent. With lockdowns contributing to social and economic hardships, the objective was to return to the pre-pandemic ‘normal’ as soon as possible, and the global consensus at the time was that vaccination would be the only long-term solution to achieve this.

While vaccines were being developed, many countries relied on contact tracing to break chains of infection so that they could ease pandemic restrictions such as lockdowns.

Research shows that contact tracing as a disease control measure reaches its full potential when carried out by trained public health experts, who are able to engage with patients and their contacts rapidly and sensitively.[27] However, many countries lacked adequate numbers of trained public health staff and resources (for example, testing capacity to detect contacts known to be infected) for this kind of manual tracking and isolation.[28] In this context, digital contact tracing offered the possibility of accelerating contact tracing.

Countries had varying approaches to contact tracing and the use of digital contact tracing technologies, depending on their existing infrastructure. South Korea, for example, established a national control tower that oversaw data collection and monitoring activities. This was built on existing smart city infrastructures, which contained data collected from immigration records, CCTV footage, card transaction data and medical records.[29]

Research in South Africa highlights the state’s surveillance capabilities using mobile network systems and tracking internet users’ online activities.[30] South Africa used location information from mobile network operators to help contact tracing teams who ‘tracked and traced’ people infected with COVID-19 with no prior public announcement or consultation, although it later abandoned this approach.[31]

In Asia and Africa, digital contact tracing involved extensive collection of personal data through mass surveillance. In Europe and the USA, on the other hand, the idea of digital contact tracing through a mobile app on citizens’ smartphones began to be considered. Contact tracing apps were considered a lower-risk alternative to the mass surveillance tools adopted in Asia and Africa.

The idea of building contact tracing apps eventually gained momentum not only in Europe and the USA but across the world. Governments needed to consider the technical infrastructure, efficacy and purpose of this new technology, and the related benefits, risks and harms.

As early research from the Ada Lovelace Institute showed, public legitimacy and trust were critical for these technologies to work effectively.[32] Members of the public had to use contact tracing apps in the way intended by governments and technology companies, such as by uploading their health information if diagnosed with COVID-19 or isolating after being informed they had had close contact with someone known to be infected with COVID-19. This was particularly challenging for countries and regions with low levels of digital access and skills.[33]

To support public trust, contact tracing apps needed to be built using established best-practice methods and principles, and uses of the technology and data had to be controlled through strong regulation. If the data were to be repurposed, such as for surveillance purposes, it could damage public trust in the government, limiting the effectiveness of using COVID-19 technologies to deal with public crises in the future.

Despite these challenges, many countries across the world deployed contact tracing apps at pace in 2020.[34] In this chapter, we outline the various technical approaches and infrastructure behind contact tracing apps to build understanding of the different debates and concerns around them. We then assess their effectiveness, public legitimacy, impact on inequalities and governance.

Types of contact tracing apps

Contact tracing apps can be divided into two types: centralised or decentralised. This determines where data is stored and who can access it.[35]

Table 1: Design approaches for contact tracing apps

Communication protocol: how is data generated, stored and processed, and who can access it?

Centralised system approach: Users’ data is generated, stored and processed on a central server operated by public authorities. Public authorities have access to the data: they score users according to their risk and decide which users to inform. For example, if person x has been in close proximity to person y, who is known to be infected with COVID-19, public authorities will be able to identify x and contact them.

Decentralised system approach: Users’ data is generated, stored and processed on users’ mobile phones. The data gathered through mobile phones can also be shared with a backend server, which is responsible for storing, processing and communicating data. However, decentralised systems use arbitrary identifiers (for example, a set of numbers and letters) rather than direct identifiers (for example, an IP address). Hence, even when public authorities access the data on a backend server, they cannot identify users or reconstruct their locations and social interactions.[36]

 

There are three main technologies that are used in both centralised and decentralised systems to detect and trace users’ contacts and estimate their risk of infection.

Table 2: Technologies of contact tracing apps

How do apps decide if a user has been in contact with a person known to be infected?

Bluetooth exposure notification system: This approach is based on proximity tracing: determining whether two individuals were near each other in a particular context for a specific duration.[37] Contacts are identified through Bluetooth technology on mobile phones. By giving permission for contact tracing apps to use their smartphone’s Bluetooth function, users allow the app to track real-time and historical proximity to other smartphones using the app. The app will share an infection alert if a user has been in proximity to a person who is known to be infected with COVID-19. Contact tracing apps based on Bluetooth technology are also referred to as exposure notification apps.

Location (GPS) data: This approach is based on location: contact tracing apps use the mobile device’s location (GPS) feature to identify contacts who have been in the same location as a person who is known to be infected with COVID-19.

QR code: This approach is based on presence tracing: determining whether two individuals were present at the same time in a venue where infection could have taken place.[38] Users scan a QR code with their smartphone on entry to venues. If a user who is known to be infected with COVID-19 uploads this information to the app, other users who have scanned the same QR code are notified.

New Zealand incorporated Near Field Communication (NFC) codes as an alternative to QR codes in the NZ COVID Tracer app. NFC is a technology that allows two devices to connect through proximity. NFC codes work by tapping mobile phones on or near NFC readers, in the same way that contactless credit cards, Google and Apple Pay work by tapping on or near card readers.[39]
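To make the presence-tracing approach concrete, below is a minimal Python sketch. It is a hypothetical illustration only: the visit structure, venue identifiers and two-hour risk window are assumptions made for this example, not features of any specific national app.

```python
from datetime import datetime, timedelta

WINDOW = timedelta(hours=2)  # assumed post-departure risk window

def overlaps(a_start, a_end, b_start, b_end):
    """Two visits count as a contact if their time intervals overlap,
    or fall within WINDOW of each other (risk may persist in a venue
    after an infected visitor has left)."""
    return a_start <= b_end + WINDOW and b_start <= a_end + WINDOW

def exposed(my_visits, infected_visits):
    """Each visit is a (venue_id, start, end) tuple recorded when a
    user scans the venue's QR code."""
    return any(
        venue == i_venue and overlaps(start, end, i_start, i_end)
        for venue, start, end in my_visits
        for i_venue, i_start, i_end in infected_visits
    )

# Example: a stay at 'cafe-17' overlapping an infected visitor's stay.
mine = [('cafe-17', datetime(2021, 3, 1, 12, 0), datetime(2021, 3, 1, 13, 0))]
theirs = [('cafe-17', datetime(2021, 3, 1, 12, 30), datetime(2021, 3, 1, 14, 0))]
assert exposed(mine, theirs)
```

In a deployed system this comparison would run against venue check-ins uploaded by users who report a positive test, rather than against a locally held list.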

When contact tracing apps were being considered for development, many countries were enthusiastic about deploying apps with a centralised system approach, which stores the data of app users on a central server.

Supporters of this centralised approach argued that access to data would give epidemiologists and health authorities valuable information for analysis. However, many privacy, data security and human rights researchers and activists highlighted the risks created by user data being accessible to third parties through a centralised server. These risks included privacy infringements, data repurposing and increased surveillance.

In this context, proposals emerged for technical protocols that would enable decentralised contact tracing, designed to be ‘privacy preserving’ by enabling users’ data to be stored on their mobile smartphones rather than on a centralised server.

Several decentralised protocols emerged in April 2020, including the open protocol DP-3T (Decentralized Privacy-Preserving Proximity Tracing), PEPP-PT (Pan-European Privacy-Preserving Proximity Tracing) and the Apple/Google Exposure Notification protocol (GAEN API).
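To make the decentralised model concrete, here is a minimal Python sketch of the general pattern these protocols share. It is a simplification under assumed parameters (key size, epoch length, derivation scheme), not the actual DP-3T or GAEN specification. The essential property it demonstrates is that matching happens on the device: only the daily keys of users who test positive are ever uploaded.

```python
import hashlib
import hmac
import os

EPOCHS_PER_DAY = 96  # assumed: one rotating identifier per 15-minute window

def new_daily_key() -> bytes:
    """Each phone generates a fresh secret key per day."""
    return os.urandom(32)

def ephemeral_ids(daily_key: bytes) -> list:
    """Derive the day's rotating Bluetooth identifiers from the secret key.
    Observers who see a broadcast identifier cannot link it to the phone
    without knowing the daily key."""
    return [
        hmac.new(daily_key, f'epoch-{i}'.encode(), hashlib.sha256).digest()[:16]
        for i in range(EPOCHS_PER_DAY)
    ]

def exposure_check(published_keys: list, heard_ids: set) -> bool:
    """Runs on-device: re-derive the identifiers of users who reported an
    infection and compare them against identifiers this phone overheard."""
    return any(
        eid in heard_ids
        for key in published_keys
        for eid in ephemeral_ids(key)
    )
```

Phones broadcast their current identifier over Bluetooth and record the identifiers they hear nearby. A user who tests positive uploads only their recent daily keys; every other phone downloads those keys and runs the check locally, so raw contact histories never leave the device.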

In our research, we collected evidence about the system approaches of contact tracing apps in 25 countries.[40] We discovered that 15 out of 25 countries used a decentralised system approach, although not all of them based that decision on its privacy-preserving infrastructure.

The Apple/Google protocol quickly became the dominant decentralised protocol, because of the control exercised by the platforms over the two main smartphone operating systems (iOS and Android, respectively).

The Apple/Google protocol gained dominance in part because centralised contact tracing apps could not perform well on Google and Apple’s operating systems[41] without the platforms making technical changes to these systems, which they refused to do because of concerns about users’ privacy.[42]

The centralised contact tracing apps of Australia and France, for example, had major technical problems.[43] In June 2020, France’s junior minister for digital affairs highlighted that the poor technical efficacy of France’s centralised app had led to decreased public confidence in the app, stating: ‘There has been an upward trend in uninstalling over the last few days, to the tune of several tens of thousands per day’.

Similarly, Australia’s contact tracing app, which combined Bluetooth technology with a centralised server approach, identified only 17 contacts in two years that had not already been found manually.

These technical constraints caused tensions between technology companies and governments that wanted to use centralised systems with Bluetooth technology, which was considered less invasive of privacy than collecting geographical location data. Countries such as the UK and Germany, which initially pursued centralised apps independently of the Apple/Google protocols, eventually had to deploy the GAEN API to enable their Bluetooth notification systems to work effectively.[44]

In some cases, the distinction between centralised and decentralised systems was blurred: there are decentralised contact tracing systems that centralise information if users voluntarily upload data.

For example, Singapore’s Bluetooth exposure notification app is decentralised in that it does not store users’ data on a central server. However, when users sign up for TraceTogether, they provide their phone number and ‘unique identification number’ (a government ID used for a range of activities).

If a user is known to be infected with COVID-19, they can grant the Ministry of Health access to their Bluetooth proximity data. This allows the ministry to identify people who have had close contact with the infected app user within the last 25 days, so it follows a more centralised model at that point.[45]

The developers emphasised that they built this ‘hybrid model of decentralised and centralised approach specifically for Singapore’.[46] Similarly, Ireland’s COVID Tracker allows users to upload their contact data, age, sex and health status to a centralised data storage server.[47] There are also apps that use both GPS data and a Bluetooth exposure system, such as India’s Aarogya Setu.

QR codes were also widely used in contact tracing apps, especially those with Bluetooth exposure notification systems, such as the UK’s NHS COVID-19 app.

  • Romania, the USA, Russia and Greece are the only countries in our sample that did not launch a national contact tracing app.[48]
  • India, Ghana, South Korea, Türkiye, Israel and Saudi Arabia used both Bluetooth and location data with a centralised approach.[49]
  • Estonia, France, Finland, Canada, India and Australia discontinued their contact tracing apps and deleted all of the data gathered and stored through them.[50] England and Wales also closed down their contact tracing app NHS COVID-19, and the personal data collected was deleted, but anonymous analytical data may be retained for up to 20 years.[51]
  • Several contact tracing apps were expanded to include vaccine information – for example, Italy’s Immuni app, Türkiye’s Hayat Eve Sığar (HES; Life Fits into Home) app and Singapore’s TraceTogether (TT) app.
  • The USA did not have a federal contact tracing app. MIT Technology Review’s COVID Tracing Tracker demonstrates that only 19 states out of 50 had rolled out contact tracing apps as of December 2020, and to the best of our knowledge no contact tracing app was developed in the USA after this date.[52]

Effectiveness of contact tracing apps

In April 2020, the Ada Lovelace Institute published the rapid evidence review Exit through the App Store?[53] This report explored the technical and societal implications of a variety of COVID-19 technologies, including contact tracing apps. The review acknowledged that, given the potential of data-driven technologies ‘to inform research into the disease, prevent further infections and support the restoration of system capacity and the opening up of the economy’, it was right for governments to consider their use.

However, we urged decision-makers to consider the lack of scientific evidence demonstrating the potential efficacy and impact of contact tracing apps. And we pointed out that there had not been adequate time or resources to establish effective strategies and monitoring mechanisms to investigate their impacts on public health.

We emphasised that a lack of credible evidence supporting the apps’ effectiveness could undermine public trust and hinder implementation through low uptake.

Since then, a considerable number of studies have emerged investigating the effectiveness of contact tracing apps. This body of literature offers four key findings:

  • Some Bluetooth exposure notification apps with decentralised systems have been effective in identifying and notifying close contacts of people known to be infected with COVID-19, for example the UK’s NHS COVID-19 app.[54] However, the technical efficacy of this kind of system cannot be generalised at an international level. The evidence from South Africa and Canada, for example, indicates technical problems, including insufficient Bluetooth accuracy and smartphone batteries being quickly drained.[55] Such technical issues affected the apps’ ability to identify and notify close contacts of people who were known to be infected with COVID-19.
  • Apps with centralised systems and Bluetooth exposure notification systems, which were not compatible with Google and Apple’s GAEN API, had significant technical problems. This reduced their ability to identify close contacts.[56] For example, France’s contact tracing app had sent only 14 notifications after 2 million downloads as of June 2020.[57]
  • Low uptake of contact tracing apps reduced their effectiveness in some countries, for example in Australia.[58] Because both the infected person and their contact must be running the app for an exposure to be detected, the share of contact events that produce a notification falls roughly with the square of the adoption rate, so low uptake is doubly damaging.
  • Contact tracing apps were insufficiently integrated with government services and public health systems. An investigation of the effectiveness of contact tracing apps from a public health perspective in six countries found that apps did not reach their full potential, due to inadequate testing capacity and poor data sharing across local and central government authorities.[59]

However, there are still important evidence gaps which prevent us from definitively assessing the effectiveness of contact tracing apps.

To explore these gaps, we organised a multidisciplinary workshop with experts from the USA and Europe in October 2022 to discuss the effectiveness of contact tracing apps. The findings from the workshop (listed below) demonstrate the limitations of the evidence.

It was clear that there is still no consensus on what effectiveness means beyond apps’ technical efficacy. How can we define people-centred effectiveness?

Research is also limited on how contact tracing apps affected individual behaviours that would have supported wider public health measures: for example, whether users self-isolated after a COVID-19 exposure notification. The existing evidence is limited in both sample size and scope,[60] because (to date) people’s real-life experiences of contact tracing apps have received little research attention.

A Digital Global Health and Humanitarianism Lab (DGHH Lab) investigation of contact tracing apps provides a useful framework for how further research should evaluate people’s real-life experiences of such apps. The investigation looks at people’s opinions and experiences of contact tracing apps in five countries: Cyprus, Iceland, Ireland, Scotland and South Africa.[61] It concludes that user engagement with the apps should be seen in four stages:

  1. Uptake (users download the app).
  2. Use (users run the app and keep it updated).
  3. Report (users report a positive COVID-19 diagnosis via the app).
  4. React (users follow necessary next steps when they receive an exposure notification from the app).[62]

Uptake alone does not guarantee continued use and change in behaviour (for example, getting tested or staying at home when notified of an exposure). The stage-based approach should therefore guide our understanding of individuals’ actual, ongoing usage of COVID-19 technologies.
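A back-of-envelope calculation shows why attrition at each stage matters. The rates below are invented purely for illustration:

```python
# Hypothetical rates for each stage of the engagement funnel.
uptake = 0.30  # share of the population that downloads the app
use    = 0.80  # share of downloaders who keep the app running and updated
report = 0.50  # share of infected active users who report a positive test
react  = 0.60  # share of notified users who actually self-isolate

# For a contact event to lead to effective isolation, the infected person
# must have taken up and be using the app and report their diagnosis, and
# their contact must have taken up and be using the app and react.
p_effective = (uptake * use * report) * (uptake * use * react)
print(f'{p_effective:.1%} of contact events lead to effective isolation')
# -> roughly 1.7% under these invented rates
```

Even generous-looking rates at each stage compound into a small end-to-end effect, which is why evaluations that look only at download figures overstate what the apps achieved.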

Several studies demonstrate that uptake does not guarantee continued use. In France, for example, only a minority of users of the TousAntiCovid (Everyone Against COVID, formerly StopCovid) app used the contact tracing feature.

BBC News reported that although two million people downloaded the Protect Scotland app, only 950,000 people actively used it, and that around 50,000 people stopped using it a few months after its launch.[63] Similarly, there is evidence that millions of people who downloaded the NHS COVID-19 app (used in England and Wales) never technically enabled it, so despite having an intention to engage with it, they did not use it in practice.[64]

This evidence does not suggest that contact tracing apps were completely ineffective. But it challenges us to consider why people did not use the apps as anticipated by policymakers and developers.

Exploring this will help ensure that contact tracing apps and similar health technologies reach their full potential in the future.

A research study of UK contact tracing apps demonstrates that some people also stopped using the apps after a while because they lost confidence in their effectiveness.[65] Similarly, the Government of Canada’s evaluation of the COVID Alert app notes that its perceived lack of effectiveness among the public led to fewer downloads and less continued usage, which prevented the app from reaching its full potential.[66]

These findings demonstrate that more research is needed to investigate people’s views and practices in relation to contact tracing apps in real-life contexts and over time. This will help review the apps’ effectiveness, not just technically but in terms of outcomes for people and society.

How did different technologies, policies and public communications impact public attitudes when the apps were first deployed and over time?

We need more comparative evidence to understand how different technologies, policies and public communication strategies impacted public attitudes. The existing evidence, despite its limitations, indicates the importance of comparative research.

For example, there is an important distinction between tracing apps (location GPS data) and exposure notification apps (Bluetooth technology), in terms of the risks and challenges they pose. Yet there is no adequate research into how the public perceives the respective risks and effectiveness of these two different types of contact tracing apps.

A qualitative research study with 20 users of Canada’s COVID Alert app confirms the significance of this evidence gap. It demonstrates that participants favoured the app’s decentralised approach over centralised systems because of the higher level of privacy protection and optional level of cooperation.[67] The research also finds that users’ motivation to notify the app if known to be infected with COVID-19, and to follow government guidelines, increases with their understanding of the purpose and technical functionality of the app.

A limitation of the evidence base is that existing research largely investigates contact tracing apps in the first year of the pandemic. There is a need to understand their success and effectiveness in the context of the changing nature of the pandemic. This will help us understand how people’s confidence in the apps’ effectiveness, and their usage practices, changed over time.

Our recommendation when contact tracing apps emerged in 2020:

  • Establish the effectiveness of contact tracing apps as part of a wider pandemic response strategy.[68]

 

In 2023, the evidence on the effectiveness of the various apps can be summarised as follows:

  • Countries did not decide what effectiveness would look like when rolling out these apps.
  • Contact tracing apps have demonstrated that digital contact tracing is feasible. Some decentralised contact tracing apps with Bluetooth technology worked well, in that they demonstrated technical efficacy (for example, the NHS COVID-19 app in England and Wales[69]). However, the technical efficacy of decentralised Bluetooth exposure notification systems cannot be generalised at an international level. The evidence from South Africa and Canada, for example, indicates technical problems.
  • Apps with centralised systems and Bluetooth exposure notification systems, which were not compatible with Google and Apple’s GAEN API, had significant technical problems. This negatively impacted their ability to identify and notify close contacts (for example, in France).
  • Existing research and expert opinion indicate that the apps were not well integrated within broader public health systems and pandemic management strategies, which negatively impacted their effectiveness.
  • The impact of contact tracing apps on public health is unclear because significant evidence gaps remain that prevent understanding of their impact on public health behaviours at different stages of the pandemic. There is also a lack of clear evidence around how different technologies, policies and public communications have affected public attitudes towards the apps.

 

Lessons learned:

To build evidence around the effectiveness of contact tracing apps as part of the wider pandemic response strategy:

  • Support research and learning efforts on the impact of contact tracing apps on people’s public health behaviours.
  • Understand how the apps’ technical properties, and different policies and implementation approaches, impact on people’s experiences of contact tracing apps in specific socio-cultural contexts and across geographic areas.
  • Use this impact evaluation to help set standards and strategies for the future use of technology in public crises. Weigh up digital tools’ benefits and harms by considering their role within the broader COVID-19 response and comparing them with non-digital interventions (for example, manual contact tracing).

 

To ensure the effective use of technologies in future pandemics:

  • Invest in research and evaluation from the outset, and implement a clear evaluation framework to build evidence during deployment that supports understanding of the role that COVID-19 technologies play in broader pandemic health strategies.
  • Define criteria for effectiveness using a human-centred approach that goes beyond technical efficacy and builds an understanding of people’s experiences.
  • Establish how to measure and monitor effectiveness by working closely with public health experts and communities, and set targets accordingly.
  • Carry out robust impact assessments and evaluation of technologies, both when first deployed and over time.

Public legitimacy of contact tracing apps

When they first emerged, we argued that public legitimacy was key to the success of contact tracing apps.

Members of the public were more likely to use the apps and follow the guidelines (for example, self-isolating after receiving a notification) if they trusted the technology’s effectiveness and believed that adequate regulatory mechanisms were in place to safeguard their privacy and freedoms.[70]

We also demonstrated that public support for contact tracing apps was contextual: people had varying views and experiences of the apps depending on how they were implemented locally (for example, whether uptake was mandatory or voluntary).[71]

In countries where contact tracing app use was mandatory, members of the public had to use them even if they did not think that they were legitimate technologies. For example, in China, the Health Code app was automatically integrated into users’ WeChat and Alipay, so that they could only deactivate the COVID-related functionality by deleting these applications.[72]

These applications are widely used, as smartphone-based digital payment is the main method of payment in China.[73] The app was therefore assigned mandatorily to 900 million users (out of 1.4 billion) in over 300 cities, using pre-existing legal mechanisms to justify and enforce the policy (for example, the Novel Coronavirus Pneumonia Prevention and Control Plans).[74]

The Health Code app was not the only automatically assigned technology across China. Cities and regions required their residents to use multiple technologies depending on their own local COVID-19 pandemic measures and mechanisms; however, there is not much information regarding local authorities’ administration of these technologies. Similarly, it was not always clear which government department had ultimate authority for oversight and enforcement.[75]

In the majority of the countries in our sample, contact tracing apps were voluntary. People were not obliged through legislation to use them, and only did so if they believed in their effectiveness and had the resources to adopt them and adhere to guidelines.

Seen through this lens, contact tracing apps can be taken as a test of public acceptance of powerful technologies that entail sensitive data and are embedded in everyday life.

A study that investigated voluntary contact tracing app adoption in 13 countries found that the adoption rate was 9% on average.[76] In 2020, the Ada Lovelace Institute conducted an online public deliberation project on the UK Government’s use of the NHS COVID-19 contact tracing app to transition out of lockdown.[77] This research demonstrated that the public demanded clarity on data use and potential risks as well as independent expert review of the technology’s efficacy. Since then, there has been a boom in research into public attitudes to contact tracing apps that confirms this point.

This research demonstrates the reasons for low levels of public support for contact tracing apps. These include low levels of trust in government and concerns about apps’ security and effectiveness, which led to low adoption (or high rates of people discontinuing use) in some countries, for example Australia, France and South Africa.[78]

While we do not have in-depth insights about public support for apps in the countries where uptake was mandatory, recent developments in China demonstrate people’s dissatisfaction with the Health Code app and the restrictions it enabled. When the Chinese government ended the Health Code mandate in December 2022, many people shared celebratory content on social media platforms.

Some of this content suggested that people were happy to make decisions and take precautions for themselves rather than rely on the Health Code algorithm.[79] A considerable number of privacy and human rights law experts were explicitly critical of the Health Code system (both the use of the system in general and its use beyond the height of the pandemic) and urged the Chinese government to discontinue its use beyond the COVID-19 pandemic.[80]

Experts emphasise the importance of effective public communication strategies in pandemic management.[81] The existing research demonstrates that many governments across the world have not been able to communicate scientific evidence effectively, particularly to address vaccine hesitancy and misinformation.[82] This finding includes communications around digital interventions.

Research undertaken in the UK shows that the public do not have a clear understanding of the technical capabilities and uses of COVID-19 technologies.

When asked about digital contact tracing apps, participants in the research imagined these apps ‘being able to “see” or “visualise” their every move’.[83]

This indicates a misunderstanding (or lack of knowledge) regarding the apps’ infrastructure. Contact tracing apps in the UK are built on the GAEN API using Bluetooth technology, so they do not collect geo-location data and are not able to track users’ location in the literal sense of knowing where a user is at a given point in time.

In Europe, Bluetooth technology has been widely used instead of geo-location data.[84] However, the perceived risk of surveillance and literal tracking has been a public concern in the majority of European countries, especially among social groups with lower levels of trust in government.[85] Similar evidence exists for South Africa, where the lack of focused and targeted communications reduced public trust, and the COVID Alert SA app was not widely used by members of the public.[86]

Perhaps an exception within our sample is Canada, which established an extensive communications campaign to increase awareness and understanding of the COVID Alert app.[87] Health Canada, the government department responsible for national health policy, spent C$21 million on this campaign to encourage Canadians to download and use the app.[88]

The official evaluation of the app published by Health Canada and the Public Health Agency of Canada concludes that these campaigns resulted in millions of downloads.[89] This evidence demonstrates the importance of effectively communicating the apps’ purpose and technical infrastructure to members of the public.

Existing political structures and socio-economic inequalities were also important in determining uptake. In many parts of the world, structural factors and inequalities mean that marginalised and disadvantaged communities are more likely to distrust the government, institutions and public health advice.[90]

It is unsurprising that these groups were less likely to use contact tracing apps. There is strong online survey research evidence from the UK that confirms this point, in an investigation of the adoption of and attitudes towards the NHS COVID-19 app:

  • 42% of Black, Asian and minority ethnic respondents downloaded the app compared with 50% of white respondents
  • 13% of Black, Asian and minority ethnic respondents downloaded then deleted the app compared with 7% of white respondents
  • Black, Asian and minority ethnic respondents were more concerned about how their data would be used and felt more frustrated as a result of a notification from the app than white respondents
  • Black, Asian and minority ethnic respondents had lower levels of trust in the National Health Service (NHS) and were less likely to download the app to help the NHS.[91]

Our recommendations when contact tracing apps emerged:

  • Build public trust by publicly setting out guidance and enacting clear law about permitted and restricted uses. Explain the legal guidance and mechanisms to support rights through clear public communications and transparency.
  • Ensure users understand an app’s purpose, the quality of its evidence, its risks and limitations, and users’ rights, as well as how to use the app.[92]

 

In 2023, the evidence that has emerged on the public legitimacy of contact tracing apps demonstrates these points:

  • Public acceptance of contact tracing apps depended on public trust in apps’ effectiveness and in governments and institutions, as well as the safeguard mechanisms in place to protect privacy and individual freedoms.
  • Individuals and communities who encounter structural inequalities were less likely to trust in government institutions and the public health advice they offered. Hence, they were less likely than the general population to use contact tracing apps.
  • Governments did not always do well at communicating with the public about the properties, purpose and legal mechanisms of contact tracing apps. This negatively impacted public legitimacy, since governments could not gain public trust in the safety and effectiveness of the apps.

 

Lessons learned:

To achieve public legitimacy for the use of technology in future pandemics:

  • Reinforce the need to build public trust by publicly setting out guidance and enacting clear law about permitted and restricted uses. Explain the legal guidance and mechanisms to support rights through clear public communications and transparency.
  • Effectively communicate the purpose, governance and properties of contact tracing technologies to the public.

Inequalities

The international evidence concerning the impact of COVID-19 on communities demonstrates higher infection and mortality rates among the most disadvantaged communities.

It highlights the intersections of socio-economic, ethnic, geographical, digital and health inequalities, particularly in unequal societies and regions.[93]

The introduction of contact tracing apps led to concerns that they could widen health inequalities for vulnerable and marginalised individuals in society (for example, around digital exclusion and poor access to healthcare). In this context, we called on governments to carefully consider the potential negative social impacts of contact tracing apps, especially on vulnerable and disadvantaged groups.[94]

As part of pandemic management, policymakers and technology companies developed and adopted new technologies rapidly. This left insufficient room to discuss questions about equality and impact, such as whether contact tracing apps would benefit everyone in society equally, who might not be able to benefit from them, and what the alternatives were for those individuals and communities.

There was a surge in techno-solutionism – the view that technologies can solve complex real-world problems – during the pandemic. As Marelli and others (2022) argue, ‘the rollout of COVID interventions in many countries has tended to replicate a mode of intervention based on ‘technological fixes’ and ‘silver-bullet solutions’, which tend to erase contextual factors and marginalize other rationales, values, and social functions that do not explicitly support technology-based innovation efforts’.[95]

This meant that non-digital interventions that could perhaps have benefited marginalised and disadvantaged communities – particularly manual contact tracing – were not adequately considered.

Research shows that contact tracing as a disease control measure, if effectively conducted in a timely way, can save lives, particularly for disadvantaged and marginalised communities.[96]

Manual contact tracing teams should ideally be trained to help individuals and families to access testing, identify symptoms, and secure food and medication when isolating. This type of in-depth case investigation and contact tracing requires knowing and effectively communicating with communities, which cannot be done via a mobile application.

Some contact tracing apps recognised this need and attempted to incorporate a manual function. COVID Tracker Ireland, for example, offered users the option of providing a phone number if they wanted to be contacted by public health staff.[97] This is important because it gives contact tracers the opportunity to contact people who are known to be infected with COVID-19 and address their needs.

However, it was unclear how these apps were intended to work alongside manual contact tracers, since it is a core function of the majority of contact tracing apps that they inform individuals of exposure directly, with no involvement from public health staff.[98]

This raises the question of whether digital contact tracing was carried out at the expense of other health interventions (most notably, manual contact tracing) and led to the needs of particular individuals and families not being sufficiently considered.[99]

Furthermore, contact tracing apps’ success relies on the assumption that people will self-isolate if notified as a contact of someone who has tested positive for COVID-19. Yet as Landau, the author of People Count: Contact-Tracing Apps and Public Health, argues: ‘the privilege of staying at home is not evenly distributed’.[100]

While some people were able to work from home, many were not and therefore did not have the opportunity to self-isolate if notified of exposure. This shows that technologies cannot work efficiently in isolation and must be supported by strong social policies.

In some countries, governments introduced financial support for those who were ill or self-isolating. In the UK, for example, the Government enabled citizens to claim a payment if notified by the NHS COVID-19 app.[101] But a report by the Nuffield Foundation and the Resolution Foundation found that the financial support given by the Government during the pandemic covered only a quarter of workers’ earnings.[102]

For health technologies such as contact tracing apps to result in changes in behaviour, policymakers need to address structural factors and inequalities that affect disadvantaged groups.

Similarly, people who did not have adequate digital access and skills were not able to use contact tracing apps, even if they wanted to. And these apps were particularly challenging for countries with low levels of internet access, such as South Africa and Nigeria.[103]

Our recommendation when contact tracing apps emerged:

  • Proactively address the needs of, and risks relating to, vulnerable groups.[104]

 

In 2023, the evidence on the impact of contact tracing apps on inequalities demonstrates these points:

  • The rapid introduction of apps caused concerns that they would widen health inequalities for vulnerable and marginalised individuals in society (for example, those who are digitally excluded or with poor access to healthcare) who would not be able to benefit from them.
  • The evidence is unclear around the impact of contact tracing apps on health inequalities and whether authorities produced effective non-digital solutions and services for marginalised and disadvantaged communities.
  • Marginalised and disadvantaged communities (for example, those facing digital exclusion or lacking the financial security to self-isolate) were less likely to use contact tracing apps. To increase adoption among these groups, apps needed to be complemented by non-digital solutions and public services (for example, manual contact tracing or financial support).

 

Lessons learned:

To mitigate the risk of increasing inequalities when using technology in future pandemics:

  • Consider and monitor the impact of technologies on disadvantaged and marginalised communities. These communities may not benefit from technological solutions as much as the general population, which might increase health inequalities.
  • Mitigate the risk of increasing (health) inequalities for these groups by establishing non-digital services and policies that will help them use the technologies and adhere to guidelines (for example, providing financial support for those who cannot work from home).

Governance, regulation and accountability

In deciding to introduce contact tracing apps, governments had to consider trade-offs between human rights and public health interests, because the apps used sensitive personal information and determined the freedoms and rights of individuals.

In the early stages of the pandemic, the Ada Lovelace Institute recommended that if governments wanted to build contact tracing apps, they should ensure that these new tools were governed by strong regulations and oversight mechanisms. We argued that contact tracing apps should be designed and governed in line with data protection and privacy principles.[105]

We acknowledge that these principles are not universal but are informed by political, cultural and social values. But they are underpinned by an international framework that informs the legal protection of human rights around the world.[106] It is beyond the scope of this report to evaluate country-specific laws. But the evidence we have uncovered suggests that different political cultures and pre-existing legislative frameworks of countries yielded varying governance mechanisms, which sometimes fell short of protecting civil rights and freedoms.

One of the most polarising issues concerning the launch of contact tracing apps was whether they should be mandatory or voluntary.

When contact tracing apps first emerged, we argued that making their use mandatory would not be proportionate, given the lack of evidence for the apps’ effectiveness.

We also highlighted that contact tracing apps could facilitate surveillance and result in discrimination against certain groups (for example, those who are digitally excluded or refuse to use contact tracing apps). If these risks and challenges materialised, they could be detrimental to human rights.[107]

A comparative analysis of legislation and digital contact tracing policies in 12 countries shows that, in western countries, where privacy legislation strongly emphasises individual freedoms and rights, contact tracing app use was voluntary (for example, France, Austria and the UK).[108]

In Israel, China, Taiwan and South Korea, contact tracing app use was mandatory. Several studies demonstrate how the pre-existing laws and confidentiality requirements allowed Taiwan’s and South Korea’s governments to collect a wide range of social and surveillance data with relatively high levels of public acceptance.[109]

Both Taiwan and South Korea had had recent experiences of dealing with pandemics, and there was pre-existing legislation that permitted tracking through contact tracing apps, CCTV and credit card companies. These laws allowed the governments to carry out large-scale data collection programmes, and there were also strict confidentiality requirements in place.

Although digital contact tracing was mandatory and extensive, contact tracing app governance was transparent and civilian-run in both countries, based on pre-existing public emergency and data protection legislation.[110]

In China, on the other hand, there was no pre-existing comprehensive privacy legislation when the Health Code was deployed (as the Personal Information Protection Law came into effect in November 2021).[111] China enforced mandatory use of the Health Code app between February 2020 and December 2022.

Health Code served as both a contact tracing app and a digital vaccine passport, linked with users’ national identity numbers. It used GPS location in combination with data gathered through WeChat and Alipay, two of the most popular social commerce platforms in China.

These platforms were chosen to guarantee widescale adoption, since they provide the backbone for electronic financial transactions in China. The app assigned each user to one of three risk categories: green (low risk, free movement); yellow (medium risk, 7-day self-isolation); and red (high risk, 14-day mandatory quarantine).[112]
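The published colour scheme amounts to a simple rule mapping, as the illustrative Python sketch below shows. Note that this reconstructs only the publicly described outputs: the actual inputs and scoring algorithm behind each user’s colour assignment were never made public.

```python
# Illustrative only: restrictions attached to each publicly described
# Health Code colour. How a user's colour was computed is not public.
RESTRICTIONS = {
    'green':  'low risk: free movement',
    'yellow': 'medium risk: 7-day self-isolation',
    'red':    'high risk: 14-day mandatory quarantine',
}

def permitted(colour: str) -> str:
    """Look up the restriction tier attached to a colour code."""
    return RESTRICTIONS[colour]
```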

Health code systems were automatically added to citizens’ smartphones through Alipay and WeChat, and Chinese authorities were accused of misusing the systems to stop protests and conduct surveillance of activists.[113]

In Israel, where the contact tracing app was mandatory and centralised, the legislation relating to pandemics does not include digital data collection because it was established in 1940. When a state of emergency is declared, the government is empowered to enact emergency regulations that may suspend the validity of other laws that protect individual rights and freedoms.

In this context, the absence of digital data collection in the legislation relating to pandemics allowed the government to enact emergency regulations allowing the authorities to conduct extensive digital contact monitoring.[114]

The Lex-Atlas COVID-19 project also highlights that emergency powers were used to justify excessive data gathering and surveillance mechanisms in various countries.[115] Some countries unlawfully attempted to make the apps mandatory for domestic activities.

For example, in spring 2020, India made it mandatory for government and private sector employees to download the Aarogya Setu app. This decision was then questioned by experts, including a former Supreme Court judge who challenged it in the Kerala High Court, due to the lack of any law backing mandatory use of the app.[116]

After the challenge was heard in early May 2020, the Ministry of Home Affairs issued a notification on 17 May 2020, clarifying that use of the Aarogya Setu app should be changed from mandatory to a ‘best effort’ basis.[117] This allowed government employees to challenge the mandatory use of the app enforced by the government or a government institution.

In this case, the ‘competent authority’ to extend the scope of Aarogya Setu’s Data Access and Sharing Protocol was the Empowered Group on Technology and Data Management. However, the group was dissolved in September 2020, and the Protocol expired in May 2022. Therefore, the use of the app was anchored in a discontinued protocol and regulatory authority.[118]

Norton Rose Fulbright’s contact tracing global snapshot project demonstrates that countries with weaker legislation and enforcement mechanisms were less transparent when communicating information about their contact tracing apps. Türkiye and Russia, for example, did not clarify how long the data would be stored, whether a privacy risk assessment had been completed, or whether the data would be stored on a centralised or decentralised server.[119]

Another example demonstrating the importance of strong data protection mechanisms comes from the USA, where there are no federal privacy laws regulating companies’ data governance.[120] [121]

In 2020, we highlighted the risk of contact tracing apps being repurposed, that is, of the technology and the data collected being used for reasons other than health.[122]

The company that owns the privacy and security assistant app Jumbo investigated the contact tracing app of the state of North Dakota in the USA. It reported that user location data was being shared with a third party, location data platform Foursquare.

Foursquare’s business model is based on providing advertisers with tools and data to target audiences at specific locations.[123] This exemplifies the repurposing of the data collected through a contact tracing app for commercial purposes, highlighting the importance of strong laws and mechanisms to safeguard users’ data.

Another important investigation was carried out by the Civil Liberties Union for Europe in 10 EU countries.[124] According to the EU General Data Protection Regulation (GDPR), providers should carry out a data protection and equality impact assessment before deploying contact tracing apps, as they posed risks to people’s rights and freedoms.

Yet the Civil Liberties Union for Europe investigation demonstrates that although these countries launched contact tracing apps in 2020, none had yet conducted these assessments by October 2021.

This point is also supported by Algorithm Watch’s evaluation of contact tracing apps in 12 European countries. It found that contact tracing app policies varied significantly within the EU, and that apps were deployed ‘not in an evidence-based fashion and mostly based on contradictory, faulty, and incomparable methods, and results’.[125]

Another relevant example is Singapore. The Criminal Procedure Code (2010) in Singapore allowed the police to use the data collected by the contact tracing app TraceTogether for reasons other than health.[126] In February 2021, it was reported that police had used the app’s data in a murder investigation.[127]

Following this, the government amended the COVID-19 (Temporary Measures) Act (2020) to restrict the use of the data. But according to this Act, personal data collected through digital contact tracing can still be used by law enforcement in investigations of ‘serious offences’.[128]

As the examples above show, unsurprisingly, countries with more comprehensive data protection and privacy legislation applied data protection principles more effectively than countries with weak legislation.

But incidents of privacy breaches and repurposing data also took place in countries with relatively strong laws and regulatory mechanisms. Germany has comprehensive personal data protection regulations under the EU GDPR and the new Federal Data Protection Act (BDSG).[129]

The Civil Liberties Union for Europe report highlights that Germany is one of the few EU countries that built and rolled out its contact tracing apps in line with the principles of transparency, public debate and impact assessments.[130] But the data gathered and stored through the Luca app, which provided QR codes for checking in at restaurants, events and venues, was shared with the police and used in a murder investigation.[131]

The role of the private sector

Our research reveals that contact tracing apps with centralised data systems were repurposed and/or used to restrict individual freedoms and privacy. This finding is also supported by Algorithm Watch’s COVID-related automated decision-making database project.

As highlighted in Algorithm Watch’s final report, there have been fewer cases of dangerous uses of data-driven technology and AI in EU countries, which largely used the decentralised GAEN API with Bluetooth technology, than in Asia and Africa.[132]

Many privacy advocates supported GAEN technology, which stored data on a decentralised server, since its use would prevent government mass surveillance and oppression.

Nonetheless, as this initiative was led by Google and Apple and not by policymakers and public health experts, it generated questions about the legitimacy of having private corporations decide the properties and uses of this kind of sensitive digital infrastructure.[133]

As digital rights academic Michael Veale argues, a GAEN-based contact tracing system may be ‘great for individual privacy, but the kind of infrastructural power it enables should give us sleepless nights’.[134] The pandemic demonstrated that big tech companies like Apple and Google hold enormous power over computing infrastructure, and therefore over significant health interventions such as digital contact tracing apps.

Apple and Google partnered to influence the properties of contact tracing apps in a way that was not favourable to particular nation states (for example, France, which pursued a centralised system approach even though such apps could not access the platforms’ Bluetooth functions effectively).

This revealed the difficulty, even at state level, of engaging in advanced use of data without the cooperation of the corporations that control the software and hardware infrastructure.[135] While preventing government abuse is crucial, the growing power of technology companies, whose main interest is profit rather than public good, is equally concerning.

Some critics also – and rightly – challenge the common claim that contact tracing apps with GAEN API have been privacy preserving. The reason for the challenge is that it is very difficult to verify whether the data collected has been stored and processed as technology companies claim.[136] This indicates a wider problem: the lack of strong regulation to ensure clear and transparent insight into the workings of technology companies.

These concerns raise two important questions: how will governments rebalance power against dominant technology corporations; and how will they ensure that power is distributed to individuals and communities? As Knodel argues, governments need to move toward designing multistakeholder initiatives with increased ability ‘to respond and help check private sector motivations’.[137]

And as GOVLAB and Knight Foundation argue in their review of the use of data during the pandemic, more coordination between stakeholders would prevent fragmentation in management efforts and functions in future pandemics.[138]

In the light of the evidence identified above, and as we have already recommended, strong legislation and regulations should be enacted to impose strict purpose and time limitations on digital interventions in times of public crisis. Regulations and oversight mechanisms should be incorporated into emergency legal systems to curb state powers. Governments need to consider a long-term strategy that focuses on collaborating effectively with private technology companies.

Our recommendation when contact tracing apps emerged:

  • Governments should develop legislation, regulations and accountability mechanisms to impose strict purpose and time limitations.[139]

 

In 2023, the evidence on the governance, regulation and accountability of contact tracing apps demonstrates that:

  • Most countries in our sample rolled out contact tracing apps at pace, without strong legislation or public consultation. The different political cultures and pre-existing legislative frameworks of countries yielded varying governance mechanisms, which sometimes fell short of protecting civil rights and freedoms.
  • Some countries used existing emergency powers to sidestep democratic processes and regulatory mechanisms (for example, Türkiye, Russia and India). Even in countries with relatively strong regulations, privacy breaches and repurposing of data took place, most notably in Germany.
  • We have not come across any incidents of misuse of the decentralised contact tracing apps using the Apple/Google GAEN API. But private sector influence on public health technologies is a factor in the ability of governments to develop regulation and accountability mechanisms. The COVID-19 pandemic (and particularly the roll-out of contact tracing apps) showed that national governments are not always able to use their regulatory powers, due to their reliance on large corporations’ infrastructural power.

Lessons learned:

  • Define specific guidelines and laws when deploying new technologies in emergency situations.
  • Develop the public sector’s technical literacy and ability to create technical infrastructure. This does not mean that the private sector should be excluded from developing technologies related to public health. But it is crucial that the technical infrastructure and governance are effectively co-designed by government, civil society and private industry.

Digital vaccine passports

Emergence

From the beginning of the COVID-19 pandemic, establishing some form of ‘immunity passport’ based on evidence or assumption of natural immunity and antibodies after infection with COVID-19 was seen as a possible route out of restrictions.

Governments hoped that immunity passports would allow them to lift mobility restrictions and restore individual freedoms, at least for those who had acquired immunity to the virus.

However, our understanding of infection-induced immunity to the virus was still inadequate, due to a lack of evidence concerning the level and longevity of antibodies against COVID-19 after infection. In this context, these plans were slowed down to allow evidence to accumulate about the efficacy of natural immunity in protecting people.[140]

In the meantime, there was considerable investment in efforts to develop vaccines against COVID-19 to protect people through vaccine-induced immunity. On 7 October 2020, Estonia and the World Health Organization (WHO) announced a collaboration to develop a digitally enhanced international certificate of vaccination to help strengthen the effectiveness of the COVAX initiative, which provides COVID-19 vaccines to poorer countries.[141]

The WHO eventually decided to discontinue this project, because the impacts and effectiveness of digital vaccine passports could not be estimated. It also pointed to several scientific, technical and societal concerns with the idea of an international digital vaccine passport system, including the fact that it could prevent citizens of countries unable to secure a vaccine supply from studying, working or travelling abroad.[142]

In November 2020, Pfizer and BioNTech announced their vaccine’s efficacy against COVID-19.[143] In December 2020, the first patient in the UK received a COVID-19 vaccination.[144] In the same month, China approved its state-owned COVID-19 vaccine for general use.[145]

Many other vaccines were quickly rolled out, including Moderna, Oxford AstraZeneca and Sputnik V. Countries aimed to roll out vaccination programmes as rapidly as possible to bring down numbers of deaths and cases, and facilitate the easing of COVID-19 restrictions.[146]

This re-energised the idea of establishing national and regional digital vaccine passport systems – among governments, but also among universities, retailers and airlines that sought an alternative to lockdowns.[147]

Despite the lack of scientific evidence on their effectiveness, the majority of countries in our sample eventually introduced digital vaccine passports, with two main purposes: to create a sense of security and to increase vaccine uptake when ending lockdowns.[148]

Unsurprisingly, technology companies raced towards building digital vaccine passports to be used domestically and internationally.[149] The digital identity industry strongly advocated for the introduction of digital vaccine passports.[150] Their argument in support of this was that, if enacted successfully, digital vaccine passports could prove the feasibility of national, regional and international schemes based on proving one’s identity and health status digitally.[151]

Private companies went on to build vaccine passports with the potential to be used in various industries as well as by governments, for example, the International Air Transport Association’s Travel Pass app for international travel.[152]

Vaccine passports are not a new concept: paper vaccine passports have been around since the development of smallpox vaccines in the eighteenth century.[153] Although yellow fever is the only disease specified in the International Health Regulations (2005) for which countries may require proof of vaccination as a condition of entry, in the event of outbreaks the WHO recommends that countries ask for proof of vaccination.[154]

COVID-19 vaccine passports are the first digital health certificates to indicate a person’s vaccination against a particular disease. Because they are built on data-driven digital infrastructure, individuals’ health information can be easily collected, stored and shared, and it was this infrastructure that caused public controversy.

When digital vaccine passports emerged, arguments offered in support of them included that they could: allow countries to lift lockdown measures more safely; enable those at lower risk of infection and transmission to help to restart local economies; and allow people to re-engage in social contact with reduced risk and anxiety.

Using a digital rather than a paper-based approach would accommodate future changes in policy, for example vaccine passes expiring or being re-enabled after subsequent infections, based on individual circumstances, countrywide policies or emerging scientific evidence.
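
As a rough illustration of that flexibility (the field names, thresholds and policy parameters below are hypothetical, not drawn from any real scheme): a digital pass can store only immutable facts, such as doses and dates, while validity is computed at the moment of scanning against whatever rules currently apply, so a change of policy requires no reissuing of passes.

```python
# Hypothetical sketch: the pass records facts; validity is evaluated at scan
# time against the current policy, so rule changes need no reissued passes.
# Requires Python 3.10+ for the "date | None" annotation.
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class PassRecord:  # illustrative fields, not any real scheme's schema
    doses: int
    last_dose: date
    last_positive_test: date | None = None


def is_valid(record: PassRecord, policy: dict, today: date) -> bool:
    if record.doses < policy["min_doses"]:
        return False
    if today - record.last_dose > timedelta(days=policy["expiry_days"]):
        return False
    # A recent infection can temporarily suspend a pass under some policies.
    if (record.last_positive_test is not None
            and today - record.last_positive_test
            < timedelta(days=policy["suspension_days"])):
        return False
    return True


record = PassRecord(doses=2, last_dose=date(2021, 7, 1))
old_rules = {"min_doses": 2, "expiry_days": 270, "suspension_days": 10}
new_rules = {"min_doses": 3, "expiry_days": 180, "suspension_days": 10}
print(is_valid(record, old_rules, date(2022, 1, 1)))  # True under the old rules
print(is_valid(record, new_rules, date(2022, 1, 1)))  # False once rules tighten
```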

Arguments against digital vaccine passports highlighted their potential risks and challenges. These included creating a two-tier society between unvaccinated and vaccinated people, amplifying digital exclusion, and risking privacy and personal freedoms. Experts also highlighted that vaccine passports attempt to manage risks and permit or restrict liberties at an individual level, rather than supporting collective action and contextual measures.

They categorise an individual as lower risk based on their vaccine or test status rather than taking into account a more contextual risk of local infection in a given area. They could also reduce the likelihood of individuals observing social distancing or mask wearing to protect themselves and others.[155]

Digital vaccine passport systems carry specific risks because they gather and store medical and other forms of sensitive personal information that can be compromised through hacking, leaking or selling of data to third parties. They can also be linked to other digital systems that store personal data, for example, the digital identity system Aadhaar in India and the health system Conecte SUS in Brazil.

Experts recommended that strong privacy-preserving technical designs and regulations were needed to prevent such problems, but these were challenging to establish at pace.[156]

These risks and challenges raised questions around public legitimacy and fuelled public resistance to digital vaccine passports in some countries, making it difficult for countries to gain public trust – particularly given the sharp rise in public discontent with governments and political systems due to the pressures of the pandemic.[157]

The Ada Lovelace Institute closely followed the debate regarding digital vaccine passports as they emerged. We conducted evidence reviews, convened workshops with scientists and experts, and published evidence-based research to support decision-making at pace.

Based on the evidence we gathered, we argued that although governments’ attempts to find digital solutions were understandable, rolling out these technologies without high standards of governance could lead to wider societal harms.

The expert deliberation we convened in 2021 suggested that governments should pause their digital vaccine passport plans until there was clear evidence that vaccines were effective in preventing transmission, and that they would be durable and effective against new variants of COVID-19.[158]

We also concluded that it was important to address public concerns and build public legitimacy through transparent adoption policies, secure technical designs and effective communication strategies.

Finally, we highlighted the risk of poorly governed vaccine passports being incorporated into broader systems of identification, and the wider implications of this for the UK and other countries (a risk that has been realised in various countries).[159]

Before explaining whether the risks, aspirations and challenges outlined above have materialised, we need to identify the various digital vaccine passport restrictions and understand how these new technologies have been implemented across the world. In the next section, we discuss digital vaccine passport systems, and the restrictions they have enabled based on a person’s vaccination status or test results.

Types of digital vaccine passport systems and restrictions

In this section, we identify the types of digital vaccine passport systems and restrictions in 34 countries. All countries in our sample introduced digital vaccine passports between January and December 2021 – with varying adoption policies.

Digital vaccine passports were in use in two important public health contexts to either limit or enable individuals’ ability to access certain spaces and activities during the COVID-19 pandemic:

  1. Domestic vaccine passport schemes: providing a valid vaccine passport to prove immunity status when participating in public activities (for example, going to a restaurant).
  2. International vaccine passport schemes: providing a valid vaccine passport to show immunity status when travelling from one country to another.

The majority of the countries in our sample changed their vaccine passport schemes multiple times throughout the pandemic.[160] For example, both Türkiye and France introduced digital vaccine passports in summer 2021, internationally for inbound travellers and domestically for residents to access particular spaces (for example, restaurants, museums and concert halls).

By spring 2022, both countries had lifted vaccine passport mandates domestically but still required inbound travellers to provide immunity proof to avoid self-isolation and testing.

By August 2022, digital vaccine passports were no longer in use or enforced in either country (although the infrastructure is still in place in both countries and can be reused at any time). At the time, China and New Zealand were still enforcing digital vaccine passports – to varying degrees – to maintain their relatively low number of deaths and cases by restricting residents’ eligibility for domestic activities and inbound travellers’ eligibility to visit.

In contrast to China and New Zealand’s stringent vaccine passport schemes, many countries, especially in Europe, implemented domestic vaccine passport schemes to ease COVID-19 measures and transition out of lockdown, despite increasing numbers of cases and hospitalisations (for example, in summer 2022).[161]

We identified eight different vaccine passport systems that allowed or blocked freedoms for residents and inbound travellers in the 34 countries in our sample.

We have coded them according to the severity of their implementation; one possible representation of this coding is sketched after the list below.

Digital vaccine passport restrictions

  1. Available but not compulsory. In use but not enforced for inbound travellers and domestic use.
  2. Mandatory for inbound travellers. Not mandatory for domestic use.
  3. Not mandatory for inbound travellers. Domestic use decided by regional governments.
  4. Mandatory for inbound travellers unless they are nationals and/or residents. Domestic use decided by regional governments.
  5. Mandatory for inbound travellers. Domestic use decided by regional governments.
  6. Mandatory for inbound travellers unless they are nationals and/or residents. Domestic use decided at a federal level.
  7. Mandatory self-isolation for non-national inbound travellers, regardless of possession of vaccine passports.
  8. Mandatory self-isolation for non-national inbound travellers, regardless of vaccine passport. Federal policy for domestic use.
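
For cross-country comparison, an ordered coding like this lends itself to a simple categorical representation. The sketch below shows one possible encoding; the labels are our abbreviations, and the example country assignments are hypothetical, not taken from the report’s dataset.

```python
# One possible representation of the eight-category severity coding above.
# Labels are abbreviated; the sample assignments are hypothetical.
from enum import IntEnum


class Restriction(IntEnum):
    """Ordered from least (1) to most (8) severe implementation."""
    AVAILABLE_NOT_COMPULSORY = 1
    MANDATORY_INBOUND_ONLY = 2
    REGIONAL_DOMESTIC_ONLY = 3
    INBOUND_EXC_RESIDENTS_REGIONAL_DOMESTIC = 4
    MANDATORY_INBOUND_REGIONAL_DOMESTIC = 5
    INBOUND_EXC_RESIDENTS_FEDERAL_DOMESTIC = 6
    ISOLATION_REGARDLESS_OF_PASS = 7
    ISOLATION_REGARDLESS_FEDERAL_DOMESTIC = 8


codings = {  # hypothetical assignments, for illustration only
    "Country A": Restriction.MANDATORY_INBOUND_ONLY,
    "Country B": Restriction.ISOLATION_REGARDLESS_FEDERAL_DOMESTIC,
}
# Because the coding is ordered, severity comparisons are straightforward:
strictest = max(codings, key=codings.get)
```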

There is currently no universal vaccine passport scheme that can determine how and under what circumstances digital vaccine passports can be used internationally as well as for domestic purposes.[162]

In the absence of internationally accepted criteria, countries determined when and how to use digital vaccine passports themselves, leading to a wide range of adoption policies.

[Figure: world map showing the quarter in which countries in our sample introduced digital vaccine passports]

  • Asian and European countries were among the first to introduce digital vaccine passports in early 2021
  • North and South America from mid-2021
  • Oceania from late 2021.

The different approaches to using digital vaccine passports in different countries stem from their different technical capabilities, politics, public tolerance, finance and, most importantly, approaches to pandemic management.

Countries with zero-COVID policies, for example China and New Zealand, implemented stringent vaccine passport policies along with closing borders and imposing strict lockdowns on residents to suppress transmission.[163]

Many countries relied on a combination of various measures at different phases of the pandemic. As of 2023, all countries in our sample have either no or only moderate measures in place, and appear to have adopted a ‘living with COVID’ policy.

Despite the varying approaches, in all the countries in our sample the technological and legislative infrastructure of vaccine passports is still in place. This is important not only because vaccine passports can still be reused, but because they can be transformed into other forms of digital systems in the future.

Examples of how varying pandemic management approaches and political contexts affected digital vaccine passport systems across the world include:

  • Brazil: Former Brazilian president Bolsonaro was against vaccination in general.[164] This meant that most of the pressure for vaccination campaigns came from the federal regions. The judiciary also played a strong role in pressuring the government to take measures against COVID-19, including vaccination. A Supreme Court justice ruled that inbound travellers had to show digital or paper-based proof of vaccination against COVID-19.[165]
  • USA: Digital vaccine passports, particularly for domestic use, were a politically divisive issue in the USA. Some states banned vaccine mandates and the use of digital vaccine passports within their states. Citizens in these states could acquire paper-based vaccine passports to prove their vaccination status for international travel. Several studies demonstrated that political affiliation, perceived effectiveness of vaccines and education level shaped individuals’ attitudes towards digital vaccine passports. Unsurprisingly, fear of surveillance was prominent in determining whether people trusted the government and corporations with their personal data.[166] The federal US administration did not initiate a national domestic vaccine passport but was involved in efforts to establish standards for vaccine passports for international travel.
  • Italy: Italy was the first country in Europe to be hit by the COVID-19 pandemic.[167] The government was confronted with high numbers of hospitalisations and deaths, and faced criticism for being slow to act. It responded by taking stricter measures than many of its European counterparts, and so Italy had one of the strictest vaccine passport schemes in Europe. It assigned each region a coloured zone depending on the severity of transmission rates and hospitalisation numbers in that area. It operated a two-tiered green pass system: the ‘super green pass’ was valid proof of vaccination or recovery, while the ‘green pass’ was proof of a negative COVID test. Different venues and activities required one or both of the passes.[168]
  • The EU: Member states in the EU experienced the pandemic differently – some countries had higher numbers of deaths, cases and hospitalisations than others – and vaccine uptake across the member states differed significantly.[169] While the EU Digital COVID Certificate helped the EU to reintroduce freedom of movement and revive the economy within the zone, member states had the liberty to implement vaccine passports domestically as they saw fit. This led to considerable differences in domestic vaccine passport schemes across the EU.[170] For example, Romania, one of the least vaccinated countries in the EU, made digital vaccine passports mandatory for inbound national travellers for only a short period, to address the surge in numbers of cases and deaths as lockdowns ended. Finland, which had a high vaccination rate, required a digital vaccine passport from all inbound travellers, including nationals, for nine months before it stopped enforcing digital vaccine passports completely.

Effectiveness

Digital vaccine passports essentially indicate an individual’s transmission risk to other people.

A digital vaccine passport scheme relies on the assumption that an individual is a lower risk to others if they have been vaccinated (or if they have gained natural immunity after being infected with and recovering from the disease).

In early 2021, we argued that there was no clear evidence about whether being vaccinated reduced an individual’s risk of transmitting the disease. We suggested that governments should pause deploying vaccine passports until the evidence was clearer.[171]

We also called on governments to build evidence that considers the benefits and risks of digital vaccine passports – in particular, whether they would increase risky behaviours (for example, not observing social distance) by creating a false sense of security.

Despite this lack of evidence, many governments across the world moved forward to introduce digital vaccine passports in 2021.[172]

Policymakers saw digital vaccine passports as valuable public health tools, once the initial scientific trials of vaccines suggested that they would reduce the likelihood of severe symptoms, and hence hospitalisations and deaths.

This was critical for policymaking in many countries whose healthcare systems were under immense pressure.

At the same time, vaccine scepticism was on the rise in many countries. In this context, the idea developed that digital vaccine passport schemes would give people an incentive to get vaccinated. This represented a considerable shift in their purpose: from a digital health intervention aimed at reducing transmission to a behaviour-control tool aimed at increasing vaccine uptake.

Many countries considered mandatory vaccination for domestic activities as a way to increase uptake. For example, in January 2022, announcing domestic vaccine mandates, French President Macron stated ‘the unvaccinated, I really want to hassle them. And so, we will continue to do it, until the end.’[173]

Mandatory digital vaccine passport schemes raise the question of ‘whether that is ethically acceptable or instead may be an unacceptable form of coercion, detrimental to the right to free self-determination, which is guaranteed for any medical treatment, thus coming to resemble a sort of roundabout coercion’.[174]

In short, it was hoped that digital vaccine passports would positively impact public health in two main ways: (1) reducing transmission, hospitalisations and deaths, and (2) increasing vaccine uptake.

In this section, we will look at the evidence on the effectiveness of digital vaccine passports in both of these senses. We will then briefly explain several evidence gaps that prevent us from building a full understanding of digital vaccine passports’ overall impact on public health.

Impact of digital vaccine passports on reducing transmission, hospitalisations and deaths

In 2023, the scientific evidence on the efficacy of vaccines in reducing transmission remains inconclusive. Although there is some evidence that being vaccinated makes it less likely that one will transmit the virus to others, experts largely agree that ‘a vaccinated person’s risk of transmitting the virus is not considerably lower than an unvaccinated person’.[175] [176] Yet there is strong evidence that vaccines are effective in protecting individuals from developing severe symptoms (although experts say that their efficacy wanes over several months).[177]

Therefore, even if mandatory domestic vaccine passport schemes did not help to decrease rates of transmission, they might have reduced the pressure on public healthcare because fewer people needed medical care. This would only be the case if digital vaccine passports were indeed effective in increasing vaccine uptake (see next section below).

Vaccines have been found to be effective against new variants, but the level of effectiveness is unclear.[178] According to the WHO, there are five predominant variants of COVID-19 and more than 200 subvariants. The WHO also reports that it is becoming more difficult to monitor new variants, since many countries have stopped testing and surveillance.

The infrastructure and legislation of digital vaccine passports are still in place, meaning that they can be reused at any time.

But limited monitoring of and research on (sub)variants raises concerns about vaccines’ durability and their wider applicability. Governments need to invest in building evidence on vaccines’ efficacy against rapidly evolving variants if they decide to reuse digital vaccine passports.

Impact of digital vaccine passports on vaccine uptake

Digital vaccine passport systems had a mixed impact on vaccine uptake at an international level. Several countries reported a significant increase in vaccination after the introduction of digital vaccine passports. In France, for example, after digital vaccine passports were introduced, ‘the overall uptake of first doses… increased by around 15% in the last month following a lull in vaccinations.’[179]

Another study suggests that the vaccine passport requirement for domestic travelling and accessing different social settings led to higher vaccination rates in the majority of the EU countries.[180] However, levels of COVID-19 vaccine acceptance were low particularly in West Asia, North Africa, Russia, Africa and Eastern Europe despite the use of digital vaccine passports.[181]

For example, one out of four Russians continued to refuse vaccination despite the government’s plan to introduce mandatory digital vaccine passports for accessing certain spaces (for example, workplaces).[182] Similarly, in Nigeria, Bulgaria, Russia and Romania, anti-vaxxers created black markets for fake passports,[183] demonstrating the strength of some people’s resistance to getting vaccinated or sharing their data. These examples indicate the importance of political and cultural contexts, and caution against broad international conclusions.

Important evidence gaps

As well as vaccination, the scientific evidence shows that a wide range of measures can reduce the risk of COVID-19 transmission. How have vaccine passports affected individuals’ motivation to follow other COVID-19 protection measures? This question is fundamental: one of the major concerns about digital vaccine passports was that they might give people a false sense of security, leading them to stop following other important COVID-19 health measures such as wearing a face mask.

Some experts argue that digital vaccine passport schemes in the EU led to more infections because they led to increased social contact.[184] But studies that explore this were either conducted in the early phase of the pandemic or remain limited in their scope. This means that we cannot fully evaluate the impact of digital vaccine passports on public health behaviours, so we cannot weigh their benefits against the risks in a comprehensive manner.

To fill this evidence gap, we need studies that examine (and compare) unvaccinated and vaccinated people’s attitudes to other COVID-19 protection measures over time.

A systematic review of community engagement to support national and regional COVID-19 vaccination campaigns demonstrates that working with members (or representatives) of communities to co-design vaccination strategies, build trust in authorities and address misinformation is an effective way to increase vaccine uptake.

The review points to the success of several COVID-19 vaccination rollout programmes, including the United Nations High Commissioner for Refugees’ efforts to reach migrant workers and refugees, a female-led vaccination campaign for women in Sindh province in Pakistan, and work with community leaders to reach the indigenous population in Malaysia.[185]

The standard and quality of countries’ healthcare systems also played a huge role in how successfully they tackled vaccine hesitancy. For example, Morocco’s pre-existing national immunisation programme, supported by a successful COVID-19 communications campaign, led to higher vaccination rates in Morocco than in other African countries.[186]

This raises another important question, which cannot be comprehensively answered due to limited evidence: were digital vaccine passport policies deployed at the expense of other (non-digital) interventions, such as targeted community-based vaccination programmes?

Governments’ ambition to increase vaccine uptake by using digital vaccine passport schemes (for example, by not allowing unvaccinated people to enter venues) raises the question of whether they expected digital vaccine passports to ‘fix’ the problem of vaccine hesitancy instead of working with communities and effectively communicating scientific evidence.

To comprehensively address this question, governments would need to provide detailed documentation of vaccination rollout programmes and activities and support expert evaluations of the risks and benefits of digital vaccine passport systems, compared with non-digital interventions like vaccination campaigns targeted at communities with high levels of vaccine hesitancy.

Our recommendations when digital vaccine passports emerged:

  • Build an in-depth understanding of the level of protection offered by individual vaccines in terms of duration, generalisability, efficacy regarding mutations and protection against transmission.
  • Build evidence of the benefits and risks of digital vaccine passports. For example, consider whether they reduce transmission but also increase risky behaviours (for example, not observing social distancing), with a net harmful effect.[187]

 

In 2023, the evidence on the effectiveness of digital vaccine passports reveals:

  • Countries initially aimed to use digital vaccine passports to score an individual’s transmission risk based on their vaccination status, test results or proof of recovery. They established digital vaccine passport schemes without clear evidence of vaccines’ effectiveness in reducing transmission risk. Governments hoped that even if vaccines did not reduce transmission risk, digital vaccine passports would increase vaccine uptake, and hence decrease individuals’ risk of developing severe symptoms.
  • Vaccines were effective at reducing the likelihood of developing severe symptoms, and therefore of hospitalisations and deaths. This meant that they decreased the pressure on health systems because fewer people required medical care.
  • However, there is no clear evidence that vaccinated people are less likely to transmit the virus than unvaccinated people, which means that vaccines have not reduced transmissions as hoped by governments and policymakers.
  • In some countries (for example, France) digital vaccine passport schemes increased vaccine uptake, but in other countries (for example, Russia and Romania) people resisted vaccinations despite digital vaccine passport restrictions. Black markets for fake digital vaccine passports were created in some places (for example, Italy, Nigeria and Romania). This demonstrates that we cannot reach broad international conclusions about digital vaccine passports’ impact on vaccine uptake.
  • Significant gaps in the evidence prevent us from weighing the benefits of digital vaccine passport systems against the harms. These include the impact of digital vaccine passports on other COVID-19 protection measures (for example, mask wearing) and whether governments relied on digital vaccine passport systems to increase vaccine uptake instead of establishing non-digital, community-targeted interventions to address vaccine hesitancy.

 

Lessons learned:

To build evidence on the effectiveness of digital vaccine passports as part of the wider pandemic response strategy:

  • Support research and learning to understand the impact of digital vaccine passports on other COVID-19 protection measures (for example, wearing masks and observing social distancing).
  • Support research and learning to understand the impact of digital vaccine passports on non-digital interventions (for example, effective public communications to address vaccine hesitancy).
  • Use this impact evaluation to weigh up the risks and harms of digital vaccine passports and to help set standards and strategies for the future use of technology in public crises.

To ensure the effective use of technologies in future pandemics:

  • Invest in research and evaluation from the outset, and implement a clear evaluation framework to build evidence during deployment that supports understanding of the role that digital technologies play in broader pandemic health strategies.
  • Define criteria for effectiveness using a societal approach that goes beyond technical efficacy and takes account of people’s experiences.
  • Establish how to measure and monitor effectiveness by closely working with public health experts and communities, and set targets accordingly.
  • Carry out robust impact assessments and evaluation of technologies, both when first deployed and over time.

Public legitimacy

Public legitimacy was key to ensuring that digital vaccine passports were accepted and effective as health interventions. In the first two years of the pandemic, we conducted survey and public deliberation research to investigate public attitudes to digital vaccine passports in the UK.

We found that digital vaccine passports needed to be supported by strong governance and accountability mechanisms to build public trust. Our work also highlighted public concern with regard to digital vaccine passport schemes’ potential negative impacts on marginalised and disadvantaged communities. We called on governments to build public trust and create social consensus on whether and how to use digital vaccine passports.[188]

Since then, wider evidence has emerged that complements our findings. For example, an IPSOS Mori survey from March 2021 found that minority ethnic communities in the UK were more concerned than white respondents about vaccine passports being used for surveillance.[189]

This reflects a general trend in UK society: minoritised and disadvantaged people trust public institutions less with personal data than the white majority do.[190] Unsurprisingly, there is also a link between people’s attitudes to digital vaccine passports and vaccine hesitancy.

Those who are less likely to take up the COVID-19 vaccine feel their sense of personal autonomy is threatened by mandatory vaccine passport schemes.[191]

It is difficult to draw conclusions about public acceptance of digital vaccine passports at an international level, since public legitimacy depends on existing legal and constitutional frameworks as well as moral, cultural and political factors in a society.

But we can say that more than 50% of countries in our sample experienced protests against digital vaccine passports and the restrictive measures that they enabled (for example, not being eligible to enter the workplace or travel without proof of vaccination), showing widespread public resistance across the world.

Countries that saw such protests vary in terms of political cultures and attitudes to technology, including Italy, Russia, France, Nigeria and South Africa. In most cases, anti-digital vaccine passport protests started shortly after national or regional governments had announced mandatory schemes, demonstrating public resistance to using data-driven technology in everyday contexts.

Several studies demonstrated that people were less favourable towards domestic uses of digital vaccine passports than towards their use for international travel.

This was particularly the case for schemes that required people to use a digital vaccine passport to access work, education, and religious settings and activities.[192] Lack of trust in government and institutions, in vaccine efficacy and in digital vaccine passports’ effectiveness all contributed to public resistance to digital vaccine passport systems.[193]

Our recommendations when digital vaccine passports emerged:

  • Build public trust through strong regulation, effective public communication and consultation.[194]
  • Ensure social consensus on whether and how to use digital vaccine passports.

 

In 2023, the evidence on the public legitimacy of digital vaccine passports reveals that:

  • More than half of the countries in our sample experienced protests against digital vaccine passports and the restrictive measures that they enabled. This demonstrates the lack of public acceptance of, and social consensus around, digital vaccine passport systems.
  • Lack of trust in government and institutions, in vaccine efficacy and in digital vaccine passports’ effectiveness all contributed to public resistance to digital vaccine passports.[195]

 

Lesson learned:

  • Ensure that people’s rights and freedoms are safeguarded with strong regulations, oversight and redressal mechanisms. Effectively communicate the purpose and legislative and regulatory basis of health technologies to build public trust and social consensus.

Inequalities

Digital vaccine passports posed significant inequality risks, including discrimination based on immunity status, excess policing of citizens, and amplification of digital inequalities and other forms of societal inequalities.[196]

In this context, one of the major risks highlighted by the Ada Lovelace Institute was that mandatory vaccine passports could lead to discrimination against unvaccinated people. Mandatory vaccination policies were frequently adopted by (national or regional) governments or workplaces across the countries in our sample.[197]

For example, in November 2021, the Austrian government announced mobility restrictions for unvaccinated people.[198] The measure was ended in January 2022 due to dropping case numbers and decreasing pressure on hospitals. However, the government announced a vaccine mandate policy with penalties of up to €3,000 for anyone who refused to be vaccinated. The controversial law was never enforced due to civil unrest and international criticism.[199]

In Italy, people had to show a ‘green pass’ – proof of vaccination, proof of recovery or a negative polymerase chain reaction (PCR) test – to access workplaces between October and December 2021.

The policy officially ended on 1 May 2022, making it illegal for employers to ask for vaccine passports.[200] In 2021, the Moscow Department of Health declared that only vaccinated people could receive medical care.[201] The Mayor of Moscow also instituted a mandatory vaccine passport system for gaining entry to restaurants, bars and clubs after 11pm in the city.

In relation to digital exclusion, we recommended that if governments were to pursue digital vaccine passport plans, they should create non-digital (paper) alternatives for those with no or limited digital access and skills. We also recommended that plans should include different forms of immunity in vaccine passports – such as antigen test results – to prevent discrimination against unvaccinated people.[202]

In some countries, for example Türkiye, although physical vaccine passports were available, people had to download their vaccination proof as an electronic PDF, which excluded those unable to use the internet.[203]

Some countries adopted good practices and policies to mitigate the inequality risks. In India, for example, the Supreme Court decided that vaccination could not be made compulsory for domestic activities and directed the federal government to provide publicly available information on any adverse effects of vaccination.[204]

The UK Government introduced a non-digital NHS COVID Pass letter.[205] Those who did not have access to a smartphone or internet could request this physical letter via telephone.

The European Union’s Digital COVID Certificate could be obtained after taking a biochemical test demonstrating a form of immunity or lack of infection, and hence did not discriminate against those who could not be, or refused to be, vaccinated. This made the Digital COVID Certificate available to a wider population, as 25% of the EU population remained unvaccinated as of August 2022.[206]

Global inequalities

Tackling pandemics requires global cooperation. Effective collaboration is needed to fight diseases at regional and global levels.[207] Digital vaccine passports, which were used for border management in the name of public health, fuelled vaccine nationalism and, as a result, amplified global inequalities.[208]

Digital vaccine passports did not emerge in a vacuum; state-centric perspectives that prioritise the ‘nation’s health’ by restricting or controlling certain communities and nations have existed for decades.[209] Securitising trends using the unprecedented compilation and analysis of personal data intensified following the 9/11 terrorist attack in New York.[210]

Countries compiled pandemic-related data about other countries to score risk and produce entry schemes for inbound travellers. This led to the emergence of an international digital vaccine passport scheme where individuals were linked to a verifiable test or vaccine.[211]

Low-income countries found it difficult to meet rigid standards for compliance due to low access to and uptake of vaccines.[212]

There is a positive correlation between a country’s GDP and the share of vaccinated individuals in the population.[213]

According to Our World in Data, when digital vaccine passports were introduced, the share of fully vaccinated people was 17% in Jamaica, 18% in Tunisia and 11% in Egypt.[214] At the other end of the scale, 56% of the population was fully vaccinated in Singapore, 32% in Italy and 37% in Germany.[215]
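
To make the underlying calculation concrete: claims like this typically rest on a Pearson correlation over paired country-level figures. In the sketch below, the vaccination shares are those quoted above, but the GDP-per-capita values are rough hypothetical stand-ins, not data from this report.

```python
# Illustrative only: the kind of calculation behind a GDP-vaccination claim.
# Vaccination shares are those quoted in the text; GDP-per-capita figures
# are rough hypothetical stand-ins, not data from this report.
from statistics import correlation  # available from Python 3.10

gdp_per_capita = [5_000, 3_800, 3_700, 73_000, 35_700, 51_000]  # hypothetical US$
share_fully_vaccinated = [17, 18, 11, 56, 32, 37]  # Jamaica, Tunisia, Egypt,
                                                   # Singapore, Italy, Germany

print(f"Pearson r = {correlation(gdp_per_capita, share_fully_vaccinated):.2f}")
# A value close to +1 indicates the positive association described above.
```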

International digital vaccine passport schemes also resulted in new global tensions. The COVAX initiative, led by the WHO, aimed to ensure equitable access to COVID-19 treatments and vaccines through global collaboration.[216]

COVISHIELD, a COVID-19 vaccine manufactured in India, was distributed largely to African countries through the COVAX initiative. Nonetheless, the EU, which donated €500 million to support the initiative, did not authorise COVISHIELD as part of the EU Digital COVID Certificate system.[217] This meant that the digital vaccine passports of people who had received COVISHIELD in Africa were not recognised as valid in the EU, restricting their ability to travel to EU countries.

As of December 2022, Africa still had the slowest vaccination rate of any continent, with just 33% of the population receiving at least one dose of a vaccine.[218]

In this context, many low- and middle-income countries sought vaccines approved by the European Medicines Agency (EMA). This was challenging due to a lack of financial means and the limited number of vaccine manufacturing companies.

The EU Digital COVID Certificate system eventually expanded to just 49 non-EU countries, including Monaco, Türkiye, the UK and Taiwan (to give a few examples from our sample).[219] These countries’ national vaccination programmes offered vaccines authorised for use by the EMA in the EU.

Our recommendations when digital vaccine passports emerged:

  • Carefully consider the groups that might face discrimination if mandatory domestic and international vaccine passport policies are adopted (for example, unvaccinated people).
  • Make sure policies and interventions are in place to mitigate the amplification of societal and global inequalities – for example, provide paper-based vaccine certificates for people who are not able or not willing to use digital vaccine passports.[220]

 

In 2023, the evidence on the impact of digital vaccine passports on inequalities demonstrates that:

  • The majority of countries in our sample adopted mandatory domestic and international vaccine passport schemes at different stages of the pandemic, which restricted the freedoms of individuals.
  • Some countries in our sample (for example, EU member states and the UK) offered physical alternatives to digital vaccine passports and accepted a biochemical test demonstrating a form of immunity or lack of infection as part of their schemes. These measures helped to mitigate the risk of discrimination against unvaccinated individuals and individuals who lack adequate digital access and skills.
  • Countries compiled pandemic-related data about other countries to score risk and produce entry schemes for inbound travellers. This led to the emergence of an international digital vaccine passport scheme where individuals were linked to a verifiable test or vaccine. Low-income countries found it difficult to meet rigid standards of compliance due to low access to and uptake of vaccines.

 

Lessons learned:

  • Address the needs of vulnerable groups and offer non-digital solutions where necessary to prevent discrimination and amplification of inequalities.
  • Consider the implications of national policies and practices relating to technologies at a global level. Cooperate with national, regional and international actors to make sure technologies do not reinforce existing global inequalities.

Governance, regulation and accountability

Like contact tracing apps, digital vaccine passports had implications for data privacy and human rights, provoking reasonable concerns about proportionality, legality and ethics.

Data protection regimes are based largely on principles that aim to protect rights and freedoms. Included within these is a set of principles and ‘best practices’ that guide data collection in disaster conditions (a minimal sketch of how several of these might be operationalised follows the list below). These include that:

  • measures are transparent and accountable
  • the limitations of rights are proportional to the harms they are intended to prevent or limit
  • data collection is minimised and time constrained
  • data is retained for research or public use purposes and unused personal data is destroyed
  • data is anonymised in such a way that individuals cannot be reidentified
  • third party sharing both within and outside of government is prevented.[221]
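
The sketch below illustrates how minimisation, time-constrained retention and a bar on third-party sharing could be enforced in a data pipeline. The field names, the 90-day retention window and the single stated purpose are all assumptions for illustration, not requirements from any specific regime.

```python
# Minimal sketch of data minimisation, time-constrained retention and a bar
# on third-party sharing. Field names and the 90-day window are assumptions.
from datetime import datetime, timedelta

ALLOWED_FIELDS = {"vaccination_status", "issued_at"}  # data minimisation
RETENTION = timedelta(days=90)                        # time-constrained storage


def minimise(record: dict) -> dict:
    """Keep only the fields strictly needed for the stated purpose."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}


def purge_expired(store: list, now: datetime) -> list:
    """Destroy personal data once the retention window has elapsed."""
    return [r for r in store if now - r["issued_at"] <= RETENTION]


def share_with_third_party(record: dict) -> None:
    """Sharing with third parties, inside or outside government, is blocked."""
    raise PermissionError("third-party sharing prevented by policy")
```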

In the Checkpoints for vaccine passports report, we made a set of legislative, regulatory and technical recommendations in line with the principles outlined above.

We highlighted the importance of oversight mechanisms to ensure technical efficacy and security, as well as the enforcement of relevant regulations.[222] It is beyond the scope of this report to analyse country-specific regulations and how they were shaped by differences in legal systems and ethical and societal values. But there are several cross-cutting issues and reflections that are worth drawing attention to.

As far as we know, there were fewer incidents of repurposing data and privacy breaches in the case of digital vaccine passports than in relation to contact tracing apps. Yet in some countries, critics warned that data protection principles were not always followed despite relevant regulations being in place.[223] For example, central data systems in Brazil and Jamaica had security flaws that resulted in people’s health records being hacked.[224]

The effectiveness of digital vaccine passports was critical when deciding whether they were proportionate to their intended purpose.[225] When they emerged, some bioethicists argued that digital vaccine passport policies were a justified restriction on civil liberties, since vaccinated people were unlikely to spread the disease and hence posed no risk to others’ right to life.[226]

However, as explained in the previous sections, the evidence does not confirm vaccines’ effectiveness at reducing transmission. And it is noteworthy that some places, for example Vietnam, successfully managed the disease without a focus on technology, thanks to their strong pre-existing healthcare systems.[227]

Our evidence also reveals that although some countries established specific regulations for digital vaccine passports (for example, the UK and Canada), this was not the case for most of the countries in our sample.

In many countries, digital vaccine passports were regulated through existing public laws, protocols and general data protection regulations.

This created concerns in those countries without data protection frameworks, for example, South Africa.[228]

In our sample of 34 countries, the EU Digital COVID Certificate regulation is the most comprehensive. It clearly states when the vaccine passport scheme will end (June 2023).[229] It also provides detailed information regarding security safeguards and time limitations.

But it is important to note that the EU does not determine member states’ national policies on vaccine passport use, which means that countries can choose to keep the infrastructure and reuse digital vaccine passports domestically.

Our recommendations when digital vaccine passports emerged:

  • Use scientific evidence to justify the necessity and proportionality of digital vaccine passport systems.
  • Establish regulations with clear, specific and delimited purposes, and with clear sunset mechanisms.
  • Follow best-practice design principles to ensure data minimisation, privacy and safety.
  • Ensure that strong regulations and regulatory bodies and redressal mechanisms are in place to safeguard individual freedoms and privacy.

 

In 2023, the evidence on governance, regulations and accountability of digital vaccine passports demonstrates that:

  • Only a handful of countries (for example, the UK and the EU) enacted specific regulations before rolling out digital vaccine passports.
  • In many countries, digital vaccine passports were regulated using existing public laws, protocols and general data protection regulations. This created concerns in countries without data protection frameworks, for example, South Africa.
  • There were fewer incidents of repurposing data and privacy breaches in the case of digital vaccine passports than there were in connection with contact tracing apps. But the lack of strong regulation or oversight mechanisms, and poor design, still resulted in data leakages, privacy breaches and repurposing of the technology in some countries (for example, the hacking of digital vaccine passport data in Brazil).

 

Lessons learned:

  • Justify the necessity and proportionality of technologies with sufficient relevant evidence in public health emergencies.
  • If technologies are found to be necessary and proportional and therefore justified, create specific guidelines and regulations. These guidelines and regulations should ensure that mechanisms for enforcement are in place as well as methods of legal redress.

Conclusions

Contact tracing apps and digital vaccine passports have been two of the most widely deployed technologies in the COVID-19 pandemic response across the world.

They raised hopes through their potential to assist countries in their fight against the COVID-19 virus. At the same time, they provoked concerns about privacy, surveillance, equity and social control, because of the sensitive social and public health surveillance data they use – or are perceived to use.

In the first two years of the pandemic, the Ada Lovelace Institute extensively investigated the societal, legislative and regulatory challenges and risks of contact tracing apps and digital vaccine passports. We published nine reports containing a wide range of recommendations for governments and policymakers about what they should do to mitigate these risks and challenges when using these two technologies.

This report builds on this earlier work. It synthesises the evidence on contact tracing apps and digital vaccine passports from a cross-section of 34 countries. The findings should guide governments, policymakers and international organisations when using data-driven technologies in the context of public emergencies, health and surveillance.

They should also support civil society organisations and those advocating for technologies that support fundamental rights and protections, public health and public benefit.

We also identify important gaps in the evidence base. COVID-19 was the first global health crisis of ‘the algorithmic age’, and evaluation and monitoring efforts fell short in understanding the effectiveness and impacts of the technologies holistically.

The evidence gaps identified in this report indicate the need to continue research and evaluation efforts, to retrospectively investigate the impact of COVID-19 technologies so that we can decide on their role in our societies, now and in the future. The gaps should also guide evaluation and monitoring frameworks when using technology in future pandemics and in broader contexts of public health and social care provision.

This report synthesises the evidence by focusing on four questions:

  1. Did the new technologies work?
  2. Did people accept them?
  3. How did they affect inequalities?
  4. Were they well governed and accountable?

The limited and inconsistent evidence base and the wide-ranging, international scope present some challenges to answering these questions. Using a wide range of resources, we aim to provide some balance and context to compensate for missing information.

These resources include the media, policy papers, findings from the Ada Lovelace Institute’s workshops, evidence reviews of academic and grey literature, and material submitted to international calls for evidence.

We illustrate the findings on both contact tracing apps and digital vaccine passports with policy and practice examples from the sample countries.

Within the evidence base, the two technologies were implemented using a wide range of technical infrastructures and adoption policies. Despite these divergences and the often hard-to-uncover evidence, there are important cross-cutting findings that can support current and future decision-making around pandemic preparedness, and health and social care provision more broadly.

Cross-cutting findings

Effectiveness: Did COVID-19 technologies work?

  • Digital vaccine passports and contact tracing apps were – of necessity – rolled out quickly, but without consideration of what evidence would be required to demonstrate their effectiveness. There was insufficient consideration and no consensus reached on how to define, monitor, evaluate or demonstrate their effectiveness and impacts.
  • There are indications of the effectiveness of some technologies, for example the NHS COVID-19 app (used in England and Wales). However, the limited evidence base makes it hard to evaluate their technical efficacy or epidemiological impact overall at an international level.
  • The technologies were not well integrated within broader public health systems and pandemic management strategies, and this reduced their effectiveness. However, the evidence on this is limited in most of the countries in our sample (with a few exceptions, for example Brazil and India), and we do not have clear evidence to compare COVID-19 technologies with non-digital interventions and weigh up their relative benefits and harms.
  • It is not clear whether COVID-19 technologies resulted in positive change in people’s health behaviours (for example, whether people self-isolated after receiving an alert from a contact tracing app).
  • It is also not clear if public support was impacted by the apps’ technical properties, or the associated policies and implementations.

Public legitimacy: Did people accept COVID-19 technologies?

  • Public legitimacy was key to ensuring the success of these technologies, affecting uptake and behaviour.
  • The use of digital vaccine passports to enforce restrictions on liberty and increased surveillance caused concern. There were protests against them, and the restrictive policies they enabled, in more than half the countries in our sample.
  • Public acceptance of contact tracing apps and digital vaccine passports depended on trust in their effectiveness, as well as trust in governments and institutions to safeguard civil rights and liberties. Individuals and communities who encounter structural inequalities are less likely to trust government institutions and the public health advice they offer. Not surprisingly, these groups were less likely than the general population to use these technologies.
  • The lack of targeted public communications resulted in poor understanding of the purpose and technical properties of COVID-19 technologies. This reduced public acceptance and social consensus around whether and how to use the technologies.

Inequalities: How did COVID-19 technologies affect inequalities?

  • Some social groups faced barriers to accessing, using or following the guidelines for contact tracing apps and digital vaccine passports, including unvaccinated people, people structurally excluded from sufficient digital access or skills, and people who could not self-isolate at home due to financial constraints. A small number of sample countries adopted policies and practices to mitigate the risk of widening existing inequalities. For example, the EU allowed paper-based Digital COVID Certificates for those without sufficient digital access and skills.
  • This raises the question of whether these technologies widened health and other societal inequalities. In the majority of sample countries, there is no clear evidence as to whether governments adopted effective interventions to help those who were less able to use or benefit from these technologies (for example, whether financial support was provided for those who could not self-isolate after receiving an exposure alert due to not being able to work from home).
  • The majority of sample countries requested proof of vaccination from inbound travellers before allowing unconditional entry (that is, without a quarantine or self-isolation period) at some stage of the pandemic. This amplified global inequalities by discriminating against the residents of countries that could not secure adequate vaccine supply or had low vaccine uptake – specifically, many African countries.

Governance, regulation and accountability: Were COVID-19 technologies well governed and accountable?

  • Contact tracing apps and digital vaccine passports combine health information with social or surveillance data. As they limit rights (for example, by blocking access to travel or entrance to a venue for people who do not have a digital vaccine passport), they must be proportional. This means striking a balance between limitations of rights, potential harms and intended purpose. To achieve this, it is essential that they are governed by robust legislation, regulation and oversight mechanisms, and that there are clear sunset mechanisms in place to determine when they no longer need to be used.
  • Most countries in our sample governed these technologies in line with pre-existing legislative frameworks, which were not always comprehensive. Only a few countries enacted robust regulations and oversight mechanisms specifically governing contact tracing apps and digital vaccine passports, including the UK, EU member states, Taiwan and South Korea.
  • The lack of robust data governance frameworks, regulation and oversight mechanisms led to lack of clarity about who was accountable for misuse or poor performance of COVID-19 technologies. Not surprisingly, there were incidents of data leaks, technical errors and data being reused for other purposes. For example, contact tracing app data was used in police investigations in Singapore and Germany, and sold to third parties for commercial purposes in the USA.[230]
  • Many governments relied on private technology companies to develop and deploy these technologies, demonstrating and reinforcing the industry’s influence and the power located in digital infrastructure.

Lessons

In light of these findings, there are clear lessons for governments and policymakers deciding how to use digital vaccine passports and contact tracing apps in the future.

These lessons may also apply more generally to the development and deployment of new data-driven technologies and approaches.

Effectiveness

To build evidence on the effectiveness of contact tracing apps and digital vaccine passports:

  • Support research and learning efforts on the impact of these technologies on people’s health behaviours.
  • Understand the impacts of apps’ technical properties, and of policies and approaches to implementation, on people’s acceptance of, and experiences of, these technologies in specific socio-cultural contexts and across geographic locations.
  • Weigh up their benefits and harms by considering their role within the broader COVID-19 response and comparing with non-digital interventions (for example, manual contact tracing).
  • Use this impact evaluation to help set standards and strategies for the future use of these technologies in public crises.

To ensure the effective use of technology in future pandemics:

  • Invest in research and evaluation from the start, and implement a clear evaluation framework to build evidence during deployment that supports understanding of the role that technologies play in broader pandemic health strategies.
  • Define criteria for effectiveness using a human-centred approach that goes beyond technical efficacy and builds an understanding of people’s experiences.
  • Establish how to measure and monitor effectiveness by working closely with public health experts and communities, and set targets accordingly.
  • Carry out robust impact assessments and evaluation.

Public legitimacy

To improve public acceptance:

  • Build public trust by publicly setting out guidance and enacting clear laws about permitted and restricted uses, with mechanisms to support rights, provide redress and tackle legal issues.
  • Effectively communicate the purpose of using technology in public crises, including the technical infrastructure and legislative framework of specific technologies, to address public hesitancy and create social consensus.

Inequalities

To avoid making societal inequalities worse:

  • Create monitoring mechanisms that specifically address the impact of technology on inequalities. Monitor the impact on public health behaviours, particularly in relation to social groups who are more likely to encounter health and other forms of social inequalities.
  • Use the impact evidence to identify marginalised and disadvantaged communities and to establish strong public health services, interventions and social policies to support them.

To avoid creating or reinforcing global inequalities and tensions:

  • Harmonise global, national and regional regulatory tools and mechanisms to address global inequalities and tensions.

Governance and accountability

To ensure that individual rights and freedoms are protected:

  • Establish strong data governance frameworks and make sure that regulatory bodies and clear sunset mechanisms are in place.
  • Create specific guidelines and laws to make sure that technology developers follow privacy-by-design and ethics-by-design principles, and that effective monitoring and evaluation frameworks and sunset mechanisms are in place for the deployment of technologies.
  • Build clear evidence about the effectiveness of new technologies to make sure that their use is proportionate to their intended results.

To reverse the growing power imbalance between governments and the technology industry:

  • Develop the public sector’s technical literacy and ability to create technical infrastructure. This does not mean that the private sector should be excluded from developing technologies related to public health, but it is crucial that technical infrastructure and governance are effectively co-designed by government, civil society and private industry.

The legacy of COVID-19 technologies? Outstanding questions

This report synthesises evidence that has emerged on contact tracing apps and digital vaccine passports from 2020 to 2023. These technologies have short histories, but they have potential long-term, societal implications and bring opportunities as well as challenges.

In this research we have attempted to uncover evidence of existing practices rather than speculating about the potential long-term impacts.

In the first two years of the pandemic, the Ada Lovelace Institute raised concerns about the potential risks and negative longer-term implications of COVID-19 technologies for society, beyond the COVID-19 pandemic. The main concerns were about:

  • repurposing of digital vaccine passports and contact tracing apps beyond the health context, such as for generalised surveillance
  • expanding or transforming digital vaccine passports into wider digital identity systems, allowing them to ‘set precedents and norms that influence and accelerate the creation of other systems for identification and surveillance’
  • damaging public trust in health and social data-sharing technologies if these technologies were mismanaged, repurposed or ineffective.[231]

In this section, we identify three outstanding research questions which would allow these three potential longer-term risks and implications to be monitored and understood. Addressing these questions will require consistent research and thinking on the evolution of COVID-19 technologies and their longer-term implications for society and technology.

Governments, civil society and the technology industry should consider the following under-researched questions, and should work together to increase understanding of contact tracing apps and digital vaccine passports and their long-term impact.

Question 1: Will contact tracing apps and digital vaccine passports continue to be used? If so, what will happen to the collected data?

Only a minority of countries, including Australia, Canada and Estonia,[232] have decommissioned their contact tracing apps and deleted the data collected. Digital vaccine passport infrastructure is still in place in many countries across the world, despite most countries having adopted a ‘living with COVID’ policy.

It is important to consider the current and future objectives of governments that are preserving these technological infrastructures, as well as how they intend to use the collected data beyond the pandemic. Given that most countries in our sample did not enact strong regulations with sunset clauses restricting use, or provide clear structures and guidance to support deletion, it is crucial to continue monitoring the future uses of these technologies and to ensure that they are not repurposed beyond the health context.

Question 2: How will the infrastructure of COVID-19 technologies and related regulation persist in future health data and digital identity systems?

Digital vaccine passports have accelerated moves towards digital identity schemes in many countries and regional blocs.[233] In Saudi Arabia, the Tawakkalna contact tracing app has been transformed into a comprehensive digital identity system, which received a public service award from the United Nations for institutional resilience and innovative responses to the COVID-19 pandemic.[234]

The African Union, which built the My COVID Pass vaccine passport app in collaboration with the Africa Centres for Disease Control and Prevention, is working towards a digital ID framework for the African continent. The EU introduced uniform and interoperable proofs of vaccination through the EU Digital COVID Certificate.

It is not yet clear what the societal implications of these changes in use are, or how they will affect fundamental rights and protections. Following the Digital COVID Certificate’s perceived success among policymakers, the European Commission plans to introduce an EU digital wallet that will give every EU citizen digital identity credentials recognised throughout the EU.

In some countries, healthcare systems have been transformed as a result of COVID-19 technologies. India has transformed its contact tracing app Aarogya Setu into the nation’s health app.[235]

In the UK, data and AI have been central to the Government’s response to the pandemic, and this has accelerated proposals to use health data for research and planning services. NHS England has initiated a ‘federated data platform’: software that will enable NHS organisations to share their operational data.

It is hoped that researchers and experts from academia, industry and the charity sector will use the data gathered on the platform for research and analysis to improve the health sector in England.[236]

The federated data platform initiative has been recognised for its potential to transform the healthcare system, but it has also raised concerns about accountability and trustworthiness, as patients’ data will be accessible to many stakeholders.[237] These include private technology companies like Palantir, which has been reported as not always being transparent in how it gathers, analyses and uses people’s data.[238]

These changes in digital identity and health ecosystems can provide significant economic and societal benefits to individuals and nations.[239] But they should be well designed and governed in order to benefit everyone in society. In this context, it is necessary to continue monitoring the evolution of COVID-19 technologies into new digital platforms and to understand their legislative, technical and societal legacies.

Question 3: How have COVID-19 technologies affected the public’s attitudes towards data-driven technologies in general?

There is a substantial body of research on public attitudes towards COVID-19 technologies, largely undertaken in the first years of the pandemic.[240] But the question of whether, and how, these technologies have affected people’s attitudes towards data-driven technologies beyond the pandemic has received little attention.

People had to use these technologies in their everyday lives to prove their identity and share their health and other kinds of personal information. But, as demonstrated in this report, there have been incidents that might have damaged people’s confidence in the technologies’ safety and effectiveness.

In this context, we believe it is crucial to continue to reflect on the persistent impacts of COVID-19 technologies on public attitudes towards data-driven technologies – particularly those technologies that entail sensitive personal data.

Methodology

In 2020 and 2021, the Ada Lovelace Institute conducted extensive research on COVID-19 technologies. We organised workshops and webinars, and conducted public attitudes research, evidence reviews and desk research. We published nine reports and two monitors. This body of research highlighted the risks and challenges these technologies posed and made policy recommendations to ensure that they would not cause or exacerbate harms and would benefit everyone in society equally.

In the first two years of the pandemic, many countries rolled out digital vaccine passports and contact tracing apps, as demonstrated in ‘International monitor: vaccine passports and COVID-19 status apps’.[241] In January 2022, as we entered the third year of the pandemic, we adjusted the scope and objectives of the COVID-19 technologies project: having focused on benefits, risks and challenges in the first two years, we turned to the lessons that could be learned from these technologies. We aimed to address the following questions:

  1. Did COVID-19 technologies work? Were they effective public health tools?
  2. Did people accept them?
  3. How did they affect inequalities?
  4. Were they governed well and with accountability?
  5. What lessons can we learn from the deployment and uses of these new technologies?

Sampling

We aimed for regional representation in our sample, focusing on policies and practices in 34 countries in total. We based our sampling on the geographical regions of North Africa, Central Africa, Southern Africa, South East Asia, Central Asia, East Asia, North America, South America, Eastern Europe, the European Union, West Asia and Oceania.

Relying on Our World in Data[242] datasets on total deaths, total cases and the share of people who had completed the initial vaccine protocol in 194 countries on 5 June 2022, we created a pandemic impact score for each country, giving equal weight to each of the three variables.

In each geographical region, we then selected for detailed review the two countries with the highest impact scores, two with medium impact scores and two with low impact scores.
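
For illustration, the sketch below shows one way the scoring and tiered selection described above could be implemented. It is a minimal sketch in Python with pandas; the file name, column names, min-max normalisation and tercile cut are our assumptions for exposition, as the report does not specify its exact computation.

import pandas as pd

# Hypothetical extract of the three Our World in Data variables as of
# 5 June 2022, one row per country. File and column names are assumptions.
df = pd.read_csv("owid_2022-06-05.csv")
# Expected columns: country, region, total_deaths, total_cases,
# share_fully_vaccinated

variables = ["total_deaths", "total_cases", "share_fully_vaccinated"]

# Put each variable on a comparable 0-1 scale, then weight the three equally.
normalised = (df[variables] - df[variables].min()) / (
    df[variables].max() - df[variables].min()
)
df["impact_score"] = normalised.mean(axis=1)

# Within each region, split countries into low/medium/high impact terciles,
# then take two countries from each tier for detailed review.
df["tier"] = df.groupby("region")["impact_score"].transform(
    lambda s: pd.qcut(s.rank(method="first"), 3, labels=["low", "medium", "high"])
)
sample = (
    df.sort_values("impact_score", ascending=False)
    .groupby(["region", "tier"], observed=True)
    .head(2)
)
print(sample[["country", "region", "tier", "impact_score"]])

Six countries per region would yield more than 34 candidates in total, so the sketch should be read as the shape of the procedure rather than a reproduction of the published sample.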

Methods and evidence

This research project encompasses evidence from 34 countries (see the list of the countries in our sample).

Unsurprisingly, the amount and type of evidence on each country vary significantly. Our aim in this research project is not to compare countries with very different technical infrastructures, political cultures and pandemic management strategies, but to establish a set of shared criteria against which we can assess the policies, practices and technical infrastructure in each country.

With this aim in mind, we established a list of data categories to collect country-specific information (sketched as a simple record structure after this list):

  • introduction date of vaccine passports
  • end date of vaccine passport regulations
  • protests against vaccine passports or contact tracing apps
  • implementations of vaccine passports, for example, being mandatory in workplaces, for international travel, etc.
  • cumulative number of cases when digital vaccine passports were introduced
  • cumulative number of deaths when digital vaccine passports were introduced
  • share of people vaccinated when digital vaccine passports were introduced
  • whether there was a government-launched contact tracing app
  • technical infrastructure of contact tracing apps
  • reported cases of surveillance
  • reported cases of repurposing data
  • reported cases of rights infringements
  • evidence on whether COVID-19 technologies increased societal inequalities (for example, around digital exclusion)
  • evidence on whether COVID-19 technologies increased global inequalities
  • evidence on the effectiveness of digital vaccine passports and contact tracing apps.
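
A minimal sketch of the per-country record implied by these categories follows; the field names and types are our assumptions for exposition, as the report does not publish a formal schema.

from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class CountryRecord:
    """One country's evidence, mirroring the data categories listed above."""
    country: str
    passport_introduced: Optional[date] = None        # introduction date of vaccine passports
    passport_regulation_ended: Optional[date] = None  # end date of vaccine passport regulations
    protests_reported: bool = False                   # protests against passports or apps
    passport_uses: list = field(default_factory=list)  # e.g. ["workplaces", "international travel"]
    cases_at_introduction: Optional[int] = None
    deaths_at_introduction: Optional[int] = None
    vaccinated_share_at_introduction: Optional[float] = None
    has_government_tracing_app: Optional[bool] = None
    tracing_app_architecture: Optional[str] = None    # e.g. "centralised" or "decentralised"
    surveillance_reports: list = field(default_factory=list)
    data_repurposing_reports: list = field(default_factory=list)
    rights_infringement_reports: list = field(default_factory=list)
    societal_inequality_evidence: list = field(default_factory=list)
    global_inequality_evidence: list = field(default_factory=list)
    effectiveness_evidence: list = field(default_factory=list)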

We used the following methods and resources to gather evidence on the data categories outlined above:

External datasets

We used quantitative datasets of other organisations’ data trackers and policy monitors for the following data categories:

  • proportion of people vaccinated from Our World in Data[243]
  • COVID restrictions (for example, school closures and lockdowns) from the Blavatnik School of Government, University of Oxford[244]
  • cumulative number of cases from Our World in Data[245]
  • cumulative number of deaths from Our World in Data.[246]

Call for evidence

In July 2022, we announced an international call for input on the effectiveness and social impact of digital vaccine passports and contact tracing apps. We incorporated the relevant evidence submitted to this call into the evidence base. For some countries, the evidence submitted was helpful as it either provided us with the missing information or confirmed that the respective country did not have an official regulation (or protocol) to govern vaccine passports or contact tracing apps.

We also engaged some of the individuals and organisations that submitted evidence as consultants, to acquire further information on their respective countries of expertise.

Workshop

In October 2022 we organised an evidence-building workshop to deliberate on the effectiveness of contact tracing apps in Europe, with experts from the disciplines of epidemiology, cybersecurity, public health, law, and media and communications.

The participants’ multidisciplinary backgrounds allowed the workshop to consider effectiveness beyond technical efficacy, taking in the social, legislative and regulatory impacts of the apps.

Desk research

Between August 2022 and January 2023 we conducted multiple, structured internet search queries using a set of keywords for each country in our sample. These keywords included ‘vaccine certificate’, ‘vaccine passport’, ‘immunity certificate’, ‘digital contact tracing’, ‘contact tracing app’ and ‘COVID technologies’, each combined with the name of the country.
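
As a minimal sketch of this query construction (the keywords are those listed above; combining each keyword with the country name into a single quoted query string is our assumption about the exact format):

KEYWORDS = [
    "vaccine certificate",
    "vaccine passport",
    "immunity certificate",
    "digital contact tracing",
    "contact tracing app",
    "COVID technologies",
]

def build_queries(country):
    """Return one search query per keyword for the given country."""
    return [f'"{keyword}" "{country}"' for keyword in KEYWORDS]

for query in build_queries("Estonia"):
    print(query)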

This approach to desk research enabled collection and analysis of evidence from three different types of resources: media news, government websites, and academic and grey literature (produced by organisations that are not traditional publishers, including government documents and third-sector organisation reports).

Limitations

There are 34 countries in this research sample. Although the sampling covers every continent, as discussed in the sampling section, we do not claim that our country-specific findings are representative of continents, regions or political blocs. Similarly, we also do not claim exhaustive evidence on developments in every country.

We also recognise that, as a UK-based organisation, we may have faced barriers to discovering evidence emerging from various parts of the world. Our qualitative evidence on media reports in particular is largely in the English language – although there are a few exceptions. We worked with consultants from Brazil, India, Egypt, China and South Africa, who provided us with non-English-language media and government reports that we had not been able to capture through desk research.

The language barrier also emerged in our policy analysis. We aimed to collect data on policies and regulations from government websites and official policy papers. We used online translation software to conduct research in the official languages of the countries in our sample.

The low rate of success in discovering official policy papers indicates that there are limitations to this method. Not all governments made the policies and practices of contact tracing apps and digital vaccine passports publicly available. In this context, while the small number of policy papers we gathered is partly due to the language barrier, it also reflects governments’ lack of transparency about the uses and governance of these technologies.

Acknowledgements

This report was lead-authored by Melis Mevsimler, with substantive contributions from Bárbara Prado Simão, Dr Nagla Rizk, Gabriella Razzano and Prateek Waghre, who provided evidence and analysis as consultants.

Participants in the workshop:

Professor Christophe Fraser, University of Oxford

Professor Susan Landau, Tufts University

Dr Frans Folkvord, Tilburg University

Claudia Wladdimiro Quevedo, Uppsala University

Dr Simon Williams, Swansea University

Francisco Lupianez Villanueva, Open University of Catalonia

Krzysztof Izdebski, Open Spending EU Coalition

Dr Stephen Farrell, Trinity College Dublin

Dr Laszlo Horvath, Birkbeck, University of London

Dr Mustafa Al-Haboubi, London School of Hygiene & Tropical Medicine

Danqi Guo, Free University of Berlin

Dr Federica Lucivero, University of Oxford

Shahrzad Seyfafheji, Bilkent University

Dr Agata Ferretti, ETH Zurich

Yasemin Gumus Agca, Bilkent University

Boudewijn van Eerd, AWO

Peer reviewers:

Eleftherios Chelioudakis, AWO

Hunter Dorwart, Bird & Bird

Professor Ana Beduschi, University of Exeter


Footnotes

[1] Carly Kind, ‘What will the first pandemic of the algorithmic age mean for data governance?’ (Ada Lovelace Institute, 2 April 2020) www.adalovelaceinstitute.org/blog/first-pandemic-of-the-algorithmic-age-data-governance accessed 12 April 2023.

[2] The BMJ, ‘Artificial intelligence and Covid-19’, www.bmj.com/AICOVID19 accessed 31 March 2023.

[3] For example, G Samuel and others, ‘COVID-19 Contact Tracing Apps: UK Public Perceptions’ (2021) 32:1 Critical Public Health 31, https://doi.org/10.1080/09581596.2021.1909707; MC Mills and T Ruttanauer, ‘The Effect of Mandatory COVID-19 Certificates on Vaccine Uptakes: Synthetic-Control Modelling of Six Countries’ (2022) 7:1 The Lancet 15, https://doi.org/10.1016/S2468-2667(21)00273-5.

[4] ‘COVID-19 Law Lab’ https://covidlawlab.org accessed 31 March 2023; ‘Lex-Atlas: Covid-19’ https://lexatlas-c19.org accessed 31 March 2023; ‘Digital Global Health and Humanitarianism Lab (DGHH Lab)’ https://dghhlab.com/publications/#PUB-DRCOVID19 accessed 31 March 2023.

[5] AWO, ‘Assessment of Covid-19 response in Brazil, Colombia, India, Iran, Lebanon and South Africa’ (29 July 2021) www.awo.agency/blog/covid-19-app-project accessed 13 April 2023.

[6] MIT Technology Review, ‘Covid Tracing Tracker’ www.technologyreview.com/tag/covid-tracing-tracker accessed 31 March 2023.

[7] World Health Organization, ‘Statement on the fourteenth meeting of the International Health Regulations (2005) Emergency Committee regarding the coronavirus disease (COVID-19) pandemic’ (WHO, 30 January 2023) www.who.int/news/item/30-01-2023-statement-on-the-fourteenth-meeting-of-the-international-health-regulations-(2005)-emergency-committee-regarding-the-coronavirus-disease-(covid-19)-pandemic accessed 31 March 2023.

[8] World Health Organization, ‘Statement on the fifteenth meeting of the IHR (2005) Emergency Committee on the COVID-19 pandemic’ (WHO, 5 May 2023) https://www.who.int/news/item/05-05-2023-statement-on-the-fifteenth-meeting-of-the-international-health-regulations-(2005)-emergency-committee-regarding-the-coronavirus-disease-(covid-19)-pandemic accessed 31 May 2023.

[9] GOVLAB and Knight Foundation, ‘The #Data4Covid19 Review’ https://review.data4covid19.org accessed 12 April 2023.

[10] M Shahroz and others, ‘COVID-19 Digital Contact Tracing Applications and Techniques: A Review Post Initial Deployments’ (2021) 5 Transportation Engineering 100072, https://doi.org/10.1016/j.treng.2021.100072.

[11] Ada Lovelace Institute, Checkpoints for vaccine passports (2021) www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports accessed 12 April 2023.

[12] A Hussain, ‘TraceTogether data used by police in one murder case: Vivian Balakrishnan’ (Yahoo! News, 5 January 2021) https://uk.style.yahoo.com/trace-together-data-used-by-police-in-one-murder-case-vivian-084954246.html?guccounter=2 accessed 12 April 2023; DW, ‘German police under fire for misuse of COVID app’ DW (11 January 2022) www.dw.com/en/german-police-under-fire-for-misuse-of-covid-contact-tracing-app/a-60393597 accessed 31 March 2023.

[13] Carly Kind, ‘What will the first pandemic of the algorithmic age mean for data governance?’ (Ada Lovelace Institute, 2 April 2020) www.adalovelaceinstitute.org/blog/first-pandemic-of-the-algorithmic-age-data-governance accessed 26 April 2023.

[14] The BMJ, ‘Artificial intelligence and covid-19’, www.bmj.com/AICOVID19 accessed 31 March 2023.

[15] LO Danquah and others, ‘Use of a Mobile Application for Ebola Contact Tracing and Monitoring in Northern Sierra Leone: A Proof-of-Concept Study’ (2019) 19 BMC Infectious Diseases 810, https://doi.org/10.1186/s12879-019-4354-z.

[16] Fabio Chiusi and others, ‘Automating COVID Responses: The Impact of Automated Decision-Making on the COVID-19 Pandemic’ (AlgorithmWatch 2022) https://algorithmwatch.org/en/wp-content/uploads/2021/12/Tracing-The-Tracers-2021-report-AlgorithmWatch.pdf accessed 26 April 2023.

[17] F Yang, L. Heemsbergen and R Fordyce, ‘Comparative Analysis of China’s Health Code, Australia’s COVIDSafe and New Zealand’s COVID Tracer Surveillance App: A New Corona of Public Health Governmentality?’ (2020) 178:1 Media International Australia 182, 10.1177/1329878X20968277.

[18] F Yang, L Heemsbergen and R Fordyce, ‘Comparative Analysis of China’s Health Code, Australia’s COVIDSafe and New Zealand’s COVID Tracer Surveillance App: A New Corona of Public Health Governmentality?’ (2020) 178:1 Media International Australia 182, 10.1177/1329878X20968277.

[19] Ada Lovelace Institute, ‘Health data and COVID-19 technologies’ https://www.adalovelaceinstitute.org/our-work/programmes/health-data-covid-19-tech/ accessed 31 May 2023.

[20] Ada Lovelace Institute, Checkpoints for vaccine passports (2021) www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports accessed 30 March 2023.

[21] Ada Lovelace Institute, Checkpoints for vaccine passports (2021) www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports accessed 30 March 2023; Ada Lovelace Institute, ‘Exit through the App Store? COVID-19 rapid evidence review’ (2020) www.adalovelaceinstitute.org/evidence-review/covid-19-rapid-evidence-review-exit-through-the-app-store accessed 30 March 2023.

[22] Ada Lovelace Institute, Checkpoints for vaccine passports (2021) www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports accessed 12 April 2023; ‘Exit through the App Store? COVID-19 Rapid Evidence Review’ (2020) www.adalovelaceinstitute.org/evidence-review/covid-19-rapid-evidence-review-exit-through-the-app-store accessed 12 April 2023; ‘No Green Lights, No Red Lines’ (2020) www.adalovelaceinstitute.org/report/covid-19-no-green-lights-no-red-lines accessed 12 April 2023; ‘Confidence in a Crisis? Building Public Trust in a Contact Tracing App’ (2020) www.adalovelaceinstitute.org/report/confidence-in-crisis-building-public-trust-contact-tracing-app accessed 12 April 2023.

[23] DW, ‘German police under fire for misuse of COVID app’ DW (11 January 2022) www.dw.com/en/german-police-under-fire-for-misuse-of-covid-contact-tracing-app/a-60393597 accessed 31 March 2023; E Tham, ‘China Bank Protest Stopped by Health Codes Turning Red, Depositors Say’ (Reuters, 16 June 2022) www.reuters.com/world/china/china-bank-protest-stopped-by-health-codes-turning-red-depositors-say-2022-06-14 accessed 31 March 2023.

[24] Ada Lovelace Institute, ‘COVID-19 Data Explorer: Policies, Practices and Technology’ (2023) https://covid19.adalovelaceinstitute.org accessed 31 May 2023.

[25] Ada Lovelace Institute, ‘Health data and COVID-19 technologies’  https://www.adalovelaceinstitute.org/our-work/programmes/health-data-covid-19-tech accessed 31 May 2023.

[26] Centers for Disease Control and Prevention ‘Contact Tracing’ (2022) www.cdc.gov/coronavirus/2019-ncov/easy-to-read/contact-tracing.html accessed 31 March 2023.

[27] M Hunter, ‘Track and Trace, Trial and Error: Assessing South Africa’s Approaches to Privacy in Covid-19 Digital Contact Tracing’ (December 2020) www.researchgate.net/publication/350896038_Track_and_trace_trial_and_error_Assessing_South_Africa%27s_approaches_to_privacy_in_Covid-19_digital_contact_tracing accessed 31 March 2023.

[28] Some areas used manual contact tracing effectively, for example Vietnam and the Indian state of Kerala. See G Razzano, ‘Digital hegemonies for COVID-19’ (Global Data Justice, 5 November 2020) https://globaldatajustice.org/gdj/188 accessed 31 March 2023.

[29] C Yang, ‘Digital Contact Tracing in the Pandemic Cities: Problematizing the Regime of Traceability in South Korea’ (2022) 9:1 Big Data & Society https://doi.org/10.1177/20539517221089294.

[30] Freedom House ‘Freedom on the net 2021: South Africa’ (2021) https://freedomhouse.org/country/south-africa/freedom-net/2021 accessed 31 March 2023.

[31] M Hunter, ‘Track and Trace, Trial and Error: Assessing South Africa’s Approaches to Privacy in Covid-19 Digital Contact Tracing’ (December 2020) www.researchgate.net/publication/350896038_Track_and_trace_trial_and_error_Assessing_South_Africa%27s_approaches_to_privacy_in_Covid-19_digital_contact_tracing accessed 31 March 2023.

[32] Ada Lovelace Institute, Checkpoints for vaccine passports (2021) www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports accessed 9 June 2023.

[33] Ada Lovelace Institute, ‘Provisos for a contact tracing app: The route to trustworthy digital contact tracing’ (4 May 2020) www.adalovelaceinstitute.org/evidence-review/provisos-covid-19-contact-tracing-app accessed 31 March 2023.

[34] Ada Lovelace Institute, ‘COVID-19 Data Explorer: Policies, Practices and Technology’ (2023) https://covid19.adalovelaceinstitute.org accessed 31 May 2023.

[35] M Ciucci and F Gouarderes, ‘National COVID-19 Contact Tracing Apps’ (Think Tank European Parliament, 15 May 2020) www.europarl.europa.eu/thinktank/en/document/IPOL_BRI(2020)652711 accessed 31 March 2023.

[36] M Briers, C Holmes and C Fraser, ‘Demonstrating the impact of the NHS COVID-19 app: Statistical analysis from researchers supporting the development of the NHS COVID-19 app’ (The Alan Turing Institute, 2020) www.turing.ac.uk/blog/demonstrating-impact-nhs-covid-19-app accessed 31 March 2023.

[37] M Veale, ‘The English Law of QR Codes: Presence Tracing and Digital Divides’ (Lex-Atlas: Covid-19, 25 May 2021) https://lexatlas-c19.org/the-english-law-of-qr-codes accessed 31 March 2023.

[38] M Veale, ‘The English Law of QR Codes: Presence Tracing and Digital Divides’ (Lex-Atlas: Covid-19, 25 May 2021) https://lexatlas-c19.org/the-english-law-of-qr-codes accessed 31 March 2023.

[39] Ministry of Health, ‘Ministry of Health to trial Near Field Communication (NFC) tap in technology with NZ COVID Tracer’ (Ministry of Health, New Zealand, 2021) www.health.govt.nz/news-media/media-releases/ministry-health-trial-near-field-communication-nfc-tap-technology-nz-covid-tracer accessed 14 April 2023.

[40] We draw on evidence on a cross-section of 34 countries in this report. Three countries in our sample never launched a national contact tracing app, and we could not find reliable information on six countries. More information on the technical infrastructure of contact tracing apps is available on the COVID-19 Data Explorer: Ada Lovelace Institute, ‘COVID-19 Data Explorer: Policies, Practices and Technology’ (May 2023) https://covid19.adalovelaceinstitute.org accessed 31 May 2023.

[41] L White and P Basshuysen, ‘Privacy versus Public Health? A Reassessment of Centralised and Decentralised Digital Contact Tracing’ (2021) 27 Science and Engineering Ethics 23 https://doi.org/10.1007/s11948-021-00301-0 accessed 31 March 2023

[42] M Ciucci and F Gouarderes, ‘National COVID-19 Contact Tracing Apps’ (Think Tank European Parliament, 15 May 2020) www.europarl.europa.eu/thinktank/en/document/IPOL_BRI(2020)652711 accessed 31 March 2023.

[43] E Braun, ‘French contact-tracing app sent just 14 notifications after 2 million downloads’ (Politico, 23 June 2020) www.politico.eu/article/french-contact-tracing-app-sent-just-14-notifications-after-2-million-downloads accessed 31 March 2023; BBC News ‘Australia Covid: Contact tracing app branded expensive “failure”’ (10 August 2022) www.bbc.co.uk/news/world-australia-62496322 accessed 31 March 2023.

[44] M Veale, ‘Opinion: Privacy is not the problem with the Apple-Google contact tracing app’ (UCL News, 1 July 2020) www.ucl.ac.uk/news/2020/jul/opinion-privacy-not-problem-apple-google-contact-tracing-app accessed 31 March 2023; N Lomas ‘Germany ditches centralized approach to app for COVID-19 contacts tracing’ (TechCrunch, 27 April 2020) https://techcrunch.com/2020/04/27/germany-ditches-centralized-approach-to-app-for-covid-19-contacts-tracing accessed 31 March 2023.

[45] G Goggin, ‘COVID-19 Apps in Singapore and Australia: Reimagining Health Nations with Digital Technology’ (2020) 177:1 Media International Australia 61, 10.1177/1329878X20949770.

[46] G Goggin, ‘COVID-19 Apps in Singapore and Australia: Reimagining Health Nations with Digital Technology’ (2020) 177:1 Media International Australia 61, 10.1177/1329878X20949770.

[47] M Ciucci and F Gouarderes, ‘National COVID-19 Contact Tracing Apps’ (Think Tank European Parliament, 15 May 2020) www.europarl.europa.eu/thinktank/en/document/IPOL_BRI(2020)652711 accessed 26 May 2023; C Gorey, ‘4 things you need to know before installing the HSE Covid-19 contact-tracing app’ (Silicon Republic, 7 July 2020) www.siliconrepublic.com/enterprise/hse-COVID-19-contact-tracing-app accessed 31 March 2023.

[48] AL Popescu, ‘România în urma pandemiei. Statul ignoră propria aplicație anti-Covid, dar și una lansată gratis’ [‘Romania in the wake of the pandemic. The state ignores its own anti-Covid app, and one launched for free’] (Europa Libera Romania, 27 November 2020) https://romania.europalibera.org/a/rom%C3%A2nia-%C3%AEn-urma-pandemiei-statul-ignor%C4%83-propria-aplica%C8%9Bie-anti-covid-dar-%C8%99i-una-lansat%C4%83-gratis/30972627.html accessed 31 March 2023; Fabio Chiusi and others, ‘Automating COVID Responses: The Impact of Automated Decision-Making on the COVID-19 Pandemic’ (AlgorithmWatch 2022) https://algorithmwatch.org/en/wp-content/uploads/2021/12/Tracing-The-Tracers-2021-report-AlgorithmWatch.pdf accessed 31 March 2023.

[49] Several countries in our sample, such as China and India, had a very fragmented contact tracing app ecosystem, with various states/cities/municipalities attempting to create their own apps. There are therefore notable differences across provinces, making it difficult to capture the diversity of implementation and experiences.

[50] Ada Lovelace Institute, ‘COVID-19 Data Explorer: Policies, Practices and Technology’ (2023) https://covid19.adalovelaceinstitute.org accessed 31 May 2023.

[51] UK Health Security Agency, ‘NHS COVID-19 app’ (gov.uk, 2020) www.gov.uk/government/collections/nhs-covid-19-app accessed 31 March 2023.

[52] MIT Technology Review, ‘Covid Tracing Tracker’ (2021) www.technologyreview.com/tag/covid-tracing-tracker accessed 31 March 2023.

[53] Ada Lovelace Institute, ‘Exit through the App Store? COVID-19 rapid evidence review’ (19 April 2020) www.adalovelaceinstitute.org/evidence-review/covid-19-rapid-evidence-review-exit-through-the-app-store accessed 31 March 2023, 4.

[54] C Wymant, ‘The epidemiological impact of the NHS COVID-19 app’ (National Institutes of Health, 2021) https://pubmed.ncbi.nlm.nih.gov/33979832/ accessed 31 March 2023.

[55] RW Albertus and F Makoza, ‘An Analysis of the COVID-19 Contact Tracing App in South Africa: Challenges Experienced by Users’ (2022) 15:1 African Journal of Science, Technology, Innovation and Development 124,  https://doi.org/10.1080/20421338.2022.2043808; Office of Audit and Evaluation (Health Canada) and the Public Health Agency of Canada, ‘Evaluation of the National COVID-19 Exposure Notification App’ (Health Canada, 20 June 2022) www.canada.ca/en/health-canada/corporate/transparency/corporate-management-reporting/evaluation/covid-alert-national-covid-19-exposure-notification-app.html accessed 26 May 2023.

[56] F Vogt and others, ‘Effectiveness Evaluation of Digital Contact Tracing for COVID-19 in New South Wales, Australia’ (2022) 7:3 The Lancet E250, https://doi.org/10.1016/S2468-2667(22)00010-X; Ada Lovelace Institute, ‘Provisos for a contact tracing app: The route to trustworthy digital contact tracing’ (2020) www.adalovelaceinstitute.org/evidence-review/provisos-covid-19-contact-tracing-app accessed 26 May 2023.

[57] E Braun, ‘French contact-tracing app sent just 14 notifications after 2 million downloads.’ (Politico, 23 June 2020) www.politico.eu/article/french-contact-tracing-app-sent-just-14-notifications-after-2-million-downloads accessed 31 March 2023.

[58] F Vogt and others, ‘Effectiveness Evaluation of Digital Contact Tracing for COVID-19 in New South Wales, Australia’ (2022) 7:3 The Lancet E250, https://doi.org/10.1016/S2468-2667(22)00010-X; AWO, ‘Assessment of Covid-19 response in Brazil, Colombia, India, Iran, Lebanon and South Africa’ (29 July 2021) www.awo.agency/blog/covid-19-app-project accessed 13 April 2023.

[59] AWO, ‘Assessment of Covid-19 response in Brazil, Colombia, India, Iran, Lebanon and South Africa’ (29 July 2021) www.awo.agency/blog/covid-19-app-project accessed 13 April 2023.

[60] For example, see Y Huang and others, ‘Users’ Expectations, Experiences, and Concerns with COVID Alert, an Exposure-Notification App’ (2022) 6: CSCW2 ACM Journals: Proceedings of the ACM on Human–Computer Interaction 350, https://doi.org/10.1145/3555770.

[61] ‘Digital Global Health and Humanitarianism Lab (DGHH Lab)’ https://dghhlab.com/publications/#PUB-DRCOVID19 accessed 31 March 2023.

[62] ‘Digital Global Health and Humanitarianism Lab (DGHH Lab)’ https://dghhlab.com/publications/#PUB-DRCOVID19 accessed 31 March 2023.

[63] BBC News, ‘Covid in Scotland: Thousands turn off tracking app’ (24 July 2021) www.bbc.co.uk/news/uk-scotland-57941343 accessed 31 March 2023.

[64] S Trendall, ‘Data suggests millions of users have not enabled NHS contact-tracing app’ (Public Technology, 30 June 2021) www.publictechnology.net/articles/news/data-suggests-millions-users-have-not-enabled-nhs-contact-tracing-app accessed 31 March 2023.

[65] V Garousi and D Cutting, ‘What Do Users Think of the UK’s Three COVID-19 Contact Tracing Apps? A Comparative Analysis’ (2021) 28:1 BMJ Health Care Inform e100320, 10.1136/bmjhci-2021-100320.

[66] Office of Audit and Evaluation (Health Canada) and the Public Health Agency of Canada, ‘Evaluation of the National COVID-19 Exposure Notification App’ (Health Canada, 20 June 2022) www.canada.ca/en/health-canada/corporate/transparency/corporate-management-reporting/evaluation/covid-alert-national-covid-19-exposure-notification-app.html accessed 31 March 2023.

[67] Y Huang and others, ‘Users’ Expectations, Experiences, and Concerns with COVID Alert, an Exposure-Notification App’ (2022) 6: CSCW2 ACM Journals: Proceedings of the ACM on Human–Computer Interaction 350, https://doi.org/10.1145/3555770.

[68] Ada Lovelace Institute, ‘Exit through the App Store? COVID-19 rapid evidence review’ (2020) www.adalovelaceinstitute.org/evidence-review/covid-19-rapid-evidence-review-exit-through-the-app-store accessed 31 March 2023.

[69] C Wymant, ‘The epidemiological impact of the NHS COVID-19 app’ (National Institutes of Health, 2021) https://directorsblog.nih.gov/2021/05/25/u-k-study-shows-power-of-digital-contact-tracing-in-the-pandemic accessed 26 May 2023.

[70] Ada Lovelace Institute, ‘Confidence in a crisis? Building public trust in a contact tracing app’ (2020) www.adalovelaceinstitute.org/report/confidence-in-crisis-building-public-trust-contact-tracing-app accessed 26 May 2023.

[71] Ada Lovelace Institute, ‘Exit through the App Store? COVID-19 rapid evidence review’ (2020) www.adalovelaceinstitute.org/evidence-review/covid-19-rapid-evidence-review-exit-through-the-app-store accessed 26 May 2023.

[72] F Yang, L. Heemsbergen and R Fordyce, ‘Comparative Analysis of China’s Health Code, Australia’s COVIDSafe and New Zealand’s COVID Tracer Surveillance App: A New Corona of Public Health Governmentality?’ (2020) 178:1 Media International Australia 182, 10.1177/1329878X20968277.

[73] Planet Payment, ‘Alipay and WeChat Pay’ https://www.planetpayment.com/en/merchants/alipay-and-wechat-pay/ accessed 26 May 2023.

[74] F Liang, ‘COVID-19 and Health Code: How Digital Platforms Tackle the Pandemic in China’ (2021) 6:3 Social Media + Society, https://doi.org/10.1177/2056305120947657; National Health Commission of the People’s Republic of China, ‘Prevention and control of novel coronavirus pneumonia’ (7 March 2020) www.nhc.gov.cn/xcs/zhengcwj/202003/4856d5b0458141fa9f376853224d41d7.shtml accessed 26 May 2023.

[75] W Bin and others, ‘Depositors Are Forcibly Given Red Codes, the Latest Responses from All Parties’ (Southern Metropolis Daily, 14 June 2022) https://mp.weixin.qq.com/s/KAc8_3rCviqnVv05aQvSlw?fbclid=IwAR1xfMQtjZsRikz9vkisYxQBVAAkE9tgekKnMQ4nPaynr2BN9Ceyep3mjq8 accessed 13 April 2023.

[76] S Chan, ‘COVID-19 contact tracing apps reach 9% adoption in most populous countries’ (Sensor Tower, July 2020) https://sensortower.com/blog/contact-tracing-app-adoption accessed 26 May 2023.

[77] Ada Lovelace Institute, ‘Confidence in a crisis? Building public trust in a contact tracing app’ (2020) www.adalovelaceinstitute.org/report/confidence-in-crisis-building-public-trust-contact-tracing-app accessed 26 May 2023

[78] L Muscato, ‘Why people don’t trust contact tracing apps, and what to do about it’ (Technology Review, 12 November 2020) www.technologyreview.com/2020/11/12/1012033/why-people-dont-trust-contact-tracing-apps-and-what-to-do-about-it accessed 31 March 2023; AWO, ‘Assessment of Covid-19 response in Brazil, Colombia, India, Iran, Lebanon and South Africa’ (29 July 2021) www.awo.agency/blog/covid-19-app-project accessed 13 April 2023; L Horvath and others, ‘Adoption and Continued Use of Mobile Contact Tracing Technology: Multilevel Explanations from a Three-Wave Panel Survey and Linked Data’ (2022) 12:1 BMJ Open e053327, 10.1136/bmjopen-2021-053327; Ada Lovelace Institute, ‘Public attitudes to COVID-19, technology and inequality: A tracker’ (2021) https://www.adalovelaceinstitute.org/resource/public-attitudes-covid-19/ accessed 26 May 2023; A Kozyreva and others, ‘Psychological Factors Shaping Public Responses to COVID-19 Digital Contact Tracing Technologies in Germany’ (2021) 11 Scientific Reports 18716, https://doi.org/10.1038/s41598-021-98249-5; G Samuel and others, ‘COVID-19 Contact Tracing Apps: UK Public Perceptions’ (2022) 1:32 Critical Public Health 31, 10.1080/09581596.2021.1909707; M Caserotti and others, ‘Associations of COVID-19 Risk Perception with Vaccine Hesitancy Over Time for Italian Residents’ (2021) 272 Social Science & Medicine 113688, 10.1016/j.socscimed.2021.113688.

[79] M Koetse ‘Goodbye, Health Code: Chinese netizens say farewell to the green horse’ (What’s on Weibo, 8 December 2022) www.whatsonweibo.com/goodbye-health-code-chinese-netizens-say-farewell-to-the-green-horse accessed 26 May 2023; L Houchen, ‘Are you ready to use the “Health Code” all the time?’ (7 April 2020) https://mp.weixin.qq.com/s/xDKKicV22IBRGnNnNStOVg accessed 26 May 2023. The National Health Commission’s notice to end the Health Code mandate did not immediately translate into municipal governments discontinuing their policies. See Health Commission, ‘Notice on printing and distributing the Prevention and Control Plan for Novel Coronavirus Pneumonia (Ninth Edition)’ (Health Commission, 28 June 2022) www.gov.cn/xinwen/2022-06/28/content_5698168.htm accessed 13 April 2023.

[80] For example, see Southern Metropolis Daily’s interview with a number of experts on the impacts of using health codes in China. W Bin and others, ‘Depositors Are Forcibly Given Red Codes, the Latest Responses from All Parties’ (Southern Metropolis Daily, 14 June 2022) https://mp.weixin.qq.com/s/KAc8_3rCviqnVv05aQvSlw?fbclid=IwAR1xfMQtjZsRikz9vkisYxQBVAAkE9tgekKnMQ4nPaynr2BN9Ceyep3mjq8 accessed 13 April 2023.

[81] M Caserotti and others, ‘Associations of COVID-19 Risk Perception with Vaccine Hesitancy Over Time for Italian Residents’ (2021) 272 Social Science & Medicine 113688, 10.1016/j.socscimed.2021.113688.

[82] M Dewatripont, ‘Policy Insight 110: Vaccination Strategies in the Midst of an Epidemic’ (Centre for Economic Policy Research, 1 October 2021) https://cepr.org/publications/policy-insight-110-vaccination-strategies-midst-epidemic accessed 13 April 2023.

[83] G Samuel and others, ‘COVID-19 Contact Tracing Apps: UK Public Perceptions’ (2022) 1:32 Critical Public Health 31, 10.1080/09581596.2021.1909707.

[84] S Landau, People Count: Contact-Tracing Apps and Public Health (The MIT Press, 2021).

[85] J Amann, J Sleigh and E Vayena, ‘Digital Contact-Tracing during the Covid-19 Pandemic: An Analysis of Newspaper Coverage in Germany, Austria, and Switzerland’ (2021) PLOS ONE, https://doi.org/10.1371/journal.pone.0246524.

[86] AWO, ‘Assessment of Covid-19 response in Brazil, Colombia, India, Iran, Lebanon and South Africa’ (29 July 2021) www.awo.agency/blog/covid-19-app-project accessed 13 April 2023.

[87] Office of Audit and Evaluation (Health Canada) and the Public Health Agency of Canada, ‘Evaluation of the National COVID-19 Exposure Notification App’ (Health Canada, 20 June 2022) www.canada.ca/en/health-canada/corporate/transparency/corporate-management-reporting/evaluation/covid-alert-national-covid-19-exposure-notification-app.html accessed 13 April 2023.

[88] J Ore, ‘Where did things go wrong with Canada’s COVID Alert App?’ (CBC, 9 February 2022) www.cbc.ca/radio/costofliving/from-boycott-to-bust-we-talk-spotify-and-neil-young-and-take-a-look-at-covid-alert-app-1.6339708/where-did-things-go-wrong-with-canada-s-covid-alert-app-1.6342632 accessed 13 April 2023.

[89] Office of Audit and Evaluation (Health Canada) and the Public Health Agency of Canada, ‘Evaluation of the National COVID-19 Exposure Notification App’ (Health Canada, 20 June 2022) www.canada.ca/en/health-canada/corporate/transparency/corporate-management-reporting/evaluation/covid-alert-national-covid-19-exposure-notification-app.html accessed 13 April 2023.

[90] S Landau, People Count: Contact-Tracing Apps and Public Health (The MIT Press, 2021).

[91] L Dowthwaite and others, ‘Public Adoption of and Trust in the NHS COVID-19 Contact Tracing App in the United Kingdom: Quantitative Online Survey Study’ (2021) 23:9 JMIR Publications e29085, 10.2196/29085.

[92] Ada Lovelace Institute, ‘Confidence in a crisis? Building public trust in a contact tracing app’ (2020) www.adalovelaceinstitute.org/report/confidence-in-crisis-building-public-trust-contact-tracing-app accessed 13 April 2023; ‘Provisos for a contact tracing app: The route to trustworthy digital contact tracing’ (2020) www.adalovelaceinstitute.org/evidence-review/provisos-covid-19-contact-tracing-app accessed 13 April 2023.

[93] C Bambra and others, ‘The COVID-19 Pandemic and Health Inequalities’ (2020) 74:11 Journal of Epidemiology & Community Health 964, http://dx.doi.org/10.1136/jech-2020-214401; E Yong, ‘The Pandemic’s Legacy Is Already Clear’ (The Atlantic, 30 September 2022) www.theatlantic.com/health/archive/2022/09/covid-pandemic-exposes-americas-failing-systems-future-epidemics/671608 accessed 13 April 2023.

[94] Ada Lovelace Institute, ‘Exit through the App Store? COVID-19 Rapid Evidence Review’ (2020) www.adalovelaceinstitute.org/evidence-review/covid-19-rapid-evidence-review-exit-through-the-app-store accessed 26 May 2023.

[95] L Marelli, K Kieslich and S Geiger, ‘COVID-19 and Techno-Solutionism: Responsibilization without Contextualization?’ (2022) 32:1 Critical Public Health 1, https://doi.org/10.1080/09581596.2022.2029192.

[96] S Landau, People Count: Contact-Tracing Apps and Public Health (The MIT Press, 2021).

[97] Government of Ireland, ‘COVID Tracker app’ www.covidtracker.ie accessed 31 March 2023.

[98]  S Landau, People Count: Contact-Tracing Apps and Public Health (The MIT Press, 2021).

[99] S Landau, People Count: Contact-Tracing Apps and Public Health (The MIT Press, 2021).

[100] S Landau, People Count: Contact-Tracing Apps and Public Health (The MIT Press, 2021).

[101] M Veale, ‘The English Law of QR Codes: Presence Tracing and Digital Divides’ (Lex-Atlas: Covid-19, 25 May 2021) https://lexatlas-c19.org/the-english-law-of-qr-codes accessed 31 March 2023.

[102] S Reed and others, ‘Tackling Covid-19: A Case for Better Financial Support to Self-Isolate’ (Nuffield Trust, 14 May 2021) www.nuffieldtrust.org.uk/research/tackling-covid-19-a-case-for-better-financial-support-to-self-isolate accessed 26 May 2023.

[103] Statista, ‘Internet user penetration in Nigeria from 2018 to 2027’ (June 2022) www.statista.com/statistics/484918/internet-user-reach-nigeria accessed 26 May 2023; G Razzano, ‘Privacy and the pandemic: An African response’ (Association For Progressive Communications, 21 June 2020) www.apc.org/en/pubs/privacy-and-pandemic-african-response accessed 31 March 2023.

[104] Ada Lovelace Institute, ‘Confidence in a Crisis? Building Public Trust in a Contact Tracing App’ (2020) www.adalovelaceinstitute.org/report/confidence-in-crisis-building-public-trust-contact-tracing-app accessed 26 May 2023; ‘Provisos for a Contact Tracing App: The Route to Trustworthy Digital Contact Tracing’ (2020) www.adalovelaceinstitute.org/evidence-review/provisos-covid-19-contact-tracing-app accessed 26 May 2023.

[105] Privacy International, ‘The principles of data protection: not new and actually quite familiar’ (24 September 2018) https://privacyinternational.org/news-analysis/2284/principles-data-protection-not-new-and-actually-quite-familiar accessed 31 March 2023; Ada Lovelace Institute, ‘Provisos for a contact tracing app: The route to trustworthy digital contact tracing’ (2020) www.adalovelaceinstitute.org/evidence-review/provisos-covid-19-contact-tracing-app accessed 26 May 2023; Ada Lovelace Institute, ‘Exit through the App Store? COVID-19 rapid evidence review’ (2020) www.adalovelaceinstitute.org/evidence-review/covid-19-rapid-evidence-review-exit-through-the-app-store accessed 26 May 2023.

[106] Ada Lovelace Institute, Checkpoints for vaccine passports (2021) www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports accessed 26 May 2023.

[107] Ada Lovelace Institute, ‘Exit through the App Store? COVID-19 rapid evidence review’ (2020) www.adalovelaceinstitute.org/evidence-review/covid-19-rapid-evidence-review-exit-through-the-app-store accessed 26 May 2023.

[108] TT Altshuler and RA Hershkovitz, ‘Digital Contact Tracing and the Coronavirus: Israeli and Comparative Perspectives’ (The Brookings Institution, August 2020) www.brookings.edu/wp-content/uploads/2020/08/FP_20200803_digital_contact_tracing.pdf accessed 31 March 2023.

[109] P Garrett and others, ‘High Acceptance of COVID-19 Tracing Technologies in Taiwan: A Nationally Representative Survey Analysis’ (2020) 19:5 International Journal of Environmental Research and Public Health 3323, 10.3390/ijerph19063323.

[110] TT Altshuler and RA Hershkovitz, ‘Digital Contact Tracing and the Coronavirus: Israeli and Comparative Perspectives’ (The Brookings Institution, August 2020) www.brookings.edu/wp-content/uploads/2020/08/FP_20200803_digital_contact_tracing.pdf accessed 31 March 2023.

[111] J Zhu, ‘The Personal Information Protection Law: China’s version of the GDPR?’ (Columbia Journal of Transnational Law, 14 February 2022) www.jtl.columbia.edu/bulletin-blog/the-personal-information-protection-law-chinas-version-of-the-gdpr accessed 26 May 2023; it is noteworthy that there were pre-existing privacy rules in place embedded in several laws and regulations; however, these were not enforced with adequate oversight capacity. See A Geller, ‘How Comprehensive Is Chinese Data Protection Law? A Systematisation of Chinese Data Protection Law from a European Perspective’ (2020) 69:12 GRUR International Journal of European and International IP Law 1191, https://doi.org/10.1093/grurint/ikaa136.

[112] H Yu, ‘Living in the Era of Codes: A Reflection on China’s Health Code System’ (2022) Biosocieties, 10.1057/s41292-022-00290-8.

[113] A Li, ‘Explainer: China’s Covid-19 Health Code System’ (Hong Kong Free Press, 13 July 2022) https://hongkongfp.com/2022/07/13/explainer-chinas-COVID-19-health-code-system accessed 31 March 2023; A Clarance, ‘Aarogya Setu: Why India’s Covid-19 contact tracing app is controversial’ (BBC News, 15 May 2020) www.bbc.co.uk/news/world-asia-india-52659520 accessed 31 March 2023; W Bin and others, ‘Depositors Are Forcibly Given Red Codes, the Latest Responses from All Parties’ (Southern Metropolis Daily, 14 June 2022) https://mp.weixin.qq.com/s/KAc8_3rCviqnVv05aQvSlw?fbclid=IwAR1xfMQtjZsRikz9vkisYxQBVAAkE9tgekKnMQ4nPaynr2BN9Ceyep3mjq8 accessed 13 April 2023.

[114] TT Altshuler and RA Hershkovitz, ‘Digital Contact Tracing and the Coronavirus: Israeli and Comparative Perspectives’ (The Brookings Institution, August 2020) www.brookings.edu/wp-content/uploads/2020/08/FP_20200803_digital_contact_tracing.pdf accessed 31 March 2023.

[115] ‘Lex-Atlas: Covid-19’ https://lexatlas-c19.org accessed 31 March 2023.

[116] A Clarance, ‘Aarogya Setu: Why India’s Covid-19 contact tracing app is controversial’ (BBC News, 15 May 2020) www.bbc.co.uk/news/world-asia-india-52659520 accessed 31 March 2023.

[117] Internet Freedom Foundation, ‘Statement: Victory! Aarogya Setu changes from mandatory to “best efforts”’ (18 May 2020) https://internetfreedom.in/aarogya-setu-victory accessed 26 May 2023.

[118] Evidence submitted to Ada Lovelace Institute by Internet Freedom Foundation, India.

[119] Norton Rose Fulbright, ‘Contact Tracing Apps: A New World for Data Privacy’ (February 2021) www.nortonrosefulbright.com/en/knowledge/publications/d7a9a296/contact-tracing-apps-a-new-world-for-data-privacy accessed 26 May 2023.

[120] T Klosowski, ‘The State of Consumer Data Privacy Laws in the US (and Why It Matters)’ (New York Times, 6 September 2021) www.nytimes.com/wirecutter/blog/state-of-privacy-laws-in-us accessed 26 May 2023.

[121] The Health Insurance Portability and Accountability Act is a federal law to protect sensitive patient health information, but contact tracing apps were not covered because they are not ‘regulated entities’ under the Act. Centers for Disease Control and Prevention, ‘Health Insurance Portability and Accountability Act of 1996 (HIPAA)’ https://www.cdc.gov/phlp/publications/topic/hipaa.html accessed 26 May 2023.

[122] Ada Lovelace Institute, ‘Exit through the App Store? COVID-19 rapid evidence review’ (2020) www.adalovelaceinstitute.org/evidence-review/covid-19-rapid-evidence-review-exit-through-the-app-store accessed 31 March 2023.

[123] P Valade, ‘Jumbo Privacy Review: North Dakota’s Contact Tracing App’ (Jumbo, 21 May 2020) https://blog.withjumbo.com/jumbo-privacy-review-north-dakota-s-contact-tracing-app.html accessed 31 March 2023.

[124]  Civil Liberties Union for Europe, ‘Do EU Governments Continue to Operate Contact Tracing Apps Illegitimately?’ (October 2021) https://dq4n3btxmr8c9.cloudfront.net/files/Nv4A36/DO_EU_GOVERNMENTS_CONTINUE_TO_OPERATE_CONTACT_TRACING_APPS_ILLEGITIMATELY.pdf accessed 31 March 2023.

[125] Fabio Chiusi and others, ‘Automating COVID Responses: The Impact of Automated Decision-Making on the COVID-19 Pandemic’ (AlgorithmWatch 2022) https://algorithmwatch.org/en/wp-content/uploads/2021/12/Tracing-The-Tracers-2021-report-AlgorithmWatch.pdf accessed 31 March 2023.

[126] A Hussain, ‘TraceTogether data used by police in one murder case: Vivian Balakrishnan’ (Yahoo! News, 5 January 2021) https://uk.style.yahoo.com/trace-together-data-used-by-police-in-one-murder-case-vivian-084954246.html?guccounter=2 accessed 31 March 2023.

[127] K Han, ‘COVID app triggers overdue debate on privacy in Singapore’ (Al Jazeera, 10 February 2021) www.aljazeera.com/news/2021/2/10/covid-app-triggers-overdue-debate-on-privacy-in-singapore accessed 31 March 2023.

[128] K Han, ‘COVID app triggers overdue debate on privacy in Singapore’ (Al Jazeera, 10 February 2021) www.aljazeera.com/news/2021/2/10/covid-app-triggers-overdue-debate-on-privacy-in-singapore accessed 31 March 2023.

[129] S Hilberg, ‘The new German Privacy Act: An overview’ (Deloitte) www2.deloitte.com/dl/en/pages/legal/articles/neues-bundesdatenschutzgesetz.html accessed 26 May 2023.

[130] Civil Liberties Union for Europe, ‘Do EU Governments Continue to Operate Contact Tracing Apps Illegitimately?’ (October 2021) https://dq4n3btxmr8c9.cloudfront.net/files/Nv4A36/DO_EU_GOVERNMENTS_CONTINUE_TO_OPERATE_CONTACT_TRACING_APPS_ILLEGITIMATELY.pdf accessed 31 March 2023.

[131] H Heine, ‘Check-In feature: Corona-Warn-App can now scan luca’s QR codes’ (Corona Warn-app Open Source Project, 9 November 2021) www.coronawarn.app/en/blog/2021-11-09-cwa-luca-qr-codes accessed 26 May 2023.

[132] Fabio Chiusi and others, ‘Automating COVID Responses: The Impact of Automated Decision-Making on the COVID-19 Pandemic’ (AlgorithmWatch 2022) https://algorithmwatch.org/en/wp-content/uploads/2021/12/Tracing-The-Tracers-2021-report-AlgorithmWatch.pdf accessed 26 May 2023.

[133] M Knodel, ‘Public Health, Big Tech, and Privacy: Multistakeholder Governance and Technology-Assisted Contact tracing’ (Global Insights, January 2021) www.ned.org/wp-content/uploads/2021/01/Public-Health-Big-Tech-Privacy-Contact-Tracing-Knodel.pdf accessed 16 April 2023.

[134] M Veale, ‘Opinion: Privacy is not the problem with the Apple-Google contact tracing app’ (UCL News, 1 July 2020) www.ucl.ac.uk/news/2020/jul/opinion-privacy-not-problem-apple-google-contact-tracing-app accessed 31 March 2023.

[135] Ada Lovelace Institute, Rethinking Data and Rebalancing Digital Power (2022) www.adalovelaceinstitute.org/report/rethinking-data accessed 16 April 2023.

[136] H Mance, ‘Shoshana Zuboff: “Privacy Has Been Extinguished. It Is Now a Zombie”’ (Financial Times, 30 January 2023) www.ft.com/content/0cca6054-6fc9-4a94-b2e2-890c50d956d5#myft:my-news:page accessed 16 April 2023.

[137] M Knodel, ‘Public Health, Big Tech, and Privacy: Multistakeholder Governance and Technology-Assisted Contact tracing’ (Global Insights, January 2021) www.ned.org/wp-content/uploads/2021/01/Public-Health-Big-Tech-Privacy-Contact-Tracing-Knodel.pdf accessed 16 April 2023.

[138] GOVLAB and Knight Foundation, ‘The #Data4Covid19 Review’ https://review.data4covid19.org accessed 16 April 2023.

[139] Ada Lovelace Institute, ‘Exit through the App Store? COVID-19 rapid evidence review’ (2020) www.adalovelaceinstitute.org/evidence-review/covid-19-rapid-evidence-review-exit-through-the-app-store accessed 16 April 2023.

[140] Ada Lovelace Institute, Checkpoints for vaccine passports (2021) www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports accessed 16 April 2023.

[141] World Health Organization, ‘Estonia and WHO to jointly develop digital vaccine certificate to strengthen COVAX’ (WHO, 7 October 2020) www.who.int/news-room/feature-stories/detail/estonia-and-who-to-jointly-develop-digital-vaccine-certificate-to-strengthen-covax accessed 16 April 2023.

[142] World Health Organization, ‘Estonia and WHO to jointly develop digital vaccine certificate to strengthen COVAX’ (WHO, 7 October 2020) www.who.int/news-room/feature-stories/detail/estonia-and-who-to-jointly-develop-digital-vaccine-certificate-to-strengthen-covax accessed 16 April 2023.

[143] Pfizer, ‘Pfizer and BioNTech announce vaccine candidate against COVID-19 achieved success in first interim analysis from Phase 3 Study’ (9 November 2020) www.pfizer.com/news/press-release/press-release-detail/pfizer-and-biontech-announce-vaccine-candidate-against accessed 16 April 2023.

[144] NHS England, ‘Landmark moment as first NHS patient receives COVID-19 vaccination’ (NHS England News, December 2020) www.england.nhs.uk/2020/12/landmark-moment-as-first-nhs-patient-receives-COVID-19-vaccination accessed 12 April 2023.

[145] H Davidson, ‘China Approves Sinopharm Covid-19 Vaccine for General Use’ (Guardian, 31 December 2020) www.theguardian.com/world/2020/dec/31/china-approves-sinopharm-covid-19-vaccine-for-general-use accessed 12 April 2023.

[146] NHS England, ‘Landmark moment as first NHS patient receives COVID-19 vaccination’ (NHS England News, December 2020) www.england.nhs.uk/2020/12/landmark-moment-as-first-nhs-patient-receives-COVID-19-vaccination accessed 12 April 2023.

[147] Y Noguchi, ‘The history of vaccine passports in the US and what’s new’ (NPR, 8 April 2021) www.npr.org/2021/04/08/985253421/the-history-of-vaccine-passports-in-the-u-s-and-whats-new accessed 12 April 2023.

[148] Ada Lovelace Institute, ‘COVID-19 Data Explorer: Policies, Practices and Technology’ (2023) https://covid19.adalovelaceinstitute.org accessed 31 May 2023.

[149] Y Noguchi, ‘The history of vaccine passports in the US and what’s new’ (NPR, 8 April 2021) www.npr.org/2021/04/08/985253421/the-history-of-vaccine-passports-in-the-u-s-and-whats-new accessed 12 April 2023.

[150] K Teyras, ‘Covid-19 health passes can open the door to a digital ID revolution’ (Thales, 30 November 2021) https://dis-blog.thalesgroup.com/identity-biometric-solutions/2021/06/23/covid-19-health-passes-can-open-the-door-to-a-digital-id-revolution accessed 12 April 2023; Privacy International, ‘Covid-19 vaccination certificates: WHO sets minimum demands, governments must do even better’ (9 August 2021) https://privacyinternational.org/advocacy/4607/covid-19-vaccination-certificates-who-sets-minimum-demands-governments-must-do-even accessed 12 April 2023.

[151] S Davidson, ‘How vaccine passports could change digital identity’ (DigiCert, 6 November 2021) www.digicert.com/blog/how-vaccine-passports-could-change-digital-identity accessed 12 April 2023.

[152] Ada Lovelace Institute, ‘International monitor: vaccine passports and COVID-19 status apps’ (2021) www.adalovelaceinstitute.org/resource/international-monitor-vaccine-passports-and-covid-19-status-apps/ accessed 12 April 2023.

[153] F Kritz, ‘The vaccine passport debate actually began in 1897 over a plague vaccine’ (NPR, 8 April 2021) www.npr.org/sections/goatsandsoda/2021/04/08/985032748/the-vaccine-passport-debate-actually-began-in-1897-over-a-plague-vaccine accessed 12 April 2023.

[154] F Kritz, ‘The vaccine passport debate actually began in 1897 over a plague vaccine’ (NPR, 8 April 2021) www.npr.org/sections/goatsandsoda/2021/04/08/985032748/the-vaccine-passport-debate-actually-began-in-1897-over-a-plague-vaccine accessed 12 April 2023.

[155] Ada Lovelace Institute, Checkpoints for vaccine passports (2021) www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports accessed 12 April 2023.

[156] ibid.

[157] S Subramanian, ‘Biometric tracking can ensure billions have immunity against Covid-19’ (Bloomberg, 13 August 2020) www.bloomberg.com/features/2020-COVID-vaccine-tracking-biometric accessed 13 April 2023.

[158] Ada Lovelace Institute, Checkpoints for vaccine passports (2021) www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports accessed 12 April 2023.

[159] See ‘The legacy of COVID-19 technologies?: Outstanding questions’ section, p. 118.

[160] Ada Lovelace Institute, ‘COVID-19 Data Explorer: Policies, Practices and Technology’ (2023) https://covid19.adalovelaceinstitute.org accessed 31 May 2023.

[161] World Health Organization, ‘Rapidly escalating Covid-19 cases amid reduced virus surveillance forecasts a challenging autumn and winter in the WHO European Region’ (WHO, 19 July 2022) www.who.int/europe/news/item/19-07-2022-rapidly-escalating-COVID-19-cases-amid-reduced-virus-surveillance-forecasts-a-challenging-autumn-and-winter-in-the-who-european-region accessed 12 April 2023.

[162] F Kritz, ‘The vaccine passport debate actually began in 1897 over a plague vaccine’ (NPR, 8 April 2021) www.npr.org/sections/goatsandsoda/2021/04/08/985032748/the-vaccine-passport-debate-actually-began-in-1897-over-a-plague-vaccine accessed 12 April 2023.

[163] The New Zealand government abandoned its zero-COVID elimination strategy in October 2021, easing lockdowns before later reopening its borders. See J Curtin, ‘The end of New Zealand’s zero-COVID policy’ (Think Global Health, 28 October 2021) www.thinkglobalhealth.org/article/end-new-zealands-zero-COVID-policy accessed 12 April 2023.

[164] Reuters, ‘Brazil health regulator asks Bolsonaro to retract criticism over vaccines’ (9 January 2022) www.reuters.com/business/healthcare-pharmaceuticals/brazil-health-regulator-asks-bolsonaro-retract-criticism-over-vaccines-2022-01-09 accessed 12 April 2023.

[165] Al Jazeera, ‘Brazil judge mandates proof of vaccination for foreign visitors’ (12 December 2021) www.aljazeera.com/news/2021/12/12/brazil-justice-mandates-vaccine-passport-for-visitors accessed 12 April 2023.

[166] Y Noguchi, ‘The history of vaccine passports in the US and what’s new’ (NPR, 8 April 2021) www.npr.org/2021/04/08/985253421/the-history-of-vaccine-passports-in-the-u-s-and-whats-new accessed 12 April 2023.

[167] M Bull, ‘The Italian Government Response to Covid-19 and the Making of a Prime Minister’ (2021) 13:2 Contemporary Italian Politics 149, https://doi.org/10.1080/23248823.2021.1914453.

[168] A Peacock, ‘What is the Covid “Super Green” pass?’ (Tuscany Now & More) www.tuscanynowandmore.com/discover-italy/essential-advice/travelling-italy-COVID-green-pass accessed 12 April 2023.

[169] Ada Lovelace Institute, ‘COVID-19 Data Explorer: Policies, Practices and Technology’ (May 2023) https://covid19.adalovelaceinstitute.org accessed 31 May 2023.

[170] DF Povse, ‘Examining the pros and cons of digital COVID certificates in the EU’ (Ada Lovelace Institute, 15 December 2022) www.adalovelaceinstitute.org/blog/examining-digital-covid-certificates-eu accessed 12 April 2023.

[171] Ada Lovelace Institute, ‘What place should COVID-19 vaccine passports have in society?’ (17 February 2021) www.adalovelaceinstitute.org/report/covid-19-vaccine-passports accessed 31 March 2023.

[172] Ada Lovelace Institute, ‘International Monitor: vaccine passports and COVID-19 status apps’ (15 October 2021) www.adalovelaceinstitute.org/resource/international-monitor-vaccine-passports-and-covid-19-status-apps/ accessed 31 March 2023.

[173] S Amaro, ‘France’s Macron sparks outrage as he vows to annoy the unvaccinated’ (CNBC, 5 January 2022) www.cnbc.com/2022/01/05/macron-french-president-wants-to-annoy-the-unvaccinated-.html accessed 12 April 2023.

[174] G Vergallo and others, ‘Does the EU COVID Digital Certificate Strike a Reasonable Balance between Mobility Needs and Public Health?’ (2021) 57:10 Medicina (Kaunas) 1077, https://doi.org/10.3390/medicina57101077.

[175] C Franco-Paredes, ‘Transmissibility of SARS-CoV-2 among Fully Vaccinated Individuals’ (2022) 22:1 The Lancet Infectious Diseases 16, https://doi.org/10.1016/S1473-3099(21)00768-4.

[176] World Health Organization, ‘Information for the public: COVID-19 vaccines’ (WHO, 18 November 2022) www.who.int/westernpacific/emergencies/covid-19/information-vaccines accessed 1 June 2023.

[177] World Health Organization, ‘Vaccine efficacy, effectiveness and protection’ (WHO, 14 July 2021) www.who.int/news-room/feature-stories/detail/vaccine-efficacy-effectiveness-and-protection accessed 12 April 2023; A Allen, ‘Pfizer CEO pushes yearly shots for Covid: Not so fast, experts say’ (KFF Health News, 21 March 2022) https://kffhealthnews.org/news/article/pfizer-ceo-albert-bourla-yearly-COVID-shots accessed 31 March 2023.

[178] World Health Organization, ‘Tracking SARS-CoV-2 variants’ www.who.int/activities/tracking-SARS-CoV-2-variants accessed 31 March 2023.

[179] G Warren and R Lofstedt, ‘Risk Communication and COVID-19 in Europe: Lessons for Future Public Health Crises’ (2021) 25:10 Journal of Risk Research 1161, https://doi.org/10.1080/13669877.2021.1947874.

[180] DF Povse, ‘Examining the pros and cons of digital COVID certificates in the EU’ (Ada Lovelace Institute, 15 December 2022) www.adalovelaceinstitute.org/blog/examining-digital-covid-certificates-eu accessed 31 March 2023.

[181] M Sallam, ‘COVID-19 Vaccine Hesitancy Worldwide: A Concise Systematic Review of Vaccine Acceptance Rates’ (2021) 9:2 Vaccines 160, https://doi.org/10.3390/vaccines9020160.

[182] SuperJob, ‘Most often, the introduction of QR codes is approved at mass events, least often – in non-food stores, but 4 out of 10 Russians are against any QR codes’ (16 November 2021) www.superjob.ru/research/articles/113182/chasche-vsego-vvod-qr-kodov-odobryayut-na-massovyh-meropriyatiyah accessed 31 March 2023.

[183] G Salau, ‘How vaccine cards are procured without jabs’ (The Guardian [Nigeria], 23 December 2021) https://guardian.ng/features/how-vaccine-cards-are-procured-without-jabs accessed 26 May 2023; E de Bre, ‘Fake COVID-19 vaccination cards emerge in Russia’ (Organized Crime and Corruption Reporting Project, 30 June 2021) www.occrp.org/en/daily/14733-fake-COVID-19-vaccination-cards-emerge-in-russia accessed 31 March 2023.

[184] J Ceulaer, ‘Viroloog Emmanuel Andre: “Covid Safe Ticket leidde tot meer besmettingen”’ [Virologist Emmanuel Andre: ‘The Covid Safe Ticket led to more infections’] (De Morgen, 29 November 2021) www.demorgen.be/nieuws/viroloog-emmanuel-andre-covid-safe-ticket-leidde-tot-meer-besmettingen~bae41a3e/ accessed 12 April 2023.

[185] B Gilmore and others, ‘Community Engagement to Support COVID-19 Vaccine Uptake: A Living Systematic Review Protocol’ (2022) 12 BMJ Open e063057, https://doi.org/10.1136/bmjopen-2022-063057.

[186] AD Bourhanbour and O Ouchetto, ‘Morocco Achieves the Highest COVID-19 Vaccine Rates in Africa in the First Phase: What Are Reasons for Its Success?’ (2021) 28:4 Journal of Travel Medicine taab040, https://doi.org/10.1093/jtm/taab040.

[187] Ada Lovelace Institute, Checkpoints for vaccine passports (2021) www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports accessed 26 May 2023.

[188] Ada Lovelace Institute, Checkpoints for vaccine passports (2021) www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports accessed 26 May 2023.

[189] K Beaver, G Skinner and A Quigley, ‘Majority of Britons support vaccine passports but recognise concerns in new Ipsos UK Knowledge Panel poll’ (Ipsos, 31 March 2021) www.ipsos.com/en-uk/majority-britons-support-vaccine-passports-recognise-concerns-new-ipsos-uk-knowledgepanel-poll accessed 12 April 2023.

[190] H Kennedy, ‘The vaccine passport debate reveals fundamental views about how personal data should be used, its role in reproducing inequalities, and the kind of society we want to live in’ (LSE, 12 August 2021) https://blogs.lse.ac.uk/impactofsocialsciences/2021/08/12/the-vaccine-passport-debate-reveals-fundamental-views-about-how-personal-data-should-be-used-its-role-in-reproducing-inequalities-and-the-kind-of-society-we-want-to-live-in accessed 26 May 2023.

[191] C Brogan, ‘Vaccine passports linked to COVID-19 vaccine hesitancy in UK and Israel’ (Imperial College London, 2 September 2021) www.imperial.ac.uk/news/229153/vaccine-passports-linked-covid-19-vaccine-hesitancy accessed 12 April 2023.

[192] J Drury, ‘Behavioural Responses to Covid-19 Health Certification: A Rapid Review’ (2021) 21 BMC Public Health 1205, https://doi.org/10.1186/s12889-021-11166-0; JR de Waal, ‘One year on: Global update on public attitudes to government handling of Covid’ (YouGov, 19 November 2021) https://yougov.co.uk/topics/international/articles-reports/2021/11/19/one-year-global-update-public-attitudes-government accessed 12 April 2023.

[193] H Kennedy, ‘The vaccine passport debate reveals fundamental views about how personal data should be used, its role in reproducing inequalities, and the kind of society we want to live in’ (LSE, 12 August 2021) https://blogs.lse.ac.uk/impactofsocialsciences/2021/08/12/the-vaccine-passport-debate-reveals-fundamental-views-about-how-personal-data-should-be-used-its-role-in-reproducing-inequalities-and-the-kind-of-society-we-want-to-live-in accessed 12 April 2023.

[194] Ada Lovelace Institute, Checkpoints for vaccine passports (2021) www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports accessed 12 April 2023.

[195] H Kennedy, ‘The vaccine passport debate reveals fundamental views about how personal data should be used, its role in reproducing inequalities, and the kind of society we want to live in’ (LSE, 12 August 2021) https://blogs.lse.ac.uk/impactofsocialsciences/2021/08/12/the-vaccine-passport-debate-reveals-fundamental-views-about-how-personal-data-should-be-used-its-role-in-reproducing-inequalities-and-the-kind-of-society-we-want-to-live-in accessed 12 April 2023.

[196] Ada Lovelace Institute, Checkpoints for vaccine passports (2021) www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports accessed 12 April 2023.

[197] Ada Lovelace Institute, ‘COVID-19 Data Explorer: Policies, Practices and Technology’ (May 2023) https://covid19.adalovelaceinstitute.org accessed 31 May 2023.

[198] B Bell, ‘Covid: Austrians heading towards lockdown for unvaccinated’ (BBC News, 12 November 2021) www.bbc.co.uk/news/world-europe-59245018 accessed 12 April 2023.

[199] B Bell, ‘Covid: Austrians heading towards lockdown for unvaccinated’ (BBC News, 12 November 2021) www.bbc.co.uk/news/world-europe-59245018 accessed 12 April 2023.

[200] Simmons + Simmons, ‘COVID-19 Italy: An easing of covid restrictions’ (1 May 2022) www.simmons-simmons.com/en/publications/ckh3mbdvv151g0a03z6mgt3dr/covid-19-decree-brings-strict-restrictions-for-italy accessed 12 April 2023.

[201] E de Bre, ‘Fake COVID-19 vaccination cards emerge in Russia’ (Organized Crime and Corruption Reporting Project, 30 June 2021) www.occrp.org/en/daily/14733-fake-COVID-19-vaccination-cards-emerge-in-russia accessed 31 March 2023.

[202] Ada Lovelace Institute, Checkpoints for vaccine passports (2021) www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports accessed 26 May 2023.

[203] Health Pass, ‘Sıkça Sorulan Sorular’ [Frequently Asked Questions] https://healthpass.saglik.gov.tr/sss.html accessed 12 April 2023.

[204] S Dwivedi, ‘“No one can be forced to get vaccinated”: Supreme Court’s big order’ (NDTV, 2 May 2022) www.ndtv.com/india-news/coronavirus-no-one-can-be-forced-to-get-vaccinated-says-supreme-court-adds-current-vaccine-policy-cant-be-said-to-be-unreasonable-2938319 accessed 12 April 2023.

[205] NHS, ‘NHS COVID Pass’ www.nhs.uk/nhs-services/covid-19-services/nhs-covid-pass accessed 12 May 2021.

[206] Our World in Data, ‘Coronavirus (COVID-19) Vaccinations’ https://ourworldindata.org/COVID-vaccinations?country=OWID_WRL accessed 12 April 2023.

[207] Harvard Global Health Institute, ‘From Ebola to COVID-19: Lessons in digital contact tracing in Sierra Leone’ (1 September 2020) https://globalhealth.harvard.edu/from-ebola-to-covid-19-lessons-in-digital-contact-tracing-in-sierra-leone accessed 26 May 2023.

[208] Ada Lovelace Institute, Checkpoints for vaccine passports (2021) www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports accessed 26 May 2023. Riaz and colleagues define vaccine nationalism as ‘an economic strategy to hoard vaccinations from manufacturers and increase supply in their own country’. See M Riaz and others, ‘Global Impact of Vaccine Nationalism during COVID-19 Pandemic’ (2021) 49 Tropical Medicine and Health 101, https://doi.org/10.1186/s41182-021-00394-0.

[209] E Racine, ‘Understanding COVID-19 certificates in the context of recent health securitisation trends’ (Ada Lovelace Institute, 9 March 2023) www.adalovelaceinstitute.org/blog/covid-certificates-health-securitisation accessed 26 May 2023.

[210] E Racine, ‘Understanding COVID-19 certificates in the context of recent health securitisation trends’ (Ada Lovelace Institute, 9 March 2023) www.adalovelaceinstitute.org/blog/covid-certificates-health-securitisation accessed 26 May 2023.

[211] J Atick, ‘Covid vaccine passports are important but could they also create more global inequality?’ (Euro News, 17 August 2021) www.euronews.com/next/2021/08/16/covid-vaccine-passports-are-important-but-could-they-also-create-more-global-inequality accessed 12 April 2023.

[212] E Racine, ‘Understanding COVID-19 certificates in the context of recent health securitisation trends’ (Ada Lovelace Institute, 9 March 2023) www.adalovelaceinstitute.org/blog/covid-certificates-health-securitisation accessed 12 April 2023.

[213] A Suarez-Alvarez and AJ Lopez-Menendez, ‘Is COVID-19 Vaccine Inequality Undermining the Recovery from the COVID-19 Pandemic?’ (2022) 12 Journal of Global Health 05020, https://doi.org/10.7189/jogh.12.05020. Share of vaccinated people refers to the total number of people who received all doses prescribed by the initial vaccination protocol, divided by the total population of the country.

[214] Ada Lovelace Institute, ‘COVID-19 Data Explorer: Policies, Practices and Technology’ (May 2023) https://covid19.adalovelaceinstitute.org accessed 31 May 2023.

[215]  ibid.

[216] World Health Organization, ‘COVAX: Working for global equitable access to COVID-19 vaccines’ www.who.int/initiatives/act-accelerator/covax accessed 12 April 2023.

[217] European Commission, ‘Team Europe contributes €500 million to COVAX initiative to provide one billion COVID-19 vaccine doses for low and middle income countries’ (15 December 2020) https://ec.europa.eu/commission/presscorner/detail/en/ip_20_2262 accessed 12 April 2023.

[218] J Holder, ‘Tracking Coronavirus Vaccinations Around the World’ (New York Times, 2023) www.nytimes.com/interactive/2021/world/covid-vaccinations-tracker.html accessed 12 April 2023.

[219] European Council, ‘EU digital COVID certificate: how it works’ www.consilium.europa.eu/en/policies/coronavirus/eu-digital-covid-certificate/.

[220] Ada Lovelace Institute, Checkpoints for vaccine passports (2021) www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports accessed 12 April 2023.

[221] A Gillwald and others, ‘Mobile phone data is useful in coronavirus battle: But are people protected enough?’ (The Conversation, 27 April 2020) https://theconversation.com/mobile-phone-data-is-useful-in-coronavirus-battle-but-are-people-protected-enough-136404 accessed 26 May 2023.

[222] Ada Lovelace Institute, Checkpoints for vaccine passports (2021) www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports accessed 26 May 2023.

[223] A Gillwald and others, ‘Mobile phone data is useful in coronavirus battle: But are people protected enough?’ (The Conversation, 27 April 2020) https://theconversation.com/mobile-phone-data-is-useful-in-coronavirus-battle-but-are-people-protected-enough-136404 accessed 26 May 2023.

[224] ABC News, ‘Brazil’s health ministry website hacked, vaccination information stolen and deleted’ (11 December 2021) www.abc.net.au/news/2021-12-11/brazils-national-vaccination-program-hacked-/100692952 accessed 12 April 2023; Z Whittaker, ‘Jamaica’s immigration website exposed thousands of travellers’ data’ (TechCrunch, 17 February 2021) https://techcrunch.com/2021/02/17/jamaica-immigration-travelers-data-exposed accessed 12 April 2023.

[225] Proportionality is a general principle in law which refers to striking a balance between the means used and the intended aim. See European Data Protection Supervisor, ‘Necessity and proportionality’ https://edps.europa.eu/data-protection/our-work/subjects/necessity-proportionality_en accessed 12 April 2023.

[226] Ada Lovelace Institute, Checkpoints for vaccine passports (2021) www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports accessed 26 May 2023.

[227] G Razzano, ‘Privacy and the pandemic: An African response’ (Association For Progressive Communications, 21 June 2020) www.apc.org/en/pubs/privacy-and-pandemic-african-response accessed 26 May 2023.

[228] A Gillwald and others, ‘Mobile phone data is useful in coronavirus battle: But are people protected enough?’ (The Conversation, 27 April 2020) https://theconversation.com/mobile-phone-data-is-useful-in-coronavirus-battle-but-are-people-protected-enough-136404 accessed 26 May 2023.

[229] European Commission, ‘Coronavirus: Commission proposes to extend the EU Digital COVID Certificate by one year’ (3 February 2022) https://ec.europa.eu/commission/presscorner/detail/en/ip_22_744 accessed 26 May 2023.

[230] A Hussain, ‘TraceTogether data used by police in one murder case: Vivian Balakrishnan’ (Yahoo! News, 5 January 2021) https://uk.style.yahoo.com/trace-together-data-used-by-police-in-one-murder-case-vivian-084954246.html?guccounter=2 accessed 30 March 2023; DW, ‘German police under fire for misuse of COVID app’ (11 January 2022) www.dw.com/en/german-police-under-fire-for-misuse-of-covid-contact-tracing-app/a-60393597 accessed 31 March 2023.

[231] Ada Lovelace Institute, Checkpoints for vaccine passports (2021) www.adalovelaceinstitute.org/report/checkpoints-for-vaccine-passports accessed 12 April 2023; ‘Confidence in a Crisis? Building Public Trust in a Contact Tracing App’ (17 August 2020) www.adalovelaceinstitute.org/report/confidence-in-crisis-building-public-trust-contact-tracing-app accessed 12 April 2023; ‘Exit through the App Store? COVID-19 Rapid Evidence Review’ (19 April 2020) www.adalovelaceinstitute.org/evidence-review/covid-19-rapid-evidence-review-exit-through-the-app-store accessed 12 April 2023.

[232] Ada Lovelace Institute, ‘COVID-19 Data Explorer: Policies, Practices and Technology’ (May 2023) https://covid19.adalovelaceinstitute.org accessed 31 May 2023.

[233] European Council, ‘European digital identity (eID): Council makes headway towards EU digital wallet, a paradigm shift for digital identity in Europe’ (6 December 2022) www.consilium.europa.eu/en/press/press-releases/2022/12/06/european-digital-identity-eid-council-adopts-its-position-on-a-new-regulation-for-a-digital-wallet-at-eu-level accessed 12 April 2023; Y Theodorou, ‘On the road to digital-ID success in Africa: Leveraging global trends’ (Tony Blair Institute, 13 June) www.institute.global/insights/tech-and-digitalisation/road-digital-id-success-africa-leveraging-global-trends accessed 12 April 2023.

[234] The Tawakkalna app is available at https://ta.sdaia.gov.sa/en/index; Saudi–US Trade Group, ‘United Nations recognizes Saudi Arabia’s Tawakkalna app with Public Service Award for 2022’ www.sustg.com/united-nations-recognizes-saudi-arabias-tawakkalna-app-with-public-service-award-for-2022 accessed 12 April 2023.

[235] Varindia, ‘Aarogya Setu has been transformed as nation’s health app’ (26 July 2022) https://varindia.com/news/aarogya-setu-has-been-transformed-as-nations-health-app accessed 13 April 2023.

[236] NHS England, ‘Digitising, connecting and transforming health and care’ www.england.nhs.uk/digitaltechnology/digitising-connecting-and-transforming-health-and-care accessed 13 April 2023.

[237] DHI News Team, ‘The role of a successful federated data platform programme’ (Digital Health, 27 September 2022) www.digitalhealth.net/2022/09/the-role-of-a-successful-federated-data-platform-programme accessed 12 April 2023; Department of Health and Social Care, ‘Better, broader, safer: Using health data for research and analysis’ (gov.uk, 7 April 2022) www.gov.uk/government/publications/better-broader-safer-using-health-data-for-research-and-analysis accessed 13 April 2023.

[238] N Sherman, ‘Palantir: The controversial data firm now worth £17bn’ (BBC News, 1 October 2020) www.bbc.co.uk/news/business-54348456 accessed 13 April 2023.

[239] C Handforth, ‘How digital can close the “identity gap”’ (UNDP, 19 May 2022) www.undp.org/blog/how-digital-can-close-identity-gap accessed 13 April 2023.

[240] L Muscato, ‘Why people don’t trust contact tracing apps, and what to do about it’ (Technology Review, 12 November 2020) www.technologyreview.com/2020/11/12/1012033/why-people-dont-trust-contact-tracing-apps-and-what-to-do-about-it accessed 31 March 2023; AWO, ‘Assessment of Covid-19 response in Brazil, Colombia, India, Iran, Lebanon and South Africa’ (29 July 2021) www.awo.agency/blog/covid-19-app-project accessed 13 April 2023; L Horvath and others, ‘Adoption and Continued Use of Mobile Contact Tracing Technology: Multilevel Explanations from a Three-Wave Panel Survey and Linked Data’ (2022) 12:1 BMJ Open e053327, https://doi.org/10.1136/bmjopen-2021-053327; A Kozyreva and others, ‘Psychological Factors Shaping Public Responses to COVID-19 Digital Contact Tracing Technologies in Germany’ (2021) 11 Scientific Reports 18716, https://doi.org/10.1038/s41598-021-98249-5; G Samuel and others, ‘COVID-19 Contact Tracing Apps: UK Public Perceptions’ (2022) 32:1 Critical Public Health 31, https://doi.org/10.1080/09581596.2021.1909707; M Caserotti and others, ‘Associations of COVID-19 Risk Perception with Vaccine Hesitancy Over Time for Italian Residents’ (2021) 272 Social Science & Medicine 113688, https://doi.org/10.1016/j.socscimed.2021.113688. Ada Lovelace Institute’s ‘Public attitudes to COVID-19, technology and inequality: A tracker’ summarises a wide range of studies and projects that offer insight into people’s attitudes and perspectives. See Ada Lovelace Institute, ‘Public attitudes to COVID-19, technology and inequality: A tracker’ (2021) www.adalovelaceinstitute.org/resource/public-attitudes-covid-19/ accessed 12 April 2023.

[241] Ada Lovelace Institute, ‘International monitor: vaccine passports and COVID-19 status apps’ (15 October 2021) www.adalovelaceinstitute.org/resource/international-monitor-vaccine-passports-and-covid-19-status-apps/ accessed 30 March 2023.

[242] Our World in Data, ‘Coronavirus Pandemic (COVID-19)’ https://ourworldindata.org/coronavirus accessed 31 May 2023.

[243] Our World in Data, ‘Coronavirus Pandemic (COVID-19)’ https://ourworldindata.org/coronavirus#explore-the-global-situation accessed 12 April 2023.

[244] University of Oxford, ‘COVID-19 Government Response Tracker’ www.bsg.ox.ac.uk/research/covid-19-government-response-tracker accessed 12 April 2023.

[245] Our World in Data, ‘Coronavirus Pandemic (COVID-19)’ https://ourworldindata.org/coronavirus#explore-the-global-situation accessed 12 April 2023.

[246] Our World in Data, ‘Coronavirus Pandemic (COVID-19)’ https://ourworldindata.org/coronavirus#explore-the-global-situation accessed 12 April 2023.

  1. Hancock, A. and Steer, G. (2021) ‘Johnson backtracks on vaccine “passport for pubs” after backlash’, Financial Times, 25 March 2021. Available at: https://www.ft.com/content/aa5e8372-8cec-4b82-96d8-0019f2f24998 (Accessed: 5 April 2021).
  2. Ada Lovelace Institute (2021) International monitor: vaccine passports and COVID status apps. Available at: https://www.adalovelaceinstitute.org/project/international-monitor-vaccine-passports-covid-status-apps/ (Accessed: 5 April 2021).
  3. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  4. Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021).
  5. Olivarius, K. (2020) ‘The Dangerous History of Immunoprivilege’, The New York Times. 12 April 2020. Available at: https://www.nytimes.com/2020/04/12/opinion/coronavirus-immunity-passports.html (Accessed: 6 April 2021).
  6. World Health Organization (ed.) (2016) International health regulations (2005). Third edition. Geneva, Switzerland: World Health Organization.
  7. Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021).
  8. Ada Lovelace Institute (2021) The epidemiological and economic impact of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=KRUmM-_Jjk4 (Accessed: 7 April 2021).
  9. Wilson, K., Atkinson, K. M. and Bell, C. P. (2016) ‘Travel Vaccines Enter the Digital Age: Creating a Virtual Immunization Record’, The American Journal of Tropical Medicine and Hygiene, 94(3), pp. 485–488. doi: 10.4269/ajtmh.15-0510
  10. Kobie, N. (2020) ‘Plans for coronavirus immunity passports should worry us all’, Wired UK, 8 June 2020. Available at: https://www.wired.co.uk/article/uk-immunity-passports-coronavirus (Accessed: 10 February 2021); Miller, J. (2020) ‘Armed with Roche antibody test, Germany faces immunity passport dilemma’, Reuters, 4 May 2020. Available at: https://www.reuters.com/article/health-coronavirus-germany-antibodies-idUSL1N2CM0WB (Accessed: 10 February 2021); Rayner, G. and Bodkin, H. (2020) ‘Government considering “health certificates” if proof of immunity established by new antibody test’, The Telegraph, 14 May 2020. Available at: https://www.telegraph.co.uk/politics/2020/05/14/government-considering-health-certificates-proof-immunity-established/ (Accessed: 10 February 2021).
  11. World Health Organisation (2020) “Immunity passports” in the context of COVID-19. Scientific Brief. 24 April 2020. Available at: https://www.who.int/news-room/commentaries/detail/immunity-passports-in-the-context-of-covid-19 (Accessed: 10 February 2021).
  12. Ministry of Health (2021) What is a Green Pass? Available at: https://corona.health.gov.il/en/directives/green-pass-info/ (Accessed: 6 April 2021).
  13. European Commission (2021) Coronavirus: Commission proposes a Digital Green Certificate, European Commission – European Commission. Available at: https://ec.europa.eu/commission/presscorner/detail/en/IP_21_1181 (Accessed: 6 April 2021).
  14. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021)
  15. World Health Organisation (2020) Estonia and WHO to jointly develop digital vaccine certificate to strengthen COVAX. Available at: https://www.who.int/news-room/feature-stories/detail/estonia-and-who-to-jointly-develop-digital-vaccine-certificate-to-strengthen-covax (Accessed: 6 April 2021). World Health Organisation (2020) World Health Organization open call for nomination of experts to contribute to the Smart Vaccination Certificate technical specifications and standards. Available at: https://www.who.int/news-room/articles-detail/world-health-organization-open-call-for-nomination-of-experts-to-contribute-to-the-smart-vaccination-certificate-technical-specifications-and-standards-application-deadline-14-december-2020 (Accessed: 6 April 2021). Reuters (2021) WHO does not back vaccination passports for now – spokeswoman. Available at: https://www.reuters.com/article/us-health-coronavirus-who-vaccines-idUKKBN2BT158 (Accessed: 13 April 2021).
  16. IBM (2021) Digital Health Pass – Overview. Available at: https://www.ibm.com/products/digital-health-pass (Accessed: 6 April 2021).
  17. Watson Health (2020) ‘IBM and Salesforce join forces to help deliver verifiable vaccine and health passes’, Watson Health Perspectives. Available at: https://www.ibm.com/blogs/watson-health/partnership-with-salesforce-verifiable-health-pass/ (Accessed: 6 April 2021).
  18. New York State (2021) Excelsior Pass. Available at: https://covid19vaccine.health.ny.gov/excelsior-pass (Accessed: 6 April 2021).
  19. CommonPass (2021) CommonPass. Available at: https://commonpass.org (Accessed: 7 April 2021). IATA (2021) IATA Travel Pass Initiative. Available at: https://www.iata.org/en/programs/passenger/travel-pass/ (Accessed: 7 April 2021).
  20. COVID-19 Credentials Initiative (2021). COVID-19 Credentials Initiative. Available at: https://www.covidcreds.org/ (Accessed: 7 April 2021). VCI (2021). Available at: https://vci.org/ (Accessed: 7 April 2021).
  21. myGP (2020) ‘“myGP” to launch England’s first digital COVID-19 vaccination verification feature for smartphones.’ myGP. 9 December 2020. Available at: https://www.mygp.com/mygp-to-launch-englands-first-digital-covid-19-vaccination-verification-feature-for-smartphones/ (Accessed: 7 April 2021). iProov (2021) Covid-19 Passport from iProov and Mvine Moves Into Trial Phase. Available at: https://www.iproov.com/press/uk-covid19-passport-moves-into-trial-phase (Accessed: 7 April 2021).
  22. BBC News (2020) ‘Covid-19: No plans for “vaccine passport” – Michael Gove’, BBC News. 1 December 2020. Available at: https://www.bbc.com/news/uk-55143484 (Accessed: 7 April 2021). BBC News (2021) ‘Covid: Minister rules out vaccine passports in UK’, BBC News. 7 February 2021. Available at: https://www.bbc.com/news/55970801 (Accessed: 7 April 2021).
  23. Sheridan, D. (2021) ‘Vaccine passports to enter shops, pubs and events “under consideration”’, The Telegraph, 14 February 2021. Available at: https://www.telegraph.co.uk/news/2021/02/14/vaccine-passports-enter-shops-pubs-events-consideration/ (Accessed: 7 April 2021). Zeffman, H. and Dathan, M. (2021) ‘Boris Johnson sees Covid vaccine passport app as route to freedom’, The Times, 11 February 2021. Available at: https://www.thetimes.co.uk/article/boris-johnson-sees-covid-vaccine-passport-app-as-route-to-freedom-rt07g63xn (Accessed: 7 April 2021).
  24. Boland, H. (2021) ‘Government funds eight vaccine passport schemes despite “no plans” for rollout’, The Telegraph, 24 January 2021. Available at: https://www.telegraph.co.uk/technology/2021/01/24/government-funds-eight-vaccine-passport-schemes-despite-no-plans/ (Accessed: 7 April 2021). Department of Health and Social Care (2020) Covid-19 Certification/Passport MVP. Available at: https://www.contractsfinder.service.gov.uk/notice/bf6eef14-6345-429a-a4e7-df68a39bd135 (Accessed: 13 April 2021). Hymas, C. and Diver, T. (2021) ‘Vaccine certificates being developed to unlock international travel’, The Telegraph, 12 February 2021. Available at: https://www.telegraph.co.uk/politics/2021/02/12/government-develop-COVID-vaccine-certificates-travel-abroad/ (Accessed: 7 April 2021).
  25. Cabinet Office (2021) COVID-19 Response – Spring 2021, GOV.UK. Available at: https://www.gov.uk/government/publications/COVID-19-response-spring-2021/COVID-19-response-spring-2021 (Accessed: 7 April 2021).
  26. Cabinet Office (2021) Roadmap Reviews: Update. Available at: https://www.gov.uk/government/publications/COVID-19-response-spring-2021-reviews-terms-of-reference/roadmap-reviews-update.
  27. Scientific Advisory Group for Emergencies (2021) ‘SAGE 79 minutes: Coronavirus (COVID-19) response, 4 February 2021’, GOV.UK. 22 February 2021, Available at: https://www.gov.uk/government/publications/sage-79-minutes-coronavirus-covid-19-response-4-february-2021 (Accessed: 6 April 2021).
  28. Ada Lovelace Institute (2021) The epidemiological and economic impact of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=KRUmM-_Jjk4 (Accessed: 7 April 2021)
  29. European Centre for Disease Prevention and Control (2021) Risk of SARS-CoV-2 transmission from newly-infected individuals with documented previous infection or vaccination. Available at: https://www.ecdc.europa.eu/en/publications-data/sars-cov-2-transmission-newly-infected-individuals-previous-infection (Accessed: 13 April 2021). Science News (2021) Moderna and Pfizer COVID-19 vaccines may block infection as well as disease. Available at: https://www.sciencenews.org/article/coronavirus-covid-vaccine-moderna-pfizer-transmission-disease (Accessed: 13 April 2021).
  30. Bonnefoy, P. and Londoño, E. (2021) ‘Despite Chile’s Speedy COVID-19 Vaccination Drive, Cases Soar’, The New York Times, 30 March 2021. Available at: https://www.nytimes.com/2021/03/30/world/americas/chile-vaccination-cases-surge.html (Accessed: 6 April 2021)
  31. Zimmer, C., Corum, J. and Wee, S.-L. (no date) ‘Coronavirus Vaccine Tracker’, The New York Times. Available at: https://www.nytimes.com/interactive/2020/science/coronavirus-vaccine-tracker.html (Accessed: 21 April 2021). Parker et al. (2021) An interactive website tracking COVID-19 vaccine development. Available at: https://vac-lshtm.shinyapps.io/ncov_vaccine_landscape/ (Accessed: 21 April 2021)
  32. BBC News (2021) ‘COVID: Oxford jab offers less S Africa variant protection’, BBC News. 7 February 2021. Available at: https://www.bbc.com/news/uk-55967767 (Accessed: 6 April 2021).
  33. Wise, J. (2021) ‘COVID-19: The E484K mutation and the risks it poses’, The BMJ, p. n359. doi: 10.1136/bmj.n359. Sample, I. (2021) ‘What do we know about the Indian coronavirus variant?’, The Guardian, 19 April 2021. Available at: https://www.theguardian.com/world/2021/apr/19/what-do-we-know-about-the-indian-coronavirus-variant (Accessed: 22 April)
  34. World Health Organisation (2021) Coronavirus disease (COVID-19): Vaccines. Available at: https://www.who.int/news-room/q-a-detail/coronavirus-disease-(COVID-19)-vaccines (Accessed: 6 April 2021)
  35. ibid.
  36. The Royal Society provides a different categorisation, between measures demonstrating the subject is not infectious (PCR and Lateral Flow tests) and those suggesting the subject is immune and so will not become infectious (antibody tests and vaccination). Edgar Whitley, a member of our expert deliberative panel, distinguishes between ‘red light’ measures, which say a person is potentially infectious and should self-isolate, and ‘green light’ ones, which say a person tests negative and is not infectious.
  37. Asai, T. (2020) ‘COVID-19: accurate interpretation of diagnostic tests—a statistical point of view’, Journal of Anesthesia. doi: 10.1007/s00540-020-02875-8.
  38. Kucirka, L. M. et al. (2020) ‘Variation in False-Negative Rate of Reverse Transcriptase Polymerase Chain Reaction–Based SARS CoV-2 Tests by Time Since Exposure’, Annals of Internal Medicine. doi: 10.7326/M2
  39. European Centre for Disease Prevention and Control (2021) Immune responses and immunity to SARS-CoV-2, European Centre for Disease Prevention and Control. Available at: https://www.ecdc.europa.eu/en/COVID-19/latest-evidence/immune-responses (Accessed: 10 February 2021).
  40. Ainsworth, M. et al. (2020) ‘Performance characteristics of five immunoassays for SARS-CoV-2: a head-to-head benchmark comparison’, The Lancet Infectious Diseases, 20(12), pp. 1390–1400. doi: 10.1016/S1473-3099(20)30634-4.
  41. European Centre for Disease Prevention and Control (2021) Immune responses and immunity to SARS-CoV-2, European Centre for Disease Prevention and Control. Available at: https://www.ecdc.europa.eu/en/COVID-19/latest-evidence/immune-responses (Accessed: 10 February 2021).
  42. Kellam, P. and Barclay, W. (2020) ‘The dynamics of humoral immune responses following SARS-CoV-2 infection and the potential for reinfection’, Journal of General Virology, 101(8), pp. 791–797. doi: 10.1099/jgv.0.001439.
  43. Drury, J. et al. (2021) Behavioural responses to Covid-19 health certification: A rapid review. 9 April 2021. Available at: https://www.medrxiv.org/content/10.1101/2021.04.07.21255072v1 (Accessed: 13 April 2021).
  44. ibid.
  45. Miller, B., Wain, R. and Alderman, G. (2021) ‘Introducing a Global COVID Travel Pass to Get the World Moving Again’, Tony Blair Institute for Global Change. Available at: https://institute.global/policy/introducing-global-COVID-travel-pass-get-world-moving-again (Accessed: 6 April 2021).
  46. World Health Organisation (2021) Interim position paper: considerations regarding proof of COVID-19 vaccination for international travellers. Available at: https://www.who.int/news-room/articles-detail/interim-position-paper-considerations-regarding-proof-of-COVID-19-vaccination-for-international-travellers (Accessed: 6 April 2021).
  47. World Health Organisation (2021) Call for public comments: Interim guidance for developing a Smart Vaccination Certificate – Release Candidate 1. Available at: https://www.who.int/news-room/articles-detail/call-for-public-comments-interim-guidance-for-developing-a-smart-vaccination-certificate-release-candidate-1 (Accessed: 6 April 2021).
  48. SPI-M-O (2020) Consensus statement on events and gatherings, 19 August 2020. Available at: https://www.gov.uk/government/publications/spi-m-o-consensus-statement-on-events-and-gatherings-19-august-2020 (Accessed: 13 April 2021)
  49. Patrick Gracey, Response to Ada Lovelace Institute call for evidence.
  50. Walker, P. (2021) ‘UK arts figures call for Covid certificates to revive industry’, The Guardian. 23 April 2021. Available at: http://www.theguardian.com/culture/2021/apr/23/uk-arts-figures-covid-certificates-revive-industry-letter (Accessed: 5 May 2021).
  51. Silverstone (2021) Summer sporting events support Covid certification. 9 April 2021. Available at: https://www.silverstone.co.uk/news/summer-sporting-events-support-covid-certification-review (Accessed: 22 April 2021).
  52. BBC News (2021) ‘Pimlico Plumbers to make workers get vaccinations’. BBC News. Available at: https://www.bbc.co.uk/news/business-55654229 (Accessed: 13 April 2021).
  53. Leadership and Worker Engagement Forum (2021) ‘Management of risk when planning work: The right priorities’, Leadership and worker involvement toolkit, p. 1. Available at: https://www.hse.gov.uk/construction/lwit/assets/downloads/hierarchy-risk-controls.pdf.
  54. Department of Health and Social Care (2021) ‘Consultation launched on staff COVID-19 vaccines in care homes with older adult residents’. GOV.UK. Available at: https://www.gov.uk/government/news/consultation-launched-on-staff-covid-19-vaccines-in-care-homes-with-older-adult-residents (Accessed: 14 April 2021)
  55. Full Fact (2021) Is there a precedent for mandatory vaccines for care home workers? Available at: https://fullfact.org/health/mandatory-vaccine-care-home-hepatitis-b/ (Accessed: 6 April 2021).
  56. House of Commons Work and Pensions Committee. (2021) Oral evidence: Health and Safety Executive HC 39. 17 March 2021. Available at: https://committees.parliament.uk/oralevidence/1910/pdf/ (Accessed: 6 April 2021). Q178
  57. Acas (2021) Getting the coronavirus (COVID-19) vaccine for work. [online] Available at: https://www.acas.org.uk/working-safely-coronavirus/getting-the-coronavirus-vaccine-for-work (Accessed: 6 April 2021).
  58. Pakes, A. (2020) ‘Workplace digital monitoring and surveillance: what are my rights?’, Prospect. Available at: https://prospect.org.uk/news/workplace-digital-monitoring-and-surveillance-what-are-my-rights/ (Accessed: 6 April 2021).
  59. Allegretti, A. and Booth, R. (2021) ‘Covid-status certificate scheme could be unlawful discrimination, says EHRC’. The Guardian. 14 April 2021. Available at: https://www.theguardian.com/world/2021/apr/14/covid-status-certificates-may-cause-unlawful-discrimination-warns-ehrc (Accessed: 14 April 2021).
  60. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  61. European Court of Human Rights (2014) Case of Brincat and Others v. Malta. Available at: http://hudoc.echr.coe.int/eng?i=001-145790 (Accessed: 6 April 2021).
  62. Ministry of Health (2021) What is a Green Pass? Available at: https://corona.health.gov.il/en/directives/green-pass-info/ (Accessed: 6 April 2021). Ministry of Health (2021) Traffic Light App for Businesses. Available at: https://corona.health.gov.il/en/directives/biz-ramzor-app/ (Accessed: 8 April 2021).
  63. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021)
  64. Beduschi, A. (2020) Digital Health Passports for COVID-19: Data Privacy and Human Rights Law. University of Exeter. Available at: https://socialsciences.exeter.ac.uk/media/universityofexeter/collegeofsocialsciencesandinternationalstudies/lawimages/research/Policy_brief_-_Digital_Health_Passports_COVID-19_-_Beduschi.pdf (Accessed: 6 April 2021).
  65. Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence
  66. ibid.
  67. Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence.
  68. Beduschi, A. (2020)
  69. European Court of Human Rights. (2020) Guide on Article 8 of the European Convention on Human Rights. Available at: https://www.echr.coe.int/documents/guide_art_8_eng.pdf (Accessed: 6 April 2021).
  70. Access Now, Response to Ada Lovelace Institute call for evidence
  71. Privacy International (2020) “Anytime and anywhere”: Vaccination passports, immunity certificates, and the permanent pandemic. Available at: http://privacyinternational.org/long-read/4350/anytime-and-anywhere-vaccination-passports-immunity-certificates-and-permanent (Accessed: 26 April 2021).
  72. Douglas, T. (2021) ‘Cross Post: Vaccine Passports: Four Ethical Objections, and Replies’. Practical Ethics. Available at: http://blog.practicalethics.ox.ac.uk/2021/03/cross-post-vaccine-passports-four-ethical-objections-and-replies/ (Accessed: 8 April 2021).
  73. Brown, R. C. H. et al. (2020) ‘Passport to freedom? Immunity passports for COVID-19’, Journal of Medical Ethics, 46(10), pp. 652–659. doi: 10.1136/medethics-2020-106365.
  74. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence; Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence
  75. Beduschi, A. (2020).
  76. Black, I. and Forsberg, L. (2021) ‘Inoculate to Imbibe? On the Pub Landlord Who Requires You to be Vaccinated against COVID’. Practical Ethics. Available at: http://blog.practicalethics.ox.ac.uk/2021/03/inoculate-to-imbibe/ (Accessed: 6 April 2021).
  77. Hindu Council UK (2021) Supporting Nationwide Vaccination Programme. 19 January 2021. Available at: http://www.hinducounciluk.org/2021/01/19/supporting-nationwide-vaccination-programme/ (Accessed: 6 April 2021); Ladaria Ferrer, L. and Morandi, G. (2020) ‘Note on the morality of using some anti-COVID-19 vaccines’. Vatican. Available at: https://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_con_cfaith_doc_20201221_nota-vaccini-antiCOVID_en.html (Accessed: 6 April 2021); Kadri, S. (2021) ‘For Muslims wary of the COVID vaccine: there’s every religious reason not to be’. The Guardian. 18 February 2021. Available at: http://www.theguardian.com/commentisfree/2021/feb/18/muslims-wary-COVID-vaccine-religious-reason (Accessed: 6 April 2021).
  78. Office for National Statistics (2021) Coronavirus and vaccination rates in people aged 50 years and over by socio-demographic characteristic, England: 8 December 2020 to 12 April 2021. 6 May 2021. Available from the Office for National Statistics website (ons.gov.uk).
  79. Schraer, R. (2021) ‘Covid: Black leaders fear racist past feeds mistrust in vaccine’. BBC News. 6 May 2021. Available at: https://www.bbc.co.uk/news/health-56813982 (Accessed: 7 May 2021).
  80. Allegretti, A. and Booth, R. (2021).
  81. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  82. Black, I. and Forsberg, L. (2021).
  83. Beduschi, A. (2020).
  84. Thomas, N. (2021) ‘Vaccine passports: path back to normality or problem in the making?’, Reuters, 5 February 2021. Available at: https://www.reuters.com/article/us-health-coronavirus-britain-vaccine-pa-idUSKBN2A4134 (Accessed: 6 April 2021).
  85. Buolamwini, J. and Gebru, T. (2018) ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’, in Conference on Fairness, Accountability and Transparency. PMLR, pp. 77–91. Available at: http://proceedings.mlr.press/v81/buolamwini18a.html (Accessed: 6 April 2021).
  86. Kofler, N. and Baylis, F. (2020) ‘Ten reasons why immunity passports are a bad idea’, Nature, 581(7809), pp. 379–381. doi: 10.1038/d41586-020-01451-0.
  87. ibid.
  88. Olivarius, K. (2019) ‘Immunity, Capital, and Power in Antebellum New Orleans’, The American Historical Review, 124(2), pp. 425–455. doi: 10.1093/ahr/rhz176.
  89. Access Now, Response to Ada Lovelace Institute call for evidence.
  90. Dr Btihaj Ajana, Response to Ada Lovelace Institute call for evidence.
  91. Pai, M. (2021) ‘How Vaccine Passports Will Worsen Inequities in Global Health’, Nature Portfolio Microbiology Community. Available at: http://naturemicrobiologycommunity.nature.com/posts/how-vaccine-passports-will-worsen-inequities-in-global-health (Accessed: 6 April 2021).
  92. Merrick, J. (2021) ‘New variants will “come back to haunt” the UK unless it helps tackle worldwide transmission’, iNews, 23 April 2021. Available at: https://inews.co.uk/news/politics/new-variants-will-come-back-to-haunt-the-uk-unless-it-helps-tackle-worldwide-transmission-971041 (Accessed: 5 May 2021).
  93. Kuchler, H. and Williams, A. (2021) ‘Vaccine makers say IP waiver could hand technology to China and Russia’, Financial Times, 25 April 2021. Available at: https://www.ft.com/content/fa1e0d22-71f2-401f-9971-fa27313570ab (Accessed: 5 May 2021).
  94. Digital, Culture, Media and Sport Committee Sub-Committee on Online Harms and Disinformation (2021). Oral evidence: Online harms and the ethics of data, HC 646. 26 January 2021. Available at: https://committees.parliament.uk/oralevidence/1586/html/ (Accessed: 9 April 2021).
  95. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  96. A principle that argues reforms should not be made until the reasoning behind the existing state of affairs is understood. It is inspired by G. K. Chesterton’s The Thing (1929), which argues that an intelligent reformer would not remove a fence until they knew why it was put up in the first place.
  97. Pietropaoli, I. (2021) ‘Part 2: Getting Digital Health Passports Right? Legal, Ethical and Equality Considerations’. British Institute of International and Comparative Law. 1 April 2021. Available at: https://www.biicl.org/blog/23/part-2-getting-digital-health-passports-right-legal-ethical-and-equality-considerations (Accessed: 6 April 2021).
  98. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  99. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  100. Ada Lovelace Institute (2021) International monitor: vaccine passports and COVID status apps. Available at: https://www.adalovelaceinstitute.org/project/international-monitor-vaccine-passports-covid-status-apps/ (Accessed: 5 April 2021).
  101. Pew Research Center (2020) 8 charts on internet use around the world as countries grapple with COVID-19. Available at: https://www.pewresearch.org/fact-tank/2020/04/02/8-charts-on-internet-use-around-the-world-as-countries-grapple-with-covid-19/ (Accessed: 13 April 2021).
  102. Ada Lovelace Institute (2021) The data divide. Available at: https://www.adalovelaceinstitute.org/survey/data-divide/ (Accessed: 6 April 2021).
  103. Pew Research Center (2020).
  104. Electoral Commission (2015) Delivering and costing a proof of identity scheme for polling station voters in Great Britain. Available at: https://www.electoralcommission.org.uk/media/1825 (Accessed: 13 April 2021); Davies, C. (2021). ‘Number of young people with driving licence in Great Britain at lowest on record’, The Guardian. 5 April 2021. Available at: https://www.theguardian.com/money/2021/apr/05/number-of-young-people-with-driving-licence-in-great-britain-at-lowest-on-record (Accessed: 6 May 2021).
  105. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  106. NHS Digital. (2021) NHS e-Referral Service integrated into the NHS App to make managing referrals easier. Available at: https://digital.nhs.uk/news-and-events/latest-news/nhs-e-referral-service-integrated-into-the-nhs-app-to-make-managing-referrals-easier (Accessed: 28 April 2021).
  107. Access Now, Response to Ada Lovelace Institute call for evidence.
  108. For example, see: Mvine at Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021); evidence submitted to the Ada Lovelace Institute from Certus, IOTA, ZAKA, Tony Blair Institute for Global Change, SICPA, Yoti, Good Health Pass.
  109. Danish Government (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 13 April 2021)
  110. Danish Government (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 13 April 2021)
  111. Ada Lovelace Institute (2021) The Citizens’ Biometrics Council. Available at: https://www.adalovelaceinstitute.org/project/citizens-biometrics-council/ (Accessed: 13 April 2021)
  112. Whitley, E. (2021) ‘What must we consider if proof of Covid status is to help reopen the economy?’ LSE Department of Management blog. Available at: https://blogs.lse.ac.uk/management/2021/02/24/what-must-we-consider-if-proof-of-covid-status-is-to-help-reopen-the-economy/ (Accessed: 6 May 2021).
  113. Information Commissioner’s Office (2021) About the DPA 2018. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/introduction-to-data-protection/about-the-dpa-2018/ (Accessed: 6 April 2021).
  114. Beduschi, A. (2020).
  115. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  116. European Data Protection Board and European Data Protection Supervisor (2021), Joint Opinion 04/2021 on the Proposal for a Regulation of the European Parliament and of the Council on a framework for the issuance, verification and acceptance of interoperable certificates on vaccination, testing and recovery to facilitate free movement during the COVID-19 pandemic (Digital Green Certificate). Available at: https://edps.europa.eu/system/files/2021-04/21-03-31_edpb_edps_joint_opinion_digital_green_certificate_en_0.pdf (Accessed: 29 April 2021)
  117. Beduschi, A. (2020).
  118. ibid.
  119. Information Commissioner’s Office (2021) International transfers after the UK exit from the EU Implementation Period. ICO. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/international-transfers-after-uk-exit/ (Accessed: 5 May 2021).
  120. Global Privacy Assembly Executive Committee (2021).
  121. Beduschi, A. (2020).
  122. Global Privacy Assembly (2021) GPA Executive Committee joint statement on the use of health data for domestic or international travel purposes. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 13 April 2021).
  123. Information Commissioner’s Office (2021) Principle (c): Data minimisation. ICO. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/principles/data-minimisation/ (Accessed: 6 April 2021).
  124. Denham, E. (2021) ‘Blog: Data Protection law can help create public trust and confidence around COVID-status certification schemes’. ICO. Available at: https://ico.org.uk/about-the-ico/news-and-events/blog-data-protection-law-can-help-create-public-trust-and-confidence-around-COVID-status-certification-schemes/ (Accessed: 6 April 2021).
  125. Illmer, A. (2021) ‘Singapore reveals COVID privacy data available to police’, BBC News, 5 January 2021. Available at: https://www.bbc.com/news/world-asia-55541001 (Accessed: 6 April 2021). Gross, A. and Parker, G. (2020) ‘Experts decry move to share COVID test and trace data with police’, Financial Times. Available at: https://www.ft.com/content/d508d917-065c-448e-8232-416510592dd1 (Accessed: 6 April 2021).
  126. Halpin, H. (2020) ‘Vision: A Critique of Immunity Passports and W3C Decentralized Identifiers’, in van der Merwe, T., Mitchell, C., and Mehrnezhad, M. (eds) Security Standardisation Research. Cham: Springer International Publishing (Lecture Notes in Computer Science), pp. 148–168. doi: 10.1007/978-3-030-64357-7_7.
  127. HL7 (2019) FHIR Release 4. Available at: http://www.hl7.org/fhir/ (Accessed: 21 April 2021).
  128. Doteveryone (2019) Consequence scanning, an agile practice for responsible innovators. Available at: https://doteveryone.org.uk/project/consequence-scanning/ (Accessed: 21 April 2021)
  129. NHS Digital (2020) DCB3051 Identity Verification and Authentication Standard for Digital Health and Care Services. Available at: https://digital.nhs.uk/data-and-information/information-standards/information-standards-and-data-collections-including-extractions/publications-and-notifications/standards-and-collections/dcb3051-identity-verification-and-authentication-standard-for-digital-health-and-care-services (Accessed: 7 April 2021).
  130. Royal College of General Practitioners (2021) RCGP submission for the COVID-status Certification Review call for evidence. Available at: https://www.rcgp.org.uk/policy/rcgp-consultations/covid-status-certification-review.aspx (Accessed: 6 April 2021).
  131. Say, M. (2021) ‘Government gives Verify a stay of execution.’ UKAuthority. Available at: https://www.ukauthority.com/articles/government-gives-verify-a-stay-of-execution/ (Accessed: 5 May 2021).
  132. Cabinet Office and Lopez. J., (2021) ‘Julia Lopez speech to The Investing and Savings Alliance’. GOV.UK. Available at: https://www.gov.uk/government/speeches/julia-lopez-speech-to-the-investing-and-savings-alliance (Accessed: 6 April 2021).
  133. For more on digital identity during the pandemic see: Freeguard, G. and Shepheard, M. (2020) ‘Digital government during the coronavirus crisis’. Institute for Government. Available at: https://www.instituteforgovernment.org.uk/sites/default/files/publications/digital-government-coronavirus.pdf.
  134. Department for Digital, Culture, Media and Sport (2021) The UK digital identity and attributes trust framework, GOV.UK. Available at: https://www.gov.uk/government/publications/the-uk-digital-identity-and-attributes-trust-framework/the-uk-digital-identity-and-attributes-trust-framework (Accessed: 6 April 2021).
  135. Access Now, Response to Ada Lovelace Institute call for evidence.
  136. iProov (2021) Covid-19 Passport from iProov and Mvine Moves Into Trial Phase. Available at: https://www.iproov.com/press/uk-covid19-passport-moves-into-trial-phase (Accessed: 7 April 2021).
  137. Ada Lovelace Institute (2021) The socio-technical challenges of designing and building a vaccine passport system. Available at: https://www.youtube.com/watch?v=Md9CLWgdgO8&t=2s (Accessed: 7 April 2021).
  138. On general trust, polls include Ipsos MORI Veracity Index. On data trust, see RSS and ODI polling.
  139. Sommer, A. K. (2021) ‘Some foreigners in Israel are finally able to obtain COVID vaccine pass’. Haaretz.com. Available at: https://www.haaretz.com/israel-news/.premium-some-foreigners-in-israel-are-finally-able-to-obtain-COVID-19-green-passport-1.9683026 (Accessed: 8 April 2021).
  140. Cabinet Office (2020) ‘Ventilator Challenge hailed a success as UK production finishes’. GOV.UK. Available at: https://www.gov.uk/government/news/ventilator-challenge-hailed-a-success-as-uk-production-finishes (Accessed: 6 April 2021).
  141. For example, evidence received from techUK and World Health Pass.
  142. Our World in Data (2021) Coronavirus (COVID-19) Vaccinations. Available at: https://ourworldindata.org/covid-vaccinations (Accessed: 13 April 2021)
  143. FT Visual and Data Journalism team (2021) Covid-19 vaccine tracker: the global race to vaccinate. Financial Times. Available at: https://ig.ft.com/coronavirus-vaccine-tracker/ (Accessed: 13 April 2021)
  144. Full Fact. (2020) How does the new coronavirus compare to influenza? Available at: https://fullfact.org/health/coronavirus-compare-influenza/ (Accessed: 6 April 2021).
  145. BBC News (2021) ‘Coronavirus: Third wave will “wash up on our shores”, warns Johnson’. BBC News. 22 March 2021. Available at: https://www.bbc.com/news/uk-politics-56486067 (Accessed: 6 April 2021).
  146. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  147. Tony Blair Institute for Global Change (2021) The New Necessary: How We Future-Proof for the Next Pandemic. Available at https://institute.global/policy/new-necessary-how-we-future-proof-next-pandemic (Accessed: 13 April 2021)
  148. Paton. G., (2021) ‘Cost of home Covid tests for travellers halved as companies accused of “profiteering”.’ The Times. 14 April 2021. Available at: https://www.thetimes.co.uk/article/cost-of-home-covid-tests-for-travellers-halved-as-companies-accused-of-profiteering-lh76wb585 (Accessed: 13 April 2021)
  149. Department of Health & Social Care (2021) ‘30 million people in UK receive first dose of coronavirus (COVID-19) vaccine’. GOV.UK. Available at: https://www.gov.uk/government/news/30-million-people-in-uk-receive-first-dose-of-coronavirus-COVID-19-vaccine (Accessed: 6 April 2021).
  150. Ipsos (2021) Global attitudes: COVID-19 vaccines. 9 February 2021. Available at: https://www.ipsos.com/en/global-attitudes-COVID-19-vaccine-january-2021 (Accessed: 6 April 2021).
  151. Reicher, S. and Drury, J. (2021) ‘How to lose friends and alienate people? On the problems of vaccine passports’, The BMJ, 1 April 2021. Available at: https://blogs.bmj.com/bmj/2021/04/01/how-to-lose-friends-and-alienate-people-on-the-problems-of-vaccine-passports/ (Accessed: 6 April 2021).
  152. Smith, M. (2021) ‘International study: How many people will take the COVID vaccine?’, YouGov, 15 January 2021. Available at: https://yougov.co.uk/topics/health/articles-reports/2021/01/15/international-study-how-many-people-will-take-covi (Accessed: 6 April 2021).
  153. Reicher, S. and Drury, J. (2021).
  154. Razai, M. S. et al. (2021) ‘COVID-19 vaccine hesitancy among ethnic minority groups’, The BMJ, 372, p. n513. doi: 10.1136/bmj.n513.
  155. Royal College of General Practitioners (2021) ‘RCGP submission for the COVID-status Certification Review call for evidence’., Royal College of General Practitioners. Available at: https://www.rcgp.org.uk/policy/rcgp-consultations/COVID-status-certification-review.aspx (Accessed: 6 April 2021).
  156. Access Now, Response to Ada Lovelace Institute call for evidence.
  157. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  158. ibid.
  159. ibid.
  160. ibid.
  161. Zimmer, C., Corum, J. and Wee, S.-L. (no date) ‘Coronavirus Vaccine Tracker’, The New York Times. Available at: https://www.nytimes.com/interactive/2020/science/coronavirus-vaccine-tracker.html (Accessed: 21 April 2021).
  162. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  163. Times of Israel Staff (2021) ‘Thousands reportedly attempt to obtain easily forged vaccinated certificate’. Times of Isreal. 18 February 2021. Available at: https://www.timesofisrael.com/thousands-reportedly-attempt-to-obtain-easily-forged-vaccinated-certificate/(Accessed: 6 April 2021).
  164. Senyor, E. (2021) ‘NIS 1,500 for Green Pass: Police arrest seller of illegal vaccine certificates’, ynetnews. 21 March 2021. Available at: https://www.ynetnews.com/article/Bk00wJ11B400 (Accessed: 6 April 2021).
  165. Europol (2021) ‘Early Warning Notification – The illicit sales of false negative COVID-19 test certificates’, Europol. 1 February 2021. Available at: https://www.europol.europa.eu/early-warning-notification-illicit-sales-of-false-negative-COVID-19-test-certificates (Accessed: 6 April 2021).
  166. Lewandowsky, S. et al. (2021) ‘Public acceptance of privacy-encroaching policies to address the COVID-19 pandemic in the United Kingdom’, PLOS ONE, 16(1), p. e0245740. doi: 10.1371/journal.pone.0245740.
  167. 165 Deltapoll (2021). Political Trackers and Lockdown. Available at: http://www.deltapoll.co.uk/polls/political-trackers-and-lockdown (Accessed: 7 April 2021).
  168. Ibbetson, C. (2021) ‘Most Britons support a COVID-19 vaccine passport system’. YouGov. Available at: https://yougov.co.uk/topics/health/articles-reports/2021/03/05/britons-support-COVID-19-vaccine-passport-system (Accessed: 7 April 2021).
  169. YouGov (2021). Daily Question | 02/03/2021 Available at: https://yougov.co.uk/topics/health/survey-results/daily/2021/03/02/9355e/2 (Accessed: 7 April 2021).
  170. Ipsos MORI. (2021) Majority of Britons support vaccine passports but recognise concerns in new Ipsos MORI UK KnowledgePanel poll. Available at: https://www.ipsos.com/ipsos-mori/en-uk/majority-britons-support-vaccine-passports-recognise-concerns-new-ipsos-mori-uk-knowledgepanel-poll (Accessed: 9 April 2021).
  171. King’s College London. (2021) Covid vaccines: passports, blood clots and changing trust in government. Available at: https://www.kcl.ac.uk/news/covid-vaccines-passports-blood-clots-and-changing-trust-in-government (Accessed: 9 April 2021).
  172. De Montfort University. (2021). Study shows UK punters see no need for pub vaccine passports. Available at: https://www.dmu.ac.uk/about-dmu/news/2021/march/-study-shows-uk-punters-see-no-need-for-pub-vaccine-passports.aspx (Accessed: 7 April 2021).
  173. Indigo (2021) Vaccine Passports – What do audiences think? Available at: https://www.indigo-ltd.com/blog/vaccine-passports-what-do-audiences-think (Accessed: 7 April 2021).
  174. Serco Institute (2021) Vaccine Passports & UK Public Opinion. Available at: https://www.sercoinstitute.com/news/2021/vaccine-passports-uk-public-opinion (Accessed: 7 April 2021).
  175. Studdert, M. H. and D. (2021) ‘Reaching agreement on COVID-19 immunity “passports” will be difficult’, Brookings, 27 January 2021. Available at: https://www.brookings.edu/blog/usc-brookings-schaeffer-on-health-policy/2021/01/27/reaching-agreement-on-COVID-19-immunity-passports-will-be-difficult/ (Accessed: 7 April 2021). ELABE (2021) Les Français et l’épidémie de COVID-19 – Vague 33. 3 March 2021. Available at: https://elabe.fr/epidemie-COVID-19-vague33/ (Accessed: 7 April 2021).
  176. Ada Lovelace Institute. (2021) The Citizens’ Biometrics Council. Available at: https://www.adalovelaceinstitute.org/report/citizens-biometrics-council/ (Accessed: 9 April 2021).
  177. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  178. Beacon, R. and Innes, K. (2021) The Case for Digital Health Passports. Tony Blair Institute for Global Change. Available at: https://institute.global/sites/default/files/inline-files/Tony%20Blair%20Institute%2C%20The%20Case%20for%20Digital%20Health%20Passports%2C%20February%202021_0_0.pdf (Accessed: 6 April 2021).
  179. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  180. Pietropaoli, I. (2021) Part 2: Getting Digital Health Passports Right? Legal, Ethical and Equality Considerations. Available at: https://www.biicl.org/blog/23/part-2-getting-digital-health-passports-right-legal-ethical-and-equality-considerations (Accessed: 6 April 2021).
  181. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  182. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  183. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  184. medConfidential, Response to Ada Lovelace Institute call for evidence
  185. Dr Btihaj Ajana, Response to Ada Lovelace Institute call for evidence
  186. Nuffield Council on Bioethics (2020) Rapid policy briefing: COVID-19 antibody testing and ‘immunity certification’. Available at: https://www.nuffieldbioethics.org/assets/pdfs/Immunity-certificates-rapid-policy-briefing.pdf (Accessed: 6 April 2021).
  187. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  188. ibid.

1–12 of 110

Skip to content

Executive summary

What can foundation model oversight learn from the US Food and Drug Administration (FDA)?

In the last year, policymakers around the world have grappled with the challenge of how to regulate and govern foundation models – artificial intelligence (AI) models like OpenAI’s GPT-4 that are capable of a range of general tasks such as text synthesis, image manipulation and audio generation. Policymakers, civil society organisations and industry practitioners have expressed concerns about the reliability of foundation models, the risk of misuse of their powerful capabilities and the systemic risks they could pose as more and more people begin to use them in their daily lives.

Many of these risks to people and society – such as the potential for powerful and widely used AI systems to discriminate against particular demographics, or to spread misinformation more widely and easily – are not new, but foundation models have some novel features that could greatly amplify the potential harms.

These features include their generality and ability to complete a range of tasks; the fact that they are ‘built on’ for a wide range of downstream applications, creating a risk that a single point of failure could lead to networked catastrophic consequences; fast and (sometimes) unpredictable jumps in their capabilities and behaviour, which make it harder to foresee harm; and their wide-scale accessibility, which puts powerful AI capabilities in the hands of a much larger number of people.

Both the UK and US governments have released voluntary commitments for developers of these models, and the EU’s AI Act includes some stricter requirements for models before they can be sold on the market. The US Executive Order on AI also places obligations on some developers of foundation models to test their systems for certain risks.[1] [2]

Experts agree that foundation models need additional regulatory oversight due to their novelty, complexity and lack of clear safety standards. Oversight needs to enable learning about risks, and to ensure iterative updates to safety assessments and standards.

Notwithstanding the unique features of foundation models, this is not the first time that regulators have grappled with how to regulate complex, novel technologies that raise a variety of sociotechnical risks.[3] One area where this challenge already exists is in life sciences. Drug and medical device regulators have a long history of applying a rigorous oversight process to novel, groundbreaking and experimental technologies that – alongside their possible benefits – could present potentially severe consequences for people and society.

This paper draws on interviews with 20 experts and a literature review to examine the suitability and applicability of the US Food and Drug Administration (FDA) oversight model to foundation models. It explores the similarities and differences between medical devices and foundation models, the limitations of the FDA model as applied to medical devices, and how the FDA’s governance framework could be applied to the governance of foundation models.

This paper highlights that foundation models may pose risks to the public that are similar to, or even greater than, those of Class III medical devices (the FDA’s highest risk category). To begin to address these risks through the lens of the FDA model, the paper lays out general principles to strengthen oversight and evaluation of the most capable foundation models, along with specific recommendations for each layer in the supply chain.

This report does not address international governance implications, the political economy of the FDA or the regulation of AI in medicine specifically. Rather, this paper seeks to answer a simple question: when designing the regulation of complex AI systems, what lessons and approaches can regulators draw on from medical device regulation?

A note on terminology

Regulation refers to the legally binding rules that govern the industry, setting the standards, requirements and guidelines that must be complied with.

Oversight refers to the processes of monitoring and enforcing compliance with regulations, for example through audits, reporting requirements or investigations.

What is FDA oversight?

With more than one hundred years of history, a culture of continuous learning and increasing authority, the FDA is a long-established regulator, with FDA-regulated products accounting for about 20 cents of every dollar spent by US consumers.

The FDA regulates drugs and medical devices by assigning them a specific risk level corresponding to how extensive subsequent evaluations, inspections and monitoring will be at different stages of development and deployment. The riskier and more novel a product, the more tests, evaluation processes and monitoring it will undergo.

The FDA does this by providing guidance and setting requirements for drug and device developers to follow, including regulatory approval of any protocols the developer will use for testing, and evaluating the safety and efficacy of the product.

The FDA has extensive auditing powers, with the ability to inspect drug companies’ data, processes and systems at will. It also requires companies to report incidents, failures and adverse impacts to a central registry. There are substantial fines for failing to follow appropriate regulatory guidance, and the FDA has a history of enforcing these sanctions.

Core risk-reducing aspects of FDA oversight

  • Risk- and novelty-driven oversight: The riskier and more novel a product, the more tests, evaluation processes and monitoring there will be.
  • Continuous, direct engagement with developers from development through to market: Developers must undergo a rigorous testing process through a protocol agreed with the FDA.
  • Wide-ranging information access: The FDA has statutory powers to access comprehensive information, for example, clinical trial results and patient data.
  • Burden of proof on developers: Developers must demonstrate the efficacy and safety of a drug or medical device at various ‘approval gates’ before the product can be tested on humans or be sold on a market.
  • Balancing innovation with efficacy and safety: This builds acceptance for the FDA’s regulatory authority.

How suitable is FDA-style oversight for foundation models?

Our findings show that foundation models are at least as complex as FDA Class III medical devices (the highest risk category), are more novel, and pose risks that are potentially just as severe.[4][5][6] Indeed, the fact that these models are deployed across the whole economy, interacting with millions of people, means that they are likely to pose systemic risks far beyond those of Class III medical devices.[7] However, the exact risks of these models are not yet fully clear: risk mitigation measures are uncertain and risk modelling is poor or non-existent.

The regulation of Class III medical devices offers policymakers valuable insight into how they might regulate foundation models, but it is also important that they are aware of the limitations.

Limitations of FDA-style oversight for foundation models

  • High cost of compliance: A high cost of compliance could limit the number of developers, which may benefit existing large companies. Policymakers may need to consider less restrictive requirements for smaller companies that have fewer users, coupled with support for such companies in compliance and via streamlined regulatory pathways.
  • Limited range of risks assessed: The FDA model may not be able to fully address the systemic risks and the risks of unexpected capabilities associated with foundation models. Medical devices are not general purpose, and the FDA model therefore largely assesses efficacy and safety in narrow contexts. Policymakers may need to create new, exploratory methods for assessing some types of risk throughout the foundation model supply chain, which may require increased post-market monitoring obligations.
  • Overreliance on industry: Regulatory agencies like the FDA sometimes need industry expertise, especially in novel areas where clear benchmarks have not yet been developed and knowledge is concentrated in industry. Foundation models present a similar challenge. This could raise concerns around regulatory capture and conflicts of interest. An ecosystem of independent academic and governmental experts needs to be built up to support balanced, well-informed oversight of foundation models, with clear mechanisms for those impacted by AI technologies to contribute. This could be at the design and development stage, eliciting feedback from pre-market ‘sandboxing’, or through market approval processes (under the FDA regime, patient representatives have a say in this process). At any step in the process, consideration should be given to who is involved (this could range from a representative panel to a jury of members of the public), the depth of engagement (from public consultations through to partnership decision-making), and methods (for example, from consultative exercises such as focus groups, to panels and juries for deeper engagement).

General principles for AI regulators

To strengthen oversight and evaluation of the most capable foundation models (for example, OpenAI’s GPT-4), which currently receive less risk-reducing external scrutiny than products under FDA oversight:

  1. Establish continuous, risk-based evaluations and audits throughout the foundation model supply chain.
  2. Empower regulatory agencies to evaluate critical safety evidence directly, supported by a third-party ecosystem – across industries, third-party evaluations have consistently proven higher quality than self- or second-party evaluations.
  3. Ensure independence of regulators and external evaluators, through mandatory industry fees and a sufficient budget for regulators that contract third parties. While existing sector-specific regulators, for example, the Consumer Financial Protection Bureau (CFPB) in the USA, may review downstream AI applications, there might be a need for an upstream regulator of foundation models themselves. The level of funding for such a regulator would need to be similar to that of other safety-critical domains, such as medicine.
  4. Enable structured access to foundation models and adjacent components for evaluators and civil society. This will help ensure the technology is designed and deployed in a manner that meets the needs of the people who are impacted by its use, and enable accountability mechanisms if it is not.
  5. Enforce a foundation model pre-approval process, shifting the burden of proof to developers.

Recommendations for AI regulators, developers and deployers

Data and compute layers oversight

  1. Regulators should compel pre-notification of, and information-sharing on, large training runs.
  2. Regulators should compel mandatory model and dataset documentation and disclosure for the pre-training and fine-tuning of foundation models,[8] [9] [10] including a capabilities evaluation and risk assessment within the model card, both at the (pre-)training stage and post-market (a minimal sketch of such a record follows this list).
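
To make recommendation 2 above concrete, here is a minimal sketch of what a machine-readable training-run disclosure might look like. Every name in it (TrainingRunDisclosure, estimated_training_flops and so on) is an illustrative assumption for this sketch, not an established schema or mandated format.

```python
from dataclasses import dataclass, field, asdict
import json


@dataclass
class TrainingRunDisclosure:
    """Illustrative disclosure record for a large training run.

    The schema is hypothetical: field names are assumptions made for
    illustration, not an established regulatory format.
    """
    developer: str
    model_name: str
    model_version: str
    estimated_training_flops: float
    dataset_sources: list[str] = field(default_factory=list)
    capability_evaluations: dict[str, float] = field(default_factory=dict)
    identified_risks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)

    def to_json(self) -> str:
        """Serialise the record for submission to a central registry."""
        return json.dumps(asdict(self), indent=2)


# Example record a regulator might require before a large training run.
disclosure = TrainingRunDisclosure(
    developer="ExampleLab",
    model_name="example-model",
    model_version="0.1",
    estimated_training_flops=1e25,
    dataset_sources=["licensed-corpus-v2", "public-web-crawl-2023"],
    capability_evaluations={"toxic_output_rate": 0.012},
    identified_risks=["plausible but false medical statements"],
    mitigations=["output filtering", "staged access"],
)
print(disclosure.to_json())
```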

Foundation model layer oversight

  1. Regulators should introduce a pre-market approval gate for foundation models, as this is the most obvious point at which risks can proliferate. In any jurisdiction, defining the approval gate will require significant work, with input from all relevant stakeholders. In critical or high-risk areas, depending on the jurisdiction and existing or foreseen pre-market approval for high-risk use, regulators should introduce an additional approval gate at the application layer of the supply chain.
  2. Third-party audits should be required as part of the pre-market approval process, and sandbox testing in real-world conditions should be considered.
  3. Developers should enable detection mechanisms for the outputs of generative foundation models (one deliberately simple possibility is sketched after this list).
  4. As part of the initial risk assessment, developers and deployers should document and share planned and foreseeable modifications throughout the foundation model’s supply chain.
  5. Foundation model developers, and high-risk application providers building on top of these models, should enable an easy complaint mechanism for users to swiftly report any serious risks that have been identified.
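
Detection mechanisms can take several forms, from statistical watermarking to classifier-based detectors. As a deliberately simple, hedged sketch of the idea in recommendation 3, the code below implements a toy provenance registry: the developer records a digest of each generated output so that exact copies (after whitespace normalisation) can later be attributed to the model. A real deployment would need far more robust techniques, since this approach fails as soon as the text is meaningfully edited.

```python
import hashlib


class OutputProvenanceRegistry:
    """Toy detection mechanism: store digests of generated outputs so
    exact copies can later be identified. Illustrative only; robust
    detection (e.g. statistical watermarking) remains an open problem."""

    def __init__(self) -> None:
        self._digests: set[str] = set()

    @staticmethod
    def _digest(text: str) -> str:
        # Normalise whitespace so trivial formatting changes do not
        # defeat the lookup.
        normalised = " ".join(text.split())
        return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

    def record(self, model_output: str) -> None:
        self._digests.add(self._digest(model_output))

    def was_generated(self, candidate: str) -> bool:
        return self._digest(candidate) in self._digests


registry = OutputProvenanceRegistry()
registry.record("The quick brown fox jumps over the lazy dog.")
print(registry.was_generated("The quick  brown fox jumps over the lazy dog."))  # True
print(registry.was_generated("An entirely different sentence."))  # False
```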

Application layer oversight

  1. Existing sector-specific agencies should review and approve the use of foundation models for a set of use cases, by risk level.
  2. Downstream application providers should make clear to end users and affected persons what the underlying foundation model is, including if it is an open-source model, and provide easily accessible explanations of systems’ main parameters and any opt-out mechanisms or human alternatives available.

Post-market monitoring

  1. An AI ombudsman should be considered, to take and document complaints or known instances of harms of AI. This should be complemented by a comprehensive remedies framework for affected persons, based on clear avenues for redress.
  2. Developers and downstream deployers should provide documentation and disclosure of incidents throughout the supply chain, including near misses (a minimal incident record is sketched after this list). This could be strengthened by requiring downstream developers (building on top of foundation models at the application layer) and end users (for example, medical or education professionals) to also disclose incidents.
  3. Foundation model developers, downstream deployers and hosting providers (for example GitHub or Hugging Face) should be compelled to restrict, suspend or retire a model from active use if harmful impacts, misuse or security vulnerabilities (including leaks or otherwise unauthorised access) arise.
  4. Host layer actors (for example, cloud service providers or model hosting platforms) should also play a role in evaluating model usage, implementing trust and safety policies to remove harmful models that have demonstrated or are likely to demonstrate serious risks, and flagging harmful models to regulators when it is not in their power to take them down.
  5. AI regulators should have strong powers to investigate and require evidence generation from foundation model developers and downstream deployers. This should be strengthened by whistleblower protections for any actor involved in development or deployment who raises concerns about risks to health or safety.
  6. Any regulator should be funded to a level comparable to (if not greater than) regulators in other domains where safety and public trust are paramount and where underlying technologies form part of national infrastructure – such as civil nuclear, civil aviation, medicines, or road and rail.[11] Given the level of resourcing required, this may be partly funded by AI developers over a certain threshold.
  7. The law around AI liability should be clarified to ensure that legal and financial liability for AI risk is distributed proportionately along foundation model supply chains.
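
To illustrate recommendation 2 above (incident documentation and disclosure), the sketch below shows one possible shape for a minimal incident record, loosely analogous to an FDA adverse event report. The schema is an assumption invented for illustration; nothing here is a mandated format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class Severity(Enum):
    NEAR_MISS = "near miss"  # no harm occurred, but it plausibly could have
    MINOR = "minor"
    SERIOUS = "serious"


@dataclass
class IncidentReport:
    """Hypothetical incident record for a central AI incident registry."""
    reporter: str              # developer, downstream deployer or end user
    model_identifier: str      # which model build was involved
    supply_chain_layer: str    # e.g. "foundation model" or "application"
    description: str
    severity: Severity
    corrective_action: str = "under investigation"
    reported_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


report = IncidentReport(
    reporter="downstream-deployer",
    model_identifier="example-model/0.1",
    supply_chain_layer="application",
    description="Assistant produced dosage advice contradicting the label.",
    severity=Severity.SERIOUS,
    corrective_action="Medical-advice intents blocked pending review.",
)
print(report)
```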

Introduction

As governments around the world consider the regulation of artificial intelligence (AI), many experts are suggesting that lessons should be drawn from other technology areas. The US Food and Drug Administration (FDA) and its approval process for drug development and medical devices is one of the most cited areas in this regard.

This paper seeks to understand if and how FDA-style oversight could be applied to AI, and specifically to foundation models, given their complexity, novelty and potentially severe risk profile – each of which arguably exceeds those of the products regulated by the FDA.

This paper first maps the FDA review process for Class III medical software, to identify both the risk-reducing features and the limitations of FDA-style oversight. It then considers the suitability and applicability of FDA processes to foundation models and suggests how FDA risk-reducing features could be applied across the foundation model supply chain. It concludes with actionable recommendations for policymakers.

What are foundation models?

Foundation models are a form of AI system capable of a range of general tasks, such as text synthesis, image manipulation and audio generation.[12] Notable examples include OpenAI’s GPT-4 – which has been used to create products such as ChatGPT – and Anthropic’s Claude 2.

Advances in foundation models raise concerns about reliability, misuse, systemic risks and serious harms. Developers and researchers of foundation models have highlighted that their wide range of capabilities and unpredictable behaviours[13] could pose a series of risks, including:

  • Accidental harms: Foundation models can generate confident but factually incorrect statements, which could exacerbate problems of misinformation. In some cases this could have potentially fatal consequences, for example, if someone is misled into eating something poisonous or taking the wrong medication.[14] [15]
  • Misuse harms: These models could enable actors to intentionally cause harm, from harassment[16] through to cybercrime at a greater scale[17] or biosecurity risks.[18] [19]
  • Structural or systemic harms: If downstream developers increasingly rely on foundation models, this creates a single point of dependency on a model, raising security risks.[20] It also concentrates market power over cutting-edge foundation models, as only a few private companies are able to develop foundation models with hundreds of millions of users.[21] [22] [23]
  • Supply chain harms: These are harms involving the processes and inputs used to develop AI, such as poor labour practices, environmental impacts and the inappropriate use of personal data or protected intellectual property.[24]

Context and environment

Experts agree that foundation models are a novel technology in need of additional oversight. This sentiment was shared by industry, civil society and government experts at an Ada Lovelace Institute roundtable on standards-setting held in May 2023. Attendees largely agreed that foundation models represent a ‘novel’ technology without an established ‘state of the art’ for safe development and deployment.

This means that additional oversight mechanisms may be needed, such as testing models in a ‘sandbox’ environment, or regular audits and evaluations of a model’s performance before and after its release (similar to the testing, approval and monitoring approaches used in public health). Such mechanisms would enable greater transparency, and access for actors whose incentives are more closely aligned with the societal interest in assessing (second-order) effects on people.[25]

Crafting AI regulation is a priority for governments worldwide. In the last three years, national governments across the world have sought to draft legislation to regulate the development and deployment of AI in different sectors of society.

The European AI Act takes a risk-based approach to regulation, with stricter requirements applying to AI models and systems that pose a high risk to health, safety or fundamental rights. In contrast, the UK has proposed a principles-based approach, calling for existing individual regulators to regulate AI models through an overarching set of principles.

Policymakers in the USA have proposed a different approach in the Algorithm Accountability Act,[26] which would create a baseline requirement for companies building foundation models and AI systems to assess the impacts of ‘automating critical decision-making’, and would empower an existing regulator to enforce this requirement. Neither the UK nor the USA has ruled out ‘harder’ regulation that would require the creation of a new body (or the empowerment of an existing one) for enforcement.

Regulation in public health, such as FDA pre-approvals, can inspire AI regulation. As governments seek to develop their approach to regulating AI, they have naturally turned to other emerging technology areas for guidance. One area routinely mentioned is the regulation of public health – specifically, the drug development and medical device regulatory approval process used by the FDA.

The FDA’s core objective is to ‘speed innovations that make food and drug products more effective, safer and more affordable’ to ‘maintain and improve the public’s health’. In practice, the FDA model requires developers of drugs or medical devices to provide (sufficiently positive) evidence on the safety risks, efficacy and accessibility of products before they are approved to be sold in a market or continue to the next development phase (referred to as pre-market approval or pre-approval).

Many call for FDA-style oversight for AI, though its detailed applicability for foundation models is largely unexamined. Applying lessons from the FDA to AI is not a new idea,[27] [28] [29] though it has recently gained significant traction. In a May 2023 Senate Hearing, renowned AI expert Gary Marcus testified that priority number one should be ‘a safety review like we use with the FDA prior to widespread deployment’.[30] Leading AI researchers Stuart Russell and Yoshua Bengio have also called for FDA-style oversight of new AI models.[31] [32] [33] In a recent request for evidence by the USA’s National Telecommunications and Information Administration on AI accountability mechanisms, 43 pieces of evidence mentioned the FDA as an inspiration for AI oversight.[34]

However, such calls often lack detail on how appropriate the FDA model is to regulate AI. The regulation of AI for medical purposes has received extensive attention,[35] [36] but there has not yet been a detailed analysis on how FDA-style oversight could be applied to foundation models or other ‘general-purpose’ AI.

Drug regulators have a long history of applying a rigorous oversight process to novel, groundbreaking and experimental technologies that – alongside their possible benefits – present potentially severe consequences.

Such technologies include gene editing, biotechnology and medical software. As with drugs, the effects of most advanced AI models are largely unknown but potentially significant.[37] Both public health and AI are characterised by fast-paced research and development progress, the complex nature of many components, their potential risk to human safety, and the uncertainty of risks posed to different groups of people.

As market sectors, public health and AI are both dominated by large private-sector organisations developing and creating new products sold on a multinational scale. Through registries, drug regulators ensure transparency and dissemination of evaluation methods and endpoint setting. The FDA is a prime example of drug regulation and offers inspiration for how complex AI systems like foundation models could be governed.

Methodology and scope

This report draws on expert interviews and literature to examine the suitability of applying FDA oversight mechanisms to foundation models. It includes lessons drawn from a literature review[38] [39] and interviews with 20 experts from industry, academia, thinktanks and government on FDA oversight and foundation model evaluation processes.[40] In this paper, we answer two core research questions:

  1. Under what conditions are FDA-style pre-market approval mechanisms successful in reducing risks for drug development and medical software?
  2. How might these mechanisms be applied to the governance of foundation models?

The report is focused on the applicability of aspects of FDA-style oversight (such as pre-approvals) to foundation models for regulation within a specific jurisdiction. It does not aim to determine if the FDA’s approach is the best for foundation model governance, but to inform policymakers’ decision-making. This report also does not answer how the FDA should regulate foundation models in the medical context.[41]

We focus on how foundation models might be governed within a jurisdiction, not on international cross-jurisdiction oversight. An international approach could be built on top of jurisdictional FDA-style oversight models through mutual recognition and trade limitations, as recently proposed.[42] [43]

We focus particularly on auditing and approval mechanisms, outlining criteria relevant for a future comparative analysis with other national and multinational regulatory models. Further research is needed to understand whether a new agency like the FDA should be set up for AI.

The implications and recommendations of this report will apply differently to different jurisdictions. For example, many downstream ‘high-risk’ applications of foundation models would have the equivalent of a regulatory approval gate under the EU AI Act (due to be finalised at the end of 2023). The most relevant learnings for the EU would therefore be considerations of what upstream foundation model approval gates could entail, or how a post-market monitoring regime should operate. For the UK and USA (and other jurisdictions), there may be more scope to glean ideas about how to implement an FDA-style regulatory framework to cover the whole foundation model supply chain.

‘The FDA oversight process’ chapter explores how FDA oversight functions and its strengths and weaknesses as an approach to risk reduction. We use Software as a Medical Device (SaMD) as a case study to examine how the FDA approaches the regulation of current ‘narrow’ AI systems (AI systems that do not have general capabilities). Then, the chapter on ‘FDA-style oversight for foundation models’ explores the suitability of this approach to foundation models. The paper concludes with recommendations for policymakers and open questions for further research.

Definitions

 

●      Approval gates are the specific points in the FDA oversight process at which regulatory approval decisions are made. They occur throughout the development process. A gate can only be passed when the regulator believes that sufficient evidence on safety and efficacy has been provided.

●      Class I–III medical devices: Class I medical devices are low risk with non-critical consequences. Class II devices are medium risk. Class III devices are those that can potentially cause severe harm.

●      Clinical trials, ‘also known as clinical studies, test potential treatments in human volunteers to see whether they should be approved for wider use in the general population’.[44]

●      Endpoints are targeted outcomes of a clinical trial that are statistically analysed to help determine efficacy and safety. They may include clinical outcome assessments or other measures to predict efficacy and safety. The FDA and developers jointly agree on endpoints before a clinical trial.

●      Foundation models are ‘AI models capable of a wide range of possible tasks and applications, such as text, image, or audio generation. They can be standalone systems or can be used as a “base” for many other more narrow AI applications’.[45]

○      Upstream (in the foundation model supply chain) refers to the component parts and activities in the supply chain that feed into development of the model.[46]

○      Downstream (in the foundation model supply chain) refers to activities after the launch of the model and activities that build on a model.[47]

○      Fine-tuning is the process of training a pre-trained model with an additional specialised or context-specific dataset, removing the need to train a model from scratch.[48]

●      Narrow AI is ‘designed to be used for a specific purpose and is not designed to be used beyond its original purpose’.[49]

●      Pre-market approval is the point in the regulatory approval process where developers provide evidence on the safety risks, efficacy and accessibility of their products before they are approved to be sold in a market. Beyond pre-market, the term ‘pre-approvals’ generally describes a regulatory approval process before the next step along the development process or supply chain.

●      A Quality Management System (QMS) is a collection of business processes focused on achieving quality policy and objectives to meet requirements (see, for example ISO 9001 and ISO 13485),[50] [51] or on safety and efficacy (see, for example FDA Part 820). This includes management controls; design controls; production and process controls; corrective and preventative actions; material controls; records, documents, and change controls; and facilities and equipment controls.

●      Risk-based regulation ‘focuses on outcomes rather than specific rules and process as the goal of regulation’,[52] adjusting oversight mechanisms to the level of risk of the specific product or technology.

●      Software as a Medical Device (SaMD) is ‘Software intended to be used for one or more medical purposes that perform these purposes without being part of a hardware medical device’.[53]

●      The US Food and Drug Administration (FDA) is a federal agency (and part of the Department of Health and Human Services) that is charged with protecting consumers against impure and unsafe foods, drugs and cosmetics. It enforces the Federal Food Drug and Cosmetic Act and related laws, and develops detailed guidelines.

How to read this paper

This report offers insight from FDA regulators, civil society and private-sector companies on applying specific oversight mechanisms, proven in the life sciences, to the governance of AI and of foundation models specifically.

…if you are a policymaker working on AI regulation and oversight:

  • The section on ‘Applying key features of FDA-style oversight to foundation models’ provides general principles that can contribute to a risk-reducing approach to oversight.
  • The chapter on ‘Recommendations and open questions’ summarises specific mechanisms for developing and implementing oversight for foundation models.
  • For a detailed analysis of the applicability of life sciences oversight to foundation models, see the chapter ‘FDA-style oversight for foundation models’ and section on ‘The limitations of FDA oversight’.

…if you are a developer or designer of data-driven technologies, foundation models or AI systems:

  • Grasp the importance of rigorous testing, documentation and post-market monitoring of foundation models and AI applications. The introduction and the ‘FDA-style oversight for foundation models’ chapter detail why significant investment in AI governance is important, and why the life sciences are a suitable inspiration.
  • The section on ‘Applying specific FDA-style processes along the foundation model supply chain’ describes mechanisms for each layer in the foundation model supply chain. They are tailored to data providers, foundation model developers, hosts and application providers, and are based on proven governance methods used by regulators and companies in the pharmaceutical and medical device sectors.
  • Our ‘Recommendations and open questions’ chapter provides actionable ways in which AI companies can contribute to a better AI oversight process.

…if you are a researcher or public engagement practitioner interested in AI regulation:

  • The introduction includes an overview of the methodology which may also offer insight for others interested in undertaking a similar research project.
  • In addition to a summary of the FDA oversight process, the main research contribution of this paper is in the chapter ‘FDA-style oversight for foundation models’.
  • Our chapter on ‘Recommendations and open questions’ outlines opportunities for future research on governance processes.
  • There is also potential in collaborations between researchers in life sciences regulation and AI governance, focusing on the specific oversight mechanisms and technical tools like unique device identifiers described in our recommendations for AI regulators, developers and deployers.

The FDA oversight process

The Food and Drug Administration (FDA) is the US federal agency tasked with enforcing laws on food and drug products. Its core objective is to help ‘speed innovations that make products more effective, safer and more affordable’ through ‘accurate, science-based information’. In 2023, it had a budget of around $8 billion, around half of which was paid through mandatory fees by companies overseen by the FDA.[54] [55]

The FDA’s regulatory mandate has come to include regulating computing hardware and software used for medical purposes, such as in-vitro glucose monitoring devices or breast cancer diagnosis software.[56] The regulatory category SaMD and adjacent software for medical devices encompasses AI-powered medical applications. These are novel software applications that may bear potentially severe consequences, such as software for eye surgeries[57] or automated oxygen level control under anaesthesia.[58] [59]

An understanding of the FDA’s most important oversight components underpins the discussion, in the following chapter, of which of them might suit foundation models.

The FDA regulates drugs and medical devices through a risk-based approach. This seeks to identify potential risks at different stages of the development process. The FDA does this by providing guidance and setting requirements for drug and device developers, including agreed protocols for testing and evaluating the safety and efficacy of the drug or device. The definitions of ‘safety’ and ‘efficacy’ depend on the context, but generally:

  • Safety refers to the type and likelihood of adverse effects. This is then described as ‘a judgement of the acceptability of the risk associated with a medical technology’. A ‘safe’ technology is described as one that ‘causes no undue harm’.[60]
  • Efficacy refers to ‘the probability of benefit to individuals in a defined population from a medical technology applied for a given medical problem’.[61] [62]

Some devices and drugs undergo greater scrutiny than others. For medical devices, the FDA has developed a Class I–III risk rating system; higher-risk (Class III) devices are required to meet more stringent requirements to be approved and sold on the market. For medical software, the focus lies more on post-market monitoring. The FDA allows software on the market with higher levels of risk uncertainty than drugs, but it monitors such software continuously.

Figure 3: Classes of medical devices (applicable to software components and SaMD)[63]

The FDA’s oversight process follows five steps, which are adapted to the category and risk class of the drug, software or medical device in question.[64] [65]

The FDA can initiate reviews and inspections of drugs and medical devices (as well as other medical and food products) at three points: before clinical trials begin (Step 2), before a drug is marketed to the public (Step 4) and as part of post-market monitoring (Step 5). The depth of evidence required depends on the potential risk levels and novelty of a drug or device.

Approval gates – points in the development process where proof of sufficient safety and efficacy is required to move to the next step – are determined depending on where risks originate and proliferate.

This section illustrates the FDA’s oversight approach to novel Class III software (including narrow AI applications). Low-risk software and software similar to existing software go through a significantly shorter process (see Figure 3).

We illustrate each step using the hypothetical scenario of an approval process for medical AI software for guiding a robotic arm to take patients’ blood. This software consists of a neural network that has been trained with an image classification dataset to visually detect an appropriate vein and that can direct a human or robotic arm to this vein (see Figure 4).[66] [67]
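
Purely to make the hypothetical concrete, the sketch below shows (in PyTorch, assuming that framework) the rough shape of such an image model: a small convolutional network mapping a camera image of a forearm to a coarse grid of ‘suitable vein here’ probabilities that could steer the arm. The architecture, input size and grid resolution are all invented for this sketch and bear no relation to any real device.

```python
import torch
from torch import nn


class VeinDetector(nn.Module):
    """Toy stand-in for the hypothetical vein-finding network: maps a
    128x128 RGB image to an 8x8 grid of 'suitable vein' probabilities."""

    def __init__(self) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 128 -> 64
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 64 -> 32
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(4),  # 32 -> 8
        )
        self.head = nn.Conv2d(64, 1, kernel_size=1)  # one score per grid cell

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.head(self.features(x))).squeeze(1)


model = VeinDetector()
frame = torch.randn(1, 3, 128, 128)  # one fake camera frame
grid = model(frame)                  # shape: (1, 8, 8)
print(grid.shape)
```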

While the oversight process for drugs and medical devices is slightly different, this section borrows insights from both and simplifies when suitable. This illustration will help to inform our assessment in the following chapter, of whether and how a similar approach could be applied to ensure oversight of foundation models.

Risk origination points are where risks first arise; risk proliferation points are where risks spread beyond the point at which they can be controlled.

Step 1: Discovery and development

Description: A developer conducts initial scoping and ideation of how to design a medical device or drug, including use cases for the new product, supply chain considerations, regulatory implications and needs of downstream users. At the start of the development process, the FDA uses pre-submissions, which aim to provide a path from conceptualisation through to placement on the market.

Developer responsibilities:

  • Determine the product and risk category to classify the device, which will determine the testing and evaluation procedure (see Figure 3).
  • While training the AI model, conduct internal (non-clinical) tests, and clearly document the data and algorithms used throughout the process in a Quality Management System (QMS).[68]
  • Follow Good Documentation Practice, which offers guidance on how to document procedures from development through to market, to facilitate risk mitigation, validation and verification, and traceability (to support regulators in the event of a recall or investigation).
  • Inform the FDA of the need for the new software, for example, efficiency gains or improvements in quality.

FDA responsibilities:

  • Support developers in risk determination.
  • Offer guidance on, for example, milestones for (pre-)clinical research and data analysis.

Required outcomes: Selection of product and risk category to determine regulatory pathway.

Example scenario: A device that uses software to guide the taking of blood may be classified as an in-vitro diagnostics device, which the FDA has previously classified as Class III (highest risk class).[69]

Step 2: Pre-clinical research

Description: In this step, basic questions about safety are addressed through initial animal testing.

Developer responsibilities:

  • Propose endpoints of study and conduct research (often with a second party).
  • Use continuous tracking in the QMS and share results with the FDA.

FDA responsibilities:

  • Approve endpoints of the study, depending on the novelty and type of medical device or drug.
  • Review results to allow progression to clinical research.

Required outcomes: Developer proves basic safety of product, allowing clinical studies with human volunteers in the next step.

Example scenario: This step is important for assessing risks of novel drugs. It would not usually be needed for medical software such as our example that helps take blood, as these types of software are typically aimed at automating or improving existing procedures.

Step 3: Clinical research

Description: Drugs, devices and software are tested on humans to make sure they are safe and effective. Unlike for foundation models and most AI research and development, institutional review for research with human subjects is mandatory in public health.

Developer responsibilities:

  • Create a research design (called a protocol) that follows Good Clinical Practice (GCP) principles and ISO standards such as ISO 14155, and submit it to an institutional review board (IRB) for ethical review.
  • Provide the FDA with the research protocol, the hypotheses and results of the clinical trials and of any other pre-clinical or human tests undertaken, and other relevant information.
  • Following FDA approval, hire an independent contractor to conduct clinical studies (as required by risk level); these may be in multiple regions or locations, as agreed with the FDA, to match future application environments.

For drugs, trials may take place in phases that seek to identify different aspects of a drug:

  • Phase 1 studies tend to involve fewer than 100 participants, run for several months and seek to identify the safety and dosage of a drug.
  • Phase 2 studies tend to involve up to several hundred people with the disease/condition, run for up to two years and study the efficacy and side effects.
  • Phase 3 studies involve up to 3,000 volunteers, can run for one to four years and study efficacy and adverse reactions.

FDA responsibilities:

  • Approve the clinical research design protocol before trials can proceed.
  • During testing, support the developer with guidance or advice at set intervals on protocol design and open questions.

Required outcomes: Once the trials are completed, the developer submits them as evidence to the FDA. The supplied information should include:

  • description of main functions
  • data from trials to prove safety and efficacy
  • benefit/risk and mitigation review, citing relevant literature and medical association guidelines
  • intended use cases and limitations
  • a predetermined change control plan, allowing for post-approval adaptations of software without the need for re-approval (for a new use, new approval is required)
  • QMS review (code, data storage protocols, Health Protection Agency guidelines, patient confidentiality).

Example scenario: The developers submit an ‘investigational device exemption’ submission to the FDA, seeking a simplified design by requesting observational studies of the device instead of randomised controlled trials. They provide a proposed research design protocol to the FDA. Once the FDA approves it, they begin trials in 15 facilities with 50 patients each, aiming to prove 98 per cent accuracy and a reduction in waiting times at clinics. During testing, no significant adverse events are reported. The safety and efficacy information is submitted to the FDA.
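
As a self-contained worked example of how such an endpoint might be analysed (the trial outcome below is hypothetical, extending the scenario’s 15 facilities of 50 patients each), suppose 741 of the 750 draws succeed, an observed accuracy of 98.8 per cent. A Wilson score interval shows the 95 per cent lower confidence bound still falls below the 98 per cent target, so this sample alone would not demonstrate the endpoint.

```python
import math


def wilson_lower_bound(successes: int, n: int, z: float = 1.96) -> float:
    """Lower end of the Wilson score confidence interval for a proportion
    (z = 1.96 corresponds to a two-sided 95% interval)."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half_width = (z / denom) * math.sqrt(
        p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)
    )
    return centre - half_width


# Hypothetical result: 741 successful draws across 15 facilities x 50 patients.
lower = wilson_lower_bound(successes=741, n=750)
print(f"Observed accuracy: {741 / 750:.3f}")  # 0.988
print(f"95% lower bound:   {lower:.3f}")      # ~0.977, below the 0.98 target
```

This is precisely the kind of statistical detail that agreeing endpoints with the regulator in advance is meant to settle.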

Step 4: FDA review

Description: FDA review teams thoroughly examine the submitted data on the drug or device and decide whether to approve it.

Developer responsibilities: Work closely with the FDA to provide access to all requested information and facilities (as described above).

FDA responsibilities:

  • Assign specialised staff to review all submitted data.
  • In some cases, conduct inspections and audits of developer’s records and evidence, including site visits.
  • If needed, seek advice from an advisory committee, usually appointed by the FDA Commissioner with input from the Secretary of the federal Department of Health and Human Services.[70] The committee may include representation from patients, scientific academia, consumer organisations and industry (if decision-making is delegated to the committee, only scientifically qualified members may vote).

Required outcomes: Approval and registration, or no approval with request for additional evidence.

Example scenario: For novel software like the example here, there might be significant uncertainty. The FDA could request more information from the developer and consult additional experts. Decision-making may be delegated to an advisory committee to discuss open questions and approval.

Step 5: Post-market monitoring

Description: The aim of this step is to detect ‘adverse events’[71] (discussed further below), to increase safety iteratively. At this point, all devices are labelled with Unique Device Identifiers, which support monitoring and reporting from development through to market, particularly in identifying the underlying causes of, and corrective actions for, adverse events.
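
By analogy, a future AI regulator could require a ‘unique model identifier’ that binds a human-readable lineage (developer, model, version) to the exact released weights, so that incidents reported downstream can be traced to a specific model build. The format below is invented purely for this sketch; no such standard currently exists for foundation models.

```python
import hashlib


def make_model_identifier(developer: str, model: str, version: str,
                          weights_digest: str) -> str:
    """Build an illustrative UDI-style identifier: a readable lineage part
    plus a short checksum tying it to the exact model weights.
    The format is invented for this sketch, not a standard."""
    lineage = f"{developer}/{model}/{version}"
    checksum = hashlib.sha256(
        f"{lineage}:{weights_digest}".encode("utf-8")
    ).hexdigest()[:12]
    return f"{lineage}#{checksum}"


# 'weights_digest' would in practice be a hash of the released weights file.
print(make_model_identifier("examplelab", "example-model", "0.1",
                            weights_digest="3f2a9c..."))
```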

Developer responsibilities: Any changes or upgrades must be clearly documented, within the agreed change control plan.

FDA responsibilities:

  • Monitor safety of all drugs and devices once available for use by the public.
  • Monitor compliance on an ongoing basis through the QMS, with safety and efficacy data reviewed every six to 12 months.
  • Maintain a database on adverse events and recalls.[72]

Required outcomes: No adverse events or diminishing efficacy. If safety issues occur, the FDA may issue a recall.

Example scenario: Due to a reported safety incident with the blood-taking software, the FDA inspects internal emails and facilities. In addition, every six months, the FDA reviews a one per cent sample of patient data in the QMS and conducts interviews with patients and staff from a randomly selected facility.

Risk-reducing aspects of FDA oversight

Our interviews with experts on the FDA and a literature review[73] highlighted several themes. We group them into five risk-reducing aspects below.

Risk- and novelty-driven oversight

The approval gates described in the previous section lead to iterative oversight using QMS and jointly agreed research endpoints, as well as continuous post-market monitoring.

Approval gates are informed by risk controllability. Risk controllability is understood by considering the severity of harm to people; the likelihood of that harm occurring; proliferation and the duration of the population’s exposure; potential false results; patient tolerance of risk; risk factors for people administering or using the drug or device, such as caregivers; detectability of risks; risk mitigations; the drug or device developer’s compliance history; and how much uncertainty there may be around any of these factors.[74]
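
For illustration only (classification is a holistic regulatory judgement, not a formula), the toy function below shows how a few of these factors could be combined into a coarse triage across the three device classes. The scores, weights and thresholds are all assumptions invented for this sketch.

```python
def toy_risk_class(severity: int, likelihood: int,
                   detectability: int, mitigations: int) -> str:
    """Toy triage: score severity/likelihood 1 (low concern) to 3 (high),
    and detectability/mitigations 1 (weak) to 3 (strong). Detectable risks
    and available mitigations pull the class down.
    Purely illustrative; not how the FDA actually classifies devices."""
    score = severity * likelihood - (detectability + mitigations)
    if score >= 4:
        return "Class III (pre-market approval required)"
    if score >= 1:
        return "Class II (special controls)"
    return "Class I (general controls)"


# Hypothetical invasive device: severe harm, moderate likelihood,
# failures hard to detect, few mitigations available.
print(toy_risk_class(severity=3, likelihood=2, detectability=1, mitigations=1))
# -> Class III (pre-market approval required)
```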

Class III devices and related software – those that may guide critical clinical decisions or that are invasive or life-supporting – need FDA pre-approval before the product is marketed to the public. In addition, the clinical research design needs to be approved by the FDA.

Continuous, direct engagement of FDA with developers throughout the development process

There can be inspections at any step of the development and deployment process. Across all oversight steps, the FDA’s assessments are independent and not reliant on input from private auditors who may have profit incentives.

In the context of foundation models, where safety standards are unclear and risk assessments are therefore more exploratory, these assessments should not be guided by profit incentives.

In cases where the risks are less severe, for example, Class II devices, the FDA is supported by accredited external reviewers.[75] External experts also support reviews of novel technology where the FDA lacks expertise, although this approach has been criticised (see limitations below and advisory committee description above).

FDA employees review planned clinical trials, as well as clinical trial data produced by developers and their contractors. In novel, high-stakes cases, a dedicated advisory committee reviews evidence and decides on approval. Post-market, the FDA reviews samples of usage, complaint and other data approximately every six months.

Wide-ranging information access

By law, the FDA is empowered to request comprehensive evidence through audits, conduct inspections[76] and check the QMS. The FDA’s QMS regulation requires documented, comprehensive managerial processes for quality planning, purchasing, acceptance activities, nonconformities and corrective/preventive actions throughout design, production, distribution and post-market monitoring. While the FDA has statutory powers to access comprehensive information, for example, on clinical trials, patient data and in some cases internal emails, it releases only a summary of safety and efficacy post-approval.

Putting the burden of proof on the developer

The FDA must approve clinical trials and their endpoints, and the labelling materials for drugs and medical devices, before they are approved for market. This model puts the burden of proof on the developer to provide this information or be unable to sell their product.

A clear approval gate entails the following steps:

  • The product development process in scope: The FDA’s move into regulating SaMD required it to translate regulatory approval gates from a drug approval process to the stages of a software development process. In the SaMD context, a device may be made up of different components, including software and hardware, that come from other suppliers or actors further upstream in the product development process. The FDA ensures the safety and efficacy of each component by requiring all components to undergo testing. If a component has been previously reviewed by the FDA, future uses of it can undergo an expedited review. In some cases, devices may use open-source Software of Unknown Provenance (SOUP). Such software needs either to be clearly isolated from critical components of the device, or to undergo demonstrable safety testing.[77]
  • The point of approval in the product development process: Effective gates occur once a risk is identifiable, but before it can proliferate or turn into harms. Certain risks (such as differential impacts on diverse demographic groups) may not be identifiable until after the intended uses of the device are made clear (for example, will it be used in a hospital or a care home?). For technology with a wide spectrum of uses, like gene editing, developers must specify intended uses, and the FDA allows trials with human subjects only in a few cases, where other treatments have higher risks or a significantly lower chance of success.[78]
  • The evidence required to pass the approval gate: This is tiered depending on the risk class, as already described. The FDA begins with an initial broad criterion, such as simply not causing harm to the human body when used. Developers and contractors then provide exploratory evidence. Based on this, in the case of medicines, the regulator learns and makes further specifications, for example, around the drug elimination period. For medical devices such as heart stents, evidence could include the percentage reduction in the rate of major cardiac events.

Balancing innovation and risks enables regulatory authority to be built over time

The FDA enables innovation and access by streamlining approval processes (for example, similarity exemptions, pre-submissions) and approvals of drugs with severe risks but high benefits. Over time, Congress has provided the FDA with increasing information access and enforcement powers and budgets, to allow it to enforce ‘safe access’.

The FDA has covered more and more areas over time, recently adding tobacco control to its remit.[79] FDA-regulated products account for about 20 cents of every dollar spent by US consumers.[80] It has the statutory power to issue warnings, make seizures, impose fines and pursue criminal prosecution.

Safety and accessibility need to be balanced. For example, a piece of software that automates oxygen control may perform slightly less well than healthcare professionals, but if it reduces the human time and effort involved and therefore increases accessibility, it may still be beneficial overall. By finding the right balance, the FDA builds an overall reputation as an agency providing mostly safe access, enabling continued regulatory power.[81] When risk uncertainty is high, it can slow down the marketing of technologies, for example, allowing only initial, narrow experiments for novel technologies such as gene editing.[82]

The FDA approach does not rely on any one of these risk-reducing aspects alone. Rather, the combination of all five ensures the safety of FDA-regulated medical devices and drugs in most cases.[83] The five together also allow the FDA to continuously learn about risks and improve its approval process and its guidance on safety standards.

Risk- and novelty-driven oversight focuses learning on the most complex and important drugs, software and devices. Direct engagement and access to a wide range of information is the basis of the FDA’s understanding of new products and new risks.

With the burden of proof on developers through pre-approvals, they are incentivised to ensure the FDA is informed about safety and efficacy.

As a result of this approach to oversight, the FDA is better able to balance safety and accessibility, leading to increased regulatory authority.

‘The burden is on the industry to demonstrate the safety and effectiveness, so there is interest in educating the FDA about the technology.’

Former FDA Chief Counsel

The history of the FDA: 100+ years of learning and increasing power [84] [85] [86]


The creation of the FDA was driven by a series of medical accidents that exposed the risks drug development can pose to public safety. While the early drug industry initially pledged to self-regulate, and members of the public viewed doctors as the primary keepers of public safety, public outcry over tragedies like the Elixir Sulfanilamide disaster (see below) led to calls for an increasingly powerful federal agency.

Today the FDA employs around 18,000 people (2022 figures) with an $8 billion budget (2023 data). The FDA’s approach to regulating drugs and devices involves learning iteratively about the risks and benefits of products with every new evidence review it undertakes as part of the approval process.

Initiation

The 1906 Pure Food and Drugs Act was the first piece of legislation to regulate drugs in the USA. A groundbreaking law, it took nearly a quarter-century to formulate. It prohibited interstate commerce of adulterated and misbranded food and drugs, marking the start of federal consumer protection.

Learning through trade controls: This Act established the importance of regulatory oversight for product integrity and consumer protection.

Limited mandate

From 1930 to 1937, there were failed attempts to expand FDA powers, with relevant bills not being passed by Congress. This period underscored the challenges in evolving regulatory frameworks to meet public health needs.

Limited power and limited learning.

Elixir Sulfanilamide disaster

This 1937 event, where an untested toxic solvent caused over 100 deaths, marked a turning point in drug safety awareness.

Learning through post-market complaints: The Elixir tragedy emphasised the crucial need for pre-market regulatory oversight in pharmaceuticals.

Extended mandate

In 1938, previously proposed legislation, the Food, Drug, and Cosmetic Act, was passed into law. It changed the FDA’s regulatory approach by mandating review processes without requiring proof of fraudulent intent.

Learning through mandated information access and approval power: Pre-market approvals and the FDA’s access to drug testing information enabled the building of appropriate safety controls.

Safety reputation

During the 1960s, the FDA’s refusal to approve thalidomide (a drug prescribed to pregnant women that caused an estimated 80,000 miscarriages and infant deaths, and deformities in 20,000 children worldwide) further established its commitment to drug safety.

Learning through prevented negative outcomes: The thalidomide situation led the FDA to calibrate its safety measures by monitoring and preventing large-scale health catastrophes, especially in comparison with similar countries. Post-market recalls were included in the FDA’s regulatory powers.

Extended enforcement power

The 1962 Kefauver-Harris Amendment to the Federal Food, Drug, and Cosmetic Act was a significant step, requiring new drug applications to provide substantial evidence of efficacy and safety.

Learning through expanded enforcement powers: This period reinforced the evolving role of drug developers in demonstrating the safety and efficacy of their products.

Balancing accessibility with safety

The 1984 Drug Price Competition and Patent Term Restoration Act marked a balance between drug safety and accessibility, simplifying generic drug approvals. In the 2000s, Risk Minimization Action Plans were introduced, emphasising the need for drugs to have more benefits than risks, monitored at both the pre- and the post-market stages.

Learning through a lifecycle approach: This era saw the FDA expanding its oversight scope across product development and deployment for a deeper understanding of the benefit–risk trade-off.

Extended independence

The restructuring of advisory committees in the 2000s and 2010s enhanced the FDA’s independence and decision-making capability.

Learning through independent multi-stakeholder advice: The multiple perspectives of diverse expert groups bolstered the FDA’s ability to make well-informed, less biased decisions, reflecting a broad range of scientific and medical insights – although critics and limitations remain (see below).

Extension to new technologies

In the 2010s and 2020s, recognising the potential of technological advancements to improve healthcare quality and cost efficiency, the FDA began regulating new technologies such as AI in medical devices.

Learning through a focus on innovation: Keeping an eye on emerging technologies.

The limitations of FDA oversight

The FDA’s oversight regime is built for regulating food, drugs and medical devices, and more recently extended to software used in medical applications. Literature reviews[87] and interviewed FDA experts suggest three significant limitations of this regime’s applicability to other sectors.

Limited types of risks controlled

The FDA focuses on risks to life posed by product use, therefore focusing on reliability and (accidental) misuse risks. Systemic risks such as accessibility challenges, structural discrimination issues and novel risk profiles are not as well covered.[88] [89]

  • Accessibility risks include the cost barriers of advanced biotechnology drugs or SaMD for underprivileged groups.[90]
  • Structural discrimination risks include disproportionate risks to particular demographics caused by wider societal inequalities and a lack of representation in data. These may not appear in clinical trials or in single-device post-market monitoring. For example, SaMD algorithms have systematically misclassified Black patients’ healthcare needs because they have suggested treatment based on past healthcare spending data that did not accurately reflect their requirements.[91]
  • Equity risks arise when manufacturers claim average accuracy across a population, or intended use only for a specific population (for example, people aged 60+). The FDA only considers whether a product safely and effectively delivers according to the claims of its manufacturers – it does not go beyond this to urge them to reach other populations. It does not yet have comprehensive algorithmic impact assessments to ensure equity and fairness.
  • False similarity risks originate in the accelerated FDA 510(k) approval pathway for medical devices and software through comparison with already-approved products, referred to as predicate devices. Reviews of this pathway have shown ‘predicate creep’, where multiple generations of predicate devices slowly drift away from the originally approved use.[92] This could mean that predicate devices may not provide suitable comparisons for new devices.
  • Novel risk profiles challenge the standard regulatory approach of the FDA, which rests on risk detection through trials before risks proliferate through marketing. Risks that are not typically detectable in clinical trials, due to their novelty or new application environments, may be missed. For example, the risk of contaminated food or water is clear, but it may be less clear how to monitor for new pathogens that might be significantly smaller or otherwise different to those detected by existing routines.[93] While any ‘adverse events’ need to be reported to the FDA, risks that are difficult to detect might be missed.

Limited number of developers due to high costs of compliance

The FDA’s stringent approval requirements lead to costly approval processes that only large corporations can afford, as a multi-stage clinical trial can cost tens of millions of dollars.[94] [95] This can lead to oligopolies and monopolies, high drug prices because of limited competition, and innovation focused on areas with high monetary returns.

If this is not counteracted through governmental subsidies and reimbursement incentives, groups with limited means to pay for medications can face accessibility issues. It remains an open question whether small companies should be able to develop and market severe-risk technologies, or how governmental incentives and efforts can democratise the drug and medical device – or foundation model – development process.

Reliance on industry for expertise

The FDA sometimes relies on industry expertise, particularly in novel areas where clear benchmarks have not been developed and knowledge is concentrated in industry. This means that the FDA may seek input from external consultants and its advisory committees to make informed decisions.[96]

An overreliance on industry could raise concerns around regulatory capture and conflicts of interest – similar to other agencies.[97] For example, around 25 per cent of FDA advisory committee members had conflicts of interest in the past five years.[98] In principle, conflicted members are not allowed to participate, but dependency on their expertise regularly leads to this requirement being waived.[99] [100] [101] External consultants have been conflicted, too: one notable scandal occurred when McKinsey advised the FDA on opioid policy while being paid by corporations to help them sell the same drugs.[102]

A lack of independent expertise can reduce opportunities for the voices of people affected by high-risk drugs or devices to be heard. This in turn may undermine public trust in new drugs and devices. Oversight processes that are not heavily dependent on industry expertise and funding have also been shown to discover more, and more significant, risks and inaccuracies.[103]

Besides these three main limitations, others include enforcement issues for small-scale illegal deployment of SaMD, which can be hard to identify;[104] [105] and device misclassifications in new areas.[106]

FDA-style oversight for foundation models

FDA Class III devices are complex, novel technologies with potentially severe risks to public health and uncertainties regarding how to detect and mitigate these risks.[107]

Foundation models are at least as complex, more novel and – alongside their potential benefits – likewise pose potentially severe risks, according to the experts we interviewed and recent literature.[108] [109] [110] They are also deployed across the economy, interacting with millions of people, meaning they are likely to pose systemic risks that are far beyond those of Class III medical devices.[111]

However, the risks of foundation models are so far not fully clear, risk mitigation measures are uncertain and risk modelling is poor or non-existent.

Leading AI researchers such as Stuart Russell and Yoshua Bengio, independent research organisations, and AI developers have flagged the riskiness, complexity and black-box nature of foundation models.[112] [113] [114] [115] [116] In a review on the severe risks of foundation models (in this case, the accessibility of instructions for responding to biological threats), the AI lab Anthropic states: ‘If unmitigated, we worry that these risks are near-term, meaning they may be actualised in the next two to three years.’[117]

As seen in the history of the FDA outlined above, it was a reaction to severe harm that led to its regulatory capacity being strengthened. Those responsible for AI governance would be well advised to act ahead of time to pre-empt and reduce the risk of similarly severe harms.

The similarities between foundation models and existing, highly regulated Class III medical devices – in terms of complexity, novelty and risk uncertainties – suggest that they should be regulated in a similar way (see Figure 5).

However, foundation models differ in important ways from Software as a Medical Device (SaMD). The definitions themselves reveal inherent differences in the range of applications and intended use:

Foundation models are AI models capable of a wide range of possible tasks and applications, such as text, image or audio generation. They can be stand-alone systems or can be used as a ‘base’ for many other more narrow AI applications.[118]

SaMD is more specific: it is software that is ‘intended to be used for one or more medical purposes that perform[s] these purposes without being part of a hardware medical device’.[119]

However, the most notable differences are more subtle. Even technology applied across a wide range of purposes, like general drug dispersion software, can be effectively regulated with pre-approvals. This is because the points of risk and the pathways to dangerous outcomes are well understood and agreed upon, and they all start from the distribution of products to consumers – something in which the FDA can intervene.

The first section of this chapter outlines why this is not yet the case for foundation models. The second section illustrates how FDA-style oversight can bridge this gap generally. The third section details how these mechanisms could be applied along the foundation model supply chain – the different stages of development and deployment of these models.

The foundation model challenge: unclear, distributed points of risk

In this section we discuss two key points of risk: 1) risk origination points, when risks arise initially; and 2) risk proliferation points, when risks spread without being controllable.

A significant challenge that foundation models raise is the difficulty of identifying where different risks originate and proliferate in their development and deployment, and which actors within that process should be held responsible for mitigating and providing redress for those harms.[120]

Risk origination and proliferation examples

Bias

Some risks may originate in multiple places in the foundation model supply chain. For example, the risk of a model producing outputs that reinforce racial stereotypes may originate in the data used to train the model, how it was cleaned, the weights that the model developer used, which users the model was made available to, and what kinds of prompts the end user of the model is allowed to make.[121] [122]


In this example, a series of evaluations for different bias issues might be needed throughout the model’s supply chain. The model developer and dataset provider would need to be obliged to proactively look for and address known issues of bias. It might also be necessary to find ways to prohibit or discourage end users from prompting a model for outputs that reinforce racial stereotypes.

Cybercrime

Another example is reports of GPT-4 being used to write code for phishing operations to steal people’s personal information. Where in the supply chain did such cyber-capabilities originate and proliferate?[123] [124] Did the risk originate during training (while general code-writing abilities were being built) or after release (allowing requests compatible with phishing)? Did it proliferate through model leakage, widely accessible chatbots like ChatGPT or Application Programming Interfaces (APIs), or downstream applications?

Some AI researchers have conceptualised the uncertainty over risks as a matter of the unexpected capabilities of foundation models. This ‘unexpected capabilities problem’ may arise during models’ development and deployment.[125] Exactly what risks this will lead to cannot be identified reliably, especially not before the range of potential use cases is clear.[126] In turn, this uncertainty means that risks may be more likely to proliferate rapidly (the ‘proliferation problem’),[127] and to lead to harms throughout the lifecycle – with limited possibility for recall (the ‘deployment safety problem’).[128]

The challenge in governing foundation models is therefore in identifying and mitigating risks comprehensively before they proliferate.[129]

There is a distinction to draw between risk origination (the point in the supply chain at which a risk such as toxic content may arise) and risk proliferation (the point in the supply chain at which a risk can be widely distributed to downstream actors). Identifying points of risk origination and proliferation can be challenging for different kinds of risks.

Foundation model oversight needs to be continuous throughout the supply chain. Identifying all inherent risks in a foundation model upstream is hard. Leaving risks to downstream companies is not the solution, because they may have proliferated already by this stage.

There are tools available to help upstream foundation model developers reduce risk before training (through filtering data inputs), and to assess risks during training (through clinical trial style protocols). More of these tools are needed. They are most effective when applied at the foundation model layer (see Figure 2 and Figure 6), given the centralised nature of foundation models. However, some risks might arise or be detectable only at the application layer, so tools for intervention at this layer are also necessary.

Applying key features of FDA-style oversight to foundation models

How should an oversight regime be designed so that it suits complex, novel, severe-risk technologies with distributed, unclear points of risk origination and proliferation?

Both foundation models and Class III devices pose potentially severe levels of risk to public safety and therefore require governmental oversight. For the former, this is arguably even more important given national security concerns (for example, the risk that such technologies could enable cyberattacks or widespread disinformation campaigns at far greater scales than current capabilities allow).[130] [131] [132]

Government oversight is also needed because of the limitations of private insurance for severe risks.

As seen in the cases of nuclear waste insurance or financial crises, large externalities and systemic risks need to be underwritten by government.

Below we consider what we can learn from the oversight of FDA-regulated products and whether an FDA-style approach could provide effective oversight of foundation models.

Building on Raji et al.’s recent review[133] and interviews, current oversight regimes for foundation models can be understood alongside, and compared with, the core risk-reducing aspects of the FDA approach, as depicted in Figure 7.[134] [135] Current oversight and evaluations of GPT-4 lag behind FDA oversight in all dimensions.

Figure 7: Governance of GPT-4’s development and release, according to its 2023 system card and interviews, vs. FDA governance of Class III devices.[136] [137] [138] While necessarily simplified, characteristics furthest to the right fit best for complex, novel technologies with potentially severe risks and unclear risks and risk measures.[139]

‘We are in a “YOLO [you only live once]” culture without meaningful specifications and testing – “build, release, see what happens”.’

Igor Krawczuk on current oversight of commercial foundation models

The complexity and risk uncertainties of foundation models could justify similar levels of oversight to those provided by the FDA in relation to Class III medical devices.

This would involve an extensive ecosystem of second-party, third-party and regulatory oversight to monitor and understand the capabilities of foundation models and to detect and mitigate risks. The high speed of progress in foundation model development requires adaptable oversight institutions, including non-governmental organisations with specialised expertise. AI regulators need to establish and enforce improved foundation model oversight across the development and deployment process.

General principles for applying key features of the FDA’s approach to foundation model governance

  1. Establish continuous, risk-based evaluations and audits throughout the foundation model supply chain. Existing bug bounty programmes[140] and complaint-driven evaluation do not sufficiently cover potential risks. The FDA’s incident reporting system captures fewer risks than its universal risk-based reviews before market entry and its post-market monitoring requirements.[141] Therefore, review points need to be defined across the supply chain of foundation models, with risk-based triggers. As already discussed, risks can originate at multiple sources, potentially simultaneously. Continuous engagement of reviewers and evaluators is therefore important to detect and mitigate risks before they proliferate.
  2. Empower regulatory agencies to evaluate critical safety evidence directly, supported by a third-party ecosystem. First-party self-assessments and second-party contracted auditing have consistently proven to be lower quality than accredited third-party or governmental audits.[142] [143] [144] [145] Regulators of foundation models should therefore have direct access to assess evaluation and audit evidence. This is especially significant when operating in a context where standards are unclear and audits are therefore more exploratory (in the style of evaluations). Regulators can also improve their understanding by consulting independent experts.
  3. Ensure independence of regulators and external evaluators. Oversight processes not dependent on industry expertise and funding have been proven to discover more, and more significant, risks and inaccuracies, especially in complex settings with vague standards.[146] [147] Inspired by the FDA approach, foundation model oversight could be funded directly through mandatory fees from AI labs and only partly through federal funding. Sufficient resourcing in these ways is essential, to avoid the need for additional resourcing that is associated with potential conflicts of interest. Consideration should also be given to an upstream regulator of foundation models as existing sector-specific regulators may only have the ability to review downstream AI applications. The level of funding for such a regulator needs to be similar to that of other safety-critical domains, such as medicine. Civil society and external evaluators could be empowered through access to federal computing infrastructure for evaluations and accreditation programmes.
  4. Enable structured access to foundation models and adjacent components for evaluators and civil society. Access to information is the foundation of an effective audit (although while it is necessary, it is not sufficient on its own).[148] Providing information access to regulators – not just external auditors – increases audit quality.[149] Information access needs to be tiered to protect intellectual property and limit the risks of model leakage.[150] [151] Accessibility to civil society could increase the likelihood of innovations that meet the needs of people who are impacted by their use, for example, through understanding public perceptions of the risks and perceived benefits of technologies. Foundation model regulation needs to strike a risk-benefit balance.
  5. Enforce a foundation model pre-market approval process, shifting the burden of proof to developers. If the regulator has the power to stop the development or sale of products, this significantly increases developers’ incentive to provide sufficient safety information. The regulatory burden needs to be distributed across the supply chain – with requirements in line with the risks at each layer of the supply chain. Cross-context risks and those with the most potential for wide-scale proliferation need to be regulated upstream at the foundation model layer; context-dependent risks should be addressed downstream in domain-specific regulation.

‘Drawing from very clear examples of real harm led the FDA to put the burden of proof on the developers – in AI this is flipped. We are very much in an ex post scenario with the burden on civil society.’

Co-founder of a leading AI thinktank


‘We should see a foundation model as a tangible, auditable product and process that starts with the training data collection as the raw input material to the model.’

Kasia Chmielinski, Harvard Berkman Klein Center for Internet & Society

Learning through approval gates

The FDA’s capabilities have increased over time. Much of this has occurred through setting approval gates, which become points of learning for regulators. Given the novelty of foundation models and the lack of an established ‘state of the art’ for safe development and deployment, a similar approach could be taken to enhance the expertise of regulators and external evaluators (see Figure 2).

Approval gates can provide regulators with key information throughout the foundation model supply chain.

Some approval gates already exist under current sectoral regulation for specific downstream domains. At the application layer of a foundation model’s supply chain, the context of its use will be clearer than at the developer layer. Approval gates at this stage could require evidence similar to clinical studies for medical devices, to approximate risks. This could be gathered, for example, through an observational study on the automated allocation of physicians’ capacity based on described symptoms.

Current sectoral regulators may need additional resources, powers and support to appropriately evaluate the evidence and make a determination of whether a foundation model is safe to pass an approval gate.

Every time a foundation model is suggested for use, companies may already need to (or should) collect sufficient context-specific safety evidence and provide it to the regulator. For the healthcare capacity allocation example above, existing FDA or MHRA (Medicines and Healthcare products Regulatory Agency, UK) requirements and approval gates on clinical decision support software currently support extensive evaluation of such applications.[152]

Upstream stages of the foundation model supply chain, in particular, lack an established ‘state of the art’ defining industry standards for development and underpinning regulation. A gradual process might therefore be required to define approval requirements and the exact location of approval gates.

Initially, lighter approval requirements and stronger transparency requirements will enable learning for the regulator, allowing it to gradually set optimal risk-reducing approval requirements. The model access required by the regulator and third parties for this learning could be provided via mechanisms such as sandboxes, audits or red teaming, detailed below.

Red teaming is an approach originating in computer security. It describes exercises where individuals or groups (the ‘red team’) are tasked with looking for errors, issues or faults with a system, by taking on the role of a bad actor and ‘attacking’ it. In the case of AI, it has increasingly been adopted as an approach to look for risks of harmful outputs from AI systems.[153]

Once regulators have agreed inclusive[154] international standards and benchmarks for testing of upstream capabilities and risks, they should impose standardised thresholds for approval and endpoints. Until that point, transparency and scrutiny should be increased, and the burden of proof should be on developers to prove safety to regulators at approval gates.

The next section discusses in more specific detail how FDA-style processes could be applied to foundation model governance.

‘We need end-to-end oversight along the value chain.’

CEO of an algorithmic auditing firm

Applying specific FDA-style processes along the foundation model supply chain

Risks can manifest across the AI supply chain. Foundation models and downstream applications can have problematic behaviours originating in pre-training data, or they can develop new ones when integrated into complex environments (like a hospital or a school). This means that new risks can emerge over time.[155] Policymakers, researchers, industry and the public therefore ‘require more visibility into the risks presented by AI systems and tools’.

Regulation can ‘play an important role in making risks more visible, and the mitigation of risk more actionable, by developing policy to enable a robust and interconnected evaluation, auditing, and disclosure ecosystem that facilitates timely accountability and remediation of potential harms’.[156]

The FDA has processes, regulatory powers and a culture that helps to identify and mitigate risks across the development and deployment process, from pre-design through to post-market monitoring. This holistic approach provides lessons for the AI regulatory ecosystem.

There are also significant similarities between specific FDA oversight mechanisms and proposals for oversight in the AI space, suggesting that the latter proposals are generally feasible. In addition, new ideas for foundation model oversight can be drawn from the FDA, such as in setting endpoints that determine the evidence required to pass an approval gate. This section draws out key lessons that AI regulators could take from the FDA approach and applies them to each layer of the supply chain.

Data and compute layers oversight

There is an information asymmetry between governments and AI developers. This is demonstrated, for example, in the way that governments have been caught off-guard by the release of ChatGPT. This also has societal implications in areas like the education sector, where universities and schools are having to respond to a potential increase in students’ use of AI-generated content for homework or assessments.[157]

To be able to anticipate these implications, regulators need much greater oversight on the early stages of foundation model development, when large training runs (the key component of the foundation model development process) and the safety precautions for such processes are being planned. This will allow greater foresight over potentially transformative AI model releases, and early risk mitigation.

Pre-submissions and Good Documentation Practice

At the start of the development process, the FDA uses pre-submissions (pre-subs), which allow it to conduct ‘risk determination’. This benefits the developer because they can get feedback from the regulator at various points, for example on protocols for clinical studies. The aim is to provide a path from device conceptualisation through to placement on the market.

This is similar to an idea that has recently gained some traction in the AI governance space: that labs should submit reports to regulators ‘before they begin the training process for new foundation models, periodically throughout the training process, and before and following model deployment’.[158]

This approach would enable learning and risk mitigation by giving access to information that currently resides only inside AI labs (and which has not so far been voluntarily disclosed), for example covering compute and capabilities evaluations,[159] what data is used to train models, or environmental impact and supply chain data.[160] It would mirror the FDA’s Quality Management System (QMS), which documents compliance with standards (ISO 13485/820) and is based on Good Documentation Practice throughout the development and deployment process to ensure risk mitigation, validation and verification, and traceability (to support regulators in the event of recall or investigations).

As well as documenting compliance in this way, the approach means that the regulator would need to demonstrate similar good practice when handling pre-submissions. Developers would have concerns around competition: the relevant authorities would need to be legally compelled to observe confidentiality, to protect intellectual property rights and trade secrets. A procedure for documenting and submitting high-value information at the compute and data input layer would be the first step towards an equivalent to the FDA approach in the AI space.

Transparency via Unique Device Identifiers (UDIs)

The FDA uses UDIs for medical devices and stand-alone software. The aim of this is to support monitoring and reporting throughout the lifecycle, particularly to identify the underlying causes of ‘adverse events’ and what corrective action should be taken (this is discussed further below).[161] This holds some similarities to AI governance proposals, particularly the suggestion for compute verification to help ensure that (pre-) training rules and safety standards are being followed.

Specifically for the AI supply chain, this would apply at the developer layer, to the essential hardware used to train and run foundation models: compute chips. Chip registration and monitoring has gained traction because, unlike other components of AI development, this hardware can be tracked in the same manner as other physical goods (like UDIs). It is also seen as an easy win. Advanced chips are usually tagged with unique numbers, so regulators would simply need to set up a registry; this could be updated each time the chips change hands.[162]

Such a registry would enable targeted interventions. For example, Jason Matheny, the CEO of RAND, suggests that regulators should ‘track and license large concentrations of AI chips’, while ‘cloud providers, who own the largest clusters of AI chips, could be subject to “know your customer” (KYC) requirements so that they identify clients who place huge rental orders that signal an advanced AI system is being built’.[163]

This approach would allow regulators and relevant third parties to track use throughout the lifecycle – starting with monitoring for large training runs to build advanced AI models and to verify safety compliance (for example, via KYC checks or providing information about the cybersecurity and risk management measures) for these training runs and subsequent development decisions. It would also support them to hold developers accountable if they do not comply.
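
To make the registry idea concrete, below is a minimal sketch of how chip registration, transfer logging and a KYC-style flag for large holdings could fit together. The class names, fields and alert threshold are illustrative assumptions, not an existing system or a proposed standard.

```python
# Minimal sketch of a chip registry with transfer logging and a KYC-style
# trigger for large concentrations of compute. All names, fields and the
# alert threshold are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ChipRecord:
    chip_id: str          # manufacturer-assigned unique identifier
    owner: str            # current registered owner
    history: list = field(default_factory=list)  # (from, to) transfers

class ChipRegistry:
    KYC_ALERT_THRESHOLD = 10_000  # assumed: flag owners above this many chips

    def __init__(self):
        self.chips: dict[str, ChipRecord] = {}

    def register(self, chip_id: str, owner: str) -> None:
        self.chips[chip_id] = ChipRecord(chip_id, owner)

    def transfer(self, chip_id: str, new_owner: str) -> None:
        # Record the change of hands so the full chain of custody is auditable.
        record = self.chips[chip_id]
        record.history.append((record.owner, new_owner))
        record.owner = new_owner

    def holdings(self, owner: str) -> int:
        return sum(1 for c in self.chips.values() if c.owner == owner)

    def kyc_flags(self) -> set[str]:
        """Owners whose holdings suggest a possible large training run."""
        owners = {c.owner for c in self.chips.values()}
        return {o for o in owners
                if self.holdings(o) >= self.KYC_ALERT_THRESHOLD}
```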

Quality Management System (QMS)

The FDA’s quality system regulation is sometimes wrongly assumed to be only a ‘compliance checklist’ to be completed before the FDA approves a product. In fact, the QMS – a standardised process for documenting compliance – is intended to put ‘processes, trained personnel, and oversight’ in place to ensure that a product is ‘predictably safe throughout its development and deployment lifecycles’.

At the design phase, controls consist of design planning, design inputs that establish user needs and risk controls, design outputs, verification to ensure that the product works as planned, validation to ensure that the product works in its intended setting, and processes for transferring the software into the clinical environment.[164]

To apply a QMS to the foundation model development phase, it is logical to look at the data used to (pre-)train the model. This – alongside compute – is the key input at this layer of the AI supply chain. As with the pharmaceuticals governed by the FDA, the inputs will strongly shape the outputs, such as decisions on size (of dataset and parameters), purpose (while pre-trained models are designed to be used for multiple downstream tasks, some models are better suited than others to particular types of tasks) and values (for example, choices on filtering and cleaning the data).[165]

These decisions can lead to issues in areas such as bias,[166] copyright[167] and AI-generated data[168] throughout the lifecycle. Data governance and documentation obligations are therefore needed, with similar oversight to the FDA QMS for SaMD. This will build an understanding of where risks and harms originate and make it easier to stop them from proliferating by intervening upstream.

Regulators should therefore consider model and dataset documentation methods[169] for pre-training and fine-tuning foundation models. For example, model cards document information about the model’s architecture, testing methods and intended uses,[170] while datasheets document information about a dataset, including what kind of data is included and how it was collected and processed.[171] A comprehensive model card should also contain a risk assessment,[172] similar to the FDA’s controls for testing for effectiveness in intended settings. This could be based on uses foreseen by foundation model developers. Compelling this level of documentation would help to introduce FDA-style levels of QMS practice for AI training data.
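
As a rough illustration of what machine-readable documentation obligations could look like, the sketch below models a datasheet and model card as simple data structures. The field names are assumptions loosely inspired by the model card and datasheet literature cited above; a real schema would be defined by regulators or standards bodies.

```python
# Sketch of machine-readable model and dataset documentation, loosely
# modelled on model cards and datasheets. Field names are illustrative
# assumptions, not a regulatory schema.
from dataclasses import dataclass, field

@dataclass
class Datasheet:
    name: str
    sources: list[str]            # where the data came from
    collection_method: str        # how it was gathered
    cleaning_steps: list[str]     # filtering/cleaning decisions
    known_gaps: list[str]         # e.g. under-represented groups

@dataclass
class ModelCard:
    model_name: str
    architecture: str
    training_data: Datasheet
    intended_uses: list[str]
    evaluation_methods: list[str]
    risk_assessment: dict[str, str] = field(default_factory=dict)  # risk -> mitigation

# Hypothetical example for a fictional model.
card = ModelCard(
    model_name="example-fm-1",
    architecture="decoder-only transformer",
    training_data=Datasheet(
        name="web-corpus-v1",
        sources=["public web crawl"],
        collection_method="crawl",
        cleaning_steps=["deduplication", "toxicity filter"],
        known_gaps=["low-resource languages under-represented"],
    ),
    intended_uses=["text generation", "summarisation"],
    evaluation_methods=["held-out benchmarks", "red teaming"],
    risk_assessment={"biased outputs": "bias evaluations before release"},
)
```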

Core policy implications

An approach to pre-notification of, and information-sharing on, large training runs could use the pre-registration process of the FDA as a model. As discussed above, under the FDA regime, developers are continuously providing information to the regulator, from the pre-training stage onwards.[173] This should also be the case in relation to foundation models.

It might also make sense to track core inputs to training runs by giving UDIs to microchips. This would allow compliance with regulations or standards to be tracked and would ensure that the regulator would have sight of non-notified large training runs. Finally, the other key input into training AI models – data – should adhere to documentation obligations, similarly to FDA QMS procedures.

Foundation model developer layer oversight

Decisions taken early in the development process have significant implications downstream. For example, models (pre-)trained on fundamental human rights values produce outputs that are less structurally harmful.[174] To reduce the risk of harm as early as possible, critical decisions that shape performance across the supply chain should be documented as they are made, before wide-scale distribution, fine-tuning or application.

Third-party evidence generation and endpoints

The FDA model relies on third-party efficacy and safety evidence to prove ‘endpoints’ (targeted outcomes, jointly agreed between the FDA and developers before a clinical trial) as defined in standards or in an exploratory manner together with the FDA. This allows high-quality information on the pre-market processes for devices to be gathered and submitted to regulators.

Narrowly defined endpoints are very similar to one of the most commonly cited interventions in the AI governance space: technical audits.[175] A technical audit is ‘a narrowly targeted test of a particular hypothesis about a system, usually by looking at its inputs and outputs – for instance, seeing if the system performs differently for different user groups’. Such audits have been suggested by many AI developers and researchers and by civil society.[176]

Regulators should therefore develop – or support the AI ecosystem to develop – benchmarks and metrics to assess the capabilities of foundation models, and possibly thresholds that a model would have to meet before it could be placed on the market. This would help standardise the approach to third-party compliance with evidence and measurement requirements, as under the FDA, and establish a culture of safety in the sector.
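
The sketch below illustrates the kind of narrowly targeted technical audit described above: testing whether a system performs differently for different user groups. The accuracy metric and the tolerated gap are illustrative assumptions; real audits would use endpoints agreed with the regulator.

```python
# Sketch of a narrowly targeted technical audit: does the system perform
# differently for different user groups? The metric (accuracy gap) and
# threshold are illustrative assumptions, not a standard.
def accuracy(outcomes):
    """outcomes: list of (prediction, label) pairs."""
    return sum(p == y for p, y in outcomes) / len(outcomes)

def disparity_audit(outcomes_by_group: dict, max_gap: float = 0.05) -> bool:
    """Pass if per-group accuracy stays within an assumed tolerance."""
    scores = {g: accuracy(o) for g, o in outcomes_by_group.items()}
    gap = max(scores.values()) - min(scores.values())
    print(f"per-group accuracy: {scores}, gap: {gap:.3f}")
    return gap <= max_gap

# Hypothetical audit data: (prediction, true label) pairs per user group.
results = {
    "group_a": [(1, 1), (0, 0), (1, 1), (1, 0)],   # 75% accurate
    "group_b": [(1, 1), (1, 1), (0, 0), (0, 0)],   # 100% accurate
}
print("audit passed:", disparity_audit(results))   # gap of 0.25 -> fails
```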

Clinical trials

In the absence of narrowly defined endpoints and in cases of uncertainty, the FDA works with developers and third-party experts to enable more exploratory scrutiny as part of trials and approvals. Some of these trials are based on iterative risk management and explorative auditing, and on small-scale deployment to facilitate ‘learning by doing’ on safety issues. This informs what monitoring is needed, provides iterative advice and leads to learning being embedded in regulations afterwards.

AI regulators could use similar mechanisms, such as (regulatory) sandboxes. This would involve pre-market, small-scale deployment of AI models in real-world but controlled conditions, with regulator oversight.

This could be done using a representative population for red-teaming, expert ‘adversarial’ red-teamers (at the foundation model developer stage), or sandboxing more focused on foreseeable or experimental applications and how they interact with end users. In some jurisdictions, existing regulatory obligations could be used as the endpoint and offer presumptions of conformity – and therefore market access – after sandbox testing (as in the EU AI Act).

It will take work to develop a method and an ecosystem of independent experts who can work on third-party audits and sandboxes for foundation models. But this is a challenge the FDA has met, as have other sectors such as aviation, motor vehicles and banking.[177] An approach like the one described above has been used in aviation to monitor and document incidents and devise risk mitigation strategies. This helped to encourage a culture of safety in the industry, reducing fatality risk by 83 per cent between 1998 and 2008 (at the same time as a five per cent annual increase in passenger kilometres flown).[178]

Many organisations already exist that can service this need in the AI space (for example, Eticas AI, AppliedAI, Algorithmic Audit, Apollo Research), and more are likely to be set up.[179]

An alternative to sandboxes is to consider structured access for foundation models, at least until it can be proven that a model is safe for wide-scale deployment.[180] This would be an adaptation of the FDA’s approach to clinical trials, which allows experimentation with a limited number of people when the technology has a wide spectrum of uses (for example, gene editing) or when the risks are unclear, to get insights while preventing any harms that arise from proliferation.

Applied to AI, this could entail a staged release process – something leading AI researchers have already advocated for. This would involve model release to a small number of people (for example, vetted researchers) so that ‘beta’ testing is not done on the whole population via mass deployment.

Internal testing and disclosure of ‘adverse events’

Another mechanism used at the development stage by the FDA is internal testing and mandatory disclosure of ‘adverse events’. Regulators could impose similar obligations on foundation model developers, requiring internal audits and red teaming[181] and the disclosure of findings to regulators. Again, these approaches have been suggested by leading AI developers.[182] They could be made more rigorous by coupling them with mandatory disclosure, as under the FDA regime.

The AI governance equivalent of reporting ‘adverse events’ might be incident monitoring.[183] This would involve a ‘systematic approach to the collection and dissemination of incident analysis to illuminate patterns in harms caused by AI’.[184] The approach could be strengthened further by including ‘near-miss’ incidents.[185]

In developing these proposals, however, it is important to bear in mind challenges faced in the life sciences sector regarding how to make adverse event reporting suitably prescriptive. For example, clear indicators for what to report need to be established so that developers cannot claim ignorance and underreport.

However, it is not possible to foresee all potential effects of a foundation model. As a result, there needs to be some flexibility in incident reporting as well as penalties for not reporting. Medical device regulators in the UK have navigated this by providing high-level examples of indirect harms to look out for, and examples of the causes of these harms.[186] In the USA, drug and device developers are liable to report larger-scale incidents, enforced by the FDA through, for example, fines. If enacted effectively, this kind of incident reporting would be a valuable foresight mechanism for identifying emergent harms.
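
As an illustration, an AI incident record analogous to an adverse event report might look like the sketch below. The fields, severity scale and reporting rule are assumptions for demonstration, not an established schema.

```python
# Sketch of an incident record for AI incident monitoring, analogous to
# adverse event reporting. Fields, severity scale and the reporting rule
# are illustrative assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class IncidentReport:
    model_id: str
    occurred_on: date
    description: str
    severity: int          # assumed scale: 1 (minor) to 5 (critical)
    near_miss: bool        # harm averted but plausible
    suspected_origin: str  # e.g. "training data", "fine-tuning", "deployment"

def must_report_to_regulator(incident: IncidentReport) -> bool:
    """Assumed rule: report severe incidents and all near-misses."""
    return incident.severity >= 3 or incident.near_miss

# Hypothetical incident.
report = IncidentReport(
    model_id="example-fm-1",
    occurred_on=date(2024, 1, 15),
    description="model generated a phishing template on request",
    severity=4,
    near_miss=False,
    suspected_origin="deployment",
)
print(must_report_to_regulator(report))  # True
```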

A pre-market approval gate for foundation models

After the foundation model developer layer, regulators should consider a pre-market approval gate (as used by the FDA) at the point just before the model is made widely available and accessible for use by other businesses and consumers. This would build on the mandatory disclosure obligations at the data and compute layers and involve submitting all documentation compiled from third-party audits, internal audits, red teaming and sandbox testing. It would be a rigorous regime, similar to the FDA’s use of QMS, third-party efficacy evidence, adverse event reporting and clinical trials.

AI regulators should ensure that documentation and testing practices are standardised, as they are in FDA oversight. This would ensure that high-value information is used for market approval at the optimal time, to minimise the risk of potential downstream harms before a model is released onto the market.

This approach also depends on developing adequate benchmarks and standards. As a stopgap, approval gates could initially be based on transparency requirements and the provision of exploratory evidence. As benchmarks and standards emerged over time, the evidence required could be more clearly defined.
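
The stopgap approach described above, whereby initial transparency requirements are tightened into standardised thresholds as benchmarks mature, could be pictured as follows. The evidence items, their names and the sequencing are illustrative assumptions.

```python
# Sketch of a pre-market approval gate whose required evidence can be
# tightened over time as benchmarks mature. Evidence names and the initial
# requirements are illustrative assumptions.

class ApprovalGate:
    def __init__(self, required_evidence: set[str]):
        self.required_evidence = set(required_evidence)

    def tighten(self, new_requirement: str) -> None:
        """Regulator adds a requirement as standards emerge (via guidance)."""
        self.required_evidence.add(new_requirement)

    def review(self, submission: dict) -> bool:
        """Approve only if every required item is present and passed."""
        missing = [e for e in self.required_evidence
                   if not submission.get(e, False)]
        if missing:
            print("not approved; missing or failed:", missing)
            return False
        return True

# Initially: transparency plus exploratory evidence only.
gate = ApprovalGate({"model_card", "training_data_datasheet",
                     "red_team_report"})
# Later, once benchmarks exist, a standardised threshold is added.
gate.tighten("benchmark_thresholds_met")

submission = {"model_card": True, "training_data_datasheet": True,
              "red_team_report": True}          # benchmark evidence absent
print(gate.review(submission))                  # False until benchmarks pass
```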

Such an approval gate would be consistent with one of the key risk-reducing features of the FDA’s approach: putting the burden of proof on developers. Many of the concerns around third-party audits of foundation models (in the context of the EU AI Act) centre on the lack of technological expertise beyond AI labs. A pre-market approval gate would allow AI regulators to specify what levels of safety they expect before a foundation model can reach the market, but the responsibility for proving safety and reliability would be placed on the experts who wish to bring the model to market.

In addition, the approval gate offers the regulator and accredited third parties the chance to learn. As the regulator learns – and the technology develops – approval gates could be updated via binding guidance (rather than legislative changes). This combination of ‘intervention and reflection’ has ‘been shown to work in safety-critical domains such as health’.[187] Regulators and other third parties should cascade this learning downstream, for example, to parties who build on top of the foundation model. This is a key risk-reducing feature of the FDA’s approach: the ‘approvers’ and others in the ecosystem become more capable and more aware of safe use and risk mitigation.

While the burden of proof would be primarily on developers (who may use third parties to support in evidence creation), approval would still depend on the regulator. Another key lesson from FDA processes is that the regulator should bring in support from independent experts in cases of uncertainty, via a committee of experts, consumer and industry representatives, and patient representatives. This is important, as the EU’s regulatory regime for AI has been criticised for a lack of multi-stakeholder governance mechanisms, including ‘effective citizen engagement’.[188]

Indeed, many commercial AI labs say that they want avenues for democratic oversight and public participation (for example, OpenAI and Anthropic’s participation in ‘alignment assemblies’,[189] which seek public opinion to inform, for example, release criteria) but are unclear on how to establish them.[190] Introducing ways to engage stakeholders in cases of uncertainty as part of the foundation model approval process could help to address this. It would give a voice to those who could be affected by models with potentially societal-level implications, in the same way patients are given a voice in FDA review processes for SaMD. It might also help address one of the limitations of the FDA: an overreliance on industry expertise in some novel areas.

To introduce public participation in foundation model oversight in a meaningful way, it is important to consider which approach to engagement is best suited to helping identify risks.

One criterion to consider is who should be involved, with options ranging from a representative panel or jury of members of the public to panels formed of members of the public at higher risk of harm or marginalisation.

Another criterion is the depth of engagement, often framed as a spectrum from low involvement, such as public consultations, to deeper processes that involve partnership in decision-making.[191]

A third criterion is the method of engagement. This would depend on decisions about who should be involved and to what extent. For example, surveys or focus groups are common in consultative exercises; workshops can enable more involvement; and panels and juries allow for deeper engagement, which can result in their members proposing recommendations. In any case, it will be important to consider whose voices, experiences and potential harms will be included or missed, and to ensure those less represented or at more risk of harm are part of the process.

Finally, there are ongoing debates about whether pre-market approval should be applied to all foundation models, or ‘tiered’ to ensure those with the most potential to impact society are subject to greater oversight.

While answering this question is beyond the scope of this paper, it seems important that both ex ante and ex post metrics are considered when establishing which models belong in which tier. The former might include, for example, measurement of modalities, the generality of the base model, the distribution method and the potential for adaptation of the model. The latter could include the number of downstream applications built on the model, the number of users across applications and how many times the model is being queried. Any regulator must have the power and capacity to update the makeup of tiers in a timely fashion as and when these metrics shift.
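
To illustrate how ex ante and ex post metrics might be combined, the sketch below assigns a model to one of two hypothetical tiers. The metric names, thresholds and two-tier split are assumptions; as noted above, a regulator would need the power and capacity to update them as metrics shift.

```python
# Sketch of tiering a foundation model using ex ante and ex post metrics.
# Metric names, thresholds and the two-tier split are illustrative
# assumptions, not a proposed regulatory standard.
def assign_tier(model: dict) -> str:
    ex_ante_high = (
        model["modalities"] >= 2            # e.g. both text and images
        or model["open_weights"]            # distribution hard to recall
    )
    ex_post_high = (
        model["downstream_apps"] > 100      # assumed threshold
        or model["monthly_users"] > 1_000_000
    )
    return "high-oversight" if ex_ante_high or ex_post_high else "standard"

# Hypothetical model: narrow ex ante profile but wide downstream adoption.
model = {"modalities": 1, "open_weights": False,
         "downstream_apps": 250, "monthly_users": 50_000}
print(assign_tier(model))  # "high-oversight" due to downstream adoption
```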

Application layer oversight

Following the AI supply chain, a foundation model is made available and distributed via the ‘host’ layer, by either the model provider (API access) or a cloud service provider (for example, Hugging Face, which hosts models for download).

Some argue that this layer should also have some responsibility for the safe development and distribution of foundation models (for example, through KYC checks, safety testing before hosting or take-down obligations in case of harm). But there is a reason why regulators have focused primarily on developers and deployers: they have the most control over decisions affecting risk origin and safety levels. For this reason, we also focus on interventions beyond the host layer.

However, a minimal set of obligations on host layer actors (such as cloud service providers or model hosting platforms) is necessary, as they could play a role in evaluating model usage, implementing trust and safety policies to remove models that have demonstrated or are likely to demonstrate serious risks, and flagging harmful models to regulators when it is not in their power to take them down. This is beyond the scope of this paper, and we suggest that the responsibilities of the host layer are addressed in further research.

Once a foundation model is on the market and it is fine-tuned, built upon or deployed by downstream users, its risk profile becomes clearer. Regulatory gates and product safety checks are introduced by existing regulators at this stage, for example in healthcare, the automotive sector or machinery (see UK regulation of large language models – LLMs – as medical devices, or the EU AI Act’s regulation of foundation models deployed in ‘high-risk’ areas). These are useful regulatory endpoints that should help to reduce risk and harm proliferation.

However, there are still lessons to be learned at the application layer from the FDA model. Many of the mechanisms used at the foundation model developer layer could be used at this layer, but with endpoints defined based on the risk profile of the area of deployment. This could take the form of third-party audits based on context-specific standards, or sandboxes including representative users based on the specific setting in which the AI system will be used.

Commercial off-the-shelf software (COTS) in critical environments

One essential mechanism for the application layer is a deployment risk assessment. Researchers have proposed that this should involve a review of ‘(a) whether or not the model is safe to deploy, and (b) the appropriate guardrails for ensuring the deployment is safe’.[192] This would serve as an additional gate for context-specific risks and is similar to the FDA’s rules for systems that integrate COTS in severe-risk environments. Under these rules, additional approval is needed unless the COTS is approved for use in that context.

A comparable AI governance regime could allow foundation models that pass the earlier approval gate to be used downstream unless they are to be used in a high-risk or critical sector, in which case a new risk assessment would have to be undertaken and further regulatory approval sought.

For example, a foundation model applied in a critical energy system would count as pre-approved COTS. Final approval would still need to be given by energy regulators, but the process would be substantially easier for pre-approved COTS. The EU AI Act employs a similar approach: foundation models that are given a high-risk ‘intended purpose’ by downstream developers would have to undergo EU conformity assessment procedures.
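
As a rough illustration of this two-step logic, the sketch below combines the model’s pre-approval status with the criticality of the deployment context. The context names and outcome strings are assumptions for illustration, not drawn from the FDA or the EU AI Act.

```python
# Minimal sketch of a COTS-style deployment gate; context names and
# outcomes are illustrative assumptions.
HIGH_RISK_CONTEXTS = {"energy", "healthcare", "cybersecurity"}


def deployment_gate(pre_approved: bool, context: str) -> str:
    if not pre_approved:
        return "blocked: foundation model lacks pre-market approval"
    if context in HIGH_RISK_CONTEXTS:
        # High-risk use re-opens scrutiny: a fresh context-specific risk
        # assessment and sector-regulator sign-off are required.
        return "conditional: new risk assessment and sector regulator approval"
    return "permitted: pre-approved COTS used within approved scope"


print(deployment_gate(pre_approved=True, context="energy"))
```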

Algorithmic impact assessments are a tool for assessing the possible societal impacts of an AI system before the system is in use (with ongoing monitoring often advised).[193] Such assessments should be undertaken when an AI system is to be deployed in a critical area such as cybersecurity, and mitigation measures put in place. This assessment should be coupled with a new risk assessment (in addition to that carried out by the foundation model developer), tailored to the area of deployment. This could involve additional context-specific guidance or questions from regulators, and the subsequent mitigation measures should address these.

Algorithmic impact and risk assessments are essential components at the application layer for high-risk deployments, and are very similar to the QMS imposed by the FDA throughout the development and deployment process. If they are done correctly, they can help to ensure that risk and impact mitigation measures are put in place to cover the lifecycle and will form the basis of post-market monitoring processes.

Some AI governance experts have suggested that these assessments should be complemented by user evaluation and testing – defined as assessments of user-centric effects of an application or system, its functionality and its restrictions, usually via user testing or surveys.[194] These evaluations could be tailored to the intended use context of an application, to ensure adequate representation of people potentially affected by it, and would be similar to the context-specific audit gates used by the FDA.

Post-market monitoring

Across sectors, one-off conformity checks have been shown to open the door for regulations to be ‘gamed’ or for emergent behaviours to be missed (see the Volkswagen emissions scandal).[195] These issues are even more likely to arise in relation to AI, given its dynamic nature, including the capacity to change throughout the lifecycle and for downstream users to fine-tune and (re)deploy models in complex environments. The FDA model shows how these risks can be reduced by having an ecosystem of reporting and foresight, and strong regulatory powers to act to mitigate risks.

MedWatch and MedSun reporting

Post-market monitoring by the FDA includes reporting mechanisms such as MedWatch and MedSun.[196] These mechanisms enable adverse event reporting for medical products, as well as monitoring of the safety and effectiveness of medical devices. Serious incidents are documented and their details made available to consumers.

In the AI space, there are similar proposals for foundation model developers, and for high-risk application providers building on top of these models, to implement ‘an easy complaint mechanism for users and to swiftly report any serious risks that have been identified’.[197] This should compel the upstream providers to take corrective action when they can, and to document and report serious incidents to regulators.

This is particularly important for foundation models that are provided via API, as in this case the provider maintains a huge degree of control over the underlying model.[198] This would mean that the provider would usually be able to mitigate or correct the emerging risk. It would also reduce the burden on regulators to document incidents or take corrective action. Leading AI developers have already committed to introducing a ‘robust reporting mechanism’ to allow ‘issues [that] may persist even after an AI system is released’ to be ‘found and fixed quickly’.[199] Regulators could consider putting such a regime in place for all foundation models.
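
As a hedged sketch of what such a reporting record might capture, the structure below is loosely inspired by MedWatch-style adverse event reports; every field name and severity label is a hypothetical illustration rather than a reference to any actual schema.

```python
# Hypothetical AI incident report record; field names and severity
# labels are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class IncidentReport:
    model_id: str                  # analogous to a Unique Device Identifier
    reporter: str                  # user, deployer or affected person
    severity: str                  # e.g. "near-miss", "serious", "critical"
    description: str
    downstream_app: Optional[str] = None
    reported_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def requires_regulator_notification(self) -> bool:
        # Serious incidents go to the regulator as well as the provider.
        return self.severity in {"serious", "critical"}
```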

Regulators could also consider detection mechanisms for generative foundation models. These would aim to ‘distinguish content produced by the foundation model from other content, with a high degree of reliability’, as recently proposed by the Global Partnership on AI.[200] Their report found that this is ‘technically feasible and would play an important role in reducing certain risks from foundation models in many domains’. Requiring this approach, at least for the largest model providers (who have the resources and expertise to develop detection mechanisms), could mitigate risks such as disinformation and subsequent undermining of the rule of law or democracy.
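
Detection techniques vary – statistical watermarking, provenance metadata standards, AI-output classifiers – and the right choice is contested. Purely as a toy illustration of the provenance-tagging idea, the sketch below signs an output with an HMAC so that tooling holding the key can later verify its origin; it deliberately glosses over key distribution, which is the hard problem in practice.

```python
# Toy provenance-tagging sketch; a real detection mechanism (for example,
# statistical watermarking) would look very different.
import hashlib
import hmac

PROVIDER_KEY = b"provider-secret-key"  # assumption: held by the model provider


def tag_output(text: str) -> str:
    sig = hmac.new(PROVIDER_KEY, text.encode(), hashlib.sha256).hexdigest()
    return f"{text}\n--provenance:{sig}"


def verify_output(tagged: str) -> bool:
    text, sep, sig = tagged.rpartition("\n--provenance:")
    if not sep:
        return False  # no provenance tag present
    expected = hmac.new(PROVIDER_KEY, text.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)


tagged = tag_output("Example model output.")
assert verify_output(tagged) and not verify_output("Untagged content.")
```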

Other reporting mechanisms for foundation models have been proposed, which overlap with the FDA’s ‘usability and clinical data logging, and trend reporting’. For example, Stanford researchers have suggested that regulators should compel the disclosure of usage patterns, in the same manner as transparency reporting for online platforms.[201] This would greatly enhance understanding of ‘how foundation models are used (for example, for providing medical advice, preparing legal documents) to hold their providers to account’.[202]

Concern-based audits

Concern-based audits are a key part of the FDA’s post-market governance. They are triggered by real-world monitoring of consumers and impacts after approval. If concerns are identified, the FDA has strong enforcement mechanisms that allow it to access relevant data and documentation. The audits are rigorous and have been shown to have strong deterrence effects on negligent behaviour by drug companies.

Mechanisms for highlighting ‘concern’ in the AI space could include reporting mechanisms and ‘trusted flaggers’ – organisations formally recognised as independent, and with the requisite expertise, for identifying and reporting concerns. People affected by the technologies could be given the right to lodge a complaint with supervisory authorities, such as an AI ombudsman, to support people affected by AI and increase regulators’ awareness of AI harms as they occur.[203] [204] This should be complemented by a comprehensive remedies framework for affected persons, based on effective avenues for redress, including a right to lodge a complaint with a supervisory authority, a judicial remedy and an explanation of individual decision-making.

Feedback loops

Post-market monitoring is a critical element of the FDA’s risk-reducing features. It is based on mechanisms to facilitate feedback loops between developers, regulators, practitioners and patients. As discussed above, Unique Device Identifiers at the pre-registration stage support monitoring and traceability throughout the lifecycle, while ongoing review of quality, safety and efficacy data via QMS further supports this. Post-market monitoring for foundation models should similarly facilitate such feedback loops. These could include customer feedback, usability and user prompt screening, human-AI interaction evaluations and cross-company reporting of trends and structural indicators. Beyond feedback to the provider, affected persons should also be able to report incidents directly to a regulatory authority, particularly where harm arises, or is reasonably foreseeable to arise.

Software of Unknown Provenance (SOUP)

In the context of safety-critical medical software, SOUP is software that has been developed with an unknown development process or methodology, or which has unknown safety-related properties. The FDA monitors for SOUP by compelling the documentation of pre-specified post-market software adaptations, meaning that the regulator can validate changes to a product’s performance and monitor for issues and unforeseen use in software.[205]

Requiring similar documentation and disclosure of software and cybersecurity issues after deployment of a foundation model would be a minimum sensible safeguard for both risk mitigation and regulator learning. This could also include sharing issues back upstream to the model developer so that they can take corrective action or update testing and risk profiles.

The approach should be implemented alongside the obligations around internal testing and disclosure of adverse events for foundation models at the developer layer. Some have argued that disclosure of near misses should also be required (as it is in the aviation industry)[206] as an added incentive for safe development and deployment.

Another parallel with the monitoring of SOUP can be seen in AI governance proposals for measures around open-source foundation models. To reduce the unknown element, and for transparency and accountability reasons, application providers – or whoever makes the model or system available on the market – could be required to make it clear to affected persons when they are engaging with AI systems and what the underlying model is (including if it is open source), and to share easily accessible explanations of systems’ main parameters and any opt-out mechanisms or human alternatives available.[207] This would be the first step to both corrective action to mitigate risk or harm, and redress if a person is harmed. It is also a means to identify the use of untested underlying foundation models.

Finally, similar to the FDA’s use of documentation of pre-specified, post-market software adaptations, AI regulators could consider mandating that developers and application deployers document and share planned and foreseeable changes downstream. This would have to be defined clearly and standardised by regulators to a proportionate level, taking into consideration intellectual property and trade secret concerns, and the risk of the system being ‘gamed’ in the context of new capabilities. In other sectors, such as aviation, there have been examples of changes being underreported to avoid new costs, such as retraining.[208] But a similar regime would be particularly relevant for AI models and systems, given their unique ability to learn and develop throughout their lifecycle.

The need for documenting or pre-specifying post-market adaptations of foundation models could be based on capabilities evaluations and risk assessments, so that new capabilities or risks that arise post-deployment are reported to the ecosystem. Significant changes could trigger additional safety checks, such as third-party (‘concern-based’, in FDA parlance) audits or red teaming to stress-test the new capabilities.
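
For illustration, a pre-specified change control plan could be expressed as a machine-checkable envelope: adaptations inside the envelope proceed under the original approval, while anything outside it triggers the additional safety checks described above. The envelope fields and values below are hypothetical.

```python
# Hypothetical pre-specified change control envelope for a deployed model.
PRE_SPECIFIED_ENVELOPE = {
    "max_context_window": 32_000,
    "allowed_modalities": {"text"},
}


def change_triggers_review(new_spec: dict) -> bool:
    """True if a post-market adaptation exceeds the pre-specified plan,
    which would trigger e.g. a concern-based audit or red teaming."""
    if new_spec.get("context_window", 0) > PRE_SPECIFIED_ENVELOPE["max_context_window"]:
        return True
    if not set(new_spec.get("modalities", [])) <= PRE_SPECIFIED_ENVELOPE["allowed_modalities"]:
        return True
    return False


print(change_triggers_review({"context_window": 64_000, "modalities": ["text", "image"]}))  # True
```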

Investigative powers

The FDA’s post-market monitoring puts reporting obligations on providers and users, while underpinning this with strong investigative powers. It conducts ‘active surveillance’ (for example, under the Sentinel Initiative),[209] and it is legally empowered to check QMS and other documentation and logging data, request comprehensive evidence and conduct inspections.

Similarly, AI regulators should have powers to investigate foundation model developers and downstream deployers, such as for monitoring and learning purposes or when investigating suspected non-compliance. This could include off- and on-site inspections to gather evidence, to address the information asymmetries between AI developers and regulators, and to mitigate emergent risks or harms.

Such a regime would require adequate resources and sociotechnical expertise. Foundation models are a general-purpose technology that will increasingly form part of our digital infrastructure. In this light, there needs to be a recognition that regulators should be funded on a comparable level to other domains in which safety and public trust are paramount and where underlying technologies form important parts of national infrastructure – such as civil nuclear, civil aviation, medicines, and road and rail.[210]

Recalls, market withdrawals and safety alerts

The FDA uses recalls, market withdrawals and safety alerts when products are in violation of law. Recall can also be a voluntary action by manufacturers and distributors to meet their responsibility to protect public health and wellbeing from products that present risk or are otherwise defective.[211]

Some AI governance experts and standards bodies have called for foundation model developers to similarly establish standard criteria and protocols for when and how to restrict, suspend or retire a model from active use.[212] This would be based on monitoring by the original providers throughout the lifecycle for harmful impacts, misuse or security vulnerabilities (including leaks or otherwise unauthorised access).

Whistleblower protection

In the same way that the FDA mandates reporting, with associated whistleblower protections, of adverse events by employees, second-party clinical trial conductors and healthcare practitioners, AI regulators should protect whistleblowers (for example, academics, designers, developers, project contributors, auditors, product managers, engineers and economic operators) who suspect breaches of law by a developer or deployer, or by an AI model or system. This protection should be developed in a way that learns from the pitfalls of whistleblower law in other sectors, which have led to ineffective uptake or enforcement. This includes ensuring breadth of coverage, clear communication of processes and protections, and review mechanisms.[213]

Recommendations and open questions

The FDA model of pre-approval and monitoring is an important inspiration for regulating novel technologies with potentially severe risks, such as foundation models.

This model entails risk-based mandates for pre-approval based on mandatory safety evidence. It works well when risks originate at identifiable points and can be caught before they proliferate or develop into harms.

The general-purpose nature of foundation models requires exploratory external scrutiny upstream in the supply chain, and targeted sector-specific approvals downstream.

Risks need to be identified and mitigated before they proliferate. This is especially difficult for foundation models.[214] Explorative approval gates have been ‘shown to work in safety-critical domains such as health’, due to the combination of ‘intervention and reflection’. Pre-approvals offer the FDA a mechanism for intervention, allowing most risks to be caught.

Another important feature of oversight is reflection. In health regulation, this is achieved through ‘iteration via guidance, rather than requiring legislative changes’.[215] This is a key consideration for AI regulators, who should be empowered (and compelled) to frequently update rules via binding guidance.

A continuous learning process to build suitable approval and monitoring regimes for foundation models is essential, especially at the model development layer. Downstream, there needs to be targeted scrutiny and approval for deployment through existing approval gates in specific application areas.

Effective oversight of foundation models requires recurring, independent evaluations and audits and access to information, placing the burden of proof on developers – not on civil society or regulators.

Literature reviews of other industries[216] show that this might be achieved through risk-based reviews by empowered regulators and third parties, tiered access for evaluators, mandatory pre-approvals, and treating foundation models like auditable products.

Our general principles for AI regulators are detailed in the section ‘Applying key features of FDA-style oversight to foundation models’.

Recommendations for AI regulators, developers and deployers

Data and compute layers oversight

  1. Regulators should compel pre-notification of, and information-sharing on, large training runs. Providers of compute for such training runs should cooperate with regulators on monitoring (by registering device IDs for microchips) and safety verification (KYC checks and tracking).
    • FDA inspiration: pre-submissions, Unique Device Identifiers (UDIs)
  2. Regulators should mandate model and dataset documentation and disclosure for the pre-training and fine-tuning of foundation models,[217] [218] [219] including a capabilities evaluation and risk assessment within the model card for the (pre-)training stage and throughout the lifecycle.[220] Dataset documentation should focus on a description of the training data that is safe to make public (what is in it, where it was collected, under what licence, etc.), coupled with structured access for regulators or researchers to the training data itself (while adhering to strict levels of cybersecurity, as even this access carries security risks). A minimal machine-readable sketch of such documentation follows this list.
    • FDA inspiration: Quality Management System (QMS)
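
As an illustration of the machine-readable documentation envisaged in Recommendation 2 – loosely following the model cards and datasheets literature cited above – the sketch below shows one possible record structure. Field names are assumptions for illustration, not a proposed standard.

```python
# Hypothetical machine-readable model card and dataset documentation;
# field names are illustrative, not a proposed standard.
from dataclasses import dataclass, field


@dataclass
class DatasetDoc:
    name: str
    sources: list[str]                 # where the data was collected
    licence: str
    known_gaps: list[str] = field(default_factory=list)


@dataclass
class ModelCard:
    model_id: str
    training_stage: str                # "pre-training" or "fine-tuning"
    datasets: list[DatasetDoc]
    capability_evaluations: dict[str, float]   # benchmark name -> score
    risk_assessment: str               # summary, updated through the lifecycle
```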

Foundation model layer oversight

  3. Regulators should introduce a pre-market approval gate for foundation models, as this is the most obvious point at which risks can proliferate. In any jurisdiction, defining the approval gate will require significant work, with input from all relevant stakeholders. Clarity should be provided about which foundation models would be subject to this stricter form of pre-market approval. Based on the FDA findings, this gate should at least entail submission of evidence to prove safety and market readiness based on internal testing and audits, third-party audits and (optional) sandboxes. Making models available on a strict and controllable basis via structured access could be considered as a temporary fix until an auditing ecosystem and/or sandboxes are developed. Depending on the jurisdiction in question and existing or foreseen pre-market approval for high-risk use, an additional approval gate should be introduced at the application layer, using endpoints (outcomes or thresholds to be met to determine efficacy and safety) based on the risk profile of the area of deployment.
    • FDA inspiration: QMS, third-party efficacy evidence, adverse events reporting, clinical trials
  4. Third-party audits should be required as part of the pre-market approval process, and sandbox testing (as described in Recommendation 3) in real-world conditions should be considered. These should consist of – at least – a third-party audit based on context-specific standards. Alternatively, regulators could use sandboxes that include representative users (based on the setting in which the AI system will be used) to check conformity before deployment. Results should be documented and disclosed to the regulator.
    • FDA inspiration: third-party efficacy evidence, adverse events reporting, clinical trials
  5. Developers should enable detection mechanisms for outputs of generative foundation models.[221] Developers and deployers should make clear to affected persons and end users when they are engaging with AI systems. As an additional safety mechanism, they should build in detection mechanisms to allow end users and affected persons to ‘distinguish content produced by the foundation model from other content, with a high degree of reliability’.[222] Such detection mechanisms are important both as a defensive tool (for example, tagging AI-generated content) and as a way to enable study of model impacts. AI regulators could consider making this mandatory, at least for the most significant models (whose developers may have the resources and expertise to develop detection mechanisms).
    • FDA inspiration: post-market safety monitoring
  6. As part of the initial risk assessment, developers and deployers should document and share planned and foreseeable modifications throughout the foundation model’s supply chain. A substantial modification that falls outside this scope should trigger additional safety checks, such as third-party (‘concern-based’) audits or red teaming to stress-test the new capabilities.
    • FDA inspiration: concern-based audits, pre-specified change control plans
  7. Foundation model developers, and subsequently high-risk application providers building on top of these models, should enable an easy complaint mechanism for users to swiftly report any serious risks that have been identified. This should compel upstream providers to take corrective action when they can, and to document and report serious incidents to regulators. These feedback loops should be strengthened further by awareness-raising across the ecosystem about reporting, and sharing lessons learned on what has been reported and corrective actions taken.
    • FDA inspiration: MedWatch and MedSun programmes

Application layer oversight

  8. Existing sector-specific agencies should review and approve the use of foundation models for a set of use cases, by risk level. Deployers of foundation models in high-risk or critical areas (to be defined in each jurisdiction) should undertake a deployment risk assessment to review ‘(a) whether or not the model is safe to deploy, and (b) the appropriate guardrails for ensuring the deployment is safe’.[223] Upstream developers should cooperate and share information with downstream customers to conduct this assessment. If the model is deemed safe, deployers should also undertake an algorithmic impact assessment to assess possible societal impacts of an AI system before the system is in use (with ongoing monitoring often advised).[224] Results should be documented and disclosed to the regulator.
    • FDA inspiration: COTS (commercial off-the-shelf software), QMS
  9. Downstream application providers should make clear to end users and affected persons what the underlying foundation model is, including if it is an open-source model, and provide easily accessible explanations of systems’ main parameters and any opt-out mechanisms or human alternatives available.[225]
    • FDA inspiration: Software of Unknown Provenance (SOUP)

Post-market monitoring

  10. An AI ombudsman should be considered, to receive and document complaints or known instances of AI harms. This would increase regulators’ visibility of AI harms as they occur. It could be piloted initially for a relatively modest investment, but if successful it could dramatically improve redress for AI harms and the functionality of an AI regulatory framework as a whole.[226] An ombudsman should be complemented by a comprehensive remedies framework for affected persons based on clear avenues for redress.
    • FDA inspiration: concern-based audits, reporting of adverse events
  11. Developers and deployers should provide documentation and disclosure of incidents throughout the supply chain, including near misses.[227] This could be strengthened by requiring downstream developers (building on top of foundation models at the application layer) and end users (for example, medical or education professionals) to also disclose incidents.
    • FDA inspiration: reporting of adverse events
  12. Foundation model developers and downstream deployers should be compelled to restrict, suspend or retire a model from active use if harmful impacts, misuse or security vulnerabilities (including leaks or other unauthorised access) arise. Such decisions should be based on standardised criteria and processes.[228]
  13. Host layer actors (for example, cloud service providers or model hosting platforms) should also play a role by evaluating model usage, implementing trust and safety policies to remove models that have demonstrated or are likely to demonstrate serious risks, and flagging harmful models to regulators when it is not in their power to take them down.
    • FDA inspiration: recalls, market withdrawals and safety alerts
  14. AI regulators should have strong powers to investigate and require evidence generation from foundation model developers and downstream deployers. This should be strengthened by whistleblower protections for anyone involved in the development or deployment process who raises concerns about risks to health or safety. This would support regulatory learning and act as a strong deterrent to rule breaking. Powers should include off- and on-site inspections and evidence-gathering mechanisms to address the information asymmetries between AI developers and regulators and to mitigate emergent risks or harms. Consideration should be given to the trade-offs between intellectual property, trade secret and privacy protections (and whether these could serve as undue legal loopholes) and the safety-enhancing features of investigative powers: regulators considering the FDA model across jurisdictions should clarify such legally contentious issues.
    • FDA inspiration: wide information access, active surveillance
  15. Any regulator should be funded to a level comparable to (if not greater than) regulators in other domains where safety and public trust are paramount and where underlying technologies form part of national infrastructure – such as civil nuclear, civil aviation, medicines, or road and rail.[229] Given the level of resourcing required, this may be partly funded by AI developers over a certain threshold (to be defined by the regulator, for example, based on annual turnover) – as is the case with the FDA[230] and the EU’s European Medicines Agency (EMA).[231] Such an approach is important to ensure that regulators have a source of funding that is stable, secure and (importantly) independent from political decisions or reprioritisation.
    • FDA inspiration: mandatory fees
  16. The law around AI liability should be clarified to ensure that legal and financial liability for AI risk is distributed proportionately along foundation model supply chains. Liability regimes vary between jurisdictions and a thorough assessment is beyond the scope of this paper, but across sectors regulating complex technology, clarity in liability is a key driver of compliance within companies and uptake of the technology. For example, lack of clarity as to end user liability in clinical AI is a major reason that uptake has been limited. Liability will be even more contentious in the foundation model supply chain when applications are developed on top of foundation models, and this must be addressed accordingly in any regulatory regime for AI.

Overcoming the limitations of the FDA in a prospective AI regulatory regime

Having considered how the risk-reducing mechanisms of the FDA might be applied to AI governance, it makes sense to also acknowledge the limitations of the FDA regime, and to consider how they might also be counterbalanced in a prospective AI regulatory regime.

The first limitation is the lack of coverage for systemic risks, as the FDA focuses on risk to life. Systemic risks are prevalent in the AI space.[232] AI researchers have conceptualised systemic risk as societal harm and point out that it is similarly overlooked. Proposals to address this include: ‘(1) public oversight mechanisms to increase accountability, including mandatory impact assessments with the opportunity to provide societal feedback; (2) public monitoring mechanisms to ensure independent information gathering and dissemination about AI’s societal impact; and (3) the introduction of procedural rights with a societal dimension, including a right to access to information, access to justice, and participation in public decision-making on AI, regardless of the demonstration of individual harm’.[233] We have expanded on and included these mechanisms in our recommendations, in the hope that they can overcome this limitation around systemic risks.

The second limitation is the high cost of compliance and subsequent limited number of developers, given that the stringent approval requirements are challenging for smaller players to meet. Inspiration for how to counterbalance this may be gleaned from the EU’s FDA equivalent, the EMA. It offers tailored support to small and medium-sized enterprises (SMEs), via an SME Office that provides regulatory assistance for reduced fees. This has contributed to the approval rates for SME applicants increasing from 40 per cent in 2016 to 89 per cent in 2020.[234] Similarly, the UK’s NHS has an AI & Digital Regulations Service that gives guidance and advice on navigating regulation, especially for SMEs that do not have compliance teams.[235]

Streamlined regulatory pathways could be considered to further reduce burdens for AI models or systems with demonstrably promising potential (for example, for scientific discovery). The EMA has done this through its Advanced Therapy Medicinal Products process, which streamlines approval procedures for certain medicines.[236]

Similar support mechanisms could be considered for SMEs and startups, as well as streamlined procedures for demonstrably beneficial AI technology, under an AI regulator.

The third limitation is the FDA’s overreliance on industry in some novel areas, because of a lack of expertise. Lack of capacity for effective regulatory oversight has been voiced as a concern in the AI space, too.[237] Some ideas exist for how to overcome this, such as the Singaporean AI Office’s use of public–private partnerships to utilise industry talent without being reliant on it.[238]

The EMA has grappled with similar challenges. Like the FDA, it overcomes knowledge gaps by having a pool of scientific experts, but it seeks to prevent conflict of interest by leaning substantially on transparency: the EMA Management Board and experts cannot have any financial or other interests in the industry they are overseeing, and the curricula vitae, declarations of interest and risk levels for these experts are publicly available.[239]

Taken together, these solutions might be considered to reduce the chances of the limitations of FDA governance being reproduced by an AI regulator.

Open questions

The proposed FDA-style oversight approach for foundation models is far from a detailed ready-to-implement guideline for regulators. We acknowledge the small sample of interviewees for this paper, and that many of our interview subjects may strongly support an FDA model for regulation. For further validation and detailing of the claims in this paper, we are especially interested in future work on three sets of questions.

Understanding foundation model risks

  • Across the foundation model supply chain, where exactly do foundation model risks[240] originate and proliferate, and which players need to be tasked with their mitigation? How can unknown risks be discovered?
  • How effective will exploratory and targeted scrutiny be in identifying different kinds of risks for foundation models?
  • Do current and future foundation models need to be categorised along risk tiers? If so, how? Do all foundation models need to go through an equally rigorous process of regulatory approvals?

Detailing FDA-style oversight for foundation models to foster ‘safe innovation’

  • For the FDA, what aspects of regulatory guidance were easier to prescribe, and to enforce in practice?
  • How do FDA-style oversight or specific oversight features address each risk of foundation models in detail?
  • How can FDA-style oversight for foundation models be integrated into international oversight regimes?[241]
  • What do FDA-style review, audit and inspection processes look like, step by step, for foundation models?
  • How can the limitations of the FDA approach be addressed in every layer of the foundation model supply chain? How can difficult-to-detect systemic risks be mitigated? How can the stifling of innovation, especially among SMEs, be avoided?
  • Are FDA-style product recalls feasible for a foundation model or for downstream applications of foundation models?
  • What role should third parties in the host layer play? While they have less remit over risk origin, might they have significant control over, for example, risk mitigation?
  • What are the implications of FDA-style oversight for foundation models on their accessibility, affordability and sharing their benefits?
  • How would FDA-style pre-approvals be enforced for foundation models, for example, for product recalls?
  • How is liability distributed in an FDA-style oversight approach?
  • Why is the FDA able to be stringent and cautious? How do the political incentives of congressional oversight, and aversion to the risk of harm from medication, apply to foundation model regulation?
  • What can be learned from the political economy of the FDA and its reputation?
  • In each jurisdiction (for example, USA, UK, EU), how does an FDA-style approach for AI fit into the political economy and institutional landscape?
  • In each jurisdiction, how should liability law be adapted for AI to ensure that legal and financial liability for AI risk is distributed proportionately along foundation model supply chains?

Learnings from other regulators

  • What can be learned from public health regulators in other jurisdictions, such as the UK’s Medicines and Healthcare products Regulatory Agency (MHRA), the EU’s EMA and Health Canada?[242] [243] [244]
  • How can other non-health regulators, such as the US Federal Aviation Administration or the National Highway Traffic Safety Administration, inspire foundation model oversight?[245]
  • How can novel forms of oversight and audits, such as cross-audits or joint audits, be coupled with processes from existing regulators?

Acknowledgements

This paper was co-authored by Merlin Stein (PhD candidate at the University of Oxford) and Connor Dunlop (EU Public Policy Lead at the Ada Lovelace Institute) with input from Andrew Strait.

Interviewees

The 20 interviewees included experts on FDA oversight and foundation model evaluation processes from industry, academia, and thinktanks, as well as government officials. This included three interviews with leading AI labs, two with third-party AI evaluators and auditors, nine with civil society organisations, and six with medical software regulation experts, including former FDA leadership and clinical trial leaders.

The following participants gave us permission to mention their names and affiliations (listed in alphabetical order). Ten interviewees not listed here did not give their permission. Respondents do not represent any organisations with which they are affiliated. They chose to add their names after the interview and were not sent a draft of this paper before publication. The views expressed in this paper are those of the Ada Lovelace Institute.

  • Kasia Chmielinski, Berkman Klein Center for Internet & Society
  • Gemma Galdón-Clavell, Eticas Research & Consulting
  • Gillian Hadfield, University of Toronto, Vector Institute and OpenAI, independent contractor
  • Sonia Khatri, independent SaMD and medical device regulation expert
  • Igor Krawczuk, Lausanne Institute of Technology
  • Sarah Myers West, AI Now Institute
  • Noah Strait, Scientific and Medical Affairs Consulting
  • Robert Trager, Blavatnik School of Government, University of Oxford, and Centre for the Governance of AI
  • Alexandra Tsalidas, Harvard Ethical Intelligence Lab
  • Rudolf Wagner, independent senior executive advisor for SaMD

Reviewers

We are grateful for helpful comments and discussions on this work from:

  • Ashwin Acharya
  • Markus Anderljung
  • Clíodhna Ní Ghuidhir
  • Xiaoxuan Liu
  • Deborah Raji
  • Sarah Myers West
  • Moritz von Knebel

Footnotes

[1] ‘Voluntary AI Commitments’ <www.whitehouse.gov/wp-content/uploads/2023/09/Voluntary-AI-Commitments-September-2023.pdf> accessed 12 October 2023

[2] ‘An EU AI Act that works for people and society’ (Ada Lovelace Institute 2023) <www.adalovelaceinstitute.org/policy-briefing/eu-ai-act-trilogues/> accessed 12 October 2023

[3] The factors that determine AI risk are not purely technical – sociotechnical determinants of risk are crucial. Features such as the context of deployment, the competency of the intended users, and the optionality of interacting with an AI system must all be considered, in addition to specifics of the data and AI model deployed. OECD, ‘OECD Framework for the Classification of AI Systems’, OECD Digital Economy Papers no. 323 (February 2022) <https://doi.org/10.1787/cb6d9eca-en>

[4] Markus Anderljung and others, ‘Frontier AI Regulation: Managing Emerging Risks to Public Safety’ (arXiv, 4 September 2023) <http://arxiv.org/abs/2307.03718> accessed 15 September 2023.

[5] ‘A Law for Foundation Models: The EU AI Act Can Improve Regulation for Fairer Competition – OECD.AI’ <https://oecd.ai/en/wonk/foundation-models-eu-ai-act-fairer-competition> accessed 15 September 2023.

[6] ‘Stanford CRFM’ <https://crfm.stanford.edu/report.html> accessed 15 September 2023.

[7] ‘While only a few well-resourced actors worldwide have released general purpose AI models, hundreds of millions of end-users already use these models, further scaled by potentially thousands of applications building on them across a variety of sectors, ranging from education and healthcare to media and finance.’ Pegah Maham and Sabrina Küspert, ‘Governing General Purpose AI’.

[8] Draft standards here are a very good example of the value of dataset documentation (i.e. declaring metadata) on what is used in training and fine-tuning models. In theory, this could also all be kept confidential as commercially sensitive information once a legal infrastructure is in place. <www.datadiversity.org/draft-standards>

[9] Mitchell, Wu, Zaldivar, Barnes, Vasserman, Hutchinson, Spitzer, Raji and Gebru, (2019), ‘Model Cards for Model Reporting’, doi: 10.1145/3287560.3287596

[10] Gebru, Morgenstern, Vecchione, Vaughan, Wallach, Daumé and Crawford, (2021), ‘Datasheets for Datasets’ <https://m-cacm.acm.org/magazines/2021/12/256932-datasheets-for-datasets/abstract> (Accessed: 27 February 2023); Hutchinson, Smart, Hanna, Denton, Greer, Kjartansson, Barnes and Mitchell, (2021), ‘Towards Accountability for Machine Learning Datasets: Practices from Software Engineering and Infrastructure’, doi: 10.1145/3442188.3445918

[11] In the UK, the Civil Aviation Authority has a revenue of £140m and staff of over 1,000, and the Office for Nuclear Regulation around £90m with around 700 staff. An EU-level agency for AI should be funded well beyond this, given that the EU is more than six times the size of the UK.

[12] Algorithmic Accountability Act of 2022 <2022-02-03 Algorithmic Accountability Act of 2022 One-pager (senate.gov)> accessed 15 September 2023.

[13] Lingjiao Chen, Matei Zaharia and James Zou, ‘How Is ChatGPT’s Behavior Changing over Time?’ (arXiv, 1 August 2023) <http://arxiv.org/abs/2307.09009> accessed 15 September 2023.

[14] ‘AI-Generated Books on Amazon Could Give Deadly Advice – Decrypt’ <https://decrypt.co/154187/ai-generated-books-on-amazon-could-give-deadly-advice> accessed 15 September 2023.

[15] ‘Generative AI for Medical Research | The BMJ’ <www.bmj.com/content/382/bmj.p1551#> accessed 15 September 2023.

[16] Emanuel Maiberg ·, ‘Inside the AI Porn Marketplace Where Everything and Everyone Is for Sale’ (404 Media, 22 August 2023) <www.404media.co/inside-the-ai-porn-marketplace-where-everything-and-everyone-is-for-sale/> accessed 15 September 2023.

[17] Belle Lin, ‘AI Is Generating Security Risks Faster Than Companies Can Keep Up’ Wall Street Journal (10 August 2023) <www.wsj.com/articles/ai-is-generating-security-risks-faster-than-companies-can-keep-up-a2bdedd4> accessed 15 September 2023.

[18] Sarah Carter and others, ‘The Convergence of Artificial Intelligence and the Life Sciences’ <www.nti.org/analysis/articles/the-convergence-of-artificial-intelligence-and-the-life-sciences/> accessed 2 November 2023

[19] Dual Use of Artificial Intelligence-powered Drug Discovery – PubMed (nih.gov)

[20] Haydn Belfield, ‘Great British Cloud And BritGPT: The UK’s AI Industrial Strategy Must Play To Our Strengths’ (Labour for the Long Term 2023)

[21] Thinking About Risks From AI: Accidents, Misuse and Structure | Lawfare (lawfaremedia.org)

[22] Governing General Purpose AI — A Comprehensive Map of Unreliability, Misuse and Systemic Risks | Stiftung Neue Verantwortung (SNV) (stiftung-nv.de); Anthropic \ Frontier Threats Red Teaming for AI Safety

[23] www.deepmind.com/blog/an-early-warning-system-for-novel-ai-risks

[24] ‘Mission critical: Lessons from relevant sectors for AI safety’ (Ada Lovelace Institute 2023) <https://www.adalovelaceinstitute.org/policy-briefing/ai-safety/> accessed 23 November 2023

[25] ‘EU AI Standards Development and Civil Society Participation’ <www.adalovelaceinstitute.org/event/eu-ai-standards-civil-society-participation/> accessed 18 September 2023.

[26] Algorithmic Accountability Act of 2022 <2022-02-03 Algorithmic Accountability Act of 2022 One-pager (senate.gov)> accessed 15 September 2023.

[27] ‘The Problem with AI Licensing & an “FDA for Algorithms” | The Federalist Society’ <https://fedsoc.org/commentary/fedsoc-blog/the-problem-with-ai-licensing-an-fda-for-algorithms> accessed 15 September 2023.

[28] ‘Clip: Amy Kapczynski on an Old Idea Getting New Attention–an “FDA for AI”. – AI Now Institute’ <https://ainowinstitute.org/general/clip-amy-kapczynski-on-an-old-idea-getting-new-attention-an-fda-for-ai> accessed 15 September 2023.

[29] Dylan Matthews, ‘The AI Rules That US Policymakers Are Considering, Explained’ (Vox, 1 August 2023) <www.vox.com/future-perfect/23775650/ai-regulation-openai-gpt-anthropic-midjourney-stable> accessed 15 September 2023; Belenguer L, ‘AI Bias: Exploring Discriminatory Algorithmic Decision-Making Models and the Application of Possible Machine-Centric Solutions Adapted from the Pharmaceutical Industry’ (2022) 2 AI and Ethics 771 <https://doi.org/10.1007/s43681-022-00138-8>

[30] ‘Senate Hearing on Regulating Artificial Intelligence Technology | C-SPAN.Org’ <www.c-span.org/video/?529513-1/senate-hearing-regulating-artificial-intelligence-technology> accessed 15 September 2023.

[31] ‘AI Algorithms Need FDA-Style Drug Trials | WIRED’ <www.wired.com/story/ai-algorithms-need-drug-trials/> accessed 15 September 2023.

[32] ‘One of the “Godfathers of AI” Airs His Concerns’ The Economist <www.economist.com/by-invitation/2023/07/21/one-of-the-godfathers-of-ai-airs-his-concerns> accessed 15 September 2023.

[33] ‘ISVP’ <www.senate.gov/isvp/?auto_play=false&comm=judiciary&filename=judiciary072523&poster=www.judiciary.senate.gov/assets/images/video-poster.png&stt=> accessed 15 September 2023.

[34] ‘Regulations.Gov’ <www.regulations.gov/docket/NTIA-2023-0005/comments> accessed 15 September 2023.

[35] Guidelines for Artificial Intelligence in Medicine: Literature Review and Content Analysis of Frameworks – PMC (nih.gov)

[36] ‘Foundation Models for Generalist Medical Artificial Intelligence | Nature’ <www.nature.com/articles/s41586-023-05881-4> accessed 15 September 2023.

[37] Anthropic admitted openly that ‘we do not know how to train systems to robustly behave well’. ‘Core Views on AI Safety: When, Why, What, and How’ (Anthropic) <www.anthropic.com/index/core-views-on-ai-safety> accessed 18 September 2023.

[38] NTIA AI Accountability Request for Comment <www.regulations.gov/docket/NTIA-2023-0005/comments> accessed 18 September 2023.

[39] Inioluwa Deborah Raji and others, ‘Outsider Oversight: Designing a Third Party Audit Ecosystem for AI Governance’ (arXiv, 9 June 2022) <http://arxiv.org/abs/2206.04737> accessed 18 September 2023.

[40] See Appendix for a list of interviewees

[41] Michael Moor and others, ‘Foundation Models for Generalist Medical Artificial Intelligence’ (2023) 616 Nature 259.

[42] Lewis Ho and others, ‘International Institutions for Advanced AI’ (arXiv, 11 July 2023) <http://arxiv.org/abs/2307.04699> accessed 18 September 2023.

[43] Center for Devices and Radiological Health, ‘Medical Device Single Audit Program (MDSAP)’ (FDA, 24 August 2023) <www.fda.gov/medical-devices/cdrh-international-programs/medical-device-single-audit-program-mdsap> accessed 18 September 2023.

[44] Center for Drug Evaluation and Research, ‘Conducting Clinical Trials’ (FDA, 2 August 2023) <www.fda.gov/drugs/development-approval-process-drugs/conducting-clinical-trials> accessed 18 September 2023.

[45] ‘Explainer: What Is a Foundation Model?’ <www.adalovelaceinstitute.org/resource/foundation-models-explainer/> accessed 18 September 2023.
Alternatively: ‘any model that is trained on broad data (generally using self-supervision at scale) that can be adapted (e.g.,fine-tuned) to a wide range of downstream tasks’.

Bommasani R and others, ‘On the Opportunities and Risks of Foundation Models’ (arXiv, 12 July 2022) <http://arxiv.org/abs/2108.07258>

[46] ‘Explainer: What Is a Foundation Model?’ <www.adalovelaceinstitute.org/resource/foundation-models-explainer/> accessed 18 September 2023.

[47] Ibid.

[48] AWS, ‘Fine-Tune a Model’ <https://docs.aws.amazon.com/sagemaker/latest/dg/jumpstart-fine-tune.html> accessed 3 July 2023

[49] ‘Explainer: What Is a Foundation Model?’ <www.adalovelaceinstitute.org/resource/foundation-models-explainer/> accessed 18 September 2023.

[50] ‘ISO – ISO 9001 and Related Standards — Quality Management’ (ISO, 1 September 2021) <www.iso.org/iso-9001-quality-management.html> accessed 2 November 2023.

[51] ‘ISO 13485:2016’ (ISO, 2 June 2021) <www.iso.org/standard/59752.html> accessed 2 November 2023.

 [52] OECD, ‘Risk-Based Regulation’ in OECD, OECD Regulatory Policy Outlook 2021 (OECD 2021) <www.oecd-ilibrary.org/governance/oecd-regulatory-policy-outlook-2021_9d082a11-en> accessed 18 September 2023.

[53] Center for Devices and Radiological Health, ‘International Medical Device Regulators Forum (IMDRF)’ (FDA, 15 September 2023) <www.fda.gov/medical-devices/cdrh-international-programs/international-medical-device-regulators-forum-imdrf> accessed 18 September 2023.

[54] Office of the Commissioner, ‘What We Do’ (FDA, 28 June 2021) <www.fda.gov/about-fda/what-we-do> accessed 18 September 2023.

[55] ‘FDA User Fees: Examining Changes in Medical Product Development and Economic Benefits’ (ASPE) <https://aspe.hhs.gov/reports/fda-user-fees> accessed 18 September 2023.

[56] ‘Premarket Approval (PMA)’ <www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpma/pma.cfm?id=P160009> accessed 18 September 2023.

[57] ‘Product Classification’ <www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfPCD/classification.cfm?id=LQB> accessed 18 September 2023.

[58] Center for Devices and Radiological Health, ‘Et Control – P210018’ [2022] FDA <www.fda.gov/medical-devices/recently-approved-devices/et-control-p210018> accessed 18 September 2023.

[59] Note that only ~2% of SaMD are Class III, see Approval of artificial intelligence and machine learning-based medical devices in the USA and Europe (2015–20): a comparative analysis – The Lancet Digital Health and Drugs and Devices: Comparison of European and U.S. Approval Processes – ScienceDirect

[60] ‘Assessing the Efficacy and Safety of Medical Technologies (Part 4 of 12)’ (princeton.edu) accessed 18 September 2023.

[61] Ibid.

[62] For the purposes of this report, ‘effectiveness’ is used as a synonym of ‘efficacy’. In detail, effectiveness is concerned with the benefit of a technology under average conditions of use, whereas efficacy is the benefit under ideal conditions.

[63] ‘SAMD MDSW’ <www.quaregia.com/blog/samd-mdsw> accessed 18 September 2023.

[64] Office of the Commissioner, ‘The Drug Development Process’ (FDA, 20 February 2020) <www.fda.gov/patients/learn-about-drug-and-device-approvals/drug-development-process> accessed 18 September 2023.

[65] Eric Wu and others, ‘How Medical AI Devices Are Evaluated: Limitations and Recommendations from an Analysis of FDA Approvals’ (2021) 27 Nature Medicine 582.

[66] It can be debated whether this falls under the exact definition of SaMD as a stand-alone software feature, or as a software component of a medical device, but the lessons and process remain the same.

[67] SUMMARY OF SAFETY AND EFFECTIVENESS DATA (SSED) <www.accessdata.fda.gov/cdrh_docs/pdf21/P210018B.pdf> accessed 18 September 2023.

[68] A QMS is a standardised process for documenting compliance based on international standards (ISO 13485/820).

[69] Center for Devices and Radiological Health, ‘Overview of IVD Regulation’ [2023] FDA <www.fda.gov/medical-devices/ivd-regulatory-assistance/overview-ivd-regulation> accessed 18 September 2023.

[70] ‘When Science and Politics Collide: Enhancing the FDA | Science’ <www.science.org/doi/10.1126/science.aaw8093> accessed 18 September 2023.

[71] ‘Unique Device Identification System’ (Federal Register, 24 September 2013) <www.federalregister.gov/documents/2013/09/24/2013-23059/unique-device-identification-system> accessed 18 September 2023.

[72] ‘openFDA’ <https://open.fda.gov/data/faers/> accessed 10 November 2023.

[73] For example, Carpenter 2010, Hilts 2004, Hutt et al 2022

[74] ‘Factors to Consider Regarding Benefit-Risk in Medical Device Product Availability, Compliance, and Enforcement Decisions – Guidance for Industry and Food and Drug Administration Staff’.

[75] Center for Devices and Radiological Health, ‘510(k) Third Party Review Program’ (FDA, 15 August 2023) <www.fda.gov/medical-devices/premarket-submissions-selecting-and-preparing-correct-submission/510k-third-party-review-program> accessed 18 September 2023.

[76] Office of Regulatory Affairs, ‘What Should I Expect during an Inspection?’ [2020] FDA <www.fda.gov/industry/fda-basics-industry/what-should-i-expect-during-inspection> accessed 18 September 2023.

[77] ‘Device Makers Can Take COTS, but Only with Clear SOUP’ <https://web.archive.org/web/20130123140527/http://medicaldesign.com/engineering-prototyping/software/device-cots-soup-1111/> accessed 18 September 2023.

[78] ‘FDA Clears Intellia to Start US Tests of “in Vivo” Gene Editing Drug’ (BioPharma Dive) <www.biopharmadive.com/news/intellia-fda-crispr-in-vivo-gene-editing-ind/643999/> accessed 18 September 2023.

[79] ‘FDA Authority Over Tobacco’ (Campaign for Tobacco-Free Kids) <www.tobaccofreekids.org/what-we-do/us/fda> accessed 18 September 2023.

[80] FDA AT A GLANCE: REGULATED PRODUCTS AND FACILITIES, November 2020 <www.fda.gov/media/143704/download#:~:text=REGULATED%20PRODUCTS%20AND%20FACILITIES&text=FDA%2Dregulated%20products%20account%20for,dollar%20spent%20by%20U.S.%20consumers.&text=FDA%20regulates%20about%2078%20percent,poultry%2C%20and%20some%20egg%20products.> accessed 18 September 2023.

[81] ‘Getting Smarter: FDA Publishes Draft Guidance on Predetermined Change Control Plans for Artificial Intelligence/Machine Learning (AI/ML) Devices’ (5 February 2023) <www.ropesgray.com/en/newsroom/alerts/2023/05/getting-smarter-fda-publishes-draft-guidance-on-predetermined-change-control-plans-for-ai-ml-devices> accessed 18 September 2023.

[82] Center for Veterinary Medicine, ‘Q&A on FDA Regulation of Intentional Genomic Alterations in Animals’ [2023] FDA <www.fda.gov/animal-veterinary/intentional-genomic-alterations-igas-animals/qa-fda-regulation-intentional-genomic-alterations-animals> accessed 18 September 2023.

[83] Andrew Kolodny, ‘How FDA Failures Contributed to the Opioid Crisis’ (2020) 22 AMA Journal of Ethics 743.

[84] Office of the Commissioner, ‘Milestones in U.S. Food and Drug Law’ [2023] FDA <https://www.fda.gov/about-fda/fda-history/milestones-us-food-and-drug-law> accessed 3 December 2023

[85] Reputation and Power (2010) <https://press.princeton.edu/books/paperback/9780691141800/reputation-and-power> accessed 3 December 2023

[86] ‘Hutt, Merrill, Grossman, Cortez, Lietzan, and Zettler’s Food and Drug Law, 5th – 9781636596952 – West Academic’ <https://faculty.westacademic.com/Book/Detail?id=341299> accessed 3 December 2023

[87] For example, Carpenter 2010, Hilts 2004, Hutt et al 2022

[88] ‘Hutt, Merrill, Grossman, Cortez, Lietzan, and Zettler’s Food and Drug Law, 5th – 9781636596952 – West Academic’ <https://faculty.westacademic.com/Book/Detail?id=341299> accessed 18 September 2023.

[89] Eric Wu and others, ‘How Medical AI Devices Are Evaluated: Limitations and Recommendations from an Analysis of FDA Approvals’ (2021) 27 Nature Medicine 582.

[90] Other public health regulators, for example NICE (UK), cover accessibility risk to a larger degree than the FDA; similarly on structural discrimination risks, with NICE’s ‘Standing Together’ work on data curation and declarations of datasets used in developing SaMD. The FDA has over time developed similar programmes.

[91] Ziad Obermeyer and others, ‘Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations’ (2019) 366 Science 447.

[92] ‘FDA-cleared artificial intelligence and machine learning-based medical devices and their 510(k) predicate networks’ <www.thelancet.com/journals/landig/article/PIIS2589-7500(23)00126-7/fulltext#sec1> accessed 18 September 2023.

[93] ‘How the FDA’s Food Division Fails to Regulate Health and Safety Hazards’ <https://politico.com/interactives/2022/fda-fails-regulate-food-health-safety-hazards> accessed 18 September 2023.

[94] Christopher J Morten and Amy Kapczynski, ‘The Big Data Regulator, Rebooted: Why and How the FDA Can and Should Disclose Confidential Data on Prescription Drugs and Vaccines’ (2021) 109 California Law Review 493.

[95] ‘Examination of Clinical Trial Costs and Barriers for Drug Development’ (ASPE) <https://aspe.hhs.gov/reports/examination-clinical-trial-costs-barriers-drug-development-0> accessed 18 September 2023.

[96] Office of the Commissioner, ‘Advisory Committees’ (FDA, 3 May 2021) <www.fda.gov/advisory-committees> accessed 18 September 2023.

[97] For example, Carpenter 2010, Hilts 2004, Hutt et al 2022

[98] ‘FDA’s Science Infrastructure Failing | Infectious Diseases | JAMA | JAMA Network’ <https://jamanetwork.com/journals/jama/article-abstract/1149359> accessed 18 September 2023.

[99] Bridget M Kuehn, ‘FDA’s Science Infrastructure Failing’ (2008) 299 JAMA 157.

[100] ‘What to Expect at FDA’s Vaccine Advisory Committee Meeting’ (The Equation, 19 October 2020) <https://blog.ucsusa.org/genna-reed/vrbpac-meeting-what-to-expect/> accessed 18 September 2023.

[101] Office of the Commissioner, ‘What Is a Conflict of Interest?’ [2022] FDA <www.fda.gov/about-fda/fda-basics/what-conflict-interest> accessed 18 September 2023.

[102] The Firm and the FDA: McKinsey & Company’s Conflicts of Interest at the Heart of the Opioid Epidemic <https://fingfx.thomsonreuters.com/gfx/legaldocs/akpezyejavr/2022-04-13.McKinsey%20Opioid%20Conflicts%20Majority%20Staff%20Report%20FINAL.pdf> accessed 18 September 2023.

[103] Causholli M, Chambers DJ and Payne JL, ‘Future Nonaudit Service Fees and Audit Quality’ (2014) <onlinelibrary.wiley.com/doi/abs/10.1111/1911-3846.12042> accessed 21 September 2023; Jamal K and Sunder S, ‘Is Mandated Independence Necessary for Audit Quality?’ (2011) 36 Accounting, Organizations and Society 284 accessed 21 September 2023

[104] Reputation and Power (2010) <https://press.princeton.edu/books/paperback/9780691141800/reputation-and-power> accessed 18 September 2023.

[105] ‘Hutt, Merrill, Grossman, Cortez, Lietzan, and Zettler’s Food and Drug Law, 5th – 9781636596952 – West Academic’ <https://faculty.westacademic.com/Book/Detail?id=341299> accessed 18 September 2023.

[106] Ana Santos Rutschman, ‘How Theranos’ Faulty Blood Tests Got to Market – and What That Shows about Gaps in FDA Regulation’ (The Conversation, 5 October 2021) <http://theconversation.com/how-theranos-faulty-blood-tests-got-to-market-and-what-that-shows-about-gaps-in-fda-regulation-168050> accessed 18 September 2023.

[107] Center for Devices and Radiological Health, ‘Classify Your Medical Device’ (FDA, 14 August 2023) <www.fda.gov/medical-devices/overview-device-regulation/classify-your-medical-device> accessed 18 September 2023.

[108] Anderljung and others, ‘Frontier AI Regulation: Managing Emerging Risks to Public Safety’ (arXiv, 4 September 2023) <http://arxiv.org/abs/2307.03718> accessed 15 September 2023.

[109] ‘A Law for Foundation Models: The EU AI Act Can Improve Regulation for Fairer Competition – OECD.AI’ <https://oecd.ai/en/wonk/foundation-models-eu-ai-act-fairer-competition> accessed 18 September 2023.

[110] ‘Stanford CRFM’ <https://crfm.stanford.edu/report.html> accessed 18 September 2023.

[111] Pegah Maham and Sabrina Küspert, ‘Governing General Purpose AI’.

[112] ‘Frontier AI Regulation: Managing Emerging Risks to Public Safety’ <https://openai.com/research/frontier-ai-regulation> accessed 18 September 2023.

[113] ‘Auditing Algorithms: The Existing Landscape, Role of Regulators and Future Outlook’ (GOV.UK) <www.gov.uk/government/publications/findings-from-the-drcf-algorithmic-processing-workstream-spring-2022/auditing-algorithms-the-existing-landscape-role-of-regulators-and-future-outlook> accessed 18 September 2023.

[114] ‘Introducing Superalignment’ <https://openai.com/blog/introducing-superalignment> accessed 18 September 2023.

[115] ‘Why AI Safety?’ (Machine Intelligence Research Institute) <https://intelligence.org/why-ai-safety/> accessed 18 September 2023.

[116] ‘DAIR (Distributed AI Research Institute)’ (DAIR Institute) <https://dair-institute.org/> accessed 18 September 2023.

[117] Anthropic, ‘Frontier Threats Red Teaming for AI Safety’ <https://www.anthropic.com/index/frontier-threats-red-teaming-for-ai-safety> accessed 29 November 2023

[118] ‘Explainer: What Is a Foundation Model?’ <www.adalovelaceinstitute.org/resource/foundation-models-explainer/> accessed 18 September 2023.

[119] Center for Devices and Radiological Health, ‘Software as a Medical Device (SaMD)’ (FDA, 9 September 2020) <www.fda.gov/medical-devices/digital-health-center-excellence/software-medical-device-samd> accessed 10 November 2023.

[120] Pegah Maham and Sabrina Küspert, ‘Governing General Purpose AI’.

[121] ‘The Human Decisions That Shape Generative AI’ (Mozilla Foundation, 2 August 2023) <https://foundation.mozilla.org/en/blog/the-human-decisions-that-shape-generative-ai-who-is-accountable-for-what/> accessed 18 September 2023.

[122] ‘Frontier Model Security’ (Anthropic) <www.anthropic.com/index/frontier-model-security> accessed 18 September 2023.

[123] ‘Is ChatGPT a Cybersecurity Threat?’ (TechCrunch)

[124] ‘ChatGPT Security Risks: What Are They and How To Protect Companies’ (ITPro Today)

[125] Anderljung M and others, ‘Frontier AI Regulation: Managing Emerging Risks to Public Safety’ (arXiv, 4 September 2023) <http://arxiv.org/abs/2307.03718>

[126] Anderljung M and others, ‘Frontier AI Regulation: Managing Emerging Risks to Public Safety’ (arXiv, 4 September 2023) <http://arxiv.org/abs/2307.03718>

[127] Anderljung M and others, ‘Frontier AI Regulation: Managing Emerging Risks to Public Safety’ (arXiv, 4 September 2023) <http://arxiv.org/abs/2307.03718>

[128] Anderljung M and others, ‘Frontier AI Regulation: Managing Emerging Risks to Public Safety’ (arXiv, 4 September 2023) <http://arxiv.org/abs/2307.03718>

[129] ‘AI Assurance?’ <www.adalovelaceinstitute.org/report/risks-ai-systems/> accessed 21 September 2023.

[130] ‘Preparing for Extreme Risks: Building a Resilient Society’ (parliament.uk)

[131] Nguyen T, ‘Insurability of Catastrophe Risks and Government Participation in Insurance Solutions’ (2013) <www.semanticscholar.org/paper/Insurability-of-Catastrophe-Risks-and-Government-in-Nguyen/dcecefd3f24a099b958e8ac1127a4bdc803b28fb> accessed 21 September 2023

[132] Banias MJ, ‘Inside CounterCloud: A Fully Autonomous AI Disinformation System’ (The Debrief, 16 August 2023) <https://thedebrief.org/countercloud-ai-disinformation/> accessed 21 September 2023

[133] Raji ID and others, ‘Outsider Oversight: Designing a Third Party Audit Ecosystem for AI Governance’ (arXiv, 9 June 2022) <http://arxiv.org/abs/2206.04737> accessed 21 September 2023

[134] McAllister LK, ‘Third-Party Programs to Assess Regulatory Compliance’ (2012) <www.acus.gov/sites/default/files/documents/Third-Party-Programs-Report_Final.pdf> accessed 21 September 2023

[135] ‘Science in Regulation: A Study of Agency Decisionmaking Approaches, Appendices’ (2012) <www.acus.gov/sites/default/files/documents/Science%20in%20Regulation_Final%20Appendix_2_18_13_0.pdf> accessed 21 September 2023

[136] ‘GPT-4 System Card’ (OpenAI, 2023) <https://cdn.openai.com/papers/gpt-4-system-card.pdf> accessed 21 September 2023

[137] Intensive in-house evidence production by regulators, as practised for example by the IAEA, is suitable only for non-complex industries.

[138] The order does not indicate the importance of each dimension. The importance for risk reduction depends significantly on the specific implementation of the dimensions and the context.

[139] While other oversight regimes, such as those practised in cybersecurity, aviation or similar domains, are also an inspiration for foundation models, FDA-style oversight is among the few that score towards the right on most dimensions identified in the regulatory oversight and audit literature and depicted above.

[140] OpenAI, ‘Announcing OpenAI’s Bug Bounty Program’ (2023) accessed 21 September 2023

[141] ‘MAUDE – Manufacturer and User Facility Device Experience’ <www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfmaude/search.cfm> accessed 21 September 2023

[142] ‘Auditor Independence and Audit Quality: A Literature Review – Nopmanee Tepalagul, Ling Lin, 2015’ <https://journals.sagepub.com/doi/abs/10.1177/0148558X14544505> accessed 21 September 2023

[143] ‘Customer-Driven Misconduct: How Competition Corrupts Business Practices – Article – Faculty & Research – Harvard Business School’ <www.hbs.edu/faculty/Pages/item.aspx?num=43347> accessed 21 September 2023

[144] Deis DR Jr and Giroux GA, ‘Determinants of Audit Quality in the Public Sector’ (1992) 67 The Accounting Review 462 <www.jstor.org/stable/247972> accessed 21 September 2023

[145] Engstrom DF and Ho DE, ‘Algorithmic Accountability in the Administrative State’ (9 March 2020) <https://papers.ssrn.com/abstract=3551544> accessed 21 September 2023

[146] Causholli M, Chambers DJ and Payne JL, ‘Future Nonaudit Service Fees and Audit Quality’ (2014) <onlinelibrary.wiley.com/doi/abs/10.1111/1911-3846.12042> accessed 21 September 2023

[147] Jamal K and Sunder S, ‘Is Mandated Independence Necessary for Audit Quality?’ (2011) 36 Accounting, Organizations and Society 284

[148] Widder DG, West S and Whittaker M, ‘Open (For Business): Big Tech, Concentrated Power, and the Political Economy of Open AI’ (17 August 2023) <https://papers.ssrn.com/abstract=4543807> accessed 21 September 2023

[149] Lamoreaux PT, ‘Does PCAOB Inspection Access Improve Audit Quality? An Examination of Foreign Firms Listed in the United States’ (2016) 61 Journal of Accounting and Economics 313

[150] ‘Introduction to NIST FRVT’ (Paravision) <www.paravision.ai/news/introduction-to-nist-frvt/> accessed 21 September 2023

[151] ‘UN Statistics Wiki’ <https://unstats.un.org/wiki/plugins/servlet/mobile?contentId=152797274#content/view/152797274> accessed 21 September 2023

[152] ‘Large Language Models and Software as a Medical Device – MedRegs’ <https://medregs.blog.gov.uk/2023/03/03/large-language-models-and-software-as-a-medical-device/> accessed 21 September 2023

[153] Ada Lovelace Institute, AI assurance? Assessing and mitigating risks across the AI lifecycle (2023) <https://www.adalovelaceinstitute.org/report/risks-ai-systems/>

[154] ‘Inclusive AI Governance’ (Ada Lovelace Institute, 2023) <www.adalovelaceinstitute.org/wp-content/uploads/2023/03/Ada-Lovelace-Institute-Inclusive-AI-governance-Discussion-paper-March-2023.pdf> accessed 21 September 2023

[155] ‘AI Assurance?’ <www.adalovelaceinstitute.org/report/risks-ai-systems/> accessed 21 September 2023

[156] ‘Comment of the AI Policy and Governance Working Group on the NTIA AI Accountability Policy’ (2023) <www.ias.edu/sites/default/files/AI%20Policy%20and%20Governance%20Working%20Group%20NTIA%20Comment.pdf> accessed 21 September 2023

[157] Weale S, ‘Lecturers Urged to Review Assessments in UK amid Concerns over New AI Tool’ The Guardian (13 January 2023) <https://www.theguardian.com/technology/2023/jan/13/end-of-the-essay-uk-lecturers-assessments-chatgpt-concerns-ai> accessed 23 November 2023

[158] ‘Proposing a Foundation Model Information-Sharing Regime for the UK | GovAI Blog’ <www.governance.ai/post/proposing-a-foundation-model-information-sharing-regime-for-the-uk> accessed 21 September 2023

[159] ‘Proposing a Foundation Model Information-Sharing Regime for the UK | GovAI Blog’ <www.governance.ai/post/proposing-a-foundation-model-information-sharing-regime-for-the-uk> accessed 21 September 2023

[160] ‘Regulating AI in the UK’ <www.adalovelaceinstitute.org/report/regulating-ai-in-the-uk/> accessed 21 September 2023

[161] ‘Unique Device Identification System’ (Federal Register, 24 September 2013) <www.federalregister.gov/documents/2013/09/24/2013-23059/unique-device-identification-system> accessed 21 September 2023

[162] Balwit A, ‘How We Can Regulate AI’ (Asterisk) <https://asteriskmag.com/issues/03/how-we-can-regulate-ai> accessed 21 September 2023

[163] ‘Opinion | Here’s a Simple Way to Regulate Powerful AI Models’ Washington Post (16 August 2023) <www.washingtonpost.com/opinions/2023/08/16/ai-danger-regulation-united-states/> accessed 21 September 2023

[164] Vidal DE and others, ‘Navigating US Regulation of Artificial Intelligence in Medicine—A Primer for Physicians’ (2023) 1 Mayo Clinic Proceedings: Digital Health 31

[165] ‘The Human Decisions That Shape Generative AI’ (Mozilla Foundation, 2 August 2023) <https://foundation.mozilla.org/en/blog/the-human-decisions-that-shape-generative-ai-who-is-accountable-for-what/> accessed 21 September 2023

[166] Birhane A, Prabhu VU and Kahembwe E, ‘Multimodal Datasets: Misogyny, Pornography, and Malignant Stereotypes’ (arXiv, 5 October 2021) <http://arxiv.org/abs/2110.01963> accessed 21 September 2023

[167] Schaul K, Chen SY and Tiku N, ‘Inside the Secret List of Websites That Make AI like ChatGPT Sound Smart’ (Washington Post) <www.washingtonpost.com/technology/interactive/2023/ai-chatbot-learning/> accessed 21 September 2023

[168] ‘When AI Is Trained on AI-Generated Data, Strange Things Start to Happen’ (Futurism) <https://futurism.com/ai-trained-ai-generated-data-interview> accessed 21 September 2023

[169] Draft standards here are a very good example of the value of dataset documentation (that is, declaring metadata) on what is used in training and fine-tuning models. In theory, this could also all be kept confidential as commercially sensitive information once a legal infrastructure is in place. <www.datadiversity.org/draft-standards>

[170] Mitchell, Wu, Zaldivar, Barnes, Vasserman, Hutchinson, Spitzer, Raji and Gebru, (2019), ‘Model Cards for Model Reporting’, doi: 10.1145/3287560.3287596

[171] Gebru, Morgenstern, Vecchione, Vaughan, Wallach, Daumé III and Crawford, (2021), Datasheets for Datasets, <https://m-cacm.acm.org/magazines/2021/12/256932-datasheets-for-datasets/abstract> (Accessed: 27 February 2023); Hutchinson, Smart, Hanna, Denton, Greer, Kjartansson, Barnes and Mitchell, (2021), ‘Towards Accountability for Machine Learning Datasets: Practices from Software Engineering and Infrastructure’, doi: 10.1145/3442188.3445918

[172] Shevlane T and others, ‘Model Evaluation for Extreme Risks’ (arXiv, 24 May 2023) <http://arxiv.org/abs/2305.15324> accessed 21 September 2023

[173] A pretrained AI model is a deep learning model that is already trained on large datasets to accomplish a specific task, meaning there are design choices which affect its output and performance (according to one leading lab ‘language models already learn a lot about human values during pretraining’ and this is where ‘implicit biases’ arise.)

[174] ‘running against a suite of benchmark objectionable behaviors… we find that the prompts achieve up to 84% success rates at attacking GPT-3.5 and GPT-4, and 66% for PaLM-2; success rates for Claude are substantially lower (2.1%), but notably the attacks still can induce behavior that is otherwise never generated.’ Zou A and others, ‘Universal and Transferable Adversarial Attacks on Aligned Language Models’ (arXiv, 27 July 2023) <http://arxiv.org/abs/2307.15043> accessed 21 September 2023

[175] Shevlane T and others, ‘Model Evaluation for Extreme Risks’ (arXiv, 24 May 2023) <http://arxiv.org/abs/2305.15324> accessed 21 September 2023; Nelson and others, ‘Comment of the AI Policy and Governance Working Group on the NTIA AI Accountability Policy’ (2023); Kolt N, ‘Algorithmic Black Swans’ (25 February 2023) <https://papers.ssrn.com/abstract=4370566> accessed 21 September 2023

[176] Mökander J and others, ‘Auditing Large Language Models: A Three-Layered Approach’ [2023] AI and Ethics <http://arxiv.org/abs/2302.08500> accessed 21 September 2023; Wan A and others, ‘Poisoning Language Models During Instruction Tuning’ (arXiv, 1 May 2023) <http://arxiv.org/abs/2305.00944> accessed 21 September 2023; ‘Analyzing the European Union AI Act: What Works, What Needs Improvement’ (Stanford HAI) <https://hai.stanford.edu/news/analyzing-european-union-ai-act-what-works-what-needs-improvement> accessed 21 September 2023; ‘EU AI Standards Development and Civil Society Participation’ <www.adalovelaceinstitute.org/event/eu-ai-standards-civil-society-participation/> accessed 21 September 2023

[177] ‘Outsider Oversight: Designing a Third Party Audit Ecosystem for AI Governance’ <https://dl.acm.org/doi/pdf/10.1145/3514094.3534181> accessed 21 September 2023

[178] Gupta A, ‘Emerging AI Governance Is an Opportunity for Business Leaders to Accelerate Innovation and Profitability’ (Tech Policy Press, 31 May 2023) <https://techpolicy.press/emerging-ai-governance-is-an-opportunity-for-business-leaders-to-accelerate-innovation-and-profitability/> accessed 21 September 2023

[179] ‘Key Enforcement Issues of the AI Act Should Lead EU Trilogue Debate’ (Brookings) <www.brookings.edu/articles/key-enforcement-issues-of-the-ai-act-should-lead-eu-trilogue-debate/> accessed 21 September 2023

[180] Shevlane T, ‘Structured Access’ (2022) <https://arxiv.org/ftp/arxiv/papers/2201/2201.05159.pdf> accessed 21 September 2023

[181] ‘Systematic probing of an AI model or system by either expert or non-expert human evaluators to reveal undesired outputs or behaviors’.

[182] House TW, ‘FACT SHEET: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI’ (The White House, 21 July 2023) <www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/> accessed 21 September 2023

[183] ‘Keeping an Eye on AI’ <www.adalovelaceinstitute.org/report/keeping-an-eye-on-ai/> accessed 21 September 2023

[184] Janjeva A and others, ‘Strengthening Resilience to AI Risk’ (2023) <https://cetas.turing.ac.uk/sites/default/files/2023-08/cetas-cltr_ai_risk_briefing_paper.pdf> accessed 21 September 2023

[185] Shrishak K, ‘How to Deal with an AI Near-Miss: Look to the Skies’ (2023) 79 Bulletin of the Atomic Scientists 166

[186] ‘Guidance for Manufacturers on Reporting Adverse Incidents Involving Software as a Medical Device under the Vigilance System’ (GOV.UK) <www.gov.uk/government/publications/reporting-adverse-incidents-involving-software-as-a-medical-device-under-the-vigilance-system/guidance-for-manufacturers-on-reporting-adverse-incidents-involving-software-as-a-medical-device-under-the-vigilance-system> accessed 21 September 2023

[187] Guidance always has its roots in legislation, but can be iterated more rapidly and flexibly, whereas legislation requires several legal and political steps at minimum. ‘AI Regulation and the Imperative to Learn from History’ <www.adalovelaceinstitute.org/blog/ai-regulation-learn-from-history/>. Explainer here: www.oneeducation.org.uk/difference-between-laws-regulations-acts-guidance-policies/.

[188] <www.tandfonline.com/doi/pdf/10.1080/01972243.2022.2124565>

[189] <https://cip.org/alignmentassemblies>

[190] <https://arxiv.org/abs/2306.09871>; <https://openai.com/blog/democratic-inputs-to-ai>

[191] Ada Lovelace Institute, ‘Participatory Data Stewardship: A Framework for Involving People in the Use of Data’ (2021) <https://www.adalovelaceinstitute.org/report/participatory-data-stewardship/>

[192] Shevlane T and others, ‘Model Evaluation for Extreme Risks’ (arXiv, 24 May 2023) <http://arxiv.org/abs/2305.15324> accessed 21 September 2023

[193] ‘Examining the Black Box’ <www.adalovelaceinstitute.org/report/examining-the-black-box-tools-for-assessing-algorithmic-systems/> accessed 21 September 2023

[194] Nelson and others, ‘Comment of the AI Policy and Governance Working Group on the NTIA AI Accountability Policy’ <www.ias.edu/sites/default/files/AI%20Policy%20and%20Governance%20Working%20Group%20NTIA%20Comment.pdf> accessed 21 September 2023

[195] Bill Chappell, ‘“It Was Installed For This Purpose,” VW’s U.S. CEO Tells Congress About Defeat Device’ NPR (8 October 2015) <www.npr.org/sections/thetwo-way/2015/10/08/446861855/volkswagen-u-s-ceo-faces-questions-on-capitol-hill> accessed 30 August 2023

[196] MedWatch is the FDA’s adverse event reporting program, while the Medical Product Safety Network (MedSun) monitors the safety and effectiveness of medical devices. Office of the Commissioner, ‘Step 5: FDA Post-Market Device Safety Monitoring’ [2018] FDA <www.fda.gov/patients/device-development-process/step-5-fda-post-market-device-safety-monitoring> accessed 21 September 2023

[197] AI Now Institute, ‘Zero Trust AI Governance’ (August 2023) <https://ainowinstitute.org/wp-content/uploads/2023/08/Zero-Trust-AI-Governance.pdf> accessed 21 September 2023

[198] ‘The Value Chain of General-Purpose AI’ <www.adalovelaceinstitute.org/blog/value-chain-general-purpose-ai/> accessed 21 September 2023

[199] <www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/>

[200] Knott A and Pedreschi D, ‘State-of-the-Art Foundation AI Models Should Be Accompanied by Detection Mechanisms as a Condition of Public Release’ <https://gpai.ai/projects/responsible-ai/social-media-governance/Social%20Media%20Governance%20Project%20-%20July%202023.pdf> accessed 21 September 2023

[201] <www.tspa.org/curriculum/ts-fundamentals/transparency-report/>

[202] Bommasani R and others, ‘Do Foundation Model Providers Comply with the Draft EU AI Act?’ <https://crfm.stanford.edu/2023/06/15/eu-ai-act.html> accessed 21 September 2023

[203] ‘Keeping an Eye on AI’ <www.adalovelaceinstitute.org/report/keeping-an-eye-on-ai/> accessed 21 September 2023

[204] ‘Regulating AI in the UK’ <www.adalovelaceinstitute.org/report/regulating-ai-in-the-uk/> accessed 21 September 2023

[205] Zinchenko V and others, ‘Changes in Software as a Medical Device Based on Artificial Intelligence Technologies’ (2022) 17 International Journal of Computer Assisted Radiology and Surgery 1969

[206] Shrishak K, ‘How to Deal with an AI Near-Miss: Look to the Skies’ (2023) 79 Bulletin of the Atomic Scientists 166

[207] AI Now Institute, ‘Zero Trust AI Governance’ (August 2023) <https://ainowinstitute.org/wp-content/uploads/2023/08/Zero-Trust-AI-Governance.pdf> accessed 21 September 2023

[208] ‘How Boeing 737 MAX’s Flawed Flight Control System Led to 2 Crashes That Killed 346 – ABC News’ <https://abcnews.go.com/US/boeing-737-maxs-flawed-flight-control-system-led/story?id=74321424> accessed 21 September 2023

[209] A new national system to more quickly spot possible safety issues, using existing electronic health databases to keep an eye on the safety of approved medical products in real time. This tool will add to, but not replace, the FDA’s existing post-market safety assessment tools. Office of the Commissioner, ‘Step 5: FDA Post-Market Device Safety Monitoring’ [2018] FDA <www.fda.gov/patients/device-development-process/step-5-fda-post-market-device-safety-monitoring> accessed 21 September 2023

[210] In the UK, the Civil Aviation Authority has a revenue of £140m and staff of over 1,000, and the Office for Nuclear Regulation around £90m with around 700 staff. An EU-level agency for AI should be funded well beyond this, given that the EU is more than six times the size of the UK.

[211] Office of Regulatory Affairs, ‘Recalls, Market Withdrawals, & Safety Alerts’ (FDA, 11 February 2022) <www.fda.gov/safety/recalls-market-withdrawals-safety-alerts> accessed 21 September 2023

[212] NIST AIRC Team, ‘NIST AIRC – Govern’ <https://airc.nist.gov/AI_RMF_Knowledge_Base/Playbook/Govern> accessed 21 September 2023

[213] ‘Committing to Effective Whistleblower Protection | En | OECD’ <www.oecd.org/corruption-integrity/reports/committing-to-effective-whistleblower-protection-9789264252639-en.html> accessed 21 September 2023

[214] Anderljung M and others, ‘Frontier AI Regulation: Managing Emerging Risks to Public Safety’ (arXiv, 4 September 2023) <http://arxiv.org/abs/2307.03718> accessed 21 September 2023

[215] Guidance always has its roots in legislation but can be iterated more rapidly and flexibly, whereas legislation requires several legal and political steps at minimum. ‘AI Regulation and the Imperative to Learn from History’ <www.adalovelaceinstitute.org/blog/ai-regulation-learn-from-history/> accessed 21 September 2023. Explainer here: www.oneeducation.org.uk/difference-between-laws-regulations-acts-guidance-policies/.

[216] Raji ID and others, ‘Outsider Oversight: Designing a Third Party Audit Ecosystem for AI Governance’ (arXiv, 9 June 2022) <http://arxiv.org/abs/2206.04737> accessed 21 September 2023

[217] Draft standards here are a very good example of the value of dataset documentation (i.e. declaring metadata) on what is used in training and fine-tuning models. In theory, this could also all be kept confidential as commercially sensitive information once a legal infrastructure is in place. <www.datadiversity.org/draft-standards>

[218] Mitchell, Wu, Zaldivar, Barnes, Vasserman, Hutchinson, Spitzer, Raji and Gebru, (2019), ‘Model Cards for Model Reporting’, doi: 10.1145/3287560.3287596

[219] Gebru, Morgenstern, Vecchione, Vaughan, Wallach, Daumé III and Crawford, (2021), Datasheets for Datasets, <https://m-cacm.acm.org/magazines/2021/12/256932-datasheets-for-datasets/abstract> (Accessed: 27 February 2023); Hutchinson, Smart, Hanna, Denton, Greer, Kjartansson, Barnes and Mitchell, (2021), ‘Towards Accountability for Machine Learning Datasets: Practices from Software Engineering and Infrastructure’, doi: 10.1145/3442188.3445918

[220] Shevlane T and others, ‘Model Evaluation for Extreme Risks’ (arXiv, 24 May 2023) <http://arxiv.org/abs/2305.15324> accessed 21 September 2023

[221]

[222] Knott A and Pedreschi D, ‘State-of-the-Art Foundation AI Models Should Be Accompanied by Detection Mechanisms as a Condition of Public Release’ <https://gpai.ai/projects/responsible-ai/social-media-governance/Social%20Media%20Governance%20Project%20-%20July%202023.pdf> accessed 21 September 2023

[223] Shevlane T and others, ‘Model Evaluation for Extreme Risks’ (arXiv, 24 May 2023) <http://arxiv.org/abs/2305.15324> accessed 21 September 2023

[224] ‘Examining the Black Box’ <www.adalovelaceinstitute.org/report/examining-the-black-box-tools-for-assessing-algorithmic-systems/> accessed 21 September 2023

[225] AI Now Institute, ‘Zero Trust AI Governance’ (August 2023) <https://ainowinstitute.org/wp-content/uploads/2023/08/Zero-Trust-AI-Governance.pdf> accessed 21 September 2023

[226] ‘Regulating AI in the UK’ <www.adalovelaceinstitute.org/report/regulating-ai-in-the-uk/> accessed 21 September 2023

[227] Shrishak K, ‘How to Deal with an AI Near-Miss: Look to the Skies’ (2023) 79 Bulletin of the Atomic Scientists 166

[228] NIST AIRC Team, ‘NIST AIRC – Govern 1.7’ <https://airc.nist.gov/AI_RMF_Knowledge_Base/Playbook/Govern> accessed 21 September 2023

[229] In the UK, the Civil Aviation Authority has a revenue of £140m and staff of over 1,000, and the Office for Nuclear Regulation around £90m with around 700 staff. An EU-level agency for AI should be funded well beyond this, given that the EU is more than six times the size of the UK.

[230] In 2023 ~50% of the FDA’s ~$8bn budget was covered through mandatory fees paid by companies overseen by the FDA. See: <https://www.fda.gov/media/165045/download> accessed 24 November 2023

[231] 80% of the EMA’s funding comes from fees and charges levied on companies. See: EMA, ‘Funding’ (European Medicines Agency, 17 September 2018) <www.ema.europa.eu/en/about-us/how-we-work/governance-documents/funding> accessed 10 August 2023

[232] ‘Governing General Purpose AI — A Comprehensive Map of Unreliability, Misuse and Systemic Risks’ (20 July 2023) <www.stiftung-nv.de/de/publikation/governing-general-purpose-ai-comprehensive-map-unreliability-misuse-and-systemic-risks> accessed 21 September 2023

[233] Nathalie Smuha, ‘Beyond the Individual: Governing AI’s Societal Harm’ <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3941956> accessed 24 November 2023

[234] EMA, ‘Success Rate for Marketing Authorisation Applications from SMEs Doubles between 2016 and 2020’ (European Medicines Agency, 25 June 2021) <www.ema.europa.eu/en/news/success-rate-marketing-authorisation-applications-smes-doubles-between-2016-2020> accessed 10 August 2023

[235] ‘AI and Digital Regulations Service for Health and Social Care – AI Regulation Service – NHS’ <www.digitalregulations.innovation.nhs.uk/> accessed 21 September 2023

[236] EMA, ‘Advanced Therapy Medicinal Products: Overview’ (European Medicines Agency, 17 September 2018) <www.ema.europa.eu/en/human-regulatory/overview/advanced-therapy-medicinal-products-overview> accessed 10 August 2023

[237] ‘Key Enforcement Issues of the AI Act Should Lead EU Trilogue Debate’ (Brookings) <www.brookings.edu/articles/key-enforcement-issues-of-the-ai-act-should-lead-eu-trilogue-debate/> accessed 21 September 2023

[238] Infocomm Media Development Authority, Aicadium, and AI Verify Foundation, ‘Generative AI: Implications for Trust and Governance’ 2023 <https://aiverifyfoundation.sg/downloads/Discussion_Paper.pdf> accessed 21 September 2023

[239] EMA, ‘Transparency’ (European Medicines Agency, 17 September 2018) <www.ema.europa.eu/en/about-us/how-we-work/transparency> accessed 10 August 2023

[240] ‘Governing General Purpose AI — A Comprehensive Map of Unreliability, Misuse and Systemic Risks’ (20 July 2023) <www.stiftung-nv.de/de/publikation/governing-general-purpose-ai-comprehensive-map-unreliability-misuse-and-systemic-risks> accessed 21 September 2023

[241] Ho L and others, ‘International Institutions for Advanced AI’ (arXiv, 11 July 2023) <http://arxiv.org/abs/2307.04699> accessed 21 September 2023

[242] ‘Three Regulatory Agencies: A Comparison’ <www.hmpgloballearningnetwork.com/site/frmc/articles/three-regulatory-agencies-comparison> accessed 21 September 2023

[243] ‘COVID-19 Disruptions of International Clinical Trials: Comparing Guidances Issued by FDA, EMA, MHRA and PMDA’ (April 2020) <www.ropesgray.com/en/newsroom/alerts/2020/04/national-authority-guidance-on-clinical-trials-during-the-covid-19-pandemic> accessed 21 September 2023

[244] Van Norman GA, ‘Drugs and Devices: Comparison of European and U.S. Approval Processes’ (2016) 1 JACC: Basic to Translational Science 399

[245] Cummings ML and Britton D, ‘Chapter 6 – Regulating Safety-Critical Autonomous Systems: Past, Present, and Future Perspectives’ in Richard Pak, Ewart J de Visser and Ericka Rovira (eds), Living with Robots (Academic Press 2020) <www.sciencedirect.com/science/article/pii/B9780128153673000062> accessed 21 September 2023


Image credit: Lyndon Stratford

  1. Hancock, A. and Steer, G. (2021) ‘Johnson backtracks on vaccine “passport for pubs” after backlash’, Financial Times, 25 March 2021. Available at: https://www.ft.com/content/aa5e8372-8cec-4b82-96d8-0019f2f24998 (Accessed: 5 April 2021).
  2. Ada Lovelace Institute (2021) International monitor: vaccine passports and COVID status apps. Available at: https://www.adalovelaceinstitute.org/project/international-monitor-vaccine-passports-covid-status-apps/ (Accessed: 5 April 2021)
  3. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  4. Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021).
  5. Olivarius, K. (2020) ‘The Dangerous History of Immunoprivilege’, The New York Times. 12 April 2020. Available at: https://www.nytimes.com/2020/04/12/opinion/coronavirus-immunity-passports.html (Accessed: 6 April 2021).
  6. World Health Organization (ed.) (2016) International health regulations (2005). Third edition. Geneva, Switzerland: World Health Organization.
  7. Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021).
  8. Ada Lovelace Institute (2021) The epidemiological and economic impact of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=KRUmM-_Jjk4 (Accessed: 7 April 2021).
  9. Wilson, K., Atkinson, K. M. and Bell, C. P. (2016) ‘Travel Vaccines Enter the Digital Age: Creating a Virtual Immunization Record’, The American Journal of Tropical Medicine and Hygiene, 94(3), pp. 485–488. doi: 10.4269/ajtmh.15-0510
  10. Kobie, N. (2020) ‘Plans for coronavirus immunity passports should worry us all’, Wired UK, 8 June 2020. Available at: https://www.wired.co.uk/article/uk-immunity-passports-coronavirus (Accessed: 10 February 2021); Miller, J. (2020) ‘Armed with Roche antibody test, Germany faces immunity passport dilemma’, Reuters, 4 May 2020. Available at: https://www.reuters.com/article/health-coronavirus-germany-antibodies-idUSL1N2CM0WB (Accessed: 10 February 2021); Rayner, G. and Bodkin, H. (2020) ‘Government considering “health certificates” if proof of immunity established by new antibody test’, The Telegraph, 14 May 2020. Available at: https://www.telegraph.co.uk/politics/2020/05/14/government-considering-health-certificates-proof-immunity-established/ (Accessed: 10 February 2021).
  11. World Health Organisation (2020) “Immunity passports” in the context of COVID-19. Scientific Brief. 24 April 2020. Available at: https://www.who.int/news-room/commentaries/detail/immunity-passports-in-the-context-of-covid-19 (Accessed: 10 February 2021).
  12. Ministry of Health (2021) What is a Green Pass? Available at: https://corona.health.gov.il/en/directives/green-pass-info/ (Accessed: 6 April 2021).
  13. European Commission (2021) Coronavirus: Commission proposes a Digital Green Certificate, European Commission – European Commission. Available at: https://ec.europa.eu/commission/presscorner/detail/en/IP_21_1181 (Accessed: 6 April 2021).
  14. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021)
  15. World Health Organisation (2020) Estonia and WHO to jointly develop digital vaccine certificate to strengthen COVAX. Available at: https://www.who.int/news-room/feature-stories/detail/estonia-and-who-to-jointly-develop-digital-vaccine-certificate-to-strengthen-covax (Accessed: 6 April 2021). World Health Organisation (2020) World Health Organization open call for nomination of experts to contribute to the Smart Vaccination Certificate technical specifications and standards. Available at: https://www.who.int/news-room/articles-detail/world-health-organization-open-call-for-nomination-of-experts-to-contribute-to-the-smart-vaccination-certificate-technical-specifications-and-standards-application-deadline-14-december-2020 (Accessed: 6 April 2021). Reuters (2021), WHO does not back vaccination passports for now – spokeswoman. Available at: https://www.reuters.com/article/us-health-coronavirus-who-vaccines-idUKKBN2BT158 (Accessed: 13 April 2021)
  16. IBM (2021) Digital Health Pass – Overview. Available at: https://www.ibm.com/products/digital-health-pass (Accessed: 6 April 2021).
  17. Watson Health (2020) ‘IBM and Salesforce join forces to help deliver verifiable vaccine and health passes’, Watson Health Perspectives. Available at: https://www.ibm.com/blogs/watson-health/partnership-with-salesforce-verifiable-health-pass/ (Accessed: 6 April 2021).
  18. New York State (2021) Excelsior Pass. Available at: https://covid19vaccine.health.ny.gov/excelsior-pass (Accessed: 6 April 2021).
  19. CommonPass (2021) CommonPass. Available at: https://commonpass.org (Accessed: 7 April 2021); IATA (2021) IATA Travel Pass Initiative. Available at: https://www.iata.org/en/programs/passenger/travel-pass/ (Accessed: 7 April 2021).
  20. COVID-19 Credentials Initiative (2021). COVID-19 Credentials Initiative. Available at: https://www.covidcreds.org/ (Accessed: 7 April 2021). VCI (2021). Available at: https://vci.org/ (Accessed: 7 April 2021).
  21. myGP (2020) ‘“myGP” to launch England’s first digital COVID-19 vaccination verification feature for smartphones.’ myGP. 9 December 2020. Available at: https://www.mygp.com/mygp-to-launch-englands-first-digital-covid-19-vaccination-verification-feature-for-smartphones/ (Accessed: 7 April 2021). iProov (2021) Covid-19 Passport from iProov and Mvine Moves Into Trial Phase. Available at: https://www.iproov.com/press/uk-covid19-passport-moves-into-trial-phase (Accessed: 7 April 2021).
  22. BBC News (2020) ‘Covid-19: No plans for “vaccine passport” – Michael Gove’, BBC News. 1 December 2020. Available at: https://www.bbc.com/news/uk-55143484 (Accessed: 7 April 2021). BBC News (2021) ‘Covid: Minister rules out vaccine passports in UK’, BBC News. 7 February 2021. Available at: https://www.bbc.com/news/55970801 (Accessed: 7 April 2021).
  23. Sheridan, D. (2021) ‘Vaccine passports to enter shops, pubs and events “under consideration”’, The Telegraph, 14 February 2021. Available at: https://www.telegraph.co.uk/news/2021/02/14/vaccine-passports-enter-shops-pubs-events-consideration/ (Accessed: 7 April 2021). Zeffman, H. and Dathan, M. (2021) ‘Boris Johnson sees Covid vaccine passport app as route to freedom’, The Times, 11 February 2021. Available at: https://www.thetimes.co.uk/article/boris-johnson-sees-covid-vaccine-passport-app-as-route-to-freedom-rt07g63xn (Accessed: 7 April 2021)
  24. Boland, H. (2021) ‘Government funds eight vaccine passport schemes despite “no plans” for rollout’, The Telegraph, 24 January 2021. Available at: https://www.telegraph.co.uk/technology/2021/01/24/government-funds-eight-vaccine-passport-schemes-despite-no-plans/ (Accessed: 7 April 2021). Department of Health and Social Care (2020), Covid-19 Certification/Passport MVP. Available at: https://www.contractsfinder.service.gov.uk/notice/bf6eef14-6345-429a-a4e7-df68a39bd135 (Accessed: 13 April 2021). Hymas, C. and Diver, T. (2021) ‘Vaccine certificates being developed to unlock international travel’, The Telegraph, 12 February 2021. Available at: https://www.telegraph.co.uk/politics/2021/02/12/government-develop-COVID-vaccine-certificates-travel-abroad/ (Accessed: 7 April 2021)
  25. Cabinet Office (2021) COVID-19 Response – Spring 2021, GOV.UK. Available at: https://www.gov.uk/government/publications/COVID-19-response-spring-2021/COVID-19-response-spring-2021 (Accessed: 7 April 2021)
  26. Cabinet Office (2021) Roadmap Reviews: Update. Available at: https://www.gov.uk/government/publications/COVID-19-response-spring-2021-reviews-terms-of-reference/roadmap-reviews-update.
  27. Scientific Advisory Group for Emergencies (2021) ‘SAGE 79 minutes: Coronavirus (COVID-19) response, 4 February 2021’, GOV.UK. 22 February 2021, Available at: https://www.gov.uk/government/publications/sage-79-minutes-coronavirus-covid-19-response-4-february-2021 (Accessed: 6 April 2021).
  28. Ada Lovelace Institute (2021) The epidemiological and economic impact of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=KRUmM-_Jjk4 (Accessed: 7 April 2021)
  29. European Centre for Disease Prevention and Control (2021) Risk of SARS-CoV-2 transmission from newly-infected individuals with documented previous infection or vaccination. Available at: https://www.ecdc.europa.eu/en/publications-data/sars-cov-2-transmission-newly-infected-individuals-previous-infection (Accessed: 13 April 2021). Science News (2021) Moderna and Pfizer COVID-19 vaccines may block infection as well as disease. Available at: https://www.sciencenews.org/article/coronavirus-covid-vaccine-moderna-pfizer-transmission-disease (Accessed: 13 April 2021)
  30. Bonnefoy, P. and Londoño, E. (2021) ‘Despite Chile’s Speedy COVID-19 Vaccination Drive, Cases Soar’, The New York Times, 30 March 2021. Available at: https://www.nytimes.com/2021/03/30/world/americas/chile-vaccination-cases-surge.html (Accessed: 6 April 2021)
  31. Zimmer, C., Corum, J. and Wee, S.-L. (no date) ‘Coronavirus Vaccine Tracker’, The New York Times. Available at: https://www.nytimes.com/interactive/2020/science/coronavirus-vaccine-tracker.html (Accessed: 21 April 2021). Parker et al. (2021) An interactive website tracking COVID-19 vaccine development. Available at: https://vac-lshtm.shinyapps.io/ncov_vaccine_landscape/ (Accessed: 21 April 2021)
  32. BBC News (2021) ‘COVID: Oxford jab offers less S Africa variant protection’, BBC News. 7 February 2021. Available at: https://www.bbc.com/news/uk-55967767 (Accessed: 6 April 2021).
  33. Wise, J. (2021) ‘COVID-19: The E484K mutation and the risks it poses’, The BMJ, p. n359. doi: 10.1136/bmj.n359. Sample, I. (2021) ‘What do we know about the Indian coronavirus variant?’, The Guardian, 19 April 2021. Available at: https://www.theguardian.com/world/2021/apr/19/what-do-we-know-about-the-indian-coronavirus-variant (Accessed: 22 April 2021)
  34. World Health Organisation (2021) Coronavirus disease (COVID-19): Vaccines. Available at: https://www.who.int/news-room/q-a-detail/coronavirus-disease-(COVID-19)-vaccines (Accessed: 6 April 2021)
  35. ibid.
  36. The Royal Society provides a different categorisation, between measures demonstrating the subject is not infectious (PCR and Lateral Flow tests) and those suggesting the subject is immune and so will not become infectious (antibody tests and vaccination). Edgar Whitley, a member of our expert deliberative panel, distinguishes between ‘red light’ measures, which say a person is potentially infectious and should self-isolate, and ‘green light’ ones, which say a person tests negative and is not infectious.
  37. Asai, T. (2020) ‘COVID-19: accurate interpretation of diagnostic tests—a statistical point of view’, Journal of Anesthesia. doi: 10.1007/s00540-020-02875-8.
  38. Kucirka, L. M. et al. (2020) ‘Variation in False-Negative Rate of Reverse Transcriptase Polymerase Chain Reaction–Based SARS-CoV-2 Tests by Time Since Exposure’, Annals of Internal Medicine. doi: 10.7326/M20-1495
  39. European Centre for Disease Prevention and Control (2021) Immune responses and immunity to SARS-CoV-2, European Centre for Disease Prevention and Control. Available at: https://www.ecdc.europa.eu/en/COVID-19/latest-evidence/immune-responses (Accessed: 10 February 2021).
  40. Ainsworth, M. et al. (2020) ‘Performance characteristics of five immunoassays for SARS-CoV-2: a head-to-head benchmark comparison’, The Lancet Infectious Diseases, 20(12), pp. 1390–1400. doi: 10.1016/S1473-3099(20)30634-4.
  41. European Centre for Disease Prevention and Control (2021) Immune responses and immunity to SARS-CoV-2, European Centre for Disease Prevention and Control. Available at: https://www.ecdc.europa.eu/en/COVID-19/latest-evidence/immune-responses (Accessed: 10 February 2021).
  42. Kellam, P. and Barclay, W. (2020) ‘The dynamics of humoral immune responses following SARS-CoV-2 infection and the potential for reinfection’, Journal of General Virology, 101(8), pp. 791–797. doi: 10.1099/jgv.0.001439.
  43. Drury, J. et al. (2021) Behavioural responses to Covid-19 health certification: A rapid review. 9 April 2021. Available at: https://www.medrxiv.org/content/10.1101/2021.04.07.21255072v1 (Accessed: 13 April 2021)
  44. ibid.
  45. Brianna Miller, Ryan Wain, and George Alderman (2021) ‘Introducing a Global COVID Travel Pass to Get the World Moving Again’, Tony Blair Institute for Global Change. Available at: https://institute.global/policy/introducing-global-COVID-travel-pass-get-world-moving-again (Accessed: 6 April 2021).
  46. World Health Organisation (2021) Interim position paper: considerations regarding proof of COVID-19 vaccination for international travellers. Available at: https://www.who.int/news-room/articles-detail/interim-position-paper-considerations-regarding-proof-of-COVID-19-vaccination-for-international-travellers (Accessed: 6 April 2021).
  47. World Health Organisation (2021) Call for public comments: Interim guidance for developing a Smart Vaccination Certificate – Release Candidate 1. Available at: https://www.who.int/news-room/articles-detail/call-for-public-comments-interim-guidance-for-developing-a-smart-vaccination-certificate-release-candidate-1 (Accessed: 6 April 2021).
  48. SPI-M-O (2020) Consensus statement on events and gatherings, 19 August 2020. Available at: https://www.gov.uk/government/publications/spi-m-o-consensus-statement-on-events-and-gatherings-19-august-2020 (Accessed: 13 April 2021)
  49. Patrick Gracey, Response to Ada Lovelace Institute call for evidence.
  50. Walker, P. (2021) ‘UK arts figures call for Covid certificates to revive industry’, The Guardian. 23 April 2021. Available at: http://www.theguardian.com/culture/2021/apr/23/uk-arts-figures-covid-certificates-revive-industry-letter (Accessed: 5 May 2021).
  51. Silverstone (2021), Summer sporting events support Covid certification, 9 April 2021. Available at: https://www.silverstone.co.uk/news/summer-sporting-events-support-covid-certification-review (Accessed: 22 April 2021).
  52. BBC News (2021) ‘Pimlico Plumbers to make workers get vaccinations’. BBC News. Available at: https://www.bbc.co.uk/news/business-55654229 (Accessed: 13 April 2021).
  53. Leadership and Worker Engagement Forum (2021) ‘Management of risk when planning work: The right priorities’, Leadership and worker involvement toolkit, p. 1. Available at: https://www.hse.gov.uk/construction/lwit/assets/downloads/hierarchy-risk-controls.pdf.
  54. Department of Health and Social Care (2021) ‘Consultation launched on staff COVID-19 vaccines in care homes with older adult residents’. GOV.UK. Available at: https://www.gov.uk/government/news/consultation-launched-on-staff-covid-19-vaccines-in-care-homes-with-older-adult-residents (Accessed: 14 April 2021)
  55. Full Fact (2021) Is there a precedent for mandatory vaccines for care home workers? Available at: https://fullfact.org/health/mandatory-vaccine-care-home-hepatitis-b/ (Accessed: 6 April 2021).
  56. House of Commons Work and Pensions Committee. (2021) Oral evidence: Health and Safety Executive HC 39. 17 March 2021. Available at: https://committees.parliament.uk/oralevidence/1910/pdf/ (Accessed: 6 April 2021). Q178
  57. Acas (2021) Getting the coronavirus (COVID-19) vaccine for work. [online] Available at: https://www.acas.org.uk/working-safely-coronavirus/getting-the-coronavirus-vaccine-for-work (Accessed: 6 April 2021).
  58. Pakes, A. (2020) ‘Workplace digital monitoring and surveillance: what are my rights?’, Prospect. Available at: https://prospect.org.uk/news/workplace-digital-monitoring-and-surveillance-what-are-my-rights/ (Accessed: 6 April 2021).
  59. Allegretti, A. and Booth, R. (2021) ‘Covid-status certificate scheme could be unlawful discrimination, says EHRC’. The Guardian. 14 April 2021. Available at: https://www.theguardian.com/world/2021/apr/14/covid-status-certificates-may-cause-unlawful-discrimination-warns-ehrc (Accessed: 14 April 2021).
  60. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  61. European Court of Human Rights (2014) Case of Brincat and Others v. Malta. Available at: http://hudoc.echr.coe.int/eng?i=001-145790 (Accessed: 6 April 2021).
  62. Ministry of Health (2021) What is a Green Pass? Available at: https://corona.health.gov.il/en/directives/green-pass-info/ (Accessed: 6 April 2021). Ministry of Health (2021) Traffic Light App for Businesses. Available at: https://corona.health.gov.il/en/directives/biz-ramzor-app/ (Accessed: 8 April 2021).
  63. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021)
  64. Beduschi, A. (2020) Digital Health Passports for COVID-19: Data Privacy and Human Rights Law. University of Exeter. Available at: https://socialsciences.exeter.ac.uk/media/universityofexeter/collegeofsocialsciencesandinternationalstudies/lawimages/research/Policy_brief_-_Digital_Health_Passports_COVID-19_-_Beduschi.pdf (Accessed: 6 April 2021).
  65. Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence
  66. ibid.
  67. Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence.
  68. Beduschi, A. (2020)
  69. European Court of Human Rights. (2020) Guide on Article 8 of the European Convention on Human Rights. Available at: https://www.echr.coe.int/documents/guide_art_8_eng.pdf (Accessed: 6 April 2021).
  70. Access Now, Response to Ada Lovelace Institute call for evidence
  71. Privacy International (2020) “Anytime and anywhere”: Vaccination passports, immunity certificates, and the permanent pandemic. Available at: http://privacyinternational.org/long-read/4350/anytime-and-anywhere-vaccination-passports-immunity-certificates-and-permanent (Accessed: 26 April 2021).
  72. Douglas, T. (2021) ‘Cross Post: Vaccine Passports: Four Ethical Objections, and Replies’. Practical Ethics. Available at: http://blog.practicalethics.ox.ac.uk/2021/03/cross-post-vaccine-passports-four-ethical-objections-and-replies/ (Accessed: 8 April 2021).
  73. Brown, R. C. H. et al. (2020) ‘Passport to freedom? Immunity passports for COVID-19’, Journal of Medical Ethics, 46(10), pp. 652–659. doi: 10.1136/medethics-2020-106365.
  74. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence; Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence
  75. Beduschi, A. (2020).
  76. Black, I. and Forsberg, L. (2021) ‘Inoculate to Imbibe? On the Pub Landlord Who Requires You to be Vaccinated against COVID’. Practical Ethics. Available at: http://blog.practicalethics.ox.ac.uk/2021/03/inoculate-to-imbibe/ (Accessed: 6 April 2021).
  77. Hindu Council UK (2021) Supporting Nationwide Vaccination Programme. 19 January 2021. Available at: http://www.hinducounciluk.org/2021/01/19/supporting-nationwide-vaccination-programme/ (Accessed: 6 April 2021); Ladaria Ferrer, L. and Morandi, G. (2020) ‘Note on the morality of using some anti-COVID-19 vaccines’. Vatican. Available at: https://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_con_cfaith_doc_20201221_nota-vaccini-antiCOVID_en.html (Accessed: 6 April 2021); Kadri, S. (2021) ‘For Muslims wary of the COVID vaccine: there’s every religious reason not to be’. The Guardian. 18 February 2021. Available at: http://www.theguardian.com/commentisfree/2021/feb/18/muslims-wary-COVID-vaccine-religious-reason (Accessed: 6 April 2021).
  78. Office for National Statistics (2021) Coronavirus and vaccination rates in people aged 50 years and over by socio-demographic characteristic, England: 8 December 2020 to 12 April 2021. 6 May 2021. Available at: www.ons.gov.uk.
  79. Schraer, R. (2021) ‘Covid: Black leaders fear racist past feeds mistrust in vaccine’. BBC News. 6 May 2021. Available at: https://www.bbc.co.uk/news/health-56813982 (Accessed: 7 May 2021)
  80. Allegretti, A. and Booth, R. (2021).
  81. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  82. Black, I. and Forsberg, L. (2021).
  83. Beduschi, A. (2020).
  84. Thomas, N. (2021) ‘Vaccine passports: path back to normality or problem in the making?’, Reuters, 5 February 2021. Available at: https://www.reuters.com/article/us-health-coronavirus-britain-vaccine-pa-idUSKBN2A4134 (Accessed: 6 April 2021).
  85. Buolamwini, J. and Gebru, T. (2018) ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’, in Conference on Fairness, Accountability and Transparency. PMLR, pp. 77–91. Available at: http://proceedings.mlr.press/v81/buolamwini18a.html (Accessed: 6 April 2021).
  86. Kofler, N. and Baylis, F. (2020) ‘Ten reasons why immunity passports are a bad idea’, Nature, 581(7809), pp. 379–381. doi: 10.1038/d41586-020-01451-0.
  87. ibid.
  88. Olivarius, K. (2019) ‘Immunity, Capital, and Power in Antebellum New Orleans’, The American Historical Review, 124(2), pp. 425–455. doi: 10.1093/ahr/rhz176.
  89. Access Now, Response to Ada Lovelace Institute call for evidence.
  90. Dr Btihaj Ajana, Response to Ada Lovelace Institute call for evidence.
  91. Pai, M. (2021) ‘How Vaccine Passports Will Worsen Inequities In Global Health,’ Nature Portfolio Microbiology Community. Available at: http://naturemicrobiologycommunity.nature.com/posts/how-vaccine-passports-will-worsen-inequities-in-global-health (Accessed: 6 April 2021).
  92. Merrick, J. (2021) ‘New variants will “come back to haunt” the UK unless it helps tackle worldwide transmission’, iNews, 23 April 2021. Available at: https://inews.co.uk/news/politics/new-variants-will-come-back-to-haunt-the-uk-unless-it-helps-tackle-worldwide-transmission-971041 (Accessed: 5 May 2021).
  93. Kuchler, H. and Williams, A. (2021) ‘Vaccine makers say IP waiver could hand technology to China and Russia’, Financial Times, 25 April 2021. Available at: https://www.ft.com/content/fa1e0d22-71f2-401f-9971-fa27313570ab (Accessed: 5 May 2021).
  94. Digital, Culture, Media and Sport Committee Sub-Committee on Online Harms and Disinformation (2021). Oral evidence: Online harms and the ethics of data, HC 646. 26 January 2021. Available at: https://committees.parliament.uk/oralevidence/1586/html/ (Accessed: 9 April 2021).
  95. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  96. A principle that argues reforms should not be made until the reasoning behind the existing state of affairs is understood, inspired by a quote from G. K. Chesterton’s The Thing (1929) arguing that an intelligent reformer would not remove a fence until they know why it was put up in the first place.
  97. Pietropaoli, I. (2021) ‘Part 2: Getting Digital Health Passports Right? Legal, Ethical and Equality Considerations’. British Institute of International and Comparative Law. 1 April 2021. Available at: https://www.biicl.org/blog/23/part-2-getting-digital-health-passports-right-legal-ethical-and-equality-considerations (Accessed: 6 April 2021).
  98. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  99. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  100. Ada Lovelace Institute (2021) International monitor: vaccine passports and COVID status apps. Available at: https://www.adalovelaceinstitute.org/project/international-monitor-vaccine-passports-covid-status-apps/ (Accessed: 5 April 2021).
  101. Pew Research Center (2020) 8 charts on internet use around the world as countries grapple with COVID-19. Available at: https://www.pewresearch.org/fact-tank/2020/04/02/8-charts-on-internet-use-around-the-world-as-countries-grapple-with-covid-19/ (Accessed: 13 April 2021).
  102. Ada Lovelace Institute (2021) The data divide. Available at: https://www.adalovelaceinstitute.org/survey/data-divide/ (Accessed: 6 April 2021).
  103. Pew Research Center (2020).
  104. Electoral Commission (2015) Delivering and costing a proof of identity scheme for polling station voters in Great Britain. Available at: https://www.electoralcommission.org.uk/media/1825 (Accessed: 13 April 2021); Davies, C. (2021). ‘Number of young people with driving licence in Great Britain at lowest on record’, The Guardian. 5 April 2021. Available at: https://www.theguardian.com/money/2021/apr/05/number-of-young-people-with-driving-licence-in-great-britain-at-lowest-on-record (Accessed: 6 May 2021).
  105. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  106. NHS Digital. (2021) NHS e-Referral Service integrated into the NHS App to make managing referrals easier. Available at: https://digital.nhs.uk/news-and-events/latest-news/nhs-e-referral-service-integrated-into-the-nhs-app-to-make-managing-referrals-easier (Accessed: 28 April 2021).
  107. Access Now, Response to Ada Lovelace Institute call for evidence.
  108. For example, see: Mvine at Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021); evidence submitted to the Ada Lovelace Institute from Certus, IOTA, ZAKA, Tony Blair Institute for Global Change, SICPA, Yoti, Good Health Pass.
  109. Danish Government (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 13 April 2021)
  110. Danish Government (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 13 April 2021)
  111. Ada Lovelace Institute (2021) The Citizens’ Biometrics Council. Available at: https://www.adalovelaceinstitute.org/project/citizens-biometrics-council/ (Accessed: 13 April 2021)
  112. Whitley, E. (2021) ‘What must we consider if proof of Covid status is to help reopen the economy?’ LSE Department of Management blog. Available at: https://blogs.lse.ac.uk/management/2021/02/24/what-must-we-consider-if-proof-of-covid-status-is-to-help-reopen-the-economy/ (Accessed: 6 May 2021).
  113. Information Commissioner’s Office (2021) About the DPA 2018. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/introduction-to-data-protection/about-the-dpa-2018/ (Accessed: 6 April 2021).
  114. Beduschi, A. (2020).
  115. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  116. European Data Protection Board and European Data Protection Supervisor (2021), Joint Opinion 04/2021 on the Proposal for a Regulation of the European Parliament and of the Council on a framework for the issuance, verification and acceptance of interoperable certificates on vaccination, testing and recovery to facilitate free movement during the COVID-19 pandemic (Digital Green Certificate). Available at: https://edps.europa.eu/system/files/2021-04/21-03-31_edpb_edps_joint_opinion_digital_green_certificate_en_0.pdf (Accessed: 29 April 2021)
  117. Beduschi, A. (2020).
  118. ibid.
  119. Information Commissioner’s Office (2021) International transfers after the UK exit from the EU Implementation Period. ICO. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/international-transfers-after-uk-exit/ (Accessed: 5 May 2021).
  120. Global Privacy Assembly Executive Committee (2021).
  121. Beduschi, A. (2020).
  122. Global Privacy Assembly (2021) GPA Executive Committee joint statement on the use of health data for domestic or international travel purposes. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 13 April 2021).
  123. Information Commissioner’s Office (2021) Principle (c): Data minimisation. ICO. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/principles/data-minimisation/ (Accessed: 6 April 2021).
  124. Denham. E., (2021) ‘Blog: Data Protection law can help create public trust and confidence around COVID-status certification schemes’. ICO. Available at: https://ico.org.uk/about-the-ico/news-and-events/blog-data-protection-law-can-help-create-public-trust-and-confidence-around-COVID-status-certification-schemes/ (Accessed: 6 April 2021).
  125. Illmer, A. (2021) ‘Singapore reveals COVID privacy data available to police’, BBC News, 5 January 2021. Available at: https://www.bbc.com/news/world-asia-55541001 (Accessed: 6 April 2021). Gross, A. and Parker, G. (2020) Experts decry move to share COVID test and trace data with police, Financial Times. Available at: https://www.ft.com/content/d508d917-065c-448e-8232-416510592dd1 (Accessed: 6 April 2021).
  126. Halpin, H. (2020) ‘Vision: A Critique of Immunity Passports and W3C Decentralized Identifiers’, in van der Merwe, T., Mitchell, C., and Mehrnezhad, M. (eds) Security Standardisation Research. Cham: Springer International Publishing (Lecture Notes in Computer Science), pp. 148–168. doi: 10.1007/978-3-030-64357-7_7.
  127. FHIR (2019) 2019 HL7 FHIR Release 4. Available at: http://www.hl7.org/fhir/ (Accessed: 21 April 2021).
  128. Doteveryone (2019) Consequence scanning, an agile practice for responsible innovators. Available at: https://doteveryone.org.uk/project/consequence-scanning/ (Accessed: 21 April 2021)
  129. NHS Digital (2020) DCB3051 Identity Verification and Authentication Standard for Digital Health and Care Services. Available at: https://digital.nhs.uk/data-and-information/information-standards/information-standards-and-data-collections-including-extractions/publications-and-notifications/standards-and-collections/dcb3051-identity-verification-and-authentication-standard-for-digital-health-and-care-services (Accessed: 7 April 2021).
  130. Royal College of General Practitioners (2021) RCGP submission for the COVID-status Certification Review call for evidence. Available at: https://www.rcgp.org.uk/policy/rcgp-consultations/covid-status-certification-review.aspx (Accessed: 6 April 2021).
  131. Say, M. (2021) ‘Government gives Verify a stay of execution.’ UKAuthority. Available at: https://www.ukauthority.com/articles/government-gives-verify-a-stay-of-execution/ (Accessed: 5 May 2021).
  132. Cabinet Office and Lopez. J., (2021) ‘Julia Lopez speech to The Investing and Savings Alliance’. GOV.UK. Available at: https://www.gov.uk/government/speeches/julia-lopez-speech-to-the-investing-and-savings-alliance (Accessed: 6 April 2021).
  133. For more on digital identity during the pandemic see: Freeguard, G. and Shepheard, M. (2020) ‘Digital government during the coronavirus crisis’. Institute for Government. Available at: https://www.instituteforgovernment.org.uk/sites/default/files/publications/digital-government-coronavirus.pdf.
  134. Department for Digital, Culture, Media and Sport (2021) The UK digital identity and attributes trust framework, GOV.UK. Available at: https://www.gov.uk/government/publications/the-uk-digital-identity-and-attributes-trust-framework/the-uk-digital-identity-and-attributes-trust-framework (Accessed: 6 April 2021).
  135. Access Now, Response to Ada Lovelace Institute call for evidence.
  136. iProov (2021) Covid-19 Passport from iProov and Mvine Moves Into Trial Phase. Available at: https://www.iproov.com/press/uk-covid19-passport-moves-into-trial-phase (Accessed: 7 April 2021).
  137. Ada Lovelace Institute (2021) The socio-technical challenges of designing and building a vaccine passport system. Available at: https://www.youtube.com/watch?v=Md9CLWgdgO8&t=2s (Accessed: 7 April 2021).
  138. On general trust, polls include Ipsos MORI Veracity Index. On data trust, see RSS and ODI polling.
  139. Sommer, A. K. (2021) ‘Some foreigners in Israel are finally able to obtain COVID vaccine pass’. Haaretz.com. Available at: https://www.haaretz.com/israel-news/.premium-some-foreigners-in-israel-are-finally-able-to-obtain-COVID-19-green-passport-1.9683026 (Accessed: 8 April 2021).
  140. Cabinet Office (2020) ‘Ventilator Challenge hailed a success as UK production finishes’. GOV.UK. Available at: https://www.gov.uk/government/news/ventilator-challenge-hailed-a-success-as-uk-production-finishes (Accessed: 6 April 2021).
  141. For example, evidence received from techUK and World Health Pass.
  142. Our World in Data (2021) Coronavirus (COVID-19) Vaccinations. Available at: https://ourworldindata.org/covid-vaccinations (Accessed: 13 April 2021)
  143. FT Visual and Data Journalism team (2021) Covid-19 vaccine tracker: the global race to vaccinate. Financial Times. Available at: https://ig.ft.com/coronavirus-vaccine-tracker/ (Accessed: 13 April 2021)
  144. Full Fact. (2020) How does the new coronavirus compare to influenza? Available at: https://fullfact.org/health/coronavirus-compare-influenza/ (Accessed: 6 April 2021).
  145. BBC News (2021) ‘Coronavirus: Third wave will “wash up on our shores”, warns Johnson’. BBC News. 22 March 2021. Available at: https://www.bbc.com/news/uk-politics-56486067 (Accessed: 6 April 2021).
  146. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  147. Tony Blair Institute for Global Change (2021) The New Necessary: How We Future-Proof for the Next Pandemic. Available at https://institute.global/policy/new-necessary-how-we-future-proof-next-pandemic (Accessed: 13 April 2021)
  148. Paton. G., (2021) ‘Cost of home Covid tests for travellers halved as companies accused of “profiteering”.’ The Times. 14 April 2021. Available at: https://www.thetimes.co.uk/article/cost-of-home-covid-tests-for-travellers-halved-as-companies-accused-of-profiteering-lh76wb585 (Accessed: 13 April 2021)
  149. Department of Health & Social Care (2021) ‘30 million people in UK receive first dose of coronavirus (COVID-19) vaccine’. GOV.UK. Available at: https://www.gov.uk/government/news/30-million-people-in-uk-receive-first-dose-of-coronavirus-COVID-19-vaccine (Accessed: 6 April 2021).
  150. Ipsos (2021) Global attitudes: COVID-19 vaccines. 9 February 2021. Available at: https://www.ipsos.com/en/global-attitudes-COVID-19-vaccine-january-2021 (Accessed: 6 April 2021).
  151. Reicher, S. and Drury, J. (2021) ‘How to lose friends and alienate people? On the problems of vaccine passports’, The BMJ, 1 April 2021. Available at: https://blogs.bmj.com/bmj/2021/04/01/how-to-lose-friends-and-alienate-people-on-the-problems-of-vaccine-passports/ (Accessed: 6 April 2021).
  152. Smith, M. (2021) ‘International study: How many people will take the COVID vaccine?’, YouGov, 15 January 2021. Available at: https://yougov.co.uk/topics/health/articles-reports/2021/01/15/international-study-how-many-people-will-take-covi (Accessed: 6 April 2021).
  153. Reicher, S. and Drury, J. (2021).
  154. Razai, M. S. et al. (2021) ‘COVID-19 vaccine hesitancy among ethnic minority groups’, The BMJ, 372, p. n513. doi: 10.1136/bmj.n513.
  155. Royal College of General Practitioners (2021) ‘RCGP submission for the COVID-status Certification Review call for evidence’., Royal College of General Practitioners. Available at: https://www.rcgp.org.uk/policy/rcgp-consultations/COVID-status-certification-review.aspx (Accessed: 6 April 2021).
  156. Access Now, Response to Ada Lovelace Institute call for evidence.
  157. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  158. ibid.
  159. ibid.
  160. ibid.
  161. Zimmer, C., Corum, J. and Wee, S.-L. (no date) ‘Coronavirus Vaccine Tracker’, The New York Times. Available at: https://www.nytimes.com/interactive/2020/science/coronavirus-vaccine-tracker.html (Accessed: 21 April 2021).
  162. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  163. Times of Israel Staff (2021) ‘Thousands reportedly attempt to obtain easily forged vaccinated certificate’. Times of Isreal. 18 February 2021. Available at: https://www.timesofisrael.com/thousands-reportedly-attempt-to-obtain-easily-forged-vaccinated-certificate/(Accessed: 6 April 2021).
  164. Senyor, E. (2021) ‘NIS 1,500 for Green Pass: Police arrest seller of illegal vaccine certificates’, ynetnews. 21 March 2021. Available at: https://www.ynetnews.com/article/Bk00wJ11B400 (Accessed: 6 April 2021).
  165. Europol (2021) ‘Early Warning Notification – The illicit sales of false negative COVID-19 test certificates’, Europol. 1 February 2021. Available at: https://www.europol.europa.eu/early-warning-notification-illicit-sales-of-false-negative-COVID-19-test-certificates (Accessed: 6 April 2021).
  166. Lewandowsky, S. et al. (2021) ‘Public acceptance of privacy-encroaching policies to address the COVID-19 pandemic in the United Kingdom’, PLOS ONE, 16(1), p. e0245740. doi: 10.1371/journal.pone.0245740.
  167. 165 Deltapoll (2021). Political Trackers and Lockdown. Available at: http://www.deltapoll.co.uk/polls/political-trackers-and-lockdown (Accessed: 7 April 2021).
  168. Ibbetson, C. (2021) ‘Most Britons support a COVID-19 vaccine passport system’. YouGov. Available at: https://yougov.co.uk/topics/health/articles-reports/2021/03/05/britons-support-COVID-19-vaccine-passport-system (Accessed: 7 April 2021).
  169. YouGov (2021). Daily Question | 02/03/2021 Available at: https://yougov.co.uk/topics/health/survey-results/daily/2021/03/02/9355e/2 (Accessed: 7 April 2021).
  170. Ipsos MORI. (2021) Majority of Britons support vaccine passports but recognise concerns in new Ipsos MORI UK KnowledgePanel poll. Available at: https://www.ipsos.com/ipsos-mori/en-uk/majority-britons-support-vaccine-passports-recognise-concerns-new-ipsos-mori-uk-knowledgepanel-poll (Accessed: 9 April 2021).
  171. King’s College London. (2021) Covid vaccines: passports, blood clots and changing trust in government. Available at: https://www.kcl.ac.uk/news/covid-vaccines-passports-blood-clots-and-changing-trust-in-government (Accessed: 9 April 2021).
  172. De Montfort University. (2021). Study shows UK punters see no need for pub vaccine passports. Available at: https://www.dmu.ac.uk/about-dmu/news/2021/march/-study-shows-uk-punters-see-no-need-for-pub-vaccine-passports.aspx (Accessed: 7 April 2021).
  173. Indigo (2021) Vaccine Passports – What do audiences think? Available at: https://www.indigo-ltd.com/blog/vaccine-passports-what-do-audiences-think (Accessed: 7 April 2021).
  174. Serco Institute (2021) Vaccine Passports & UK Public Opinion. Available at: https://www.sercoinstitute.com/news/2021/vaccine-passports-uk-public-opinion (Accessed: 7 April 2021).
  175. Studdert, M. H. and D. (2021) ‘Reaching agreement on COVID-19 immunity “passports” will be difficult’, Brookings, 27 January 2021. Available at: https://www.brookings.edu/blog/usc-brookings-schaeffer-on-health-policy/2021/01/27/reaching-agreement-on-COVID-19-immunity-passports-will-be-difficult/ (Accessed: 7 April 2021). ELABE (2021) Les Français et l’épidémie de COVID-19 – Vague 33. 3 March 2021. Available at: https://elabe.fr/epidemie-COVID-19-vague33/ (Accessed: 7 April 2021).
  176. Ada Lovelace Institute. (2021) The Citizens’ Biometrics Council. Available at: https://www.adalovelaceinstitute.org/report/citizens-biometrics-council/ (Accessed: 9 April 2021).
  177. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  178. Beacon, R. and Innes, K. (2021) The Case for Digital Health Passports. Tony Blair Institute for Global Change. Available at: https://institute.global/sites/default/files/inline-files/Tony%20Blair%20Institute%2C%20The%20Case%20for%20Digital%20Health%20Passports%2C%20February%202021_0_0.pdf (Accessed: 6 April 2021).
  179. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  180. Pietropaoli, I. (2021) Part 2: Getting Digital Health Passports Right? Legal, Ethical and Equality Considerations. Available at: https://www.biicl.org/blog/23/part-2-getting-digital-health-passports-right-legal-ethical-and-equality-considerations (Accessed: 6 April 2021).
  181. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  182. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  183. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  184. medConfidential, Response to Ada Lovelace Institute call for evidence
  185. Dr Btihaj Ajana, Response to Ada Lovelace Institute call for evidence
  186. Nuffield Council on Bioethics (2020) Rapid policy briefing: COVID-19 antibody testing and ‘immunity certification’. Available at: https://www.nuffieldbioethics.org/assets/pdfs/Immunity-certificates-rapid-policy-briefing.pdf (Accessed: 6 April 2021).
  187. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  188. ibid.

1–12 of 110

Skip to content

As part of this work, the Ada Lovelace Institute, the University of Exeter’s Institute for Data Science and Artificial Intelligence, and the Alan Turing Institute developed six mock AI and data science research proposals that represent hypothetical submissions to a Research Ethics Committee. An expert workshop found that such case studies are useful training resources for understanding common ethical challenges in AI and data science. Their purpose is to prompt reflection on common research ethics issues and on the societal implications of different AI and data science research projects. The case studies are intended for students, researchers, members of research ethics committees, funders and other actors in the research ecosystem, to help them develop their ability to spot and evaluate common ethical issues in AI and data science research.


 

Executive summary

Research in the fields of artificial intelligence (AI) and data science is often quickly turned into products and services that affect the lives of people around the world. Research in these fields informs the provision of public services like social care, determines which information is amplified on social media and what jobs or insurance people are offered, and even shapes who is deemed a risk to the public by police and security services. The volume of AI and data science research has increased significantly in the last ten years, and these methods are now being applied in other scholarly domains such as history, economics, health sciences and physics.

Figure 1: Number of AI publications in the world 2010-21[1]

Globally, the volume of AI research is increasing year-on-year and currently accounts for more than 4% of all published research.

Since products and services built with AI and data science research can have substantial effects on people’s lives, it is essential that this research is conducted safely and responsibly, and with due consideration for the broader societal impacts it may have. However, the traditional research governance mechanisms that are responsible for identifying and mitigating ethical and societal risks often do not address the challenges presented by AI and data science research.

As several prominent researchers have highlighted,[2] inadequately reviewed AI and data science research can create risks that are carried downstream into subsequent products,[3] services and research.[4] Studies have shown these risks can disproportionately impact people from marginalised and minoritised communities, exacerbating racial and societal inequalities.[5] If left unaddressed, unexamined assumptions and unintended consequences (paid forward into deployment as ‘ethical debt’[6]) can lead to significant harms to individuals and society. These harms can be challenging to address or mitigate after the fact.

Ethical debt also poses a risk to the longevity of the field of AI: if researchers fail to demonstrate due consideration for the broader societal implications of their work, it may reduce public trust in the field. This could lead to it becoming a domain that future researchers find undesirable to work in – a challenge that has plagued research into nuclear power and the health effects of tobacco.[7]

To address these problems, there have been increasing calls from within the AI and data science research communities for more mechanisms, processes and incentives for researchers to consider the broader societal impacts of their research.[8]

In many corporate and academic research institutions, one of the primary mechanisms for assessing and mitigating ethical risks is the use of Research Ethics Committees (RECs), also known in some regions as Institutional Review Boards (IRBs) or Ethics Review Committees (ERCs). Since the 1960s, these committees have been empowered to review research before it is undertaken and can reject proposals unless changes are made to the proposed research design.

RECs generally consist of members of a specific academic department or corporate institution, who are tasked with evaluating research proposals before the research begins. Their evaluations are based on a combination of normative and legal principles that have developed over time, originally in relation to biomedical human subjects research. A REC’s role is to help ensure that researchers justify their decisions about how research is conducted, thereby mitigating the potential harms it may pose.

However, the current role, scope and function of most academic and corporate RECs are insufficient for the myriad ethical challenges that AI and data science research can pose. For example, the scope of REC review is traditionally limited to research involving human subjects. This means that the many AI and data science projects that are not considered a form of direct intervention in the body or life of an individual human subject are exempt from many research ethics review processes.[9] In addition, a significant amount of AI and data science research involves the use of publicly available and repurposed datasets, which are considered exempt from ethics review under many current research ethics guidelines.[10]

If AI and data science research is to be done safely and responsibly, RECs must be equipped to examine the full spectrum of risks, harms and impacts that can arise in these fields.

In this report, we explore the role that academic and corporate RECs play in evaluating AI and data science research for ethical issues, and also investigate the kinds of common challenges these bodies face.

The report draws on two main sources of evidence: a review of existing literature on RECs and research ethics challenges, and a series of workshops and interviews with members of RECs and researchers who work on AI and data science ethics.

Challenges faced by RECs

Our evaluation of this evidence uncovered six challenges that RECs face when addressing AI and data science research:

Challenge 1: Many RECs lack the resources, expertise and training to appropriately address the risks that AI and data science pose.  

Many RECs in academic and corporate environments struggle with inadequate resources and training on the variety of issues that AI and data science can raise. The work of RECs is often voluntary and unpaid, meaning that members of RECs may not have the requisite time or training to appropriately review an application in its entirety. Studies suggest that RECs are often viewed by researchers as compliance bodies rather than mechanisms for improving the safety and impact of their research.

Challenge 2: Traditional research ethics principles are not well suited for AI research.

RECs review research using a set of normative and legal principles that are rooted in biomedical, human-subject research practices, which assume a researcher-subject relationship rather than a researcher-data subject relationship. This distinction strains traditional principles of consent, privacy and autonomy in AI research, and creates confusion for RECs trying to apply these principles to novel forms of research.

Challenge 3: Specific principles for AI and data science research are still emerging and are not consistently adopted by RECs.

The last few years have seen an emerging series of AI ethics principles aimed at the development and deployment of AI systems. However, these principles have not been well adapted for AI and data science research practices, signalling a need for institutions to translate these principles into actionable questions and processes for ethics reviews.

Challenge 4: Multi-site or public-private partnerships can exacerbate existing challenges of governance and consistency of decision-making.

An increasing amount of AI research involves multi-site studies and public-private partnerships. These can lead to multiple REC reviews of the same research, which can expose differing standards of ethical review between institutions and present a barrier to completing research in a timely way.

Challenge 5: RECs struggle to review potential harms and impacts that arise throughout AI and data science research.

REC reviews of AI and data science research are ex ante assessments, done before research takes place. However, many of the harms and risks in AI research may only become evident at later stages of the research. Furthermore, many of the types of harms that can arise – such as issues of bias, or wider misuses of AI or data – are challenging for a single committee to predict. This is particularly true with the broader societal impacts of AI research, which require a kind of evaluation and review that RECs currently do not undertake.

Challenge 6: Corporate RECs lack transparency in relation to their processes.

Motivated by a concern to protect their intellectual property and trade secrets, many private-sector RECs for AI research do not make their processes or decisions publicly accessible and use strict non-disclosure agreements to control the involvement of external experts in their decision-making. In some extreme cases, this lack of transparency has raised suspicion of corporate REC processes from external research partners, which can pose a risk to the efficacy of public-private research partnerships.

Recommendations

To address these challenges, we make the following recommendations:

For academic and corporate RECs

Recommendation 1: Incorporate broader societal impact statements from researchers.

A key issue this report identifies is the need for RECs to incentivise researchers to engage more reflexively with the broader societal impacts of their research, such as the potential environmental impacts of their research, or how their research could be used to exacerbate racial or societal inequalities.

There have been growing calls within the AI and data science research communities for researchers to incorporate these considerations in various stages of their research. Some researchers have called for changes to the peer review process to require statements of potential broader societal impacts,[11] and some AI/machine learning (ML) conferences have experimented with similar requirements in their conference submission process.[12]

RECs can support these efforts by incentivising researchers to engage in reflexive exercises to consider and document the broader societal impacts of their research. Other actors in the research ecosystem (funders, conference organisers, etc.) can also incentivise researchers to engage in these kinds of reflexive exercises.

Recommendation 2: RECs should adopt multi-stage ethics review processes of high-risk AI and data science research.

Many of the challenges that AI and data science raise arise at different stages of research. RECs should experiment with requiring multiple stages of evaluation for research that raises particular ethical concern, such as one evaluation at the point of data collection and a separate evaluation at the point of publication.
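
To make this concrete, the sketch below shows one way such staged sign-off could be recorded. It is a minimal illustration in Python: the stage names and the ReviewRecord structure are hypothetical, not drawn from any existing REC’s process.

  from dataclasses import dataclass, field
  from datetime import date

  # Hypothetical checkpoints for a high-risk AI project; the stage
  # names are illustrative, not any institution's actual process.
  STAGES = ["proposal", "data_collection", "pre_publication"]

  @dataclass
  class ReviewRecord:
      project: str
      approvals: dict = field(default_factory=dict)  # stage -> approval date

      def approve(self, stage: str, when: date) -> None:
          if stage not in STAGES:
              raise ValueError(f"unknown review stage: {stage}")
          self.approvals[stage] = when

      def may_proceed_to(self, stage: str) -> bool:
          # A project may enter a stage only once every earlier
          # checkpoint has been signed off by the committee.
          earlier = STAGES[:STAGES.index(stage)]
          return all(s in self.approvals for s in earlier)

  record = ReviewRecord("text-to-image model study")
  record.approve("proposal", date(2021, 5, 1))
  assert record.may_proceed_to("data_collection")      # proposal approved
  assert not record.may_proceed_to("pre_publication")  # data stage not yet reviewed

The point of the sketch is simply that approval becomes a sequence of gates rather than a single ex ante decision.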

Recommendation 3: Include interdisciplinary and experiential expertise in REC membership.

Many of the risks that AI and data science research pose cannot be understood without engagement with different forms of experiential and subject-matter expertise. RECs must be interdisciplinary bodies if they are to address the myriad issues that AI and data science can pose in different domains, and should incorporate the perspectives of individuals who will ultimately be impacted by the research.

For academic/corporate research institutions

Recommendation 4: Create internal training and knowledge-sharing hubs for researchers and REC members, and enable more cross-institutional knowledge sharing.

These hubs can provide opportunities for cross-institutional knowledge-sharing and ensure institutions do not develop standards of practice in silos. They should collect and share information on the kinds of ethical issues and challenges AI and data science research might raise, including case studies of research that raises challenging ethical issues. In addition to our report, we have developed a resource consisting of six case studies that we believe highlight some of the common ethical challenges that RECs might face.[13]

Recommendation 5: Corporate labs must be more transparent about their decision-making and do more to engage with external partners.

Corporate labs face specific challenges when it comes to AI and data science reviews. While many are better resourced and have experimented with broader societal impact thinking, some of these labs have faced criticism for being opaque about their decision-making processes. Many of these labs make consequential decisions about their research without engaging with local, technical or experiential expertise that resides outside their organisation.

For funders, conference organisers and other actors in the research ecosystem

Recommendation 6: Develop standardised principles and guidance for evaluating AI and data science research.

RECs currently lack standardised principles for evaluating AI and data science research. National research governance bodies like UKRI should work to create a new set of ‘Belmont 2.0’ principles[14] that offer standardised approaches, guidance and methods for evaluating AI and data science research. The development of these principles should draw on a wide set of perspectives from different disciplines and from communities impacted by AI and data science research, including multinational perspectives – particularly from regions that have been historically underrepresented in the development of past research ethics principles.

Recommendation 7: Incentivise a responsible research culture.

AI and data science researchers lack incentives to reflect on and document the societal impacts of their research. Different actors in the research ecosystem can encourage ethical behaviour: funders, for example, can require researchers to produce a broader societal impact statement in order to receive a grant, and conference organisers and journal editors can encourage researchers to include such a statement when submitting research. By creating incentives throughout the research ecosystem, ethical reflection can become more desirable and rewarded.

Recommendation 8: Increase funding and resources for ethical reviews of AI and data science research.

There is an urgent need for institutions and funders to support RECs, including paying for the time of staff and funding external experts to engage in questions of research ethics.

Introduction

The academic fields of AI and data science research have witnessed explosive growth in the last two decades. According to the Stanford AI Index, between 2015 and 2020 the number of AI publications on the open-access publication database arXiv grew from 5,487 to 34,376 (see also Figure 1). As of 2019, AI publications represented 3.8% of all peer-reviewed scientific publications, up from 1.3% in 2011.[15] The vast majority of research appearing in major AI conferences comes from academic and industry institutions based in the European Union, China and the United States of America.[16] AI and data science techniques are also being applied across a range of other academic disciplines, such as history,[17] economics,[18] genomics[19] and biology.[20]
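
Taken at face value, those figures imply a more than six-fold rise in arXiv AI output over five years (34,376 / 5,487 ≈ 6.3) and a near-tripling of AI’s share of peer-reviewed publications between 2011 and 2019 (3.8% / 1.3% ≈ 2.9).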

Compared to many other disciplines, AI and data science have a relatively fast research-to-product pipeline and relatively low barriers for use, making these techniques easily adaptable (though not necessarily well suited) to a range of different applications.[21] While these qualities have led AI and data science to be described as ‘more important than fire and electricity’ by some industry leaders,[22] there have been increased calls from members of the AI research community to require researchers to consider and address ‘failures of imagination’[23] of the potential broader societal impacts and risks of their research.

Figure 2: The research-to-product timeline

This timeline shows how short the research-to-product pipeline for AI can be. It took less than a year from the release of initial research in 2020 and 2021, exploring how to generate images from text inputs, to the first commercial products selling these services.

The sudden growth of AI and data science research has exacerbated the challenges facing traditional research ethics review processes, and highlighted that they are poorly set up to address questions of the broader societal impact of research. Several high-profile instances of controversial AI research have passed institutional ethics review, including image recognition applications that claim to identify homosexuality[24] and criminality[25] – claims that revive the discredited pseudosciences of physiognomy[26] and phrenology.[27] Corporate labs have also seen high-profile examples of unethical research being approved, including a Microsoft chatbot capable of spreading disinformation,[28] and a Google research paper that contributed to the surveillance of China’s Uighur population.[29]

In research institutions, the role of assessing research for ethics issues tends to fall on Research Ethics Committees (RECs), also known in some regions as Institutional Review Boards (IRBs) or Ethics Review Committees (ERCs). Since the 1960s, these committees have been empowered to prevent research from being undertaken unless changes are made to the proposed research design.

These committees generally consist of members of a specific academic department or corporate institution, who are responsible for evaluating research proposals before the research begins. Their evaluations combine normative and legal principles, originally linked to biomedical human subjects research, that have developed over time.

Traditionally, RECs consider only research involving human subjects, and only questions concerning how the research will be conducted. While they are not the only ‘line of defence’ against unethical practices in research, they are the primary actors responsible for mitigating potential harms to research subjects in many forms of research.

The increasing prominence of AI and data science research poses an important question: are RECs well placed and adequately set up to address the challenges that AI and data science research pose? This report explores these challenges that public and private-sector RECs face in evaluations of research ethics and broader societal impact issues in AI and data science research.[30] In doing so, it aims to help institutions that are developing AI research review processes take a holistic and robust approach for identifying and mitigating these risks. It also seeks to provide research institutions and other actors in the research ecosystem – funders, journal editors and conference organisers – with specific recommendations for how they can address these challenges.

This report seeks to address four research questions:

  1. How are RECs in academia and industry currently structured? What role do they play in the wider research ecosystem?
  2. What resources (e.g. moral principles, legal guidance, etc.) are RECs using to guide their reviews of research ethics? What is the scope of these reviews?
  3. What are the most pressing or common challenges and concerns that RECs are facing in evaluations of AI and data science research?
  4. What changes can be made so that RECs and the wider AI and data science research community can better address these challenges?

To address these questions, this report draws on a review of the literature on RECs, research ethics and broader societal impact questions in AI. It also draws on a series of workshops held in May 2021 with 42 members of public and private AI and data science research institutions, along with eight interviews with experts in research ethics and AI issues. More information on our methodology can be found in ‘Methodology and limitations’.

This report begins with an introduction to the history of RECs, how they are commonly structured, and how they commonly operate in corporate and academic environments for AI and data science research. It then discusses six challenges that RECs face – some of which are longstanding issues, others of which are exacerbated by the rise of AI and data science research. We conclude the report with a discussion of these findings and eight recommendations for actions that RECs and other actors in the research ecosystem can take to better address the ethical risks of AI and data science research.

Context for Research Ethics Committees and AI research

This section provides a brief history of modern research ethics and Research Ethics Committees (RECs), discusses their scope and function, and highlights some differences between how they operate in corporate and academic environments. It places RECs in the context of other actors in the ‘AI research ecosystem’, such as organisers of AI and data science conferences, or editors of AI journal publications who set norms of behaviour and incentives within the research community. Three key points to take away from this chapter are:

  1. Modern research ethics questions are mostly focused on ethical challenges that arise in research methodology, and exclude consideration of the broader societal impacts of research.
  2. Current RECs and research ethics principles stem from biomedical research, which analyses questions of research ethics through the lens of a patient-clinician relationship, and are not well suited to the more distant relationship between researcher and data subject in AI and data science.
  3. Academic and corporate RECs in AI research share common aims, but with some important differences. Corporate AI labs tend to have more resources, but may also be less transparent about their processes.

What is a REC, and what is its scope and function?

Every day, RECs review applications to undertake research for potential ethical issues that may arise. Broadly defined, RECs are institutional bodies made up of members of an institution (and, in some instances, independent members from outside that institution) who are charged with evaluating applications to undertake research before it begins. They make judgements about the suitability of research, and have the power to approve researchers to go ahead with a project or to request changes before research is undertaken. Many academic journals and conferences will not publish or accept research that has not passed review by a Research Ethics Committee (though, as we discuss below, not all research requires review).

RECs operate with two purposes in mind:

  1. To protect the welfare and interests of prospective and current research participants and minimise risk of harm to them.
  2. To promote ethical and societally valuable research.

In meeting these aims, RECs traditionally conduct an ex ante evaluation only once, before a research project begins. In understanding what kinds of ethical questions RECs evaluate, it is also helpful to disentangle three distinct categories of ethical risk in research:[31]

  1. Mitigating research process harms (often confusingly called ‘research ethics’).
  2. Research integrity.
  3. Broader societal impacts of research (also referred to as Responsible Research and Innovation, or RRI).

REC evaluations focus entirely on mitigating the ethical risks arising from research methodology, such as how the researcher intends to protect the privacy of a participant, anonymise their data or ensure they have received informed consent.[32] In their evaluations, RECs may look at whether the research poses a serious risk to the interests and safety of research subjects, or whether the researchers are operating in accordance with local laws governing data protection and the intellectual property ownership of any research findings.

REC evaluations may also probe on whether the researchers have assessed and minimised potential harm to research participants, and seek to balance this against the benefits of the research for society at large.[33] However, there are limitations to the aim of promoting ethical and societally valuable research. There are few frameworks for how RECs can consider the benefit of research for society at large. Additionally, this concept of mitigating methodological risks does not extend to considerations of whether the research poses risks to society at large, or to individuals beyond the subjects of that research.

 

Three different kinds of ethical risks in research

1.    Mitigating research process harms (also known as ‘research ethics’): The term research ethics refers to the principles and processes governing how to mitigate the risks to research subjects. Research ethics principles are mostly concerned with the protection, safety and welfare of individual research participants, such as gaining their informed consent to participate in research or anonymising their data to protect their privacy.

 

2.    Research integrity: These are principles governing the credibility and integrity of the research, including whether it is intellectually honest, transparent, robust and replicable.[34] In most fields, research integrity is evaluated via the peer review process after research is completed.

 

3.    Broader societal impacts of research: This refers to the potential positive and negative societal and environmental implications of research, including unintended uses (such as misuse) of research. A similar concept is Responsible Research and Innovation (RRI), which refers to steps that researchers can undertake to anticipate and address the potential downstream risks and implications of their research.[35]

RECs, however, often do not evaluate for questions of research integrity, which is concerned with whether research is intellectually honest, transparent, robust and replicable.[36] These can include questions relating to whether data has been fabricated or misrepresented, whether research is reproducible, stating the limitations and assumptions of the research, and disclosing conflicts of interests.[37] The intellectual integrity of researchers is important for ensuring public trust in science, which can be eroded in cases of misconduct.[38]

Some RECs may consider complaints about research integrity issues that arise after research has been published, but these issues are often not considered as part of their ethics reviews. RECs may, however, assess a research applicant’s bona fides to determine if they are someone who appears to have integrity (such as if they have any conflicts of interest with the subject of their study). Usually, questions of research integrity are left to other actors in the research ecosystem, such as peer reviewers and whistleblowers who may notify a research institution or the REC of questionable research findings or dishonest behaviour. Other governance mechanisms for addressing research integrity issues include publishing the code or data of the research so that others may attempt to reproduce findings.

Another area of ethical risk that contemporary RECs do not evaluate (but which we argue they should) is the responsibility of researchers to consider the broader effects of their research on society.[39] This is the concern of Responsible Research and Innovation (RRI), which moves beyond questions of research integrity and is ‘an approach that anticipates and assesses potential implications and societal expectations with regard to research and innovation, with the aim to foster the design of inclusive and sustainable research and innovation’.[40]

RRI is concerned with the integration of mechanisms of reflection, anticipation and inclusive deliberation around research and innovation, and relies on individual researchers to incorporate these practices in their research. This includes analysing potential economic, societal or environmental impacts that arise from research and innovation. RRI is a more recent development that emerged separately to RECs, stemming in part from the Ethical Legal and Societal Implications Research (ELSI) programme in the 1990s, which was established to research the broader societal implications of genomics research.[41]

RECs are traditionally not well equipped to assess the subsequent uses of research, or its impacts on society. They often lack the capacity or remit to monitor the downstream uses of research, or to act as an ‘observatory’ for identifying trends in the use or misuse of research they reviewed at inception. This is compounded by the decentralised and fragmentary nature of RECs, which operate independently of each other and often do not evaluate each other’s work.

What principles do RECs rely on to make judgements about research ethics?

In their evaluations, RECs rely on a variety of tools, including laws like the General Data Protection Regulation (GDPR), which covers data protection issues, and discipline-specific norms. At the core of all Research Ethics Committee evaluations is a series of moral principles that have evolved over time. These principles largely stem from the biomedical sciences, and have been codified, debated and edited by international bodies like the World Medical Association and the World Health Organisation. The biomedical model of research ethics is the foundation for how concepts like autonomy and consent were encoded in law,[42] and it often motivates modern discussions about privacy.

Some early modern research ethics codes, like the Nuremberg Principles and the Belmont Report, were developed in response to specific atrocities and scandals involving biomedical research on human subjects. Other codes, like the Declaration of Helsinki, developed out of a field-wide concern to self-regulate before governments stepped in to regulate.[43]

Each code and declaration seeks to address specific ethical issues from a particular regional and historical context. Nonetheless, they are united by two aspects. Firstly, they frame research ethics questions in a way that assumes a clear researcher-subject relationship. Secondly, they all seek to standardise norms of evaluating and mitigating the potential risks caused by research processes, to support REC decisions becoming more consistent between different institutions.

 

Historical principles governing research ethics

 

Nuremberg Code: The Nuremberg trials occurred in 1947 and revealed horrific and inhumane medical experimentation by Nazi scientists on human subjects, primarily concentration camp prisoners. Out of concern that these atrocities might further damage public trust in medical professionals and research,[44] the judges in this trial included a set of universal principles for ‘permissible medical experiments’ in their verdict, which would later become known as the Nuremberg Code.[45] The Code lists ten principles that seek to ensure individual participant rights are protected and outweigh any societal benefit of the research.

 

Declaration of Helsinki: Established by the World Medical Association (WMA), the Helsinki Declaration seeks to articulate universal principles for human subjects research and clinical research practice. The WMA is an international organisation representing physicians from across the globe. The Declaration has been updated repeatedly since its first iteration in 1964, with major updates in 1975, 2000 and 2008. It specifies five basic principles for all human subjects research, as well as further principles specific to clinical research.

 

Belmont Report: This report was written in response to several troubling incidents in the USA, in which patients participating in clinical trials were not adequately informed about the risks involved. These include a 40-year-long experiment by the US Public Health Service and the Tuskegee Institute that sought to study untreated syphilis in Black men. Despite the study having over 600 participants (399 with syphilis, 201 without), the participants were deceived about the risks and nature of the experiment and were not provided with a cure for the disease after one had been developed in the 1940s.[46] These developments led the United States’ National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research to publish the Belmont Report in 1979, which listed several principles for research to follow: justice, beneficence and respect for persons.[47]

 

Council for International Organizations of Medical Sciences Guidelines (CIOMS): CIOMS was formed in 1949 by the World Health Organisation and the United Nations Educational, Scientific and Cultural Organisation (UNESCO), and is made up of a range of biomedical member organisations from across the world. In 2016, it published the International Ethical Guidelines for Health-Related Research Involving Humans,[48] which include specific requirements for research involving vulnerable persons and groups, compensation for research participants, and requirements for researchers and health authorities to engage potential participants and communities in a ‘meaningful participatory process’ in various stages of research.[49]

 

Biomedical research ethics principles touch on a wide variety of issues, including autonomy and consent. The Nuremberg Code specified that, for research to proceed, a researcher must have consent given (i) voluntarily by a (ii) competent and (iii) informed subject (iv) with adequate comprehension. At the time, consent was understood as only applicable to healthy, non-patient participants, and thus excluded patients in clinical trials, access to patient information like medical registers and participants (like children or people with a cognitive impairment) who are unable to give consent.
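
Read as a rule, the Code’s consent test is a strict conjunction: consent fails if any one condition fails. The minimal sketch below illustrates this; the condition names are paraphrases of the Code, not a legal test.

  def nuremberg_consent_valid(voluntary: bool, competent: bool,
                              informed: bool, comprehends: bool) -> bool:
      # Illustrative paraphrase of the Code: consent is valid only if
      # every condition holds; failing any one invalidates the whole.
      return all([voluntary, competent, informed, comprehends])

  # A competent, informed subject who was coerced has not consented.
  assert nuremberg_consent_valid(True, True, True, True)
  assert not nuremberg_consent_valid(False, True, True, True)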

Subsequent research ethics principles have adapted to these scenarios with methods such as legal guardianship, group or community consent, and broad or blanket consent.[50] The Helsinki Declaration requires that consent be given in writing, and states that research subjects can give consent only if they have been fully informed of the study’s purpose, the methods, risks and benefits involved, and their right to withdraw.[51] In all these conceptions of consent, there is a clearly identifiable research subject, who is in some kind of direct relationship with a researcher.

Another area that biomedical research principles touch on is the risk and benefit of research for research subjects. While the Nuremberg Code was unambiguous about the protection of research subjects, the Helsinki Declaration introduced the concept of benefit from research in proportion to risk.[52] The 1975 document and other subsequent revisions reaffirmed that, ‘while the primary purpose of medical research is to generate new knowledge, this goal can never take precedence over the rights and interests of individual research subjects.’[53]

However, Article 21 recommends that research can be conducted if the importance of its objective outweighs the risks to participants, and Article 18 states that a careful assessment of predictable risks to participants must be undertaken and weighed against the potential benefits for individuals and communities.[54] The Helsinki Declaration lacks clarity on what constitutes an acceptable, or indeed ‘predictable’, risk and on how the benefits should be assessed, and therefore leaves the challenge of resolving these questions to individual institutions.[55] The CIOMS guidance also suggests RECs should consider the ‘social value’ of health research when weighing costs against benefits.

The Belmont Report also addressed the trade-off between societal benefit and individual risk, offering specific ethics principles to guide scientific research: ‘respect for persons’, ‘beneficence’ and ‘justice’.[56] The principle of ‘respect for persons’ is broken down into respect for the autonomy of human research subjects and requirements for informed consent. The principle of ‘beneficence’ requires the use of the best possible research design to maximise benefits and minimise harms, and prohibits any research that is not backed by a favourable risk-benefit ratio (to be determined by a REC). Finally, the principle of ‘justice’ stipulates that the risks and benefits of research must be distributed fairly, that research subjects are selected through fair procedures, and that vulnerable populations are not exploited.

The Nuremberg Code created standardised requirements to identify who bears responsibility for identifying and addressing potential ethical risks of research. For example, the Code stipulates that the research participants have the right to withdraw (Article 9), but places responsibility on the researchers to evaluate and justify any risks in relation to human participation (Article 6), to minimise harm (Articles 4 and 7) and to stop the research if it is likely to cause injury or death to participants (Articles 5 and 10).[57] Similar requirements exist in other biomedical ethical principles like the Helsinki Declaration, which extends responsibility for assessing and mitigating ethical risks to both researchers and RECs.

A brief history of RECs in the USA and the UK

RECs are a relatively modern phenomenon in the history of academic research, with origins in the biomedical research initiatives of the 1960s and 1970s. The 1975 revision of the Declaration of Helsinki, the World Medical Association’s (WMA) initiative to articulate universal principles for human subjects research and clinical research practice, declared that the ultimate arbiters of ethical risk and benefit were specifically appointed, independent research ethics committees, which were given the responsibility of assessing the risk of harm to research subjects and the management of those risks.

 

In the USA, the National Research Act of 1974 required Institutional Review Board (IRB) approval for all human subjects research projects funded by the US Department of Health, Education, and Welfare (DHEW).[58] This was extended in 1991 under the ‘Common Rule’, so that any research involving human subjects that is funded by the federal government must undergo ethics review by an IRB. Certain kinds of research are exempt from IRB review, including research that analyses publicly available data, privately funded research, and research involving secondary analysis of existing data (such as the ‘benchmark’ datasets commonly used in AI research).[59]

 

In the UK, the first RECs began operating informally around 1966, in the context of clinical research in the National Health Service (NHS), but it was not until 1991 that RECs were formally codified. In the 1980s, the UK expanded the requirement for REC review beyond clinical health research into other disciplines. Academic RECs in the UK began to spring up around this same time, with the majority coming into force after the year 2000.

 

UK RECs in the healthcare and clinical context are coordinated and regulated by the Health Research Authority, which has issued guidance on how medical healthcare RECs should be structured and operate, including the procedure for submitting an ethics application and the process of ethics review.[60] This allows for greater harmony across different health RECs and better governance of multi-site research projects, but it does not extend to RECs in other academic fields. Some funders, such as the UK’s Economic and Social Research Council, have also released research ethics guidelines requiring non-health projects to undergo certain ethics review processes if they involve human subjects research (though the definition of human subjects research is contested).[61]

RECs in academia

While RECs broadly seek to protect the welfare and interests of research participants and promote ethical and societally valuable research, there are some important distinctions to draw between the function and role of a REC in academic institutions compared to private-sector AI labs.

Where are RECs located in universities and research institutes?

Academic RECs bear a significant amount of the responsibility for assessing research involving human participants, including the scrutiny of ethics applications from staff and students. Broadly, there are two models of RECs used in academic research institutions:

  1. Centralised: A single, central REC is responsible for all research ethics applications, including the development of ethics policies and guidance.
  2. Decentralised: Schools, faculties or departments have their own RECs for reviewing applications, while a central REC maintains and develops ethics policies and guidance.[62]

RECs can be based at the institutional level (such as at universities), or at the regional and federal level. Some RECs may also be run by non-academic institutions, who are charged with reviewing academic research proposals. For example, academic health research in the UK may undergo review by RECs run by the National Health Service (NHS), sometimes in addition to review by the academic body’s own REC. In practice, this means that publicly funded health research proposals may seek ethics approval from one of the 85 RECs run by the NHS, in addition to non-NHS RECs run by various academic departments.[63]

A single, large academic institution, such as the University of Cambridge, may have multiple committees running within it, each with a different composition and potentially assessing different fields of research. Depending on the level of risk and required expertise, a research project may be reviewed by a local or school-level REC, and may also be reviewed by a REC at the university level.[64]

For example, Exeter University has a central REC and 11 devolved RECs at college or discipline level. The devolved RECs report to the central REC, which is accountable to the University Council (governing body). Exeter University also implements a ‘dual assurance’ scheme, with an independent member of the university’s governing body providing oversight of the implementation of its ethics policy. The University of Oxford also relies on a cascading system of RECs, which can escalate concerns up the chain if needed, and which may include department- and domain-specific guidance for certain research ethics issues.

Figure 3: The cascade of RECs at the University of Oxford[65]

This figure shows how one academic institution’s RECs are structured, with a central REC and more specialised committees.

What is the scope and role of academic RECs?

According to a 2004 survey of UK academic REC members, they play four principal roles:[66]

  1. Responsibility for ethical issues relating to research involving human participants, including maintaining standards and provision of advice to researchers.
  2. Responsibility for ensuring production and maintenance of codes of practice and guidance for how research should be conducted.
  3. Ethical scrutiny of research applications from staff and, in most cases, students.
  4. Reporting and monitoring of instances of unethical behaviour to other institutions or academic departments.

Academic RECs often include a function for intaking and assessing reports of unethical research behaviour, which may lead to disciplinary action against staff or students.

When do ethics reviews take place?

RECs form a gateway through which researchers apply to obtain ethics approval as a prerequisite for further research. At most institutions, researchers submit their work for ethics approval before conducting the study – typically at the early stages of the research lifecycle, such as at the planning stage or when applying for research grants. This means REC review is an anticipatory assessment of the ethical risks that the proposed method may raise.

This assessment relies on both ‘testimony’ from research applicants who document what they believe are the material risks, and a review by REC members themselves who assess the validity of that ‘testimony’, provide an opinion of what they envision the material risks of the research method might be, and how those risks can be mitigated. There is limited opportunity for revising these assessments once the research is underway, and that usually only occurs if a REC review identifies a risk or threat and asks for additional information. One example of an organisation that takes a different approach is the Alan Turing Institute, which developed a continuous integration approach with reviews taking place at various stages throughout the research life cycle.[67]

The extent of a REC’s review will vary depending on whether the project has any clearly identifiable risks to participants, and many RECs apply a triaging process to identify research that may pose particularly significant risks. RECs may use a checklist that asks a researcher whether their project involves particularly sensitive forms of data collection or risk, such as research with vulnerable population groups like children, or research that may involve deceiving research participants (such as creating a fake account to study online right-wing communities). If an application raises one of these issues, it must undergo a full research ethics review. In cases where a research application does not involve any of these initial risks, it may undergo an expedited process that involves a review of only some factors of the application such as its data governance practices.[68]

Figure 4: Example of the triaging application intake process for a UK University REC

If projects meet certain risk criteria, they may be subject to a more extensive review by the full committee. Lower-risk projects may be approved by only one or two members of the committee.
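To make this triage logic concrete, the sketch below encodes a checklist-style screening rule in Python. The flag names, risk categories and routing outcomes are illustrative assumptions, not the criteria of any actual REC.

```python
# A minimal sketch of checklist-based triage, assuming invented flag names
# and routing rules; no specific REC's actual criteria are represented here.
HIGH_RISK_FLAGS = {
    "involves_children",
    "involves_vulnerable_groups",
    "involves_deception",
    "collects_sensitive_data",
}

def route_application(flags: set) -> str:
    """Route an ethics application to a review track based on declared flags."""
    if flags & HIGH_RISK_FLAGS:
        return "full committee review"
    if flags:  # residual concerns, e.g. routine data governance questions
        return "expedited review (one or two reviewers)"
    return "exempt / chair approval"

print(route_application({"involves_deception"}))     # full committee review
print(route_application({"uses_existing_dataset"}))  # expedited review
print(route_application(set()))                      # exempt / chair approval
```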

During the review, RECs may offer researchers advice to mitigate potential ethical risks. Once approval is granted, no further checks by RECs are required. This means that there is no mechanism for ongoing assessment of emerging risks to participants, communities or society as the research progresses. As the focus is on protecting individual research participants, there is no assessment of potential long-term downstream harms of research.

Composition of academic RECs

The composition of RECs varies between and even within institutions. In the USA, RECs are required under the ‘Common Rule’ to have a minimum of five members with a variety of professional backgrounds, to be made up of people from different ethnic and cultural backgrounds, and to have at least one member who is independent from the institution. In the UK, the Health Research Authority recommends RECs have 18 members, while the Economic and Social Research Council (ESRC) recommends at least seven.[69] RECs operate on a voluntary basis, and there is currently no financial compensation for REC members, nor any other rewards or recognition.

Some RECs are composed of an interdisciplinary board of people who bring different kinds of expertise to ethical reviews. In theory, this is to provide a more holistic review of research that ensures perspectives from different disciplines and life experiences are factored into a decision. RECs in the clinical context in the UK, for example, must involve both expert members with expertise in the subject area and ‘lay members’, meaning people ‘who are not registered healthcare professionals and whose primary professional interest is not in clinical research’.[70] Additional expertise can be sourced on an ad hoc basis.[71] The ESRC further emphasises that RECs should be multi-disciplinary and include ethnic and gender diversity.[72] According to our expert workshop participants, however, many RECs that are located within a specific department or faculty are not multi-disciplinary and do not include lay members, although specific expertise might be requested when needed.

The Secure Anonymised Information Linkage databank (SAIL)[73] offers one example of a body that does integrate lay members in their ethics review process. Their review criteria include data governance issues and risks of disclosure, but also whether the project contributes to new knowledge, and whether it serves the public good by improving health, wellbeing and public services.

RECs within the technology industry

In the technology industry, several companies with AI and data science research divisions have launched internal ethics review processes and accompanying RECs, with notable examples being Microsoft Research, Meta Research and Google Brain. In our workshops and interviews, the corporate REC members we spoke with described some key facets of their research review processes. It is important, however, to acknowledge that little publicly available information exists on corporate REC practices, including their processes and criteria for research ethics review. This section reflects statements made by workshop and interview participants, and some public reports of research ethics practices in private-sector labs.

Scope

According to our participants, corporate AI research RECs tend to take a broader scope of review than traditional academic RECs. Their reviews may extend beyond research ethics issues and into questions of broader societal impact. Interviews with developers of AI ethics review practices in industry suggested a view that traditional REC models can be too cumbersome and slow for the quick pace of the product development life cycle.

At the same time, ex ante review does not provide good oversight on risks that emerge during or after a project. To address this issue, some industry RECs have sought to develop processes that focus beyond protecting individual research subjects and include considerations for the broader downstream effects for population groups or society, as well as recurring review throughout the research/product lifecycle.[74]

Several companies we spoke with have specific RECs that review research involving human subjects. However, as one participant from a corporate REC noted, ‘a lot of AI research does not involve human subjects’ or their data, and may focus instead on environmental data or other types of non-personal information. This company relied on a separate ethics review process for such cases, which considers (i) the potential broader impact of the research and (ii) whether the research aligns with the company’s public commitments or ethical principles.

According to a law review article on their research ethics review process, Meta (previously known as Facebook) claims to consider the public contribution of knowledge of research and whether it may generate positive externalities and implications for society.[75] A workshop participant from another corporate REC noted that ‘the purpose of [their] research is to have societal impact, so ethical implications of their research are fundamental to them.’ These companies also tend to have more resources to undertake ethical reviews than academic labs, and can dedicate more full-time staff positions to training, broader impact mapping and research into the ethical implications of AI.

The use of AI-specific ethics principles and red lines

Many companies, including Meta, Google and Microsoft, have published AI ethics principles that articulate particular considerations for their AI and data science research, as well as ‘red line’ research areas they will not undertake. For example, in response to employee protests against a US Department of Defense contract, Google stated it will not pursue AI ‘weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people’.[76] Similarly, DeepMind and Element AI have signed a pledge against AI research for lethal autonomous weapons alongside over 50 other companies; a pledge that only a handful of academic institutions have made.[77]

According to some participants, articulating these principles can make more salient the specific ethical concerns that researchers at corporate labs should consider with AI and data science research. However, other participants we spoke with noted that, in practice, there is a lack of internal and external transparency around how these principles are applied.

Many participants from academic institutions we spoke with noted they do not use ‘red line’ areas of research out of concern that these red lines may infringe on existing principles of academic openness.

Extent of reviews

Traditional REC reviews tend to focus on a single one-off assessment of research risk at the early stages of a project. In contrast, one corporate REC we spoke with described their review as being a continuous process in which a team may engage with the REC at different stages, such as when a team is collecting data prior to publication, and post-publication reviews into whether the outcomes and impacts they were concerned with came to fruition. This kind of continuous review enables a REC to capture risks as they emerge.

We note that it was unclear whether this practice was common among industry labs or reflected one lab’s particular practices. We also note that some academic labs, like the Alan Turing Institute, are implementing similar initiatives to engage researchers at various stages of the research lifecycle.

A related point flagged by some workshop participants was that industry ethics review boards vary in their power to affect product design or launch decisions. Some may make only non-binding recommendations, while others can green-light or halt projects, or return a project to an earlier development stage with specific recommendations.[78]

Composition of board and external engagement

The corporate REC members we spoke with all described the composition of their boards as being interdisciplinary and reflecting a broad range of teams at the company. One REC, for example, noted that members of engineering, research, legal and operations teams sit on their ethical review committee to provide advice not only on specific projects, but also for entire research programmes. Another researcher we spoke with described how their organisation’s ethics review process provides resources for researchers, including a list of ‘banned’ publicly accessible datasets that have questionable consent and privacy issues but are commonly used by researchers in academia and other parts of industry.

However, none of the corporate RECs we spoke with had lay members or external experts on their boards. This raises a serious concern that perspectives of people impacted by these technologies are not reflected in ethical reviews of their research, and that what constitutes a risk or is considered a high-priority risk is left solely to the discretion of employees of the company. The lack of engagement with external experts or people affected by this research may mean that critical or non-obvious information about what constitutes a risk to some members of society may be missed. Some participants we spoke with also mentioned that corporate labs experience challenges engaging with external stakeholders and experts to consult on critical issues. Many large companies seek to hire this expertise in-house, bringing in interdisciplinary researchers with social science, economics and other backgrounds. However, engaging external experts can be challenging, given concerns around trade secrets, sharing sensitive data and tipping off rival companies about their work.

Many companies resort to asking participants to sign non-disclosure agreements (NDAs), which are legally binding contracts with severe financial sanctions and legal risks if confidential information is disclosed. These can last in perpetuity, and for many external stakeholders (particularly those from civil society or marginalised groups), signing these agreements can be a daunting risk. However, we did hear from other corporate REC members that they had successfully engaged with external experts in some instances to understand the holistic set of concerns around a research project. In one biomedical-based research project, a corporate REC claimed to have engaged over 25 experts in a range of backgrounds to determine potential risks their work might raise and what mitigations were at their disposal.

Ongoing training

Many corporate RECs we spoke with also place an emphasis on continued skills and training, including providing basic ‘ethical training’ for staff of all levels. One corporate REC member we spoke with noted several lessons learned from their experience running ethical reviews of AI and data science research:

  1. Executive buy-in and sponsorship: It is essential to have senior leaders in the organisation backing and supporting this work. Having a senior spokesperson also helped in communicating the importance of ethical consideration throughout the organisation.
  2. Culture: It can be challenging to create a culture where researchers feel incentivised to talk and think about the ethical implications of their work, particularly in the earliest stages. Having a collaborative company culture in which research is shared openly within the company, and a transparent process where researchers understand what an ethics review will involve, who is reviewing their work and what will be expected of them, can help address this concern. Training programmes for new and existing staff on the importance of ethical reviews and how to think reflexively helped staff understand what is expected of them.
  3. Diverse perspectives: Engaging diverse perspectives can result in more robust decision-making. This means engaging with external experts who represent interdisciplinary backgrounds, and may include hiring that expertise internally. This can also include experiential diversity, which incorporates perspectives of different lived experiences. It also involves considering one’s own positionality and biases, and being reflexive as to how one’s own biases and lived experiences can influence consideration for ethical issues.
  4. Early and regular engagement leads to more successful outcomes: Ethical issues can emerge at different stages of a research project’s lifecycle, particularly given quick-paced and shifting political and social dynamics outside the lab. Engaging in ethical reviews at the point of publication can be too late, and the earlier this work is engaged with the better. Regular engagement throughout the project lifecycle is the goal, along with post-mortem reviews of the impacts of research.
  5. Continuous learning: REC processes need to be continuously updated and improved, and it is essential to seek feedback on what is and isn’t working.

Other actors in the research ethics ecosystem

While academic and corporate RECs and researchers share the primary burden for assessing research ethics issues, there are other actors who share this responsibility to varying degrees, including funders, publishers and conference organisers.[79] Along with RECs, these other actors help establish research culture, which refers to ‘the behaviours, values, expectations, attitudes and norms of research communities’.[80] Research culture influences how research is done, who conducts research and who is rewarded for it.

Creating a healthy research culture is a responsibility shared by research institutions, conference organisers, journal editors, professional associations and other actors in the research ecosystem. This can include creating rewards and incentives for researchers to conduct their work according to a high ethical standard, and to reflect carefully on the broader societal impacts of their work. In this section, we examine in detail only three actors in this complex ecosystem.

Figure 5: Different actors in the research ecosystem

This figure shows some of the different actors that comprise the AI and data science research ecosystem. These actors interact and set incentives for each other. For example, funders can set incentives for institutions and researchers to follow (such as meeting certain criteria as part of a research application). Similarly, publishers and conferences can set incentives for researchers to follow in order to be published.

Organisers of research conferences can set particular incentives for a healthy research culture. Research conferences are venues where research is rewarded and celebrated, enabling career advancement and growth opportunities. They are also forums where junior and senior researchers from the public and private sectors create professional networks and discuss field-wide benchmarks, milestones and norms of behaviour. As Ada’s recent paper with CIFAR on AI and machine learning (ML) conference organisers explores, there are a wide variety of steps that conferences can take to incentivise consideration for research ethics and broader societal impacts.[81]

For example, in 2020, the Conference on Neural Information Processing Systems (NeurIPS) introduced a requirement that submitted papers include a broader societal impact statement covering the benefits, limitations and risks of the research.[82] These impact statements were designed to encourage researchers submitting work to the conference to consider the risks their research might raise, to conduct more interdisciplinary consultation with experts from other domains and to engage with people who may be affected by their research.[83] The introduction of this requirement was hotly contested by some researchers, who were concerned it was an overly burdensome ‘tick box’ exercise that would become pro-forma over time.[84] In 2021, NeurIPS shifted to adding ethical considerations to a checklist of requirements for submitted papers, rather than requiring a standalone statement for all papers.

Editors of academic journals can set incentives for researchers to assess and mitigate the ethical implications of their work. Having work published in an academic journal is a primary goal for most academics, and a pathway for career advancement. Journals often put in place certain requirements for submissions to be accepted. For example, the Committee on Publication Ethics (COPE) has released guidelines on research integrity practices in scholarly publishing, which stipulate that journals should include policies on data sharing, reproducibility and ethical oversight.[85] This includes a requirement that studies involving human subjects research self-disclose that a REC has approved the study.

Some organisations have suggested journal editors could go further towards encouraging researchers to consider questions of broader societal impacts. The Partnership on AI (PAI) published a range of recommendations for responsible publication practice in AI and ML research, which include calls for a change in research culture that normalises the discussion of downstream consequences of AI and ML research.[86]

Specifically for conferences and journals, PAI recommends expanding peer review criteria to include potential downstream consequences by asking submitting researchers to include a broader societal impact statement. Furthermore, PAI recommends establishing a separate review process to evaluate papers based on risk and downstream consequences, a process that may require a unique set of multidisciplinary experts to go beyond the scope of current journal review practices.[87]

Public and private funders (such as research councils) can establish incentives for researchers to engage with questions of research ethics, integrity and broader societal impacts. Funders play a critical role in determining which research proposals will move forward, and what areas of research will be prioritised over others. This presents an opportunity for funders to encourage certain practices, such as requiring that any research that receives funding meets expectations around research integrity, Responsible Research and Innovation and research ethics. For example, Gardner recommends that grant funding and public tendering of AI systems should require a ‘Trustworthy AI Statement’ from researchers that includes an ex ante assessment of how the research will comply with the EU High-Level Expert Group’s (HLEG) Trustworthy AI standards.[88]

Challenges in AI research

In this chapter, we highlight six major challenges that Research Ethics Committees (RECs) face when evaluating AI and data science research, as uncovered during workshops conducted with members of RECs and researchers in May 2021.

Challenge 1: Many RECs lack the resources, expertise and training to appropriately address the risks that AI and data science pose

Inadequate review requirements

Some workshop participants highlighted that many projects that raise severe privacy and consent issues are not required to undergo research ethics review. For example, some RECs encourage researchers to adopt data minimisation and anonymisation practices and do not require a project to undergo ethics review if the data is anonymised after collection. However, research has shown that anonymised data can still be triangulated with other datasets to enable reidentification,[89] raising privacy risks for data subjects and implications for the consideration of broader impacts.[90] Expert participants noted that it is hard to determine whether data collected for a project is truly anonymous, and that RECs must have the right expertise to fully interrogate whether a research project has adequately addressed these challenges.
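To illustrate the triangulation risk in concrete terms, the sketch below joins a hypothetical ‘anonymised’ dataset to a public dataset on shared quasi-identifiers. All column names and records are invented for illustration; this shows the general linkage-attack pattern rather than any specific documented incident.

```python
import pandas as pd

# Hypothetical 'anonymised' research dataset: direct identifiers removed,
# but quasi-identifiers (postcode, birth date, sex) retained.
health = pd.DataFrame({
    "postcode": ["EX4 4QJ", "OX1 2JD", "CB2 1TN"],
    "birth_date": ["1984-03-12", "1990-07-01", "1975-11-23"],
    "sex": ["F", "M", "F"],
    "diagnosis": ["diabetes", "depression", "asthma"],
})

# Hypothetical public dataset (e.g. an electoral-roll extract) with names attached.
public = pd.DataFrame({
    "name": ["A. Jones", "B. Smith", "C. Patel"],
    "postcode": ["EX4 4QJ", "OX1 2JD", "CB2 1TN"],
    "birth_date": ["1984-03-12", "1990-07-01", "1975-11-23"],
    "sex": ["F", "M", "F"],
})

# A simple join on quasi-identifiers re-attaches identities to 'anonymous' records.
reidentified = health.merge(public, on=["postcode", "birth_date", "sex"])
print(reidentified[["name", "diagnosis"]])
```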

As Metcalf and Crawford have noted, data science is usually not considered a form of direct intervention in the body or life of individual human subjects and is, therefore, exempt from many research ethics review processes.[91] Similar challenges arise with AI research projects that rely on data collected from public sources, such as surveillance cameras or scraped from the public web, which are assumed to pose minimal risk to human subjects. Under most current research ethics guidelines, research projects using publicly available or pre-existing datasets collected and shared by other researchers are also not required to undergo research ethics review.[92]

Some of our workshop participants noted that researchers can view RECs as risk averse and overly concerned with procedural questions and reputation management. This reflects some findings from the literature. Samuel et al found that, while researchers perceive research ethics as procedural and centred on operational governance frameworks, societal ethics are perceived as less formal and more ‘fuzzy’, noting the absence of standards and regulations governing AI in relation to societal impact.[93]

Expertise and training

Another institutional challenge our workshop participants identified related to the training, composition and expertise of RECs. These concerns are not unique to reviews of AI and data science and reflect long-running concerns with how effectively RECs operate. In the USA, a 2011 study found that university research ethics review processes are perceived by researchers as inefficient, with review outcomes being viewed as inconsistent and often resulting in delays in the research process, particularly for multi-site trials.[94]

Other studies have found that researchers view RECs as overly bureaucratic and risk-averse bodies, and that REC practices and decisions can vary substantially across institutions.[95] These studies have found that RECs take differing approaches to determining which projects require a full rather than expedited review, and often do not provide a justification or explanation for their assessments of the risk of certain research practices.[96] In some documented cases, researchers have gone so far as to abandon projects due to delays and inefficiencies of research ethics review processes.[97]

There is some evidence these issues are exacerbated in reviews of AI and data science research. Dove et al found systemic inefficiencies and substantive weaknesses in research ethics review processes, including:

  • a lack of expertise in understanding the novel challenges emerging from data-intensive research
  • a lack of consistency and reasoned decision-making of RECs
  • a focus on ‘tick-box exercises’
  • duplication of ethics reviews
  • a lack of communication between RECs in multiple jurisdictions.[98]

One reason for variation in ethics review process outcomes is disagreement among REC members. This can be the case even when working with shared guidelines. For example, in the context of data acquired through social media for research purposes, REC members differ substantially in their assessment of whether consent is required, as well as the risks to research participants. In part, this difference of opinion can be linked to their level of experience in dealing with these issues.[99] Some researchers suggest that reviewers may benefit from more training and support resources on emerging research ethics issues, to ensure a more consistent approach to decision-making.[100]

A significant challenge arises from the lack of training – and, therefore, lack of expertise – of REC members.[101] While this has already been identified as a persistent issue with RECs generally,[102] AI and data science research can be applied to many disciplines. This means that REC members evaluating AI and data science research must have expertise across many fields. However, many RECs in this space frequently lack expertise across both (i) technical methods of AI and data science, and (ii) domain expertise from other relevant disciplines.[103]

Samuel et al found that some RECs that review AI and data science research are concerned with data governance issues, such as data privacy, which is perceived as not requiring AI-specific technical skills.[104] While RECs regularly draw on specialist advice through cross-departmental collaboration, workshop participants questioned whether resources to support the examination of ethical issues relating to AI and data science research are made available to RECs.[105] RECs may need to consider what expertise is required for these reviews and how it will be sourced, for instance via specialist ad hoc advice or the institution of sub-committees.[106]

The need for reviewers with expertise across disciplines, ethical expertise and cross-departmental collaboration is clear. Participants in our workshops questioned whether interdisciplinary expertise is sufficient to review AI and data science research projects, and whether experiential expertise (expertise on the subject matter gained through first-person involvement) is also necessary to provide a more holistic assessment of potential research risks. This could take the form of changing a REC’s composition to involve a broader range of stakeholders, such as community representatives or external organisations.

Resources

A final challenge that RECs face relates to their resourcing and the value given to their work. According to our workshop participants, RECs are generally under-resourced in terms of budget, staffing and the rewarding of members. Many RECs rely on the voluntary ‘pro bono’ labour of professors and other staff, with members managing competing commitments and an expanding volume of applications for ethics review.[107] Inadequate resources can result in further delays and have a negative impact on the quality of reviews. Chadwick shows that RECs rely on the dedication of their members, who put the interests of research subjects, researchers, fellow committee members and the institution ahead of personal gain.[108]

Several of our workshop participants noted that reviewers have neither enough time to conduct a proper ethics review that evaluates the full range of potential ethical issues, nor, in some cases, the right range of skills. According to several participants, sitting on a REC is often a ‘thankless’ task, which can make finding people willing to serve difficult. Those who are willing and have the required expertise risk being overloaded. Reviewing is ‘free labour’ with little or no recognition, and the question arises of how to incentivise REC members. Participants suggested that research ethics review should be budgeted appropriately, so that committees can engage with stakeholders throughout the project lifecycle.

Challenge 2: Traditional research ethics principles are not well suited for AI research

In their evaluations of AI and data science research, RECs have traditionally relied on a set of legally mandated and self-regulatory ethics principles that largely stem from the biomedical sciences. These principles have shaped the way that modern research ethics is understood at research institutions, how RECs are constructed and the traditional scope of their remit.

Contemporary RECs draw on a long list of additional resources for AI and data science research in their reviews, including data science-specific guidelines like the Association of Internet Researchers ethical guidelines,[109] provisions of the EU General Data Protection Regulation (GDPR) to govern data protection issues, and increasingly the emerging field of ‘AI ethics’ principles. However, the application of these principles raises significant challenges for RECs.

Several of our expert participants noted these guidelines and principles are often not implemented consistently across different countries, scientific disciplines, or across different departments or teams within the same institution.[110] As prominent research guidelines were originally developed in the context of biomedical research, questions have been raised about their applicability to other disciplines, such as the social sciences, data science and computer science.[111] For example, some in the research community have questioned the extension of the Belmont principles to research in non-experimental settings due to differences in methodologies, the relationships between researchers and research subjects, different models and expectations of consent and different considerations for what constitutes potential harm and to whom.[112]

We draw attention to four main challenges in the application of traditional bioethics principles to ethics reviews of AI and data science research:

Autonomy, privacy and consent

One example of how biomedical principles can be poorly applied to AI and data science research relates to how they address questions of autonomy and consent. Many of these principles emphasise that ‘voluntary consent of the human subject is absolutely essential’ and should outweigh considerations for the potential societal benefit of the research.

Workshop participants highlighted consent and privacy issues as one of the most significant challenges RECs are currently facing in reviews of AI and data science research. This included questions about how to implement ‘ongoing consent’, whereby consent is given at various stages of the research process; whether informed consent may be considered forced consent when research subjects do not really understand the implications of the future use of their data; and whether it is practical to require consent be given more than once when working with large-scale data repositories. A primary concern flagged by workshop participants was whether RECs put too much weight on questions of consent and autonomy at the expense of wider ethical concerns.

Issues of consent largely stem from the ways these fields collect and use personal data,[113] which differ substantially from the traditional clinical experiment format. Part of the issue is the relatively distanced relationship between data scientist and research subject. Here, researchers can rely on data scraped from the web (such as social media posts) or collected via consumer devices (such as fitness trackers or smart speakers).[114] Once collected, many of these datasets can be made publicly accessible as ‘benchmark datasets’ for other researchers to test and train their models. The Flickr Faces HQ dataset, for example, contains 70,000 images of faces collected from a photo-sharing website and made publicly accessible with a Creative Commons license for other researchers to use.[115]

These collection and sharing practices pose novel risks to the privacy and identifiability of research subjects, and challenge traditional notions of informed consent from participants.[116] Once collected and shared, datasets may be re-used or re-shared for different purposes than those understood during the original consent process. It is often not feasible for researchers re-using the data to obtain informed consent in relation to the original research. In many cases, informed consent may not have been given in the first place.[117]

Not being able to obtain informed consent does not give the researcher carte blanche, and datasets that are continuously used as benchmarks for technology development risk normalising the avoidance of consent-seeking practices. Some benchmark datasets, such as the longitudinal Pima Indian Diabetes Dataset (PIDD), are tied to a colonial past of oppression and exploitation of indigenous peoples, and their use as benchmarks perpetuates these politics in new forms.[118] Challenges to informed consent can cause significant damage to public trust in institutions and science. One notable example involved a Facebook (now Meta) study in 2014, in which researchers monitored users’ emotional states and manipulated their news feeds without their consent, showing more negative content to some users.[119] The study led to significant public concern, and raised questions about how Facebook users could give informed consent to a study over which they lacked control, let alone awareness.

In some instances, AI and data science research may also pose novel privacy risks relating to the kinds of inferences that can be drawn from data. To take one example, researchers at Facebook (now Meta) developed an AI system to identify suicidal intent in user-generated content, which could be shared with law enforcement agencies to conduct wellness checks on identified users.[120] This kind of ‘emergent’ health data produced through interactions with software platforms or products is not subject to the same requirements or regulatory oversight as data from a mental health professional.[121] This highlights how an AI system can infer sensitive health information about an individual based on non-health related data in the public domain, which could pose severe risks for the privacy of vulnerable and marginalised communities.

Questions of consent and privacy point to another tension between principles of research integrity and the ethical obligations towards protecting research participants from harm. In the spirit of making research reproducible, there is a growing acceptance among the AI and data science research community that scientific data should be openly shared, and that open access policies for data and code should be fostered so that other researchers can easily re-use research outputs. At the same time, it is not possible to make data accessible to everyone, as this can lead to harmful misuses of the data by other parties, or uses of that data that are for a purpose the data subject would not be comfortable with. Participants largely agreed, however, that RECs struggle to assess these types of research projects because the existing ex ante model of RECs addresses potential risks up front and may not be fit to address the potential emerging risks for data subjects.[122]
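One screening heuristic sometimes applied before sharing data is k-anonymity: every combination of quasi-identifiers in a released dataset should describe at least k individuals. The sketch below, with invented column names and records, shows a minimal check of this kind; it illustrates the concept only and is not a sufficient privacy guarantee on its own.

```python
import pandas as pd

# Minimal k-anonymity check: the smallest group sharing the same combination
# of quasi-identifiers gives the dataset's k. Columns and data are invented.
def k_anonymity(df: pd.DataFrame, quasi_identifiers: list) -> int:
    return int(df.groupby(quasi_identifiers).size().min())

df = pd.DataFrame({
    "age_band": ["30-39", "30-39", "40-49", "40-49"],
    "postcode_area": ["EX", "EX", "OX", "OX"],
    "outcome": [1, 0, 1, 1],
})

print(k_anonymity(df, ["age_band", "postcode_area"]))  # 2: each group has two rows
```

A check like this addresses only one narrow disclosure risk at the point of release; it does not capture downstream misuse, which is part of why an up-front review alone struggles with these projects.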

Risks to research subjects vs societal benefit

A topic related to consent is the challenge of weighing the societal benefit of research against the risks it poses to research subjects.

Workshop participants acknowledged how AI and data science research create a different researcher-subject relationship from traditional biomedical research. For example, participants noted that research in a clinical context involves a person who is present and with whom researchers have close and personal interaction. A researcher in these contexts is identifiable to their subject, and vice versa. This relationship often does not exist in AI and data science research, where the ‘subject’ of research may not be readily identifiable or may be someone affected by research rather than someone participating in the research. Some research argues that AI and data science research marks a shift from ‘human subjects’ research to ‘data subjects’ research, in which care and concern for the welfare of participants should be given to those whose data is used.[123]

In many cases, data science and AI research projects rely on data sourced from the web through scraping, a process that challenges traditional notions of informed consent and raises questions about whether researchers are in a position to assess the risk of research to participants.[124] Researchers may not be able to identify the people whose data they are collecting, meaning they often lack a relational dynamic that is essential for understanding the needs, interests and risks of their research subjects. In other cases, AI researchers may use publicly available datasets made available on online repositories like GitHub, which may be repurposed for uses that differ from the original basis for collection. Finally, major differences arise in how data is analysed and assessed. Many kinds of AI and data science research rely on the curation of massive volumes of data, a process that many researchers outsource to third-party contract services such as Amazon’s MTurk. These processes create further separation between researchers and research subjects, outsourcing important value-laden decisions about the data to third-party workers who are not identifiable, accountable or known to research subjects.

Responsibility for assessing risks and benefit

Another challenge research ethics principles have sought to address is determining who is responsible for assessing and communicating the risk of research to participants.

One criticism has been that biomedical research ethics frameworks do not reflect the ‘emergent, dynamic and interactional nature’[125] of fields like the social sciences and humanities.[126] For example, ethnographic or anthropological research methods are open-ended, emergent and need to be responsive to the concerns of research participants throughout the research process. Meanwhile, traditional REC reviews have been solely concerned with an up-front risk assessment. In our expert workshops, several participants noted a similar concern within AI and data science research, where risks or benefits cannot be comprehensively assessed in the early stages of research.

Universality of principles

Some biomedical research ethics initiatives have sought to formulate universal principles for research ethics in different jurisdictions, which would help ensure a common standard of review in international research partnerships or multi-site research studies. However, many of these initiatives were created by institutions from predominantly Western countries to respond to Western biomedical research practices, and critics have pointed out that they therefore reflect a deeply Western set of ethics.[127] Other efforts have been undertaken to develop universal principles, including the Emanuel, Wendler and Grady framework, which uses eight principles with associated ‘benchmark’ questions to help RECs from different regions evaluate potential ethical issues relating to exploitation.[128] While there is some evidence that this model has worked well in REC evaluations for biomedical research in African institutions,[129] it has not yet been widely adopted by RECs in other regions.

Challenge 3: Specific principles for AI and data science research are still emerging and are not consistently adopted by RECs

A more recent phenomenon relevant to the consideration of ethical issues relating to AI and data science has been the proliferation of ethical principles, standards and frameworks for the development and use of AI systems.[130], [131], [132], [133] The development of standards for ethical AI systems has been taken up by bodies such as the Institute of Electrical and Electronics Engineers (IEEE) and the International Organization for Standardization (ISO).[134] Some of these efforts have occurred at the international level, such as the OECD or United Nations. A number of principles can be found across this spectrum, including transparency, fairness, privacy and accountability. However, these common principles have variations in how they are defined, understood and scoped, meaning there is no single codified approach to how they should be interpreted.[135]

In developing such frameworks, some have taken widely adopted guidelines as a starting point. For example, Floridi and Cowls propose a framework of five overarching principles for AI. This includes the traditional bioethics principles of beneficence, non-maleficence, autonomy and justice, drawn from the Belmont principles, but adds the principle of explicability, which combines questions of intelligibility (how something works) with accountability (who is responsible for the way it works).[136] Others have argued that international human rights frameworks offer a promising basis to develop coherent and universally recognised standards for AI ethics.[137]

Several of our workshop participants mentioned that it is challenging to judge the relevance of existing principles in the context of AI and data science research. During the workshops, a variety of additional principles were mentioned, for example, ‘equality’, ‘human-centricity’, ‘transparency’ and ‘environmental sustainability’. This indicates that there is not yet clear consensus around which principles should guide AI and data science research practices, and that the question of how those principles should be developed (and by which body) is not yet answered. We address this challenge in our recommendations.

The wide range of available frameworks, principles and guidelines demonstrates how difficult it is for researchers and practitioners to select suitable ones, given the current inconsistencies and the lack of a commonly accepted framework or set of principles guiding ethical AI and data science research. As many of our expert participants noted, this has led to confusion among RECs about whether these frameworks or principles should supplement biomedical principles, and how they should apply them to reviews of data science and AI research projects.

Complicating this challenge is the question of whether ethical principles guiding AI and data science research would be useful in practice. In a paper comparing the fields of medical ethics with AI ethics, Mittelstadt argues that AI research and development lacks several essential features for developing coherent research ethics principles and practices. These include the lack of common aims and fiduciary duties, a history of professional norms and bodies to translate principles into practice, and robust legal and professional accountability mechanisms.[138] While medical ethics draws on its practitioners being part of a ‘moral community’ characterised by common aims, values and training, AI cannot refer to such established norms and practices, given the wide range of disciplines and commercial fields it can be applied to.

The blurring of commercial and societal motives for AI research can cause AI developers to be driven by values such as innovation and novelty, performance or efficiency, rather than ethical aims rooted in biomedicine around concern for their ‘patient’ or for societal benefit. In some regions, like Canada, professional codes of practice and law around medicine have established fiduciary-like duties between doctors and their patients, which do not exist in the fields of AI and data science.[139] AI does not have a history and professional culture around ethics comparable to the medical field, which has a strong regulating influence on practitioners. Some research has also questioned the aims of AI research, and what kinds of practices are incentivised and encouraged within the research community. A study involving interviews with 53 AI practitioners in India, East and West African countries, and the USA showed that, despite the importance of high-quality data in addressing potential harms and a proliferation of data ethics principles, practitioners find the implementation of these practices to be one of the most undervalued and ‘de-glamorised’ aspects of developing AI systems.[140]

Identifying clear principles for AI research ethics is a major challenge. This is particularly the case because so few of the emerging AI ethics principles specifically focus on AI or data science research ethics. Rather, they centre on the ethics of AI system development and use. In 2019, the IEEE published a report entitled Ethically aligned design: Prioritizing human wellbeing with autonomous and intelligent systems, which contains a chapter on ‘Methods to Guide Ethical Research and Design’.[141] This chapter includes a range of recommendations for academic and corporate research institutions, including that: labs should identify stages in their processes in which ethical considerations, or ‘ethics filters’, are in place before products are further developed and deployed; and that interdisciplinary ethics training should be a core subject for everyone working in the STEM field, and should be incentivised by funders, conferences and other actors. However, this report stops short of offering clear guidance for RECs and institutions on how they should turn AI ethics principles into clear practical guidelines for conducting and assessing AI research.

Several of our expert participants observed that many AI researchers and RECs currently draw on legal guidance and norms relating to privacy and data protection, which risks reducing questions of AI ethics to narrower issues of data governance. The rollout of the European General Data Protection Regulation (GDPR) in 2018 created a strong incentive for European institutions, and institutions working with the personal data of Europeans, to reinforce existing ethics requirements on how research data is collected, stored and used by researchers. Expert participants noted that data protection questions are common in most REC reviews. As Samuel notes, there is some evidence that AI researchers tend to perceive research ethics as data governance questions, a mindset that is reinforced by institutional RECs in some of the questions they ask.[142]

There have been some grassroots efforts to standardise research ethics principles and guidance for some forms of data science research, including social media research. The Association of Internet Researchers, for example, has published the third edition of its ethical guidelines,[143] which includes suggestions for how to deal with privacy and consent issues posed by scraping online data, how to outline and address questions across different stages of the ethics lifecycle (such as considering issues of bias in the data analysis stage), and how to consider potential downstream harms with the use of that data. However, these guidelines are voluntary and are narrowly focused on social media research. It remains unclear whether RECs are consistently enforcing them. As Samuel notes, the lack of established norms and criteria in social media research has caused many researchers to rely on bottom-up, personal ‘ethical barometers’ that create discrepancies in how ethical research should be conducted.[144]

In summary, there are a wide range of broad AI ethics principles that seek to guide how AI technologies are developed and deployed. The iterative nature of AI research, in which a published model or dataset can be used by downstream developers to create a commercial product with unforeseen consequences, raises a significant challenge for RECs seeking to apply AI and data science research ethics principles. As many of our expert participants noted, AI ethics research principles must touch on both how research is conducted (including what methodological choices are made), and also involve consideration for the wider societal impact of that research and how it will be used by downstream developers.

Challenge 4: Multi-site or public-private partnerships can exacerbate existing challenges of governance and consistency of decision-making

RECs face governance and fragmentation challenges in their decision-making. In contrast to clinical research, which is coordinated in the UK by the Health Research Authority (HRA), RECs evaluating AI and data science research are generally not guided by an overarching governing body, and do not have structures to coordinate similar issues between different RECs. Consequently, their processes, decision-making and outcomes can vary substantially.[145]

Expert participants noted this lack of consistent guidance between RECs is exacerbated by research partnerships with international institutions and public-private research partnerships. The specific processes RECs follow can vary between committees, even within the same institution. This can result in different RECs reaching different conclusions on similar types of research. A 2011 survey of research into Institutional Review Board (IRB) decisions found numerous instances where similar research projects received significantly different decisions, with some RECs approving with no restrictions, others requiring substantial restrictions and others rejecting research outright.[146]

This lack of an overarching coordinating body for RECs is especially problematic for international projects that involve researchers working in teams across multiple jurisdictions, often with large datasets that have multiple sources across multiple sites.[147] Most biomedical research ethics guidelines recommend that multi-site research should be evaluated by RECs located in all respective jurisdictions,[148] on the basis that each institution will reflect the local regulatory requirements for REC review, which they are best prepared to respond to.

Historically, most research in the life sciences was conducted with a few participants at a local research institution.[149] In some regions, requirements for local involvement have developed to provide some accountability to research subjects. Canada, for example, requires social science research involving indigenous populations to meet specific research ethics requirements, including community engagement and the involvement of members of indigenous communities, and requirements for indigenous communities to own any resulting data.[150]

However, this arrangement does not fit the large-scale, international, data-intensive research of AI and data science, which often relies on the generation, scraping and repurposing of large datasets, often without any awareness of who exactly the data may be from or under what purpose it was collected. The fragmented landscape of different RECs and regulatory environments leads to multiple research ethics applications to different RECs with inconsistent outcomes, which can be highly resource intensive.[151] Workshop participants highlighted how ethics committees face uncertainties in dealing with data sourced and/or processed in heterogeneous jurisdictions, where legal requirements and ethical norms can be very different.

Figure 6: Public-private partnerships in AI research[152]

The graphs above show an increasing trend in public-private partnerships in AI research, and in multinational collaborations on AI research. As public-private partnerships and multi-site research increase, so do the governance challenges these kinds of research raise.

Public-private partnerships

Public-private partnerships (PPPs) are common in biomedical research, where partners from the public and private sector share, analyse and use data.[153] The type of collaboration can vary, from project-specific collaborations to long-term strategic alliances between different groups, or large multi-party consortia. The data ecosystem is fragmented and complex, as health data is increasingly being shared, linked, re-used or re-purposed in novel ways.[154] Some regulations, such as the General Data Protection Regulation (GDPR), may apply to all research; however, standards, drivers or reputational concerns may differ between actors in the public and private sectors. This means that PPPs navigate an equally complex and fragmented landscape of standards, norms and regulations.[155]

As our expert participants noted, public-private partnerships can raise concerns about who derives benefit from the research, who controls the intellectual property of findings, and how data is shared in a responsible and rights-respecting way. The issue of data sharing is particularly problematic when research is used for the purpose of commercial product or service development. For example, wearable devices or apps that track health and fitness data can produce enormous amounts of biomedical ‘big data’ when combined with other biomedical datasets.[156] While the data generated by these consumer devices can be beneficial for society, through opportunities to advance clinical research in, for instance, chronic illness, consumers of these services may not be aware of these subsequent uses, and their expectations of personal and informational privacy may be violated.[157]

These kinds of violations can have devastating consequences. One can take the recent example of the General Practice Data for Planning and Research (GPDPR), a proposal by England’s National Health Service to create a centralised database of pseudonymised patient data that could be made accessible for researchers and commercial partners.[158] The plan was criticised for failing to alert patients about the use of this data, leading to millions of patients in England opting out of their patient data being accessible for research purposes. As of this publication date, the UK Government has postponed the plan.

Expert participants highlighted that data sharing must be conducted responsibly, aligning with the values and expectations of affected communities, a similar view held by bodies like the UK’s Centre for Data Ethics and Innovation.[159] However, what these values and expectations are, and how to avoid making unwarranted assumptions, is less clear. Recent research suggests that participatory approaches to data stewardship may increase legitimacy of and confidence in the use of data that works for people and society.[160]

Challenge 5: RECs struggle to review potential harms and impacts that arise throughout AI and data science research

REC reviews of AI and data science research are ex ante assessments done before research takes place. However, many of the harms and risks in AI research may only become evident at later stages of the research. Furthermore, many of the types of harms that can arise – such as issues of bias, or wider misuses of AI or data – are challenging for a single committee to predict. This is particularly true with the broader societal impacts of AI research, which require a kind of evaluation and review that RECs currently do not undertake.

Bias and discrimination

Identifying or predicting potential biases, and consequent discrimination, that can arise in datasets and AI models at various stages of development constitutes a significant challenge for the evaluation of AI and data science research. Numerous kinds of bias can arise during data collection, model development and deployment, leading to potentially harmful downstream effects.[161] For example, Buolamwini and Gebru demonstrate that many popular facial recognition systems perform much more poorly on darker skin and non-male identities, due to sampling biases in the population dataset used to train the model.[162] Similarly, numerous studies have shown that predictive algorithms for policing and law enforcement can reproduce societal biases due to choices in their model architecture, design and deployment.[163],[164],[165] In supervised machine learning, manually annotated datasets can harbour bias through problematic application of gender or race categories.[166],[167],[168] In unsupervised machine learning, datasets commonly embody different types of historical bias (because data reflects existing sociotechnical bias in the world), which can result in a lack of demographic diversity and in aggregation biases across populations.[169] Crawford argues that datasets used for model training are asked to capture a very complex world through taxonomies of discrete classifications, an act that requires non-trivial political, cultural and social choices.[170]

Figure 7: How bias can arise in different ways in the AI development lifecycle[171]

This figure uses the example of an AI-based healthcare application to show how bias can arise from patterns in the real world, in the data, in the design of the system, and in its use.

Understanding the ways in which biases can arise at different stages of an AI research project creates a challenge for RECs, which may not have the capacity, time or resources to determine what kinds of biases might arise in a particular project or how they should be evaluated and mitigated. Under current REC guidelines, it may be easier for RECs to challenge researchers on data collection and sampling-bias issues, but questions concerning whether research may be used to create biased or discriminatory outcomes at the point of application are outside the scope of most REC reviews.
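To make such questions concrete, a REC could ask researchers to report performance metrics disaggregated by demographic group. The sketch below is a minimal, hypothetical illustration of that kind of check – the data, group labels and choice of metrics are placeholders, not a prescribed standard.

```python
# Minimal sketch: disaggregated evaluation of a classifier.
# The data, group labels and metrics are hypothetical placeholders.
import numpy as np

def per_group_metrics(y_true, y_pred, groups):
    """Report sample size, accuracy and false-negative rate per group."""
    results = {}
    for g in np.unique(groups):
        mask = groups == g
        accuracy = float(np.mean(y_true[mask] == y_pred[mask]))
        positives = y_true[mask] == 1
        fnr = float("nan")
        if positives.any():
            # share of true positives the model misses within this group
            fnr = float(np.mean(y_pred[mask][positives] == 0))
        results[str(g)] = {"n": int(mask.sum()), "accuracy": accuracy, "fnr": fnr}
    return results

# Hypothetical predictions that are systematically worse for group 'b'
y_true = np.array([1, 0, 1, 1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 1, 1, 0, 0, 0, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(per_group_metrics(y_true, y_pred, groups))
```

Large gaps in accuracy or false-negative rate between groups would prompt questions about sampling and data collection before the research proceeds.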

Data provenance

Workshop participants identified data provenance – how data is originally collected and sourced by researchers – as another major challenge for RECs. The issue becomes especially salient when it comes to international and collaborative projects, which draw on complex networks of datasets. Some datasets may constitute ‘primary’ data – that is, data collected by researchers. Meanwhile, other data may be ‘secondary’, which includes data that is shared, disseminated or made public by others. With secondary data, the underlying purpose for its collection, its accuracy, and any biases embedded at the stage of collection may be unclear.

There is a need for RECs to consider not just where data is sourced from, but also to probe what its intended purposes were, how it has been tested for potential biases that may be baked into a project, and other questions about the ethics of its collection. Some participants said that it is not enough to ask whether a dataset received ethical clearance when collected. One practical tool that might address this is the standardisation of dataset documentation practices by research institutions. For example, institutions could adopt datasheets, which list critical information about how a dataset was collected, whom to contact with questions and what potential ethical issues it may raise.
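One way to standardise this documentation is to make datasheets machine-readable, so a REC can check provenance fields systematically. The sketch below is a hypothetical, simplified record loosely inspired by the ‘datasheets for datasets’ approach; the field names and example values are illustrative assumptions, not a published schema.

```python
# Illustrative sketch of a machine-readable datasheet record.
# Field names and values are hypothetical, loosely following
# 'datasheets for datasets' practice.
from dataclasses import dataclass, field

@dataclass
class Datasheet:
    name: str
    collected_by: str
    collection_method: str          # e.g. survey, web scrape, records extract
    original_purpose: str
    consent_obtained: bool
    known_biases: list = field(default_factory=list)
    ethical_clearance: str = ""     # REC reference number, if any
    contact: str = ""

sheet = Datasheet(
    name="example-clinical-notes-v1",
    collected_by="Example University Hospital",
    collection_method="electronic health records extract",
    original_purpose="clinical audit",
    consent_obtained=False,
    known_biases=["under-representation of patients under 18"],
    ethical_clearance="REC-2021-042",
    contact="data-steward@example.org",
)
print(sheet)
```

Fields such as original_purpose and consent_obtained give reviewers a concrete starting point for the provenance questions raised above.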

Labour practices around data labelling

Another issue flagged by our workshop participants related to considerations for the labour conditions and mental and physical wellbeing of data annotators. Data labellers form part of the backbone of AI and data science research, and include people who review, tag and label data to form a dataset, or evaluate the success of a model. These workers are often recruited from services like Amazon Mechanical Turk (MTurk). Research and data labeller activism has shown that many face exploitative working conditions and underpayment.[172]

According to some workshop participants, it remains unclear whether RECs consider data labellers to be ‘human subjects’ in their reviews, and their wellbeing is not routinely considered. While some institutions maintain MTurk policies, these are often not written from the perspective of workers themselves and may not fully consider the variety of risks that workers face. These can include non-payment for services, or asking workers to undertake too much work in too short a time.[173] Initiatives like the Partnership on AI’s Responsible Sourcing of Data Enrichment Services and the Northwestern Institutional Review Board’s Guidelines for Academic Requesters offer models for how corporate and academic RECs might develop policies.[174]
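As one illustration of a worker-centred policy, institutions can price crowdwork tasks against a target hourly wage rather than per item. This is a minimal sketch under stated assumptions – the wage, task timing and overhead figures are hypothetical placeholders, not recommended values.

```python
# Illustrative sketch: setting a per-task payment that meets a target
# hourly wage. All figures are hypothetical placeholders.
def per_task_payment(target_hourly_wage: float, median_task_minutes: float,
                     unpaid_overhead_minutes: float = 0.5) -> float:
    """Price a task so the median worker earns at least the target wage,
    including unpaid overhead such as reading instructions."""
    total_minutes = median_task_minutes + unpaid_overhead_minutes
    return round(target_hourly_wage * total_minutes / 60, 2)

# e.g. a 2-minute labelling task priced against a 10.90/hour target wage
print(per_task_payment(10.90, 2.0))  # -> 0.45, in the same currency units
```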

Societal and downstream impacts

Several experts noted that standard REC practices can fail to assess the broader societal impacts of AI and data science research, leaving traditionally marginalised population groups disproportionately affected by it. Historically, RECs have played an anticipatory role, with potential risks assessed and addressed at the initial planning stage of the research. The focus on protecting individual research subjects means that RECs generally do not consider potential broader societal impacts, such as long-term harms to communities.[175]

For example, a study using facial recognition technology to determine the sexual orientation of people,[176] or to recognise Uighur minorities in China,[177] poses serious questions about societal benefit and the impacts on marginalised communities – yet the RECs that reviewed these projects did not consider these kinds of questions. Since the datasets used in these projects consisted of images scraped from the internet and curated, the research did not constitute human subjects research, and therefore passed ethics review.

Environmental impacts

The environmental footprint of AI and data science is a further significant impact that our workshop participants highlighted as an area most RECs do not currently review. Some forms of AI research, such as deep learning and multi-agent learning, can be compute-intensive, raising questions about whether their benefits offset the environmental cost.[178] Similar questions have been raised about large language models (LLMs), such as OpenAI’s GPT-3, which rely on intensive computational methods without articulating a clearly defined benefit to society.[179] Our workshop participants noted that RECs could play a role in assessing whether a project’s aims justify computationally intensive methods, and whether a researcher is using the most computationally efficient method of training their model (avoiding unnecessary computational spend). However, there is no existing framework to help RECs make these kinds of determinations, and it is unclear whether many REC members would have the right competencies to evaluate such questions.
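One way RECs could begin to engage with this question is to ask researchers for a rough, order-of-magnitude estimate of a project’s training footprint. The sketch below shows one such back-of-the-envelope calculation; the power draw, data-centre overhead (PUE) and grid carbon-intensity figures are hypothetical placeholders that would need to be replaced with measured values.

```python
# Illustrative sketch: rough training-footprint estimate a REC could ask for.
# All figures are hypothetical placeholders; real estimates should use
# measured power draw and the local grid's carbon intensity.
def training_footprint_kg_co2(gpu_count: int, gpu_power_watts: float,
                              hours: float, pue: float = 1.5,
                              grid_kg_co2_per_kwh: float = 0.23) -> float:
    """Energy (kWh) scaled by data-centre overhead, then by grid intensity."""
    energy_kwh = gpu_count * gpu_power_watts * hours * pue / 1000
    return energy_kwh * grid_kg_co2_per_kwh

# e.g. 8 GPUs at 300 W running continuously for two weeks
print(round(training_footprint_kg_co2(8, 300, 24 * 14), 1))  # -> 278.2 kg CO2
```

Even a crude estimate like this gives a committee something concrete to weigh against a project’s stated benefits.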

Considerations of ‘legitimate research’

Workshop participants discussed whether RECs are well suited to determine what constitutes ‘legitimate research’. For example, some participants raised questions about the intellectual proximity of some AI research to discredited forms of pseudoscience like phrenology, citing AI research based on flawed assumptions about race and gender – a point raised in empirical research evaluating the use of AI benchmark datasets.[180] AI and data science research regularly involves the categorisation of data subjects into particular groups, which may involve crude assumptions that can nonetheless lead to severe population-level consequences. These ‘hidden decisions’ are often baked into a dataset and, once it is shared, can remain unchallenged for long periods. To give one example, the MIT Tiny Images dataset, first created in 2006, was withdrawn in 2020 after it was discovered to include racist and sexist categorisations of images of minoritised people and women.[181] This dataset has been used to train a range of subsequent models and may still be in use today, given the ability to download and repost datasets without documentation explaining their limitations. Several participants noted that RECs are not set up to identify, let alone assess, these kinds of issues, and may consider defining ‘good science’ to be out of their remit.

A lack of incentives for researchers to consider broader societal impacts

Another point of discussion in the workshops was how to incentivise researchers to consider broader societal impact questions. Researchers are usually incentivised and rewarded by producing novel and innovative work, evidenced by publications in relevant scientific journals or conferences. Often, this involves researchers making broad statements about how AI or data science research can have positive implications for society, yet there is little incentive for researchers to consider potentially harmful impacts of their work.

Some of the expert participants pointed out that other actors in the research ecosystem, such as funders, could help to incentivise researchers to reflexively consider and document the potential broader societal impacts of their work. Stanford University’s Ethics and Society Review, for example, requires researchers seeking funding from the Stanford Institute for Human-Centered Artificial Intelligence to write an impact statement reflecting on how their proposal might create negative societal impacts and how they can mitigate them, and to work with an interdisciplinary faculty panel to ensure those concerns are addressed before funding is received. Participants in this programme overwhelmingly described it as positive for their research and training experience.[182]

A more ambitious proposal from some workshop participants was to go beyond a risk-mitigation plan and incentivise research that benefits society. However, conceptualisations of social, societal or public good are contested at best – there is no universally agreed-upon theory of what these are.[183] There are also questions about who is included in ‘society’, and whether some benefits for those in positions of power would actively harm other, disadvantaged members of society.

AI and data science research communities have not yet developed a rigorous method for deeply considering what constitutes public benefit, or a rigorous methodology for assessing the long-term impact of AI and data science interventions. Determining what constitutes the ‘public good’ or ‘public benefit’ would, at the very least, require some form of public consultation; even then, it may not be sufficient.[184]

One participant noted that it is difficult in some AI and data science research projects to consider these impacts, particularly projects aimed at theory-level problems or small step-change advances in efficiency (for example, research that produces a more efficient and less computationally intensive method for training an image-detection model). This dovetails with concerns raised by some in the AI and data science research community that there is too great a focus on creating novel methods for AI research instead of applying research to address real-world problems.[185]

Workshop participants raised a similar concern about AI and data science research that is conducted without any clear rationale for addressing societal problems. Participants used the metaphor of a ‘fishing expedition’ to describe some types of AI and data science research projects that have no clear aim or objective but seek to explore large datasets to see what can be found. As one workshop participant put it, researchers should always be aware that, just because data can be collected, or is already available, it does not mean that it should be collected or used for any purpose.

Challenge 6: Corporate RECs lack transparency in relation to their processes

Some participants noted that, while corporate lab reviews may be more extensive, they can also be more opaque, and are at risk of being driven by interests beyond research ethics, including whether research poses a reputational risk to the company if published. Moss and Metcalf note how ethics practices in Silicon Valley technology companies are often chiefly concerned with questions of corporate values and legal risk and compliance, and do not systematically address broader issues such as questions around moral, social and racial justice.[186] While corporate ethics reviewers draw on a variety of guidelines and frameworks, they may not address ongoing harms, evaluate these harms outside of the corporate context, or evaluate organisational behaviours and internal incentive structures.[187] It is worth noting that academic RECs have faced a similar criticism. Recent research has documented how academic REC decisions can be driven by a reputational interest to avoid ‘embarrassment’ of the institution.[188]

Several of our participants highlighted the relative lack of external transparency of corporate REC processes versus academic ones. This lack of transparency can make it challenging for other members of the research community to trust that corporate research review practices are sufficient.

Google, for example, launched a ‘sensitive topics’ review process in 2020 that asks researchers to run their work through legal, policy and public relations teams if it relates to certain topics like face and sentiment analysis or categorisations of race, gender or political affiliation.[189] According to the policy, ‘advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues.’ In at least three reported instances, researchers were told to ‘strike a more positive tone’ and to remove references to Google products, raising concerns about the credibility of findings. In one notable example that became public in 2021, a Google ethical AI researcher was fired from their role after being told that a research paper they had written, which was critical of the use of large language models (a core component in Google’s search engine), could not be published under this policy.[190]

Recommendations

We conclude this paper with a set of eight recommendations, organised into sections aimed primarily at three groups of stakeholders in the research ethics ecosystem:

  1. Academic and corporate Research Ethics Committees (RECs) evaluating AI and data science research.
  2. Academic and corporate AI and data science research institutions.
  3. Funders, conference organisers, journal editors, and other actors in the wider AI and data science research ecosystems.

For academic and corporate RECs

Recommendation 1: Incorporate broader societal impact statements from researchers

The problem

Broader societal impacts of AI and data science research are not currently considered by RECs. These might include ‘dual-use’ research (research that can be used for both civilian and military purposes), possible harms to society or the environment, and the potential for discrimination against marginalised populations. Instead, RECs focus their reviews on questions of research methodology. Several workshop participants noted that there are few incentives for researchers to reflexively consider questions of societal impact. Workshop participants also noted that institutions do not offer any framework for RECs to follow, or training or guidance for researchers. Broader societal impact statements can ensure researchers reflect on, and document, the full range of potential harms, risks and benefits their work may pose.

Recommendations

Researchers should be required to undertake an evaluation of broader societal impact as part of their ethics evaluation.
This would be an impact statement that includes a summary of the positive and negative impacts on society they anticipate from their research. It should cover any known limitations or risks of misuse that may arise, such as whether the research findings are premised on assumptions particular to a geographic region, or whether the findings could be used to exacerbate certain forms of societal injustice.

Training should be designed and implemented for researchers to adequately conduct stakeholder and impact assessment evaluations, as a precondition for receiving funding or ethics approval.[191]
These exercises should encourage researchers to consider the intended uses of their innovations and reflect on what kinds of unintended uses might arise. The results of these assessments can be included in research ethics documentation that reports both the researchers’ reflections on discursive questions inviting open-ended opinion (such as what the intended use of the research may be) and categorical information listing objective statistics and data about the project (such as the datasets that will be used, or the methods that will be applied). Some academic institutions are experimenting with this approach for research ethics applications.

Examples of good practice
Recent research from Microsoft provides a structured exercise for how researchers can consider, document and communicate potential broader societal impacts, including who the affected stakeholders are in their work, and what limitations and potential benefits it may have.[192]

Methods for impact assessment of algorithmic systems have emerged from the domains of human rights, environmental studies and data protection law. These methods are not necessarily standardised or consistent, but they seek to encourage researchers to reflect on the impacts of their work. Some examples include the use of algorithmic impact assessments in healthcare settings,[193] and in public sector uses of algorithmic systems in the Netherlands and Canada.[194]

In 2021, Stanford University tested an Ethics and Society Review board (ESR), which sought to supplement the role of its Institutional Review Board. The ESR requires researchers seeking funding from the Stanford Institute for Human-Centered Artificial Intelligence to consider negative ethical and societal risks arising from their proposal, develop measures to mitigate those risks, and collaborate with an interdisciplinary faculty panel to ensure concerns are addressed before funds are disbursed.[195] A pilot study of 41 submissions to this panel found that ‘58% of submitters felt that it had influenced the design of their research project, 100% are willing to continue submitting future projects to the ESR,’ and that submitting researchers sought additional training and scaffolding about societal risks and impacts.[196]

Figure 8: Stanford University Ethics and Society Review (ESR) process[197]

Understanding the potential impacts of AI and data science research can ensure researchers produce technologies that are fit for purpose and well-suited for the task at hand. The successful development and integration of an AI-powered sepsis diagnostic tool in a hospital in the USA offers an example of how researchers worked with key stakeholders to develop and design a life-changing product. Researchers on this project relied on continuous engagement with stakeholders in the hospital, including nurses, doctors and other staff members, to determine how the system could meet their needs.[198] By understanding these needs, the research team were able to tailor the final product so that it fitted smoothly within the existing practices and procedures of this hospital.

Open questions

There are several open questions on the use of broader societal impact statements. One is whether these statements should be a basis for a REC rejecting a research proposal. This was a major point of disagreement among our workshop participants. Some participants pushed back on the idea, out of concern that research institutions should not be in the position of determining what research is appropriate or inappropriate based on potential societal impacts, and that this may cause researchers to view RECs as a policing body for issues that have not yet occurred. Instead, these participants suggested a softer approach, whereby RECs require researchers to draft a broader societal impact statement but are not required to evaluate the substance of those statements. Other participants noted that these impact assessments would be likely to highlight clear cases where the societal risks are too great, and that RECs should incorporate these considerations into their final decisions.

Another consideration related to whether a broader societal impacts evaluation should involve some aspect of ex post reviews of research, in which research institutions monitor the actual impacts of published research. This process would require significant resourcing. While there is no standard method for conducting these kinds of reviews yet, some researchers in the health field have called for this kind of ex post review conducted by an interdisciplinary committee of academics and stakeholders.[199]

Lastly, some workshop participants questioned whether a more holistic ethics review process could be broken up into parts handled by different sub-committees. For example, could questions of data ethics – how data should be handled, processed and stored, and which datasets are appropriate for researchers to use – have their own dedicated process or sub-committee? This sub-committee would need to adopt clear principles and set expectations with researchers for specific data ethics practices, and could also address the evolving dynamic between researcher and participants.

There was a suggestion that more input from data subjects could help, with a focus on how they can, and whether they should, benefit from the research, and whether this would therefore constitute a different type or segment of ethical review. Participants mentioned the need for researchers to think relationally: to understand who the data subject is and the power dynamics at play, and to work out the best way of involving research participants in the analysis and dissemination of findings.

Recommendation 2: RECs should adopt multi-stage ethics review processes for AI and data science research

The problem

Ethical and societal risks of AI and data science research can manifest at different stages of research[200] – from early ideation, to data collection, to pre-publication. Assessing the ethical and broader societal impacts of AI research can be difficult, as the results of data-driven research cannot be known in advance of accessing and processing data or building machine learning (ML) models. Typically, RECs review research applications only once, before research begins, with a narrow focus on ethical issues pertaining to methodology. This can mean that ethics review processes fail to catch risks that arise at later stages, such as environmental or privacy harms once research is published – particularly for research that is ‘high risk’ because it pertains to protected characteristics or has high potential for societal impact.

Recommendations

RECs should set up multi-stage and continuous ethics reviews, particularly for ‘high-risk’ AI research

RECs should experiment with requiring multiple stages of evaluation for research that raises particular ethical concern, such as an evaluation at the point of data collection and a separate evaluation at the point of publication. Ethics review processes should engage with considerations raised at all stages of the research lifecycle. RECs must move away from being the ‘owners’ of ethical thinking towards being stewards who guide researchers through the review process.

This means challenging the notion of ethical review as a one-off exercise conducted at the start of a project, and instead shifting the approach of the REC and the ethics review process towards one that embeds ethical reflection throughout a project. This will benefit from more iterative ethics review processes, as well as additional interdisciplinary training for AI and data science researchers.

Several workshop participants suggested that multi-stage ethics review could consist of a combination of formal and informal review processes. Formal review processes could sit at the early and late stages, such as funding or publication, while at other points the research team could be asked to engage in more informal peer reviews or discussions with experts or reviewers. In the early stages of the project, milestones could be identified, defined by the research teams in collaboration with RECs. A milestone could be, for example, a grant submission, a change in roles, or the addition of new research partners to the project, and could be used to trigger an interim review. Rather than following a standardised approach, this model allows for flexibility, as the milestones would differ for each project. It could also involve a tiered assessment: a standardised assessment, based on the identified risks a research project poses, which then determines the milestones.

Building on Burr & Leslie,[201] we can speak of four broad stages in an AI or data science research project: design, development, pre-publication and post-deployment.

At the stage of designing a research project, policies and resources should be in place to:

  • Ensure new funders and potential partnerships adhere to an ethical framework. Beyond legal due diligence, this is about establishing partnerships on the basis of their values and a project’s goals.
  • Implement scoping policies that establish whether a particular research project must undertake REC processes. Two ways are suggested in the literature for such policies, and examining each organisation’s research and capability will help decide which is most suitable:
    • Sandler et al suggest a consultation process whereby RECs produce either ‘an Ethical Issues Profile report or a judgment that there are not substantive ethical issues raised’.[202]
    • The UK Statistics Authority employs an ethics self-assessment tool that determines a project’s level of risk (a minimal sketch of such a tool follows this list).[203]
  • Additionally, scoping processes can establish whether a project must undertake data, stakeholder, human rights or other impact assessments that focus on the broader societal impacts of its work (see Recommendation 1). Stanford’s Ethics and Society Review offers one model for how institutions can set more ‘carrots and sticks’ for researchers to engage reflexively with the potential broader impacts of their research, by tying the completion of a societal impact statement to their funding proposal.
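As a minimal sketch of what a scoping self-assessment might look like, the snippet below scores a handful of yes/no questions and maps the total to a review tier. The questions, weights and thresholds are hypothetical placeholders, not the UK Statistics Authority’s actual instrument.

```python
# Minimal sketch of an ethics self-assessment triage tool.
# Questions, weights and thresholds are hypothetical placeholders.
QUESTIONS = {
    "uses_sensitive_attributes": 3,   # e.g. health, ethnicity, religion
    "secondary_data_without_consent": 2,
    "identifiable_subjects": 3,
    "public_deployment_intended": 2,
    "automated_decisions_about_people": 3,
}

def triage(answers: dict) -> str:
    """Map yes/no answers to a review tier."""
    score = sum(weight for q, weight in QUESTIONS.items() if answers.get(q))
    if score >= 6:
        return "full REC review"
    if score >= 3:
        return "light-touch review"
    return "self-certification"

print(triage({"uses_sensitive_attributes": True, "identifiable_subjects": True}))
# -> 'full REC review'
```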

At the development stage of a project, a typical REC evaluation should be undertaken to consider any ethical risks. RECs should provide a point of contact to ensure that changes in the project’s aims and methods that raise new challenges are subjected to due reflection. This ensures an iterative process that aligns with the practicalities of research. RECs may also experiment with creating specialised sub-committees that address different issues, such as a separate data ethics review board that includes data ethics and domain-specific expertise, or a health data or social media data review board. Such a board could help evaluate potential impacts on people and society; depending on its composition, it could also be adept at reviewing the technical aspects of a research project.[204] This idea builds on a hybrid review mechanism that Ferretti et al propose, which merges aspects of the traditional model of RECs with specialised research committees that assess particular parts of a research project.[205]

One question RECs must settle in practice is which projects should undergo particular REC processes, as it may be too burdensome for all projects to face this scrutiny. In some cases, a REC may determine that a project should undergo stricter scrutiny if an analysis of its potential impacts on various stakeholders highlights serious ethical issues. Whether or not a project is ‘in scope’ for a more substantial REC review process might depend on:

  • the level of risk it raises
  • the training or any certifications its researchers hold
  • whether it is reviewed by a relevant partner’s REC.

Determining what qualifies as a risk is challenging, as not all risks may be evident or within the imagination of a REC. More top-level guidance on risks (see Recommendation 4) and interdisciplinary and experiential membership on RECs (see Recommendation 3) can help ensure that a wider scope of AI risks is identified.

At the stage of pre-publication of a research project, RECs should encourage researchers to revisit the ethical and broader societal impact considerations that may have arisen earlier. In light of the research findings, have these changed at all? Have new risks arisen? At this stage, REC members can act as stewards to help researchers navigate publication requirements, which may include filling in the broader societal impact statements that some AI and ML conferences are beginning to implement. They might also connect researchers with subject-matter experts in particular domains, who can help them understand potential ethical risks with their research. Finally, RECs may be able to provide guidance on how to release research responsibly, including whether to release publicly a dataset or code that may be used to cause harm.

Lastly, RECs and research institutions should experiment with post-publication evaluations of the impacts of research. RECs could, for example, take a pool of research submissions that involved significant ethical review and analyse how that work was received 2–3 years down the line. Criteria for this assessment could include how the work was received by the media or press, who has cited it subsequently, and whether anticipated negative or positive impacts came to fruition.

Figure 9: Example of multi-stage ethics review process

This figure shows what a multi-stage ethics review process could look like. It involves an initial self-assessment of broader impact issues at the design stage, a REC review (and potential review by a specialised data ethics board) at the development stage, another review of high-risk research at the pre-publication stage, and a potential post-publication review of the research 2–3 years after it is published.

Examples of good practice

As explored above, there is not yet consensus on how to operationalise a continuous, multi-stage ethics review process, but there is an emerging body of work addressing ethics considerations at different stages of a project’s lifecycle. Building on academic research,[206] the UK’s Centre for Data Ethics and Innovation has proposed an ‘AI assurance’ framework for continuously testing the potential risks of AI systems. This framework involves the use of different mechanisms, like audits, testing and evaluation, at different stages of an AI product’s lifecycle.[207] However, the framework is focused on AI products rather than research, and further work would be needed to adapt it for research.

D’Aquin et al propose an ethics-by-design methodology for AI and data science research that takes a broader view of data ethics.[208] Assessment usually happens at the research design and planning stage, and there are no incentives for the researcher to consider ethical issues as they emerge over the course of the research. Instead, consideration of emerging ethical risks should be ongoing.[209] A few academic and corporate research institutions, such as the Alan Turing Institute, have already introduced or are in the process of implementing continuous ethics review processes (see Appendix 2). Further research is required to study how these work in practice.

Open questions

A multi-stage research review process should capture more of the ethical issues that arise in AI research, and enable RECs to evaluate the ex post impacts of research. However, continuous, multi-stage reviews require a substantial increase in resources, and so are an option only for institutions that are ready to invest in ethics practices. These proposals could require multiples of the current time commitments of REC members and officers, and therefore greater compensation for REC members.

The prospect of implementing a multi-stage review process raises further questions of scope, remit and role of ethics reviews. Informal reviews spread over time could see REC members take more of an advisory role than in the compliance-oriented models of the status quo, allowing researchers to informally check in with ethics experts, to discuss emerging issues and the best way to approach them. Dove argues that the role of RECs is to operate as regulatory stewards, who guide researchers through the review process.[210] To do this, RECs should establish communication channels for researchers to get in touch and interact. However, Ferretti et al warn there is a risk that ethics oversight might become inefficient if different committees overlap, or if procedures become confusing and duplicated. It would also be challenging to bring together different ethical values and priorities across a range of stakeholders, so this change needs sustaining over the long term.[211]

Recommendation 3: Include interdisciplinary expertise in REC membership

The problem

The make-up and scope of a REC review came up repeatedly in our workshops and literature reviews, with considerable concern raised about how RECs can accurately capture the wide array of ethical challenges posed by different kinds of AI and data science research. There was wide agreement within our workshops on the importance of ensuring that different fields of expertise have their voices heard in the REC process, and that the make-up of RECs should reflect a diversity of backgrounds.

Recommendations

RECs must include more interdisciplinary expertise in their membership

In recruiting new members, RECs should draw on members from research and professional fields beyond computer science, such as the social sciences, humanities and other STEM fields. With these different disciplines present, each can bring a different ethical lens to the challenges a project may raise. RECs might also consider including members who work in legal, communications or marketing teams, to ensure that the concerns raised speak to a wider audience and respond to broader institutional contexts. Interdisciplinarity involves the development of a common language, a reflective stance towards research, and a critical perspective towards science.[212] If this expertise is not present at an institution, RECs could make greater use of external experts for specific questions that arise from data science research.[213]

RECs must include individuals with different experiential expertise

RECs must also seek to include members who represent different forms of experiential expertise, including individuals from historically marginalised groups whose perspectives are often not represented in these settings. This both brings more diverse experiences into discussions about data science and AI research outputs, and helps ensure those outputs meet the values of a culturally rich and heterogeneous society.

Crucially, the mere representation of a diversity of viewpoints is not enough to ensure the successful integration of those views into REC decisions. Members must feel empowered to share their concerns and be heard, and careful attention must be paid to the power dynamics that underlie how decisions are made within a REC. Mechanisms for ensuring more transparent and ethical decision-making practices are an area of future research worth pursuing.

In terms of the composition of RECs, Ferretti et al suggest that these should become more diverse and include members of the public and research subjects or communities affected by the research.[214] Besides the public, members from inside an institution should also be selected to achieve a multi-disciplinary composition of the board.

Examples of good practice

One notable example is the SAIL (Secure Anonymised Information Linkage) Databank, a Wales-wide research databank with approximately 30 billion records of individual-level population data. Requests to access the databank are reviewed by an Information Governance Review Panel, which includes representatives from public health agencies, clinicians and members of the public who may be affected by uses of this data. More information on SAIL can be found in Appendix 2.

Open questions

Increasing experiential and subject-matter expertise in AI and data science research reviews will hopefully lead to more holistic evaluations of the kinds of risks that may arise, particularly given the wide range of societal applications of AI and data science research. However, members of the public and external experts must be fairly compensated for their expertise, and the impact of more diverse representation on these boards should be the subject of future study and evaluation.

Figure 10: The potential make-up of an AI/data science ethics committee[215]

For academic/corporate research institutions

Recommendation 4: Create internal training and knowledge-sharing hubs for researchers and REC members, and encourage more cross-institutional learning

The problem

A recurring concern raised by members of our workshops was a lack of shared resources to help RECs address common ethical issues in their research. This was coupled with a lack of transparency and openness of decision-making in many modern RECs, particularly in some corporate institutions where publication review processes can feel opaque to researchers. When REC processes and decisions are enacted behind closed doors, it becomes challenging to disseminate lessons learned to other institutions and researchers. It also encourages researchers to view a REC as a ‘compliance’ body, rather than a resource for seeking advice and guidance. Several workshop participants noted that shared resources and training could help REC members, staff and students better address these issues.

Recommendations

Research institutions should create institutional training and knowledge-sharing hubs

These hubs can serve five core functions:

1. Pooling shared resources on common AI and data science ethics challenges for students, staff and REC members to use.

The repository can compile resources, news articles and literature on ethical risks and impacts of AI systems, tagged and searchable by research type, risk or topic. These can prompt reflection on research ethics by providing students and staff with current, real-world examples of these risks in practice.

The hub could also provide a list of ‘banned’ or problematic datasets that staff or students should not use (a minimal sketch of such a check appears after this list of functions). This could help address concerns around datasets that are collected without underlying consent from research subjects, and which are commonly used as ‘benchmark’ datasets. The DukeMTMC dataset of videos recorded on Duke University’s campus, for example, continues to be used by computer vision researchers in papers, despite having been removed by Duke due to ethical concerns. Similar efforts to create lists of problematic datasets are underway at some major AI and ML conferences, and some of our workshop participants suggested that some research institutions already maintain such lists.

2. Providing hypothetical or actual case studies of previous REC submissions and decisions to give a sense of the kinds of issues others are facing.

Training hubs could include repositories of previous applications that have been scrutinised and approved by the pertinent REC, which form a body of case studies that can inform both REC policies and individual researchers. Given the fast pace of AI and data science research, RECs can often encounter novel ethical questions. By logging past approved projects and making them available to all REC members, RECs can ensure consistency in their decisions about new projects.

We suggest that logged applications also be made available to the institution’s researchers for their own preparation when undertaking the REC process. Making applications available must be done with the permission of the relevant project manager or principal investigator, where necessary. To support the creation of these repositories, we have developed a resource consisting of six hypothetical AI and data science REC submissions that can be used for training purposes.[216]

3. Listing the institutional policies and guidance developed by the REC, such as policies outlining the research review process, self-assessment tools and societal impact assessments (see Recommendation 1).

By including a full list of its policies, a hub can foster dialogue between different processes within a research institution. Documentation from across the organisation can be shared and framed in terms of its importance for pursuing thoughtful and responsible research.

In addition to institutional guidelines, we suggest training hubs include national, international or professional society guidelines that may govern specific kinds of research. For example, researchers seeking to advance healthcare technologies in the UK should ensure compliance with relevant Department of Health and Social Care guidelines, such as their guidelines for good practice for digital and data-driven health technologies.[217]

4. Providing a repository of external experts in subject-matter domains who researchers and REC members can consult with.

This would include a curated list of subject-matter experts in specific domains that students, staff and REC members can consult. It might include contact details for experts in subjects like data protection law or algorithmic bias, within or outside the institution, and may extend to lived-experience experts and civil society organisations who can reflect societal concerns and the potential impacts of a technology.

5. Signposting to other pertinent institutional policies (such as compliance, data privacy, diversity and inclusion).

By listing policies and resources on data management, sharing, access and privacy, training hubs could ensure researchers have more resources and training on how to properly manage and steward the data they use. Numerous frameworks are readily available online, such as the FAIR Principles,[218] promoting findability, accessibility, interoperability and reuse of digital assets; and DCC’s compilation of metadata standards for different research fields.[219]

Hubs could also include the institution’s policies on data-labelling practices (if such policies exist). Several academic institutions have developed policies regarding MTurk workers covering fair pay, communication and acknowledgment.[220],[221] Some resources have even been co-written with input directly from MTurk workers. These resources vary from institution to institution, and there is a need for UK Research and Innovation (UKRI) and other national research bodies to codify these requirements into practical guidance for research institutions. One resource we suggest RECs tap into is the know-how and policies of human resources departments: most large institutions and companies will already have pay and reward schemes in place. Data labellers and annotators must have access to the same protections as other legally defined positions.

The hub can also host or link to forums or similar communication channels that encourage informal peer-to-peer discussions. All staff should be welcomed into such spaces.
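The ‘banned or problematic datasets’ function described above lends itself to a programmatic check that could run as part of an ethics self-assessment. The sketch below is a hypothetical illustration, assuming an institution maintains its own blocklist; the dataset names, reasons and lookup logic are placeholders, not an existing service.

```python
# Illustrative sketch: checking proposed datasets against an institutional
# blocklist. Names and reasons are hypothetical placeholders.
BLOCKLIST = {
    "dukemtmc": "withdrawn by its curators over consent concerns",
    "tiny-images": "withdrawn after offensive labels were discovered",
}

def check_datasets(proposed: list) -> list:
    """Return (dataset, reason) pairs for any blocklisted datasets."""
    return [(name, BLOCKLIST[name.lower()]) for name in proposed
            if name.lower() in BLOCKLIST]

print(check_datasets(["DukeMTMC", "example-open-corpus"]))
# -> [('DukeMTMC', 'withdrawn by its curators over consent concerns')]
```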

Examples of good practice

There are some existing examples of shared databases of AI ethics issues, including the Partnership on AI’s AI Incident Database and Charlie Pownall’s AI, Algorithmic, and Automation Incidents and Controversies Database. These databases compile news reports of instances of AI risks and ethics issues and make them searchable by type and function.[222],[223]

The Alan Turing Institute’s Turing Way offers an excellent example of a research institution creating shared resources for training and research ethics issues. For more information on the Turing Way, see Appendix 2.

Open questions

One pertinent question is whether these hubs should exist at the institutional or national level. Training hubs could start at the institutional level in the UK, and over time could connect to a shared resource managed by a centralised body like UKRI. It may be easier to start at the institutional level with repositories of relevant documentation, and spaces that foster dialogue among an institution’s workforce. An international hub could help RECs coordinate with one another and external stakeholders through international and cross-institutional platforms, and explore the opportunity of inter-institutional review standards and/or ethics review processing. We suggest that training hubs be made publicly accessible and open to other institutions, and that they are regularly reviewed and updated as appropriate.

Recommendation 5: Corporate labs must be more transparent about their decision-making and do more to engage with external partners

The problem

Several of our workshop participants noted that corporate RECs face particular opportunities and challenges in reviews of AI and data science research. Members of corporate RECs and research institutions shared that they are likely to have more resources to undertake ethical reviews than public labs, and several noted that these reviews often come at various stages of a project’s lifecycle, including near publication.

However, there are serious concerns around a lack of internal and external transparency in how some corporate RECs make their decisions. Some researchers within these institutions have said they are unable to tell what kind of work is acceptable or unacceptable, and there are reports of some companies changing research findings for reputational reasons. Some participants claimed that corporate labs can be more risk averse when it comes to seeking external stakeholder feedback, due to privacy and trade secret concerns. Finally, corporate RECs are made up of members of that institution, and do not reflect experiential or disciplinary expertise outside the company. Several interview and workshop participants noted that corporate RECs often do not consult with external experts on research ethics or broader societal impact issues, choosing instead to keep such deliberations in house.

Recommendations

Corporate labs must publicly release their ethical review criteria and process

To address concerns around transparency, corporate RECs should publicly release details on their REC review processes, including what criteria they evaluate for and how decisions are made. This is crucial for public-private research collaborations, which risk the findings of public institutions being censored for private reputational concerns, and for internal researchers to know what ethical considerations they should factor into their research. Corporate RECs should also commit to releasing transparency reports citing how many research studies they have rejected, amended and approved, on what grounds, and some example case studies (even if hypothetical) exploring the reasons why.

Corporate labs should consult with external experts on their research ethics reviews, and ideally include external and experiential experts as members of their ethics review boards

Given their research may have significant impacts on people and society, corporate labs must ensure their research ethics review boards include individuals who sit outside the company and reflect a range of experiential and disciplinary expertise. Not including this expertise will mean that corporate labs lack meaningful evaluations of the risks their research can pose. To complement their board membership, corporate labs should also consult more regularly on ethics issues with external experts to understand the impact of their research on different communities, disciplines and sectors.

Examples of good practice

In a blog post from 2022, the AI research company DeepMind explained how their ethical principles applied to their evaluation of a specific research project relating to the use of AI for protein folding.[224] In this post, DeepMind stated they had engaged with more than 30 experts outside of the organisation to understand what kinds of challenges their research might pose, and how they might release their research responsibly. This offers a model of how private research labs might consult with external expertise, and could be replicated as a standard for DeepMind and other companies’ activities.

In our research, we did not identify any corporate AI or data science research lab that has released their policies and criteria for ethical review. We also did not identify any examples of corporate labs that have experiential experts or external experts on their research ethics review boards.

Open questions

Some participants noted that it can be difficult for corporate RECs to be more transparent due to concerns around trade secrets and competition – if a company releases details on its research agenda, competitors may use this information for their own gain. One option suggested by our workshop participants is to engage in questions around research practices and broader societal impacts with external stakeholders at a higher level of abstraction that avoids getting into confidential internal details. Initiatives like the Partnership on AI seek to create a forum where corporate labs can more openly discuss common challenges and seek feedback in semi-private ways. However, corporate labs must engage in these conversations with some level of accountability. Reporting what actions they are taking as a result of those stakeholder engagements is one way to demonstrate how these engagements are leading to meaningful change.

For funders, conference organisers and other actors in the research ecosystem

Recommendation 6: Develop standardised principles and guidance for AI and data science research ethics

The problem

A major challenge observed by our workshop participants is that RECs often produce inconsistent decisions, due to a lack of widely accepted frameworks or principles that deal specifically with AI and data science research ethics issues. Institutions that are ready to update their processes and standards are left to take their own risks in choosing how to draft new rules. In the literature, a plethora of principles, frameworks and guidance around AI ethics has started to converge on principles like transparency, justice, fairness, non-maleficence, responsibility and privacy.[225] However, there has yet to be a global effort to translate these principles into AI research ethics practices, or to determine how ethical principles should be interpreted or operationalised by research institutions.[226] This requires considering diverse interpretations and understandings of ethics from regions other than Western societies, which have so far not adequately featured in this debate.

Recommendations

UK policymakers should engage in a multi-stakeholder international effort to develop a ‘Belmont 2.0’ that translates AI ethics principles into specific guidelines for AI and data science research.

There is a significant need for a centralised body, such as the OECD, Global Partnership on AI or other international body to lead a multinational and inclusive effort to develop more consistent ethical guidance for RECs to use with AI and data science research. The UK must take a lead on this and use its position in these bodies to call for the development of a ‘Belmont 2.0’ for AI and data science.[227] This effort must involve representatives from all nations and avoid the pitfalls of previous research ethics principle developments that have overly favoured Western conceptions of ethics and principles. This effort should seek to define a minimum global standard of research ethics assessment that is flexible, responsive to and considerate of local circumstances.

By engaging in a multinational effort, UK national research ethics bodies like the UK Research Integrity Office (UKRIO) can develop more consistent guidance for UK academic RECs to address common challenges. This could include standardised training on broader societal impact issues, bias and consent challenges, privacy and identifiability issues, and other questions relating to research integrity, research ethics and broader societal impact considerations.

We believe that UKRIO can also help standardise REC practice by developing common guidance for public-private AI research partnerships and consistent guidance for academic RECs. A substantial amount of AI research involves public-private partnerships, and common guidance could include specific support for core language around intellectual property concerns and data privacy issues.

Examples of good practice

There are some existing cross-national associations of RECs that jointly draft guidance documents or conduct training programmes. The European Network of Research Ethics Committees (EUREC) is one such example, though others could be created for other regions, or specifically for RECs that evaluate AI and data science research.[228]

With respect to laws and regulations, experts observe a gap in the regulation of AI and data science research. For example, the General Data Protection Regulation (GDPR) does provide some guidance for how European research institutions should collect, handle and use data for research purposes, though our participants noted this guidance has been interpreted by different institutions and researchers in widely different ways, leading to legal uncertainty.[229] And while the UK Information Commissioner’s Office (ICO) has published guidance on AI and data protection,[230] it does not offer specific guidance for AI and data science researchers.

Open questions

It is important to note that standardised principles for AI research are not a silver bullet. Significant challenges will remain in the implementation of these principles. Furthermore, as the history of biomedical research ethics principle development has shown, it will be essential for a body or network of bodies with global legitimacy and authority to steer the development of these principles, and to ensure that they accurately reflect the needs of regions and communities that are traditionally underrepresented in AI and data science research.

Recommendation 7: Incentivise a responsible research culture

The problem

RECs form one part of the research ethics ecosystem: a complex matrix of responsibility shared and supported by other actors, including funding bodies, conference organisers, journal editors and researchers themselves.[231] In our workshops, one of the challenges participants highlighted was a lack of strong incentives in this ecosystem to consider ethical issues. In some cases, considering ethical risks may not be rewarded or valued by journals, funders or conference organisers. Given the ethical issues that AI and data science research can raise, it is essential for these different actors to align their incentives and encourage researchers to reflect on and document the societal impacts of their research.

Recommendations

Conference organisers, funders, journal editors and other actors in the research ecosystem must incentivise and reward ethical reflection

Different actors in the research ecosystem can encourage a culture of ethical behaviour. Funders, for example, can require researchers to produce a broader societal impact statement in order to receive a grant. Conference organisers and journal editors can put similar requirements in place for submissions, and reward papers that exemplify strong ethical consideration. Publishers, for example, could evaluate broader societal impact questions in addition to research integrity issues.[232] By creating incentives throughout the research ecosystem, ethical reflection can become more desirable and rewarded.

Examples of good practice

Some AI and data science conference organisers are putting in place measures to incentivise researchers to consider the broader societal impacts of their research. The 2020 NeurIPS conference, one of the largest AI and machine learning conferences in the world, required submissions to include a statement reflecting on broader societal impact, and created guidance for researchers to complete this.[233] The conference had a set of reviewers who specifically evaluated these impact statements. The use of these statements led to some controversy, with some researchers suggesting they could lead to a chilling effect on particular types of research, and others pointing to the difficulty of producing these kinds of impact assessments for more theoretical forms of AI research.[234] As of 2022, the NeurIPS conference has included these statements as part of its checklist of expectations for submission.[235] In a 2022 report, the Ada Lovelace Institute, CIFAR and the Partnership on AI identified several measures that AI conference organisers could take to incentivise a culture of ethical reflection.[236]

There are also proposals underway for funders to include these considerations. Gardner and colleagues recommend that grant funding and public tendering of AI systems require a ‘Trustworthy AI Statement’.[237]

Open questions

Enabling a stronger culture of ethical reflection and consideration in the AI and data science research ecosystem will require funding and resources. Reviewers of AI and data science research papers for conferences and journals already face a tough task; this work is voluntary and unpaid, and these reviewers often lack clear standards or principles to review against. We believe more training and support will be needed to ensure this recommendation can be successfully implemented.

Recommendation 8: Increase funding and resources for ethical reviews of AI and data science research

The problem

RECs face significant operational challenges around compensating their members for their time, providing timely feedback, and maintaining the necessary forms of expertise on their boards. A major challenge is the lack of resources that RECs face, and their reliance on voluntary and unpaid labour from institutional staff.

Recommendations

As part of their R&D strategy, UK policymakers must earmark additional funding for research institutions to provide greater resource, training and support to RECs

In articulating national research priorities, UK policymakers should direct funding towards initiatives that focus on interdisciplinary ethics training and support for research ethics committees. Funding must be made available for continuous, multi-stage research ethics review processes, and organisations including UK Research and Innovation (UKRI) and the UK research councils should reward this behaviour. Future iterations of the UK’s National AI Strategy should earmark funding for ethics training and for the work of RECs to expand their scope and remit.

Increasing funding and resources for institutional RECs will enable these essential bodies to undertake their critical work fully and holistically. Increased funding and support will also enable RECs to expand their remit and scope to capture risks and impacts of AI and data science research, which are essential for ensuring AI and data science are viewed as trustworthy disciplines and for mitigating the risks this research can pose. The traditional approach to RECs has treated their labour as voluntary and unpaid. RECs must be properly supported and resourced to meet the challenges that AI and data science pose.

Acknowledgements

This report was authored by:

  • Mylene Petermann, Ada Lovelace Institute
  • Niccolo Tempini, Senior Lecturer in Data Studies at the University of Exeter’s Institute for Data Science and Artificial Intelligence (IDSAI)
  • Ismael Kherroubi Garcia, Kairoi
  • Kirstie Whitaker, Alan Turing Institute
  • Andrew Strait, Ada Lovelace Institute

This project was made possible by the Arts and Humanities Research Council, which provided a £100k grant for this work. We are grateful to our reviewers – Will Hawkins, Edward Dove and Gabrielle Samuel. We are also grateful to our workshop participants and interview subjects, who include the following and several others who wished to remain anonymous:

  • Alan Blackwell
  • Barbara Prainsack
  • Brent Mittelstadt
  • Cami Rincón
  • Claire Salinas
  • Conor Houghton
  • David Berry
  • Dawn Bloxwich
  • Deb Raji
  • Deborah Kroll
  • Edward Dove
  • Effy Vayena
  • Ellie Power
  • Elizabeth Buchanan
  • Elvira Perez
  • Frances Downey
  • Gail Seymour
  • Heba Youssef
  • Iason Gabriel
  • Jade Ouimet
  • Josh Cowls
  • Katharine Wright
  • Kerina Jones
  • Kiruthika Jayaramakrishnan
  • Lauri Kanerva
  • Liesbeth Venema
  • Mark Chevilet
  • Nicola Stingelin
  • Ranjit Singh
  • Rebecca Veitch
  • Richard Everson
  • Rosie Campbell
  • Sara Jordan
  • Shannon Vallor
  • Sophia Batchelor
  • Thomas King
  • Tristan Henderson
  • Will Hawkins

Appendix 1: Methodology and limitations

This report uses the term data science to mean the extraction of actionable insights and knowledge from data, which involves preparing data for analysis and performing analysis using statistical methods to identify patterns in the data.[238]

This report uses the term AI research in its broadest sense, to cover research into software and systems that display intelligent behaviour, which includes subdisciplines like machine learning, reinforcement learning, deep learning and others.[239]

This report relied on a review of the literature on RECs, research ethics and broader societal impact questions in AI, most of which covers challenges in academic RECs. This report also draws on a series of workshops with 42 members of public and private AI and data science research institutions in May 2021, along with eight interviews with experts in research ethics and AI issues. These workshops and interviews provided some additional insight into the ways corporate RECs operate, though we acknowledge that much of this information is challenging to verify given the relative lack of transparency of many corporate institutions in sharing their internal research review processes (one of our recommendations is explicitly aimed at this challenge). We are grateful to our workshop participants and research subjects for their support in this project.

This report contains two key limitations:

  1. While we sought to review the literature of ethics review processes in both commercial and academic research institutions, the literature on RECs in industry is scarce and largely reliant on statements and articles published by companies themselves. Their claims are therefore not easily verifiable, and sections relating to industry practice should be read with this in mind.
  2. The report exclusively focuses on research ethics review processes at institutions in the UK, Europe and the USA, and our findings are therefore not representative of a broader international context. We encourage future work to focus on how research ethics and broader societal impact reviews are conducted in other regions.

Appendix 2: Examples of ethics review processes

In our workshops, we invited presentations from four UK organisations to share how they currently construct their ethics review processes. We include short descriptions of three of these institutions below:

The Alan Turing Institute

The Alan Turing Institute was established in 2015 as the UK National Institute for Data Science. In 2017, artificial intelligence was added to its remit, on Government recommendation. The Turing Institute was created by five founding universities and the UK Engineering and Physical Sciences Research Council.[240] The Turing Institute has since published The Turing Way, a handbook for reproducible, ethical and collaborative data science. The handbook is open source and community-driven.[241]

In 2020, The Turing Way expanded to a series of guides that covered reproducible research,[242] project design,[243] communication,[244] collaboration[245] and ethical research.[246] For example, the Guide for Ethical Research advises researchers to consider consent in cases where the data is already available, and to understand the terms and conditions under which the data has been made available. The guide also advises researchers to consider wider societal consequences: this involves assessing the societal, environmental and personal risks involved in research, and the measures in place to mitigate these risks.

At the time of writing, the Turing Institute is working on changes to its ethics review processes, moving towards a continuous integration approach based on the model of ‘DevOps’. This is a term used in software development for a process of continuous integration and feedback loops across the stages of planning, building and coding, deployment and operations. To ensure ethical standards are upheld in a project, this model involves frequent communication and ongoing, real-time collaboration between researchers and research ethics committees. Currently, an application for ethics review is usually submitted to a REC after a project has been defined and a funding application made. The continuous integration approach, by contrast, covers all stages in the research lifecycle, from project design to publication, communication and maintenance. For researchers, this means considering research ethics from the beginning of a research project and fostering a continuous conversation with RECs, for example when defining the project or when applying for funding, where RECs could offer support. The project documentation would be updated continuously as the project progresses through its various stages.

The project would go through several rounds of reviews by RECs, for example when accessing open data, during data analysis or at the publication stage. This is a rapid, collaborative process in which researchers incorporate the comments from expert reviewers. This model ensures that researchers address ethical issues as they arise throughout the research lifecycle. For example, the ethical considerations of publishing synthetic data cannot be known in advance; an ongoing ethics review is therefore required.

This model of research ethics review requires a pool of practising researchers as reviewers. There would also need to be decision-makers who are empowered by the institution to reject an ethics application, even if funding is in place. Furthermore, this model requires permanent specialised expert staff who would be able to hold these conversations with researchers, which also requires additional resources.

SAIL Databank

The Secure Anonymised Information Linkage (SAIL) Databank[247] is a platform for the robust, secure storage and use of anonymised person-based data for research to improve health, wellbeing and services in Wales. The data held in this repository can be linked together to address research questions, subject to safeguards and approvals. The databank contains over 30 billion records from individual-level population datasets, supplied by about 400 data providers and used by approximately 1,200 data users. The data is primarily sourced from Wales, but also from England.

The data is securely stored, and access is tightly controlled through a robust and proportionate ‘privacy by design’ methodology, which is regulated by a team of specialists and overseen by an independent Information Governance Review Panel (IGRP). The core datasets come from Welsh organisations, and include hospital inpatient and outpatient data. With the Core Restricted Datasets, the provider reserves the right to review every proposed use of the data, while approval for the Core Datasets is devolved to the IGRP.

The data provider divides the data into two parts. The demographic data goes to a trusted third party (an NHS organisation), which matches the data against a register of the population of Wales and assigns each person represented a unique anonymous code. The content data is sent directly to SAIL. The two parts can be brought together to create de-identified copies of the data, which are then subjected to further controls and presented to researchers in anonymised form.

The ‘privacy by design’ methodology is enacted in practice by a suite of physical, technical and procedural controls. This is guided by the ‘five safes’ model, for example, ‘safe projects’, ‘safe people’ (through research accreditation) or ‘safe data’ (through encryption, anonymisation or control before information can be accessed).

In practice, if a researcher wishes to work with some of the data, they submit a proposal and SAIL reviews its feasibility and scope. The researcher is assigned an analyst with extensive knowledge of the available datasets, who advises on which datasets to request data from and which variables will help answer the research questions. After this process, the researcher makes an application to SAIL, which goes to the IGRP. The application can be approved or rejected, or amendments can be recommended. The IGRP comprises representatives from organisations including Public Health Wales, the Welsh Government, Digital Health and Care Wales and the British Medical Association (BMA), as well as members of the public.

The criteria for review include, for example, an assessment of whether the research contributes to new knowledge, whether it improves health, wellbeing and public services, whether there is a risk that the output may be disclosive of individuals or small groups, and whether measures are in place to mitigate the risks of disclosure. In addition, public engagement and involvement ensures that a public voice is present in considering potential societal impact, and provides a public perspective on research.

Researchers must complete a recognised safe researcher training programme and abide by the data access agreement. The data is then provided through a virtual environment, which allows the researchers to carry out the data analysis and request results. However, researchers cannot transfer data out of the environment. Instead, researchers must propose to SAIL which results they would like to transfer for publication or presentation, and these are then checked by someone at SAIL to ensure that they do not contain any disclosive elements.

Previously, the main data type was health data, but more recently SAIL has dealt increasingly with administrative data, e.g. the UK Census, and with emerging data types, which may require multiple approval processes and can create coordination problems. For example, data access that falls under the Digital Economy Act must be approved by the Research Accreditation Panel, and there is an expectation that each project will have undergone formal research ethics review, in addition to the IGRP.

University of Exeter

The University of Exeter has a central University Ethics Committee (UEC) and 11 devolved RECs at college or discipline level. The devolved RECs report to the UEC, which is accountable to the University Council (the governing body).[248] The University also operates a dual assurance scheme, with an independent member of the governing body providing additional oversight.

The work of the RECs is based on a single research ethics framework,[249] first developed in 2013, which sets common standards and requirements while allowing flexibility to adapt to local circumstances. The framework underwent substantial revision in 2019/20 in a collaborative process with researchers from all disciplines, with the aim of making it as reflective as possible of each discipline’s requirements while meeting common standards. Exeter also provides guidance and training on research ethics, as well as taught content for undergraduate and postgraduate students.

The REC operating principles[250] include:

  • independence (mitigating conflicts of interest and ensuring sufficient impartial scrutiny; enhancing lay membership of committees)
  • competence (ensuring that membership of committees/selection of reviewers is informed by relevant expertise and that decision-making is consistent, coherent, and well-informed; cross-referral of projects)
  • facilitation (recognising the role of RECs in facilitating good research and support for researchers; ethical review processes recognised as valuable by researchers)
  • transparency and accountability (REC decisions and advice to be open to scrutiny with responsibilities discharged consistently).

Some of the challenges include a lack of specialist knowledge, especially on emerging issues such as AI and data science, new methods or interdisciplinary research. Another challenge is information governance, e.g. ensuring that researchers have access to research data, as well as to appropriate options for research data management and secure storage. Ensuring transparency and clarity for research participants is also important, e.g. active or ongoing consent, where relevant. Reviews of secondary data use take a risk-adapted or proportionate approach.

In terms of data sharing, researchers must have the appropriate permissions in place and understand the requirements attached to them. There are concerns about the potential misuse of data and research outputs, and researchers are encouraged to reflect on the potential implications or uses of their research, and to consider the principles of Responsible Research and Innovation (RRI) with the support of RECs. The potential risks of data sharing and international collaborations mean that it is important to ensure informed decision-making around these issues.

Due to the potentially significant risks of AI and data science research, the University currently focuses on the Trusted Research Guidance issued by the Centre for the Protection of National Infrastructure. Export control compliance also plays a role, but there is a greater need for awareness and training.

The University of Exeter has scope in its existing research ethics framework for setting up a specialist data science and AI ethics reference group (an advisory group), though this requires further work, e.g. on how to balance having a very specialist group of researchers review the research against maintaining a certain level of independence. This would also require more specialist training for RECs and researchers.

Furthermore, the University is currently evaluating how to review international and multi-site research, and how to streamline the process of ethics review as much as possible to avoid potential duplication in research ethics applications. This also requires capacity building with research partners.

Finally, improving reporting, auditing and monitoring capabilities plays a significant role, especially as the University has recently implemented a new single, online research ethics application and review system.


Footnotes

[1] Source: Zhang, D. et al. (2022). ‘The AI Index 2022 Annual Report’. arXiv. Available at: https://doi.org/10.48550/arXiv.2205.03468

[2] Bender, E.M. (2019). ‘Is there research that shouldn’t be done? Is there research that shouldn’t be encouraged?’. Medium. Available at: https://medium.com/@emilymenonbender/is-there-research-that-shouldnt-be-done-is-there-research-that-shouldn-t-be-encouraged-b1bf7d321bb6

[3] Truong, K. (2020). ‘This Image of a White Barack Obama Is AI’s Racial Bias Problem In a Nutshell’. Vice. Available at: https://www.vice.com/en/article/7kpxyy/this-image-of-a-white-barack-obama-is-ais-racial-bias-problem-in-a-nutshell

[4] Small, Z. ‘600,000 Images Removed from AI Database After Art Project Exposes Racist Bias’. Hyperallergic. Available at: https://hyperallergic.com/518822/600000-images-removed-from-ai-database-after-art-project-exposes-racist-bias/

[5] Richardson, R. (2021). ‘Racial Segregation and the Data-Driven Society: How Our Failure to Reckon with Root Causes Perpetuates Separate and Unequal Realities’. Berkeley Technology Law Journal, 36(3). Available at: https://papers.ssrn.com/abstract=3850317; and Buolamwini, J. and Gebru, T. (2018). ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’. Proceedings of the 1st Conference on Fairness, Accountability and Transparency. Conference on Fairness, Accountability and Transparency, PMLR, pp. 77–91. Available at: https://proceedings.mlr.press/v81/buolamwini18a.html

[6] Petrozzino, C. (2021). ‘Who pays for ethical debt in AI?’. AI and Ethics, 1(3), pp. 205–208. Available at: https://doi.org/10.1007/s43681-020-00030-3

[7] Abdalla, M. and Abdalla, M. (2021). ‘The Grey Hoodie Project: Big Tobacco, Big Tech, and the Threat on Academic Integrity’. AIES ’21: Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. Available at: https://doi.org/10.1145/3461702.3462563

[8] For example, a recent paper from researchers at Microsoft includes guidance for a structured exercise to identify potential limitations in AI research. See: Smith, J. J. et al. (2022). ‘REAL ML: Recognizing, Exploring, and Articulating Limitations of Machine Learning Research’. 2022 ACM Conference on Fairness, Accountability, and Transparency, pp. 587–597. Available at: https://doi.org/10.1145/3531146.3533122

[9] Metcalf, J. and Crawford, K. (2016). ‘Where are human subjects in big data research? The emerging ethics divide.’ Big Data & Society, 3(1). Available at: https://doi.org/10.1177/2053951716650211

[10] Metcalf, J. and Crawford, K. (2016).

[11] Hecht, B. et al. (2021). ‘It’s Time to Do Something: Mitigating the Negative Impacts of Computing Through a Change to the Peer Review Process’. arXiv. Available at: https://doi.org/10.48550/arXiv.2112.09544

[12] Ashurst, C. et al. (2021). ‘AI Ethics Statements — Analysis and lessons learnt from NeurIPS Broader Impact Statements’. arXiv. Available at: https://doi.org/10.48550/arXiv.2111.01705

[13] See: Ada Lovelace Institute. (2022). Looking before we leap: Case studies. Available at: https://www.adalovelaceinstitute.org/resource/research-ethics-case-studies/

[14] Raymond, N. (2019). ‘Safeguards for human studies can’t cope with big data’. Nature, 568(7752), pp. 277–277. Available at: https://doi.org/10.1038/d41586-019-01164-z

[15] The number of AI journal publications grew by 34.5% from 2019 to 2020, compared to a growth of 19.6% between 2018 and 2019. See: Stanford University. (2021). Artificial Intelligence Index 2021, chapter 1. Available at: https://aiindex.stanford.edu/wp-content/uploads/2021/03/2021-AI-Index-Report-_Chapter-1.pdf

[16] Chuvpilo, G. (2020). ‘AI Research Rankings 2019: Insights from NeurIPS and ICML, Leading AI Conferences’. Medium. Available at: https://medium.com/@chuvpilo/ai-research-rankings-2019-insights-from-neurips-and-icml-leading-ai-conferences-ee6953152c1a

[17] Minsky, C. (2020). ‘How AI helps historians solve ancient puzzles’. Financial Times. Available at: https://www.ft.com/content/2b72ed2c-907b-11ea-bc44-dbf6756c871a

[18] Zheng, S., Trott, A., Srinivasa, S. et al. (2020). ‘The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies’. Salesforce Research. Available at: https://blog.einstein.ai/the-ai-economist/

[19] Eraslan, G., Avsec, Ž., Gagneur, J. and Theis, F. J. (2019). ‘Deep learning: new computational modelling techniques for genomics’. Nature Reviews Genetics. Available at: https://doi.org/10.1038/s41576-019-0122-6

[20] DeepMind. (2020). ‘AlphaFold: a solution to a 50-year-old grand challenge in biology’. DeepMind Blog. Available at: https://deepmind.com/blog/article/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology

[21] Boyarskaya, M., Olteanu, A. and Crawford, K. (2020). ‘Overcoming Failures of Imagination in AI Infused System Development and Deployment’. arXiv. Available at: https://doi.org/10.48550/arXiv.2011.13416

[22] Clifford, C. (2018). ‘Google CEO: A.I. is more important than fire or electricity’. CNBC. Available at: https://www.cnbc.com/2018/02/01/google-ceo-sundar-pichai-ai-is-more-important-than-fire-electricity.html

[23] Boyarskaya, M., Olteanu, A. and Crawford, K. (2020). ‘Overcoming Failures of Imagination in AI Infused System Development and Deployment’. arXiv. Available at: https://doi.org/10.48550/arXiv.2011.13416

[24] Metcalf, J. (2017). ‘“The study has been approved by the IRB”: Gayface AI, research hype and the pervasive data ethics…’ Medium. Available at: https://medium.com/pervade-team/the-study-has-been-approved-by-the-irb-gayface-ai-research-hype-and-the-pervasive-data-ethics-ed76171b882c

[25] Coalition for Critical Technology. (2020). ‘Abolish the #TechToPrisonPipeline’. Medium. Available at: https://medium.com/@CoalitionForCriticalTechnology/abolish-the-techtoprisonpipeline-9b5b14366b16.

[26] Ongweso Jr, E. (2020). ‘An AI Paper Published in a Major Journal Dabbles in Phrenology’. Vice. Available at: https://www.vice.com/en/article/g5pawq/an-ai-paper-published-in-a-major-journal-dabbles-in-phrenology

[27] Colaner, S. (2020). ‘AI Weekly: AI phrenology is racist nonsense, so of course it doesn’t work’. VentureBeat. Available at: https://venturebeat.com/2020/06/12/ai-weekly-ai-phrenology-is-racist-nonsense-so-of-course-it-doesnt-work/

[28] Hsu, J. (2019). ‘Microsoft’s AI Research Draws Controversy Over Possible Disinformation Use’. IEEE Spectrum. Available at: https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/microsofts-ai-research-draws-controversy-over-possible-disinformation-use

[29] Harlow, M., Murgia, M. and Shepherd, C. (2019). ‘Western AI researchers partnered with Chinese surveillance firms’. Financial Times. Available at: https://www.ft.com/content/41be9878-61d9-11e9-b285-3acd5d43599e

[30] This report does not focus on considerations relating to research integrity, though we acknowledge this is an important and related topic.

[31] For a deeper discussion on these issues, see: Ashurst, C. et al. (2022). ‘Disentangling the Components of Ethical Research in Machine Learning’. FAccT ’22: 2022 ACM Conference on Fairness, Accountability, and Transparency, pp. 2057–2068. Available at: https://doi.org/10.1145/3531146.3533781

[32] Dove, E. S., Townend, D., Meslin, E. M. et al. (2016). ‘Ethics review for international data-intensive research’. Science, 351(6280), pp. 1399–1400.

[33] Dove, E. S., Townend, D., Meslin, E. M. et al. (2016).

[34] UKRI. ‘Research integrity’. Available at: https://www.ukri.org/what-we-offer/supporting-healthy-research-and-innovation-culture/research-integrity/

[35] Engineering and Physical Sciences Research Council. ‘Responsible research and innovation’. UKRI. Available at: https://www.ukri.org/councils/epsrc/guidance-for-applicants/what-to-include-in-your-proposal/health-technologies-impact-and-translation-toolkit/research-integrity-in-healthcare-technologies/responsible-research-and-innovation/

[36] UKRI. ‘Research integrity’. Available at: https://www.ukri.org/what-we-offer/supporting-healthy-research-and-innovation-culture/research-integrity/

[37] Partnership on AI. (2021). Managing the Risks of AI Research. Available at: http://partnershiponai.org/wp-content/uploads/2021/08/PAI-Managing-the-Risks-of-AI-Resesarch-Responsible-Publication.pdf

[38] Korenman, S. G., Berk, R., Wenger, N. S. and Lew, V. (1998). ‘Evaluation of the research norms of scientists and administrators responsible for academic research integrity’. Jama, 279(1), pp. 41–47.

[39] Douglas, H. (2014). ‘The moral terrain of science’. Erkenntnis, 79(5), pp. 961–979.

[40] European Commission. (2018). Responsible Research and Innovation, Science and Technology. Available at: https://data.europa.eu/doi/10.2777/45726

[41] National Human Genome Research Institute. ‘Ethical, Legal and Social Implications Research Program’. Available at: https://www.genome.gov/Funded-Programs-Projects/ELSI-Research-Program-ethical-legal-social-implications

[42] Bazzano, L. A. et al. (2021). ‘A Modern History of Informed Consent and the Role of Key Information’. Ochsner Journal, 21(1), pp. 81–85. Available at: https://doi.org/10.31486/toj.19.0105

[43] Hedgecoe, A. (2017). ‘Scandals, Ethics, and Regulatory Change in Biomedical Research’. Science, Technology, & Human Values, 42(4), pp. 577–599.  Available at: https://journals.sagepub.com/doi/abs/10.1177/0162243916677834

[44] Israel, M. (2015). Research Ethics and Integrity for Social Scientists, second edition. SAGE Publishing. Available at: https://uk.sagepub.com/en-gb/eur/research-ethics-and-integrity-for-social-scientists/book236950

[45] The Nuremberg Code was in part based on pre-war medical research guidelines from the German Medical Association, which included elements of patient consent to a procedure. These guidelines fell into disuse during the rise of the Nazi regime in favour of guidelines that contributed to the ‘healing of the nation’, as defendants at the Nuremberg trial put it. See: Ernst, E. and Weindling, P. J. (1998). ‘The Nuremberg Medical Trial: have we learned the lessons?’ Journal of Laboratory and Clinical Medicine, 131(2), pp. 130–135; and British Medical Journal. (1996). ‘Nuremberg’. British Medical Journal, 313(7070). Available at: https://www.bmj.com/content/313/7070

[46] Center for Disease Control and Prevention. (2021). The U.S. Public Health Service Syphilis Study at Tuskegee. Available at: https://www.cdc.gov/tuskegee/timeline.htm

[47] The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (1979). The Belmont Report.

[48] Council for International Organizations of Medical Sciences (CIOMS). (2016). International Ethical Guidelines for Health-related Research Involving Humans, Fourth Edition. Available at: https://cioms.ch/wp-content/uploads/2017/01/WEB-CIOMS-EthicalGuidelines.pdf

[49] A more extensive study of the history of research ethics is provided by: Garcia, K. et al. (2022). ‘Introducing An Incomplete History of Research Ethics’. Open Life Sciences. Available at: https://openlifesci.org/posts/2022/08/08/An-Incomplete-History-Of-Research-Ethics/

[50] Hoeyer, K. and Hogle, L. F. (2014). ‘Informed consent: The politics of intent and practice in medical research ethics’. Annual Review of Anthropology, 43, pp. 347–362.

Legal guardianship: The Helsinki Declaration specifies that underrepresented groups should have adequate access to research and to the results of research. However, vulnerable population groups are often excluded from research if they are not able to give informed consent. A legal guardian is usually appointed by a court and can give consent on the participants’ behalf, see: Brune, C., Stentzel, U., Hoffmann, W. and van den Berg, N. (2021). ‘Attitudes of legal guardians and legally supervised persons with and without previous research experience towards participation in research projects: A quantitative cross-sectional study’. PLoS ONE, 16(9).

Group or community consent is relevant for research that can generate risks and benefits with wider implications beyond the individual research participant. This means that consent processes may need to be supplemented by community engagement activities, see: Molyneux, S. and Bull, S. (2013). ‘Consent and Community Engagement in Diverse Research Contexts: Reviewing and Developing Research and Practice: Participants in the Community Engagement and Consent Workshop, Kilifi, Kenya, March 2011’. Journal of Empirical Research on Human Research Ethics (JERHRE), 8(4), pp. 1–18. Available at: https://doi.org/10.1525/jer.2013.8.4.1

Blanket consent refers to a process by which individuals donate their samples without any restrictions. Broad (or ‘general’) consent refers to a process by which individuals donate their samples for a broad range of future studies, subject to specified restrictions, see: Wendler, D. (2013). ‘Broad versus blanket consent for research with human biological samples’. The Hastings Center report, 43(5), pp. 3–4. Available at: https://doi.org/10.1002/hast.200

[51] World Medical Association. (2008). WMA Declaration of Helsinki – ethical principles for medical research involving human subjects. Available at: https://www.wma.net/policies-post/wma-declaration-of-helsinki-ethical-principles-for-medical-research-involving-human-subjects/

[52] Ashcroft, R. ‘The Declaration of Helsinki’ in: Emanuel, E. J., Grady, C. C., Crouch, R. A., Lie, R. K., Miller, F. G. and Wendler, D. D. (eds.). (2008). The Oxford textbook of clinical research ethics. Oxford University Press.

[53] World Medical Association. (2008). WMA Declaration of Helsinki – ethical principles for medical research involving human subjects. Available at: https://www.wma.net/policies-post/wma-declaration-of-helsinki-ethical-principles-for-medical-research-involving-human-subjects/

[54] World Medical Association. (2008).

[55] Millum, J., Wendler, D. and Emanuel, E. J. (2013). ‘The 50th anniversary of the Declaration of Helsinki: progress but many remaining challenges’. Jama, 310(20), pp. 2143–2144.

[56] The Belmont Report was published by the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, which was created for the U.S. Department of Health, Education, and Welfare (DHEW) following authorisation by the U.S. Congress in 1974. The National Commission had been tasked by the U.S. Congress with identifying guiding research ethics principles in response to public outrage over the Tuskegee Syphilis Study and other ethically questionable projects that emerged during this time.

[57] The Nuremberg Code failed to deal with several related issues, including how international research trials should be run, questions of care for research subjects after a trial has ended, and how to assess the benefit of the research to a host community. See: Annas, G. and Grodin, M. (2008). The Nazi Doctors and the Nuremberg Code: Human Rights in Human Experimentation. Oxford University Press.

[58] In 1991, the regulations of the DHEW became a ‘common rule’ that covered 16 federal agencies.

[59] Office for Human Research Protections. (2009). Code of Federal Regulations, Part 46: Protection of Human Subjects. Available at: https://www.hhs.gov/ohrp/regulations-and-policy/regulations/45-cfr-46/index.html

[60] In 2000, the Central Office for Research Ethics was formed, followed by the establishment of the National Research Ethics Service and later the Health Research Authority (HRA). See: NHS Health Research Authority. (2021). Research Ethics Committees – Standard Operating Procedures. Available at: https://www.hra.nhs.uk/about-us/committees-and-services/res-and-recs/research-ethics-committee-standard-operating-procedures/

[61] There is some guidance for non-health RECs in the UK – the Economic and Social Research Council (ESRC) released research ethics guidelines requiring any ESRC-funded project involving human subjects research to undergo certain ethics review requirements. See: Economic and Social Research Council. (2015). ESRC Framework for Research Ethics. UKRI. Available at: https://www.ukri.org/councils/esrc/guidance-for-applicants/research-ethics-guidance/framework-for-research-ethics/

[62] Tinker, A. and Coomber, V. (2005). ‘University research ethics committees—A summary of research into their role, remit and conduct’. Research Ethics, 1(1), pp. 5–11.

[63] European Network of Research Ethics Committees. ‘Short description of the UK REC system’. Available at: http://www.eurecnet.org/information/uk.html

[64] University of Cambridge. ‘Ethical Review’. Available at: https://www.research-integrity.admin.cam.ac.uk/ethical-review

[65] University of Oxford. ‘Committee information: Structure, membership and operation of University research ethics committees’. Available at: https://researchsupport.admin.ox.ac.uk/governance/ethics/committees

[66] Tinker, A. and Coomber, V. (2005). ‘University Research Ethics Committees — A Summary of Research into Their Role, Remit and Conduct’. SAGE Journals. Available at: https://doi.org/10.1177/174701610500100103

[67] The Turing Way Community et al. Guide for Ethical Research – Introduction to Research Ethics. Available at: https://the-turing-way.netlify.app/ethical-research/ethics-intro.html

[68] For an example of a full list of risks and the different processes, see: University of Exeter. (2021). Research Ethics Policy and Framework: Appendix C – Risk and Proportionate Review checklist. Available at: https://www.exeter.ac.uk/media/universityofexeter/governanceandcompliance/researchethicsandgovernance/Appendix_C_Risk_and_Proportionate_Review_v1.1_07052021.pdf; and University of Exeter. (2021). Research Ethics Policy and Framework. Available at: https://www.exeter.ac.uk/media/universityofexeter/governanceandcompliance/researchethicsandgovernance/Revised_UoE_Research_Ethics_Framework_v1.1_07052021.pdf.

[69] NHS Health Research Authority. (2021). Governance arrangements for Research Ethics Committees. Available at: https://www.hra.nhs.uk/planning-and-improving-research/policies-standards-legislation/governance-arrangement-research-ethics-committees/; and Economic and Social Research Council. (2015). ESRC Framework for Research Ethics. UKRI. Available at: https://www.ukri.org/councils/esrc/guidance-for-applicants/research-ethics-guidance/framework-for-research-ethics/

[70] NHS Health Research Authority. (2021). Research Ethics Committee – Standard Operating Procedures. Available at: https://www.hra.nhs.uk/about-us/committees-and-services/res-and-recs/research-ethics-committee-standard-operating-procedures/

[71] NHS Health Research Authority. (2021).

[72] Economic and Social Research Council. (2015). ESRC Framework for Research Ethics. UKRI. Available at: https://www.ukri.org/councils/esrc/guidance-for-applicants/research-ethics-guidance/framework-for-research-ethics/

[73] See: saildatabank.com

[74] Moss, E. and Metcalf, J. (2020). Ethics Owners. A New Model of Organizational Responsibility in Data-Driven Technology Companies. Data & Society. Available at: https://datasociety.net/library/ethics-owners/

[75] We note this article reflects Facebook’s process in 2016, and that this process may have undergone significant changes since that period. See: Jackman, M. and Kanerva, L. (2016). ‘Evolving the IRB: building robust review for industry research’. Washington and Lee Law Review Online, 72(3), p. 442.

[76] See: Google AI. ‘Artificial Intelligence at Google: Our Principles’. Available at: https://ai.google/principles/.

[77] Future of Life Institute. (2018). Lethal autonomous weapons pledge. Available at: https://futureoflife.org/2018/06/05/lethal-autonomous-weapons-pledge/

[78] Moss, E. and Metcalf, J. (2020). Ethics Owners. A New Model of Organizational Responsibility in Data-Driven Technology Companies. Data & Society. Available at: https://datasociety.net/library/ethics-owners/

[79] Samuel, G., Derrick, G. E., and Van Leeuwen, T. (2019). ‘The ethics ecosystem: Personal ethics, network governance and regulating actors governing the use of social media research data.’ Minerva, 57(3), pp. 317–343. Available at: https://link.springer.com/article/10.1007/s11024-019-09368-3

[80] The Royal Society. ‘Research Culture’. Available at: https://royalsociety.org/topics-policy/projects/research-culture/

[81] Canadian Institute for Advanced Research, Partnership on AI and Ada Lovelace Institute. (2022). A culture of ethical AI: report. Available at: https://www.adalovelaceinstitute.org/event/culture-ethical-ai-cifar-pai/

[82] Prunkl, C. E. et al. (2021). ‘Institutionalizing ethics in AI through broader impact requirements’. Nature Machine Intelligence, 3(2), pp. 104–110. Available at: https://www.nature.com/articles/s42256-021-00298-y

[83] Prunkl et al. state that potential negative effects of impact statements are that they could be uninformative, biased, misleading or overly speculative, and therefore lack quality. The statements could trivialise ethics and governance and the complexity involved in assessing ethical and societal implications. Researchers could develop a negative attitude towards submitting an impact statement, and may find it a burden, confusing or irrelevant. The statements may also create a false sense of security, in cases where positive impacts are overstated or negative impacts understated, which may polarise the research community along political or institutional lines. See: Prunkl, C. E. et al. (2021).

[84] Some authors felt that the requirement of an impact statement is important, but there was uncertainty over who should complete them and how. Other authors also did not feel qualified to address the broader impact of their work. See: Abuhamad, G. and Rheault, C. (2020). ‘Like a Researcher Stating Broader Impact For the Very First Time’. arXiv. Available at: https://arxiv.org/abs/2011.13032

[85] Committee on Publication Ethics. (2018). Principles of Transparency and Best Practices in Scholarly Publishing. Available at: https://publicationethics.org/files/Principles_of_Transparency_and_Best_Practice_in_Scholarly_Publishingv3_0.pdf

[86] Partnership on AI. (2021). Managing the Risks of AI Research: Six Recommendations for Responsible Publication. Available at: https://partnershiponai.org/workstream/publication-norms-for-responsible-ai/

[87] Partnership on AI. (2021).

[88] Gardner, A., Smith, A. L., Steventon, A. et al. (2021). ‘Ethical funding for trustworthy AI: proposals to address the responsibilities of funders to ensure that projects adhere to trustworthy AI practice’. AI and Ethics, pp. 1–15. Available at: https://link.springer.com/article/10.1007/s43681-021-00069-w

[89] Vayena, E., Brownsword, R., Edwards, S. J. et al. (2016). ‘Research led by participants: a new social contract for a new kind of research’. Journal of Medical Ethics, 42(4), pp. 216–219.

[90] There are three types of disclosure risk that allow possible reidentification of an individual despite masking or de-identification of data: identity disclosure; attribute disclosure, e.g. when a person is identified as belonging to a particular group; and inferential disclosure, e.g. when information about a person can be inferred from released data. See: Xafis, V., Schaefer, G. O., Labude, M. K. et al. (2019). ‘An ethics framework for big data in health and research’. Asian Bioethics Review, 11(3). Available at: https://doi.org/10.1007/s41649-019-00099-x

[91] Metcalf, J. and Crawford, K. (2016). ‘Where are human subjects in big data research? The emerging ethics divide’. Big Data & Society, 3(1). Available at: https://journals.sagepub.com/doi/full/10.1177/2053951716650211

[92] Metcalf, J. and Crawford, K. (2016).

[93] Samuel, G., Chubb, J. and Derrick, G. (2021). ‘Boundaries Between Research Ethics and Ethical Research Use in Artificial Intelligence Health Research’. Journal of Empirical Research on Human Research Ethics. Available at: https://journals.sagepub.com/doi/full/10.1177/15562646211002744

[94] Abbott, L. and Grady, C. (2011). ‘A systematic review of the empirical literature evaluating IRBs: What we know and what we still need to learn’. Journal of Empirical Research on Human Research Ethics, 6(1). Available at: https://doi.org/10.1525/jer.2011.6.1.3

[95] Zywicki, T. J. (2007). ‘Institutional review boards as academic bureaucracies: An economic and experiential analysis’. Northwestern University Law Review, 101(2), p. 861. Available at: https://heinonline.org/HOL/LandingPage?handle=hein.journals/illlr101&div=36&id=&page=

[96] Abbott, L. and Grady, C. (2011). ‘A systematic review of the empirical literature evaluating IRBs: What we know and what we still need to learn’. Journal of Empirical Research on Human Research Ethics, 6(1). Available at: https://doi.org/10.1525/jer.2011.6.1.3

[97] Abbott, L. and Grady, C. (2011).

[98] Dove, E. S. and Garattini, C. (2018). ‘Expert perspectives on ethics review of international data-intensive research: Working towards mutual recognition’. Research Ethics, 14(1), pp. 1–25. Available at: https://journals.sagepub.com/doi/full/10.1177/1747016117711972

[99] Hibbin, R. A., Samuel, G. and Derrick, G. E. (2018). ‘From “a fair game” to “a form of covert research”: Research ethics committee members’ differing notions of consent and potential risk to participants within social media research’. Journal of Empirical Research on Human Research Ethics, 13(2). Available at: https://journals.sagepub.com/doi/full/10.1177/1556264617751510

[100] Guillemin, M., Gillam, L., Rosenthal, D. and Bolitho, A. (2012). ‘Human research ethics committees: examining their roles and practices’. Journal of Empirical Research on Human Research Ethics, 7(3). Available at: https://journals.sagepub.com/doi/abs/10.1525/jer.2012.7.3.38

[101] Ferretti, A., Ienca, M., Sheehan, M. et al. (2021). ‘Ethics review of big data research: What should stay and what should be reformed?’. BMC Medical Ethics, 22(1), pp. 1–13. Available at: https://bmcmedethics.biomedcentral.com/articles/10.1186/s12910-021-00616-4

[102] Guillemin, M., Gillam, L., Rosenthal, D. and Bolitho, A. (2012). ‘Human research ethics committees: examining their roles and practices’. Journal of Empirical Research on Human Research Ethics, 7(3). Available at: https://journals.sagepub.com/doi/abs/10.1525/jer.2012.7.3.38

[103] Yuan, H., Vanea, C., Lucivero, F. and Hallowell, N. (2020). ‘Training Ethically Responsible AI Researchers: a Case Study’. arXiv. Available at: https://arxiv.org/abs/2011.11393

[104] Samuel, G., Chubb, J. and Derrick, G. (2021). ‘Boundaries Between Research Ethics and Ethical Research Use in Artificial Intelligence Health Research’. Journal of Empirical Research on Human Research Ethics. Available at: https://journals.sagepub.com/doi/full/10.1177/15562646211002744

[105] Rawbone, R. (2010). ‘Inequality amongst RECs’. Research Ethics Review, 6(1), pp. 1–2. Available at: https://journals.sagepub.com/doi/pdf/10.1177/174701611000600101

[106] Hine, C. (2021). ‘Evaluating the prospects for university-based ethical governance in artificial intelligence and data-driven innovation’. Research Ethics. Available at: https://journals.sagepub.com/doi/full/10.1177/17470161211022790

[107] Page, S. A. and Nyeboer, J. (2017). ‘Improving the process of research ethics review’. Research integrity and peer review, 2(1), pp. 1–7. Available at: https://researchintegrityjournal.biomedcentral.com/articles/10.1186/s41073-017-0038-7

[108] Chadwick, G. L. and Dunn, C. M. (2000). ‘Institutional review boards: changing with the times?’. Journal of public health management and practice, 6(6), pp. 19–27. Available at: https://europepmc.org/article/med/18019957

[109] Association of Internet Researchers. (2020). Internet Research: Ethical Guidelines 3.0. Available at: https://aoir.org/reports/ethics3.pdf

[110] Emanuel, E. J., Grady, C. C., Crouch, R. A., Lie, R. K., Miller, F. G. and Wendler, D. D. (eds.). (2008). The Oxford textbook of clinical research ethics. Oxford University Press.

[111] Oakes, J. M. (2002). ‘Risks and wrongs in social science research: An evaluator’s guide to the IRB’. Evaluation Review, 26(5), pp. 443–479. Available at: https://journals.sagepub.com/doi/10.1177/019384102236520?url_ver=Z39.88-2003&rfr_id=ori:rid:crossref.org&rfr_dat=cr_pub%20%200pubmed; and Dyer, S. and Demeritt, D. (2009). ‘Un-ethical review? Why it is wrong to apply the medical model of research governance to human geography’. Progress in Human Geography, 33(1), pp. 46–64. Available at: https://journals.sagepub.com/doi/10.1177/0309132508090475

[112] Cannella, G. S. and Lincoln, Y. S. (2011). ‘Ethics, research regulations, and critical social science’. The Sage handbook of qualitative research, 4, pp. 81–90; and Israel, M. (2014). Research ethics and integrity for social scientists: Beyond regulatory compliance. SAGE Publishing.

[113] The ICO defines personal data as ‘information relating to natural persons who can be identified or who are identifiable, directly from the information in question; or who can be indirectly identified from that information in combination with other information’. See: Information Commissioner’s Office. Guide to the UK General Data Protection Regulation (UK GDPR) – What is Personal Data? Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/key-definitions/what-is-personal-data/

[114] Friesen, P., Douglas-Jones, R., Marks, M. et al. (2021). ‘Governing AI-Driven Health Research: Are IRBs Up to the Task?’ Ethics & Human Research, 43(2), pp. 35–42. Available at: https://onlinelibrary.wiley.com/doi/abs/10.1002/eahr.500085

[115] Karras, T., Laine, S. and Aila, T. (2019). ‘A style-based generator architecture for generative adversarial networks’. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4401–4410.

[116] Ferretti, A., Ienca, M., Sheehan, M. et al. (2021). ‘Ethics review of big data research: What should stay and what should be reformed?’. BMC Medical Ethics, 22(1), pp. 1–13. Available at: https://bmcmedethics.biomedcentral.com/articles/10.1186/s12910-021-00616-4

[117] Ferretti, A., Ienca, M., Sheehan, M. et al. (2021).

[118] Radin, J. (2017). ‘“Digital Natives”: How Medical and Indigenous Histories Matter for Big Data’. Osiris, 32, pp. 43–64. Available at: https://doi.org/10.1086/693853

[119] Kramer, A. D., Guillory, J. E. and Hancock, J. T. (2014). ‘Experimental evidence of massive-scale emotional contagion through social networks’. Proceedings of the National Academy of Sciences, 111(24), pp. 8788–8790. Available at: https://www.pnas.org/doi/abs/10.1073/pnas.1320040111; and Selinger, E. and Hartzog, W. (2016). ‘Facebook’s emotional contagion study and the ethical problem of co-opted identity in mediated environments where users lack control’. Research Ethics, 12(1), pp. 35–43.

[120] Marks, M. (2020). ‘Emergent medical data: Health Information inferred by artificial intelligence’. UC Irvine Law Review, 995. Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3554118

[121] Marks, M. (2020).

[122] Ferretti, A., Ienca, M., Sheehan, M. et al. (2021). ‘Ethics review of big data research: What should stay and what should be reformed?’. BMC Medical Ethics, 22(1), pp. 1–13. Available at: https://bmcmedethics.biomedcentral.com/articles/10.1186/s12910-021-00616-4

[123] Samuel, G., Ahmed, W., Kara, H. et al. (2018). ‘Is It Time to Re-Evaluate the Ethics Governance of Social Media Research?’. Journal of Empirical Research on Human Research Ethics, 13(4), pp. 452–454. Available at: https://www.jstor.org/stable/26973881

[124] Taylor, J. and Pagliari, C. (2018). ‘Mining Social Media Data: How are Research Sponsors and Researchers Addressing the Ethical Challenges?’. Research Ethics, 14(2). Available at: https://journals.sagepub.com/doi/10.1177/1747016117738559

[125] Iphofen, R. and Tolich, M. (2018). ‘Foundational issues in qualitative research ethics’. The Sage handbook of qualitative research ethics, pp. 1–18. Available at: https://methods.sagepub.com/book/the-sage-handbook-of-qualitative-research-ethics-srm/i211.xml

[126] Schrag, Z. M. (2011). ‘The case against ethics review in the social sciences’. Research Ethics, 7(4), pp. 120–131.

[127] Goodyear, M. et al. (2007). ‘The Declaration of Helsinki. Mosaic tablet, dynamic document or dinosaur?’. British Medical Journal, 335; and Ashcroft, R. E. (2008). ‘The declaration of Helsinki’. The Oxford textbook of clinical research ethics, pp. 141–148.

[128] Emanuel, E.J., Wendler, D. and Grady, C. (2008) ‘An Ethical Framework for Biomedical Research’. The Oxford Textbook of Clinical Research Ethics, pp. 123–135.

[129] Tsoka-Gwegweni, J. M. and Wassenaar, D.R. (2014). ‘Using the Emanuel et al. Framework to Assess Ethical Issues Raised by a Biomedical Research Ethics Committee in South Africa’. Journal of Empirical Research on Human Research Ethics, 9(5), pp. 36–45. Available at: https://journals.sagepub.com/doi/10.1177/1556264614553172?url_ver=Z39.88-2003&rfr_id=ori:rid:crossref.org&rfr_dat=cr_pub%20%200pubmed

[130] Hagendorff, T. (2020). ‘The ethics of AI ethics: An evaluation of guidelines’. Minds and Machines, 30(1), pp. 99–120. Available at: https://link.springer.com/article/10.1007/s11023-020-09517-8

[131] Fjeld, J., Achten, N., Hilligoss, H., Nagy, A. and Srikumar, M. (2020). ‘Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI’. Berkman Klein Center Research Publication No. 2020–1. Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3518482

[132] Gardner, A., Smith, A. L., Steventon, A. et al. (2021). ‘Ethical funding for trustworthy AI: proposals to address the responsibilities of funders to ensure that projects adhere to trustworthy AI practice’. AI and Ethics, pp. 1–15. Available at: https://link.springer.com/article/10.1007/s43681-021-00069-w

[133] Floridi, L. and Cowls, J. (2019). ‘A unified framework of five principles for AI in society’. Social Science Research Network. Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3831321

[134] These include standards initiatives like the IEEE’s P7000 series on ethical design of AI systems, which include P7001 – Standard for Transparency of Autonomous Systems (2021), P7003 – Algorithmic Bias Considerations (2018) and P7010 – Wellbeing Metrics Standard for Ethical Artificial Intelligence and Autonomous Systems (2020). ISO/IEC JTC 1/SC 42 – Artificial Intelligence takes on a series of related standards around data management, trustworthiness of AI systems, and transparency.

[135] Jobin, A., Ienca, M. and Vayena, E. (2019). ‘The global landscape of AI ethics guidelines’. Nature Machine Intelligence, 1, pp. 389–399. Available at: https://doi.org/10.1038/s42256-019-0088-2

[136] Floridi, L. and Cowls, J. (2019). ‘A unified framework of five principles for AI in society’. Social Science Research Network. Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3831321

[137] Yeung, K., Howes, A. and Pogrebna, G. (2019). ‘AI governance by human rights-centred design, deliberation and oversight: An end to ethics washing’. The Oxford Handbook of AI Ethics. Oxford University Press.

[138] Mittelstadt, B. (2019). ‘Principles alone cannot guarantee ethical AI’. Nature Machine Intelligence, 1(11), pp. 501–507. Available at: https://www.nature.com/articles/s42256-019-0114-4

[139] Mittelstadt, B. (2019).

[140] Sambasivan, N., Kapania, S., Highfill, H. et al. (2021). ‘“Everyone wants to do the model work, not the data work”: Data Cascades in High-Stakes AI’. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pp. 1–15. Available at: https://research.google/pubs/pub49953/

[141] IEEE Standards Association. (2019). Ethically Aligned Design, First Edition. Available at: https://ethicsinaction.ieee.org/#ead1e 

[142] Samuel, G., Diedericks, H. and Derrick, G. (2021). ‘Population health AI researchers’ perceptions of the public portrayal of AI: A pilot study’. Public Understanding of Science, 30(2), pp. 196–211. Available at: https://journals.sagepub.com/doi/full/10.1177/0963662520965490

[143] Association of Internet Researchers. (2020). Internet Research: Ethical Guidelines 3.0. Available at: https://aoir.org/reports/ethics3.pdf

[144] Samuel, G., Derrick, G. E. and Van Leeuwen, T. (2019). ‘The ethics ecosystem: Personal ethics, network governance and regulating actors governing the use of social media research data’. Minerva, 57(3), pp. 317–343. Available at: https://link.springer.com/article/10.1007/s11024-019-09368-3

[145] Vadeboncoeur, C., Townsend, N., Foster, C. and Sheehan, M. (2016). ‘Variation in university research ethics review: Reflections following an inter-university study in England’. Research Ethics, 12(4), pp. 217–233. Available at: https://journals.sagepub.com/doi/full/10.1177/1747016116652650; and Abbott, L. and Grady, C. (2011). ‘A systematic review of the empirical literature evaluating IRBs: What we know and what we still need to learn’. Journal of Empirical Research on Human Research Ethics, 6(1), pp. 3–19. Available at: https://journals.sagepub.com/doi/abs/10.1525/jer.2011.6.1.3

[146] Silberman, G. and Kahn, K. L. (2011). ‘Burdens on research imposed by institutional review boards: the state of the evidence and its implications for regulatory reform’. The Milbank quarterly, 89(4), pp. 599–627. Available at: https://doi.org/10.1111/j.1468-0009.2011.00644

[147] Dove, E. S. and Garattini, C. (2018). ‘Expert perspectives on ethics review of international data-intensive research: Working towards mutual recognition’. Research Ethics, 14(1), pp. 1–25. Available at: https://journals.sagepub.com/doi/10.1177/1747016117711972

[148] Coleman, C. H., Ardiot, C., Blesson, S.et al . (2015). ‘Improving the Quality of Host Country Ethical Oversight of International Research: The Use of a Collaborative ‘PreReview’Mechanism for a Study of Fexinidazole for Human African Trypanosomiasis’. Developing World Bioethics, 15(3), pp. 241–247. Available at: https://onlinelibrary.wiley.com/doi/full/10.1111/dewb.12068

[149] Dove, E. S. and Garattini, C. (2018). ‘Expert perspectives on ethics review of international data-intensive research: Working towards mutual recognition’. Research Ethics, 14(1), pp. 1–25. Available at: https://journals.sagepub.com/doi/10.1177/1747016117711972

[150] Government of Canada. (2018). Tri-Council Policy Statement Ethical Conduct for Research Involving Humans, Chapter 9: Research Involving the First Nations, Inuit and Métis Peoples of Canada. Available at: https://ethics.gc.ca/eng/policy-politique_tcps2-eptc2_2018.html

[151] Dove, E. S. and Garattini, C. (2018). ‘Expert perspectives on ethics review of international data-intensive research: Working towards mutual recognition’. Research Ethics, 14(1), pp. 1–25. Available at: https://journals.sagepub.com/doi/10.1177/1747016117711972

[152] Source: Zhang, D. et al. (2022) ‘The AI Index 2022 Annual Report’. arXiv. Available at:
https://doi.org/10.48550/arXiv.2205.03468

[153] Ballantyne, A. and Stewart, C. (2019). ‘Big data and public-private partnerships in healthcare and research.’ Asian Bioethics Review, 11(3), pp. 315–326. Available at: https://link.springer.com/article/10.1007/s41649-019-00100-7

[154] Ballantyne, A. and Stewart, C. (2019).

[155] Ballantyne, A. and Stewart, C. (2019). ‘Big data and public-private partnerships in healthcare and research.’ Asian Bioethics Review, 11(3), pp. 315–326. Available at: https://link.springer.com/article/10.1007/s41649-019-00100-7

[156] Mittelstadt, B. and Floridi, L. (2016). ‘The ethics of big data: Current and foreseeable issues in biomedical contexts’. Science and Engineering Ethics, 22(2), pp. 303–341. Available at: https://link.springer.com/article/10.1007/s11948-015-9652-2

[157] Mittelstadt, B. (2017). ‘Ethics of the health-related internet of things: a narrative review’. Ethics and Information Technology, 19, pp. 157–175. Available at: https://doi.org/10.1007/s10676-017-9426-4

[158] Machirori, M. and Patel. R. (2021). ‘Turning distrust in data sharing into “engage, deliberate, decide”’. Ada Lovelace Institute. Available at: https://www.adalovelaceinstitute.org/blog/distrust-data-sharing-engage-deliberate-decide/

[159] Centre for Data Ethics and Innovation. (2020). Addressing trust in public sector data use. UK Government. Available at: https://www.gov.uk/government/publications/cdei-publishes-its-first-report-on-public-sector-data-sharing/addressing-trust-in-public-sector-data-use

[160] Ada Lovelace Institute. (2021). Participatory data stewardship: A framework for involving people in the use of data. Available at: https://www.adalovelaceinstitute.org/report/participatory-data-stewardship/

[161] Suresh, H. and Guttag, J. (2021). ‘Understanding Potential Sources of Harm throughout the Machine Learning Life Cycle’. MIT Schwarzman College of Computing. Available at: https://mit-serc.pubpub.org/pub/potential-sources-of-harm-throughout-the-machine-learning-life-cycle/release/1

[162] Buolamwini, J. and Gebru, T. (2018). ‘Gender shades: Intersectional Accuracy Disparities in Commercial Gender Classification.’ Proceedings of the 1st Conference on Fairness, Accountability and Transparency. Conference on Fairness, Accountability and Transparency, PMLR, pp. 77–91. Available at: https://proceedings.mlr.press/v81/buolamwini18a.html

[163] Asaro, P.M. (2019). AI Ethics in Predictive Policing: From Models of Threat to an Ethics of Care. Available at: https://peterasaro.org/writing/AsaroPredicitvePolicingAIEthicsofCare.pdf

[164] O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.

[165] Angwin, J., Larson, J., Mattu, S. and Kirchner, L. (2016). ‘Machine Bias’. ProPublica. Available at: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

[166] Keyes, O. (2018). ‘The misgendering machines: Trans/HCI implications of automatic gender recognition’. Proceedings of the ACM on human-computer interaction, 2(CSCW), pp. 1–22.

[167] Hamidi, F., Scheuerman, M. K. and Branham, S. M. (2018). ‘Gender recognition or gender reductionism? The social implications of embedded gender recognition systems’. CHI ’18. Proceedings of the 2018 CHI conference on human factors in computing systems, pp. 1–13. Available at: https://dl.acm.org/doi/abs/10.1145/3173574.3173582

[168] Scheuerman, M. K., Wade, K., Lustig, C. and Brubaker, J. R. (2020). ‘How We’ve Taught Algorithms to See Identity: Constructing Race and Gender in Image Databases for Facial Analysis’. Proceedings of the ACM on Human-Computer Interaction, 4(CSCW1), pp. 1–35. Available at: https://dl.acm.org/doi/abs/10.1145/3392866

[169] Mehrabi, N., Morstatter, F., Saxena, N. et al. (2021). ‘A survey on bias and fairness in machine learning’. ACM Computing Surveys (CSUR), 54(6), pp. 1–35.

[170] Crawford, K. (2021). The Atlas of AI. Yale University Press.

[171] Source: Leslie, D. et al. (2021). ‘Does “AI” stand for augmenting inequality in the era of COVID-19 healthcare?’. BMJ, 372. Available at: https://www.bmj.com/content/372/bmj.n304

[172] Irani, L. C. and Silberman, M. S. (2013). ‘Amazon Mechanical Turk: Gold Mine or Coal Mine?’ CHI ’13: Proceedings of the SIGCHI conference on human factors in computing systems, pp. 611–620); Available at: https://dl.acm.org/doi/abs/10.1145/2470654.2470742

[173] Massachusetts Institute of Technology – Committee on the Use of Humans as Experimental Subjects. COUHES Policy for Using Amazon’s Mechanical Turk. Available at: https://couhes.mit.edu/guidelines/couhes-policy-using-amazons-mechanical-turk

[174] Jindal, S. (2021). ‘Responsible Sourcing of Data Enrichment Services’. Partnership on AI. Available at: https://partnershiponai.org/responsible-sourcing-considerations/; and Northwestern University. Guidelines for Academic Requesters. Available at: https://irb.northwestern.edu/docs/guidelinesforacademicrequesters-1.pdf

[175] Friesen, P., DouglasJones, R., Marks, M. et al. (2021). ‘Governing AIDriven Health Research: Are IRBs Up to the Task?’ Ethics & Human Research, 43(2), pp. 35–42. Available at: https://onlinelibrary.wiley.com/doi/abs/10.1002/eahr.500085

[176] Wang, Y. and Kosinski, M. (2018). ‘Deep neural networks are more accurate than humans at detecting sexual orientation from facial images’. Journal of Personality and Social Psychology, 114(2), p. 246. Available at: https://psycnet.apa.org/doiLanding?doi=10.1037%2Fpspa0000098

[177] Wang, C., Zhang, Q., Duan, X. and Gan, J. (2018). ‘Multi-ethnical Chinese facial characterization and analysis’. Multimedia Tools and Applications, 77(23), pp. 30311–30329.

[178] Strubell, E., Ganesh, A. and McCallum, A. (2019). ‘Energy and policy considerations for deep learning in NLP’. arXiv. Available at: https://arxiv.org/abs/1906.02243

[179] Bender, E.M., Gebru, T., McMillan-Major, A. and Shmitchell, S. (2021). ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’ Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ‘21), pp. 610–623. Available at: https://doi.org/10.1145/3442188.3445922

[180] Denton, E., Hanna, A., Amironesei, R. et al. (2020). ‘Bringing the people back in: Contesting benchmark machine learning datasets’. arXiv. Available at: https://doi.org/10.48550/arXiv.2007.07399

[181] Birhane, A. and Prabhu, V. U. (2021). ‘Large image datasets: A pyrrhic win for computer vision?’. Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pp. 1537–1547. Available at: https://doi.org/10.48550/arXiv.2006.16923

[182] Jensen, B. (2021). ‘A New Approach to Mitigating AI’s Negative Impact’. Institute for Human-Centered Artificial Intelligence. Available at: https://hai.stanford.edu/news/new-approach-mitigating-ais-negative-impact

[183] Green, B. (2019). ‘“Good” isn’t good enough’. Proceedings of the AI for Social Good workshop at NeurIPS. Available at: http://ai.ethicsworkshop.org/Library/LibContentAcademic/GoodNotGoodEnough.pdf

[184] For example, the UK National Data Guardian published the results of a public consultation on how health and care data should be used to benefit the public, which may prove a model for the AI and data science research communities to follow. See: National Data Guardian. (2021). Putting Good Into Practice. A public dialogue on making public benefit assessments when using health and care data. UK Government. Available at: https://www.gov.uk/government/publications/putting-good-into-practice-a-public-dialogue-on-making-public-benefit-assessments-when-using-health-and-care-data

[185] Kerner, H. (2020). ‘Too many AI researchers think real-world problems are not relevant’. MIT Technology Review. Available at: https://www.technologyreview.com/2020/08/18/1007196/ai-research-machine-learning-applications-problems-opinion/

[186] Moss, E. and Metcalf, J. (2020). Ethics Owners. A New Model of Organizational Responsibility in Data-Driven Technology Companies. Data & Society. Available at: https://datasociety.net/library/ethics-owners/

[187] Moss, E. and Metcalf, J. (2020).

[188] Hedgecoe, A. (2015). ‘Reputational Risk, Academic Freedom and Research Ethics Review’. British Sociological Association, 50(3), pp.486–501. Available at: https://journals.sagepub.com/doi/full/10.1177/0038038515590756

[189] Dave, P. and Dastin, J. (2020) ‘Google told its scientists to “strike a positive tone” in AI research – documents’. Reuters. Available at: https://www.reuters.com/article/us-alphabet-google-research-focus-idUSKBN28X1CB

[190] Simonite, T. (2021). ‘What Really Happened When Google Ousted Timnit Gebru’. Wired. Available at: https://www.wired.com/story/google-timnit-gebru-ai-what-really-happened/

[191] Ferretti, A., Ienca, M., Sheehan, M. et al. (2021). ‘Ethics review of big data research: What should stay and what should be reformed?’. BMC Medical Ethics, 22(1), pp. 1–13. Available at: https://bmcmedethics.biomedcentral.com/articles/10.1186/s12910-021-00616-4

[192] Smith, J. J., Amershi, S., Barocas, S. et al. (2022). ‘REAL ML: Recognizing, Exploring, and Articulating Limitations of Machine Learning Research’. 2022 ACM Conference on Fairness, Accountability, and Transparency (FaccT ’22). Available at: https://facctconference.org/static/pdfs_2022/facct22-47.pdf

[193] Ada Lovelace Institute. (2021). Algorithmic impact assessment: a case study in healthcare. Available at: https://www.adalovelaceinstitute.org/project/algorithmic-impact-assessment-healthcare/

[194] Zaken, M. van A. (2022). Impact Assessment Fundamental Rights and Algorithms. The Ministry of the Interior and Kingdom Relations. Available at: https://www.government.nl/documents/reports/2022/03/31/impact-assessment-fundamental-rights-and-algorithms; Government of Canada. (2021). Algorithmic Impact Assessment Tool. Available at: https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/algorithmic-impact-assessment.html

[195] Jensen, B. (2021). ‘A New Approach To Mitigating AI’s Negative Impact’. Institute for Human-Centered Artificial Intelligence. Available at: https://hai.stanford.edu/news/new-approach-mitigating-ais-negative-impact

[196] Bernstein, M. S., Levi, M., Magnus, D. et al. (2021). ‘ESR: Ethics and Society Review of Artificial Intelligence Research’. arXiv. Available at: https://arxiv.org/abs/2106.11521

[197] Center for Advanced Study in the Behavioral Sciences at Stanford University. ‘Ethics & Society Review – Stanford University’. Available at: https://casbs.stanford.edu/ethics-society-review-stanford-university

[198] Sendak, M., Elish, M.C., Gao, M. et al. (2020). ‘“The Human Body Is a Black Box”: Supporting Clinical Decision-Making with Deep Learning.’ FAT* ‘20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 99–109. Available at: https://doi.org/10.1145/3351095.3372827

[199] Samuel, G. and Derrick, D. (2020). ‘Defining ethical standards for the application of digital tools to population health research’. Bulletin of the World Health Organization Supplement, 98(4), pp. 239–244. Available at: https://pubmed.ncbi.nlm.nih.gov/32284646/

[200] Kawas, S., Yuan, Y., DeWitt, A. et al (2020). ‘Another decade of IDC research: Examining and reflecting on values and ethics’. IDC ’20: Proceedings of the Interaction Design and Children Conference, pp. 205–215. Available at: https://dl.acm.org/doi/abs/10.1145/3392063.3394436

[201] Burr, C. and Leslie, D. (2021). ‘Ethical Assurance: A Practical Approach to the Responsible Design, Development, and Deployment of Data-Driven Technologies’. Social Science Research Network. Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3937983

[202] Sandler, R. and Basel, J. (2019). Building Data and AI Ethics Committees, p. 19. Accenture. Available at: https://www.accenture.com/_acnmedia/pdf-107/accenture-ai-and-data-ethics-committee-report-11.pdf

 

[203] UK Statistics Authority. Ethics Self-Assessment Tool. Available at: https://uksa.statisticsauthority.gov.uk/the-authority-board/committees/national-statisticians-advisory-committees-and-panels/national-statisticians-data-ethics-advisory-committee/ethics-self-assessment-tool/

[204] Ferretti, A., Ienca, M., Sheehan, M. et al. (2021). ‘Ethics review of big data research: What should stay and what should be reformed?’. BMC Medical Ethics, 22(1), pp. 1–13. Available at: https://bmcmedethics.biomedcentral.com/articles/10.1186/s12910-021-00616-4

[205] Ferretti, A., Ienca, M., Sheehan, M. et al. (2021).

[206] The concept of ‘ethical assurance’ is a process-based form of project governance that supports inclusive and participatory ethical deliberation while also remaining grounded in social and technical realities. See: Burr, C. and Leslie, D. (2021). ‘Ethical Assurance: A Practical Approach to the Responsible Design, Development, and Deployment of Data-Driven Technologies’. Social Science Research Network. Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3937983

[207] Centre for Data Ethics and Innovation (2022). The roadmap to an effective AI assurance ecosystem. UK Government. Available at: https://www.gov.uk/government/publications/the-roadmap-to-an-effective-ai-assurance-ecosystem

[208] d’Aquin, M., Troullinou, P., O’Connor, N. E. et al. (2018). ‘Towards an “Ethics by Design” Methodology for AI research projects’. AIES ’18: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pp. 54–59. Available at: https ://dl.acm.org/doi/abs/10.1145/3278721.3278765

[209] d’Aquin, M., Troullinou, P., O’Connor, N. E. et al. (2018).

[210] Dove, E. (2020). Regulatory Stewardship of Health Research: Navigating Participant Protection and Research Promotion. Edward Elgar.

[211] Ferretti, A., Ienca, M., Sheehan, M. et al. (2021). ‘Ethics review of big data research: What should stay and what should be reformed?’. BMC Medical Ethics, 22(1), pp. 1–13. Available at: https://bmcmedethics.biomedcentral.com/articles/10.1186/s12910-021-00616-4

[212] d’Aquin, M., Troullinou, P., O’Connor, N. E. et al. (2018). ‘Towards an “Ethics by Design” Methodology for AI research projects’. AIES ’18: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pp. 54–59. Available at: https ://dl.acm.org/doi/abs/10.1145/3278721.3278765

[213] Ferretti, A., Ienca, M., Sheehan, M. et al. (2021). ‘Ethics review of big data research: What should stay and what should be reformed?’. BMC Medical Ethics, 22(1), pp. 1–13. Available at: https://bmcmedethics.biomedcentral.com/articles/10.1186/s12910-021-00616-4

[214] Ferretti, A., Ienca, M., Sheehan, M. et al (2021).

[215] Source: Sandler, R. and Basl, J. (2019). Building Data and AI Ethics Committees, p. 19. Accenture. Available at: https://www.accenture.com/_acnmedia/pdf-107/accenture-ai-and-data-ethics-committee-report-11.pdf

[216] See: Ada Lovelace Institute. (2022). Looking before we leap: Case studies. Available at: https://www.adalovelaceinstitute.org/resource/research-ethics-case-studies/

[217] Department of Health and Social Care. (2021). A guide to good practice for digital and data-driven health technologies. UK Government. Available at: https://www.gov.uk/government/publications/code-of-conduct-for-data-driven-health-and-care-technology/initial-code-of-conduct-for-data-driven-health-and-care-technology

[218] Go Fair. Fair principles. Available at: https://www.go-fair.org/fair-principles/

[219] Digital Curation Centre (DCC). ‘List of metadata standards’. Available at: https://www.dcc.ac.uk/guidance/standards/metadata/list

[220] Partnership on AI. (2021). Responsible Sourcing of Data Enrichment Services. Available at: https://partnershiponai.org/paper/responsible-sourcing-considerations/

[221] Northwestern University. (2014). Guidelines for Academic Requesters. Available at: https://irb.northwestern.edu/docs/guidelinesforacademicrequesters-1.pdf

[222]Partnership on AI. AI Incidents Database. Available at: https://partnershiponai.org/workstream/ai-incidents-database/

[223] AIAAIC. AIAAIC Repository. Available at: https://www.aiaaic.org/aiaaic-repository

 

[224] DeepMind. (2022). ‘How our principles helped define Alphafolds release’. Available at: https://www.deepmind.com/blog/how-our-principles-helped-define-alphafolds-release

[225] Jobin, A., Ienca, M. and Vayena, E. (2019). ‘The global landscape of AI ethics guidelines’. Nature, 1, pp. 389–399. Available at : https://doi.org/10.1038/s42256-019-0088-2

[226] Jobin, A., Ienca, M. and Vayena, E. (2019).

[227] Ferretti, A., Ienca, M., Sheehan, M. et al. (2021). ‘Ethics review of big data research: What should stay and what should be reformed?’. BMC Medical Ethics, 22(1), pp. 1–13. Available at: https://bmcmedethics.biomedcentral.com/articles/10.1186/s12910-021-00616-4

[228] Dove, E. S. and Garattini, C. (2018). ‘Expert perspectives on ethics review of international data-intensive research: Working towards mutual recognition’. Research Ethics, 14(1), pp. 1–25.

[229] Mitrou, L. (2018). ‘Data Protection, Artificial Intelligence and Cognitive Services: Is the General Data Protection Regulation (GDPR) “Artificial Intelligence-Proof”?’. Social Science Research Network. Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3386914

[230] Information Commissioner’s Office (ICO). Guidance on AI and data protection. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/key-dp-themes/guidance-on-ai-and-data-protection/

[231] Samuel, G., Derrick, G. E. and Van Leeuwen, T. (2019). ‘The ethics ecosystem: Personal ethics, network governance and regulating actors governing the use of social media research data’. Minerva, 57(3), pp. 317–343. Available at: https://link.springer.com/article/10.1007/s11024-019-09368-3

[232] Ferretti, A., Ienca, M., Sheehan, M. et al. (2021). ‘Ethics review of big data research: What should stay and what should be reformed?’. BMC Medical Ethics, 22(1), pp. 1–13. Available at: https://bmcmedethics.biomedcentral.com/articles/10.1186/s12910-021-00616-4

[233]  Ashurst, C., Anderljung, M., Prunkl, C. et al. (2020). ‘A Guide to Writing the NeurIPS Impact Statement’. Centre for the Governance of AI. Available at: https://medium.com/@GovAI/a-guide-to-writing-the-neurips-impact-statement-4293b723f832

[234] Castelvecchi, D. (2020). ‘Prestigious AI meeting takes steps to improve ethics of research’. Nature, 589(7840), pp. 12–13. Available at: https://doi.org/10.1038/d41586-020-03611-8

[235] NeurIPS. (2021). NeurIPS 2021 Paper Checklist Guidelines. Available at: https://neurips.cc/Conferences/2021/PaperInformation/PaperChecklist

[236] Canadian Institute for Advanced Research, Partnership on AI and Ada Lovelace Institute. (2022). A culture of ethical AI: report. Available at: https://www.adalovelaceinstitute.org/event/culture-ethical-ai-cifar-pai/

[237] Gardner, A., Smith, A. L., Steventon, A. et al. (2021). ‘Ethical funding for trustworthy AI: proposals to address the responsibilities of funders to ensure that projects adhere to trustworthy AI practice’. AI and Ethics, 2. pp.1–15. Available at: https://link.springer.com/article/10.1007/s43681-021-00069-w

[238] Provost, F. and  Fawcett T. (2013). ‘Data science and its relationship to big data and data-driven decision making’. Big Data, 1(1), pp. 51–59.

[239] We borrow from the definition used by the European Commission’s High Level Expert Group on AI. See: European Commission. (2019). Ethics guidelines for trustworthy AI. Available at: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

[240] The Alan Turing Institute. ‘About us’. Available at: https://www.turing.ac.uk/about-us

[241] The Turing Way Community et al. (2019). The Turing Way: A Handbook for Reproducible Data Science. Available at: https://the-turing-way.netlify.app/welcome

[242] The Turing Way Community et al. (2020). Guide for Reproducible Research. Available at: https://the-turing-way.netlify.app/reproducible-research/reproducible-research.html

[243] The Turing Way Community et al. (2020). Guide for Project Design. Available at: https://the-turing-way.netlify.app/project-design/project-design.html

[244] The Turing Way Community et al. (2020). Guide for Communication. Available at: https://the-turing-way.netlify.app/communication/communication.html

[245] The Turing Way Community et al. (2020). Guide for Collaboration. Available at: https://the-turing-way.netlify.app/collaboration/collaboration.html

[246]The Turing Way Community et al. (2020). Guide for Ethical Research. Available at: https://the-turing-way.netlify.app/ethical-research/ethical-research.html

[247] See: https://saildatabank.com/

[248] University of Exeter. (2021). Ethics Policy. Available at: https://www.exeter.ac.uk/media/universityofexeter/governanceandcompliance/researchethicsandgovernance/Ethics_Policy_Revised_November_2020.pdf

 

[249] University of Exeter. (2021). Research Ethics Policy and Framework. Available at: https://www.exeter.ac.uk/media/universityofexeter/governanceandcompliance/researchethicsandgovernance/Revised_UoE_Research_Ethics_Framework_v1.1_07052021.pdf

[250] University of Exeter (2021).

  1. Hancock, A. and Steer, G. (2021) ‘Johnson backtracks on vaccine “passport for pubs” after backlash’, Financial Times, 25 March 2021. Available at: https://www.ft.com/content/aa5e8372-8cec-4b82-96d8-0019f2f24998 (Accessed: 5 April 2021).
  2. Ada Lovelace Institute (2021) International monitor: vaccine passports and COVID status apps. Available at: https://www.adalovelaceinstitute.org/project/international-monitor-vaccine-passports-covid-status-apps/ (Accessed: 5 April 2021).
  3. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  4. Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021).
  5. Olivarius, K. (2020) ‘The Dangerous History of Immunoprivilege’, The New York Times. 12 April 2020. Available at: https://www.nytimes.com/2020/04/12/opinion/coronavirus-immunity-passports.html (Accessed: 6 April 2021).
  6. World Health Organization (ed.) (2016) International health regulations (2005). Third edition. Geneva, Switzerland: World Health Organization.
  7. Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021).
  8. Ada Lovelace Institute (2021) The epidemiological and economic impact of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=KRUmM-_Jjk4 (Accessed: 7 April 2021).
  9. Wilson, K., Atkinson, K. M. and Bell, C. P. (2016) ‘Travel Vaccines Enter the Digital Age: Creating a Virtual Immunization Record’, The American Journal of Tropical Medicine and Hygiene, 94(3), pp. 485–488. doi: 10.4269/ajtmh.15-0510
  10. Kobie, N. (2020) ‘Plans for coronavirus immunity passports should worry us all’, Wired UK, 8 June 2020. Available at: https://www.wired.co.uk/article/uk-immunity-passports-coronavirus (Accessed: 10 February 2021); Miller, J. (2020) ‘Armed with Roche antibody test, Germany faces immunity passport dilemma’, Reuters, 4 May 2020. Available at: https://www.reuters.com/article/health-coronavirusgermany-antibodies-idUSL1N2CM0WB (Accessed: 10 February 2021); Rayner, G. and Bodkin, H. (2020) ‘Government considering “health certificates” if proof of immunity established by new antibody test’, The Telegraph, 14 May 2020. Available at: https://www.telegraph.co.uk/politics/2020/05/14/government-considering-health-certificates-proof-immunity-established/ (Accessed: 10 February 2021).
  11. World Health Organisation (2020) “Immunity passports” in the context of COVID-19. Scientific Brief. 24 April 2020. Available at: https://www.who.int/news-room/commentaries/detail/immunity-passports-in-the-context-of-covid-19 (Accessed: 10 February 2021).
  12. Ministry of Health (2021) What is a Green Pass? Available at: https://corona.health.gov.il/en/directives/green-pass-info/ (Accessed: 6 April 2021).
  13. European Commission (2021) Coronavirus: Commission proposes a Digital Green Certificate. Available at: https://ec.europa.eu/commission/presscorner/detail/en/IP_21_1181 (Accessed: 6 April 2021).
  14. Prime Minister’s Office (2021) Rammeaftale om plan for genåbning af Danmark [Framework agreement on a plan for the reopening of Denmark]. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  15. World Health Organisation (2020) Estonia and WHO to jointly develop digital vaccine certificate to strengthen COVAX. Available at: https://www.who.int/news-room/feature-stories/detail/estonia-and-who-to-jointly-develop-digital-vaccine-certificate-to-strengthen-covax (Accessed: 6 April 2021). World Health Organisation (2020) World Health Organization open call for nomination of experts to contribute to the Smart Vaccination Certificate technical specifications and standards. Available at: https://www.who.int/news-room/articles-detail/world-health-organization-open-call-for-nomination-of-experts-to-contribute-to-the-smart-vaccination-certificate-technical-specifications-and-standards-application-deadline-14-december-2020 (Accessed: 6 April 2021). Reuters (2021), WHO does not back vaccination passports for now – spokeswoman. Available at: https://www.reuters.com/article/us-health-coronavirus-who-vaccines-idUKKBN2BT158 (Accessed: 13 April 2021)
  16. IBM (2021) Digital Health Pass – Overview. Available at: https://www.ibm.com/products/digital-health-pass (Accessed: 6 April 2021).
  17. Watson Health (2020) ‘IBM and Salesforce join forces to help deliver verifiable vaccine and health passes’, Watson Health Perspectives. Available at: https://www.ibm.com/blogs/watson-health/partnership-with-salesforce-verifiable-health-pass/ (Accessed: 6 April 2021).
  18. New York State (2021) Excelsior Pass. Available at: https://covid19vaccine.health.ny.gov/excelsior-pass (Accessed: 6 April 2021).
  19. CommonPass (2021) CommonPass. Available at: https://commonpass.org (Accessed: 7 April 2021); IATA (2021) IATA Travel Pass Initiative. Available at: https://www.iata.org/en/programs/passenger/travel-pass/ (Accessed: 7 April 2021).
  20. COVID-19 Credentials Initiative (2021). COVID-19 Credentials Initiative. Available at: https://www.covidcreds.org/ (Accessed: 7 April 2021). VCI (2021). Available at: https://vci.org/ (Accessed: 7 April 2021).
  21. myGP (2020) ‘“myGP” to launch England’s first digital COVID-19 vaccination verification feature for smartphones.’ myGP. 9 December 2020. Available at: https://www.mygp.com/mygp-to-launch-englands-first-digital-covid-19-vaccination-verificationfeature-for-smartphones/ (Accessed: 7 April 2021); iProov (2021) Covid-19 Passport from iProov and Mvine Moves Into Trial Phase. Available at: https://www.iproov.com/press/uk-covid19-passport-moves-into-trial-phase (Accessed: 7 April 2021).
  22. BBC News (2020) ‘Covid-19: No plans for “vaccine passport” – Michael Gove’, BBC News. 1 December 2020. Available at: https://www.bbc.com/news/uk-55143484 (Accessed: 7 April 2021). BBC News (2021) ‘Covid: Minister rules out vaccine passports in UK’, BBC News. 7 February 2021. Available at: https://www.bbc.com/news/55970801 (Accessed: 7 April 2021).
  23. Sheridan, D. (2021) ‘Vaccine passports to enter shops, pubs and events “under consideration”’, The Telegraph, 14 February 2021. Available at: https://www.telegraph.co.uk/news/2021/02/14/vaccine-passports-enter-shops-pubs-events-consideration/ (Accessed: 7 April 2021); Zeffman, H. and Dathan, M. (2021) ‘Boris Johnson sees Covid vaccine passport app as route to freedom’, The Times, 11 February 2021. Available at: https://www.thetimes.co.uk/article/boris-johnson-sees-covid-vaccine-passport-app-as-route-tofreedom-rt07g63xn (Accessed: 7 April 2021).
  24. Boland, H. (2021) ‘Government funds eight vaccine passport schemes despite “no plans” for rollout’, The Telegraph, 24 January 2021. Available at: https://www.telegraph.co.uk/technology/2021/01/24/government-funds-eight-vaccine-passport-schemes-despiteno-plans/ (Accessed: 7 April 2021). Department of Health and Social Care (2020), Covid-19 Certification/Passport MVP. Available at: https://www.contractsfinder.service.gov.uk/notice/bf6eef14-6345-429a-a4e7-df68a39bd135 (Accessed: 13 April 2021). Hymas, C. and Diver, T. (2021) ‘Vaccine certificates being developed to unlock international travel’, The Telegraph, 12 February 2021. Available at: https://www.telegraph.co.uk/politics/2021/02/12/government-develop-COVID-vaccine-certificates-travel-abroad/ (Accessed: 7 April 2021)
  25. Cabinet Office (2021) COVID-19 Response – Spring 2021, GOV.UK. Available at: https://www.gov.uk/government/publications/COVID19-response-spring-2021/COVID-19-response-spring-2021 (Accessed: 7 April 2021)
  26. Cabinet Office (2021) Roadmap Reviews: Update. Available at: https://www.gov.uk/government/publications/COVID-19-responsespring-2021-reviews-terms-of-reference/roadmap-reviews-update.
  27. Scientific Advisory Group for Emergencies (2021) ‘SAGE 79 minutes: Coronavirus (COVID-19) response, 4 February 2021’, GOV.UK. 22 February 2021, Available at: https://www.gov.uk/government/publications/sage-79-minutes-coronavirus-covid-19-response-4-february-2021 (Accessed: 6 April 2021).
  28. Ada Lovelace Institute (2021) The epidemiological and economic impact of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=KRUmM-_Jjk4 (Accessed: 7 April 2021)
  29. European Centre for Disease Prevention and Control (2021) Risk of SARS-CoV-2 transmission from newly-infected individuals with documented previous infection or vaccination. Available at: https://www.ecdc.europa.eu/en/publications-data/sars-cov-2-transmission-newly-infected-individuals-previous-infection (Accessed: 13 April 2021). Science News (2021) Moderna and Pfizer COVID-19 vaccines may block infection as well as disease. Available at: https://www.sciencenews.org/article/coronavirus-covidvaccine-moderna-pfizer-transmission-disease (Accessed: 13 April 2021)
  30. Bonnefoy, P. and Londoño, E. (2021) ‘Despite Chile’s Speedy COVID-19 Vaccination Drive, Cases Soar’, The New York Times, 30 March 2021. Available at: https://www.nytimes.com/2021/03/30/world/americas/chile-vaccination-cases-surge.html (Accessed: 6 April 2021)
  31. Zimmer, C., Corum, J. and Wee, S.-L. (no date) ‘Coronavirus Vaccine Tracker’, The New York Times. Available at: https://www.nytimes.com/interactive/2020/science/coronavirus-vaccine-tracker.html (Accessed: 21 April 2021). Parker et al. (2021) An interactive website tracking COVID-19 vaccine development. Available at: https://vac-lshtm.shinyapps.io/ncov_vaccine_landscape/ (Accessed: 21 April 2021)
  32. BBC News (2021) ‘COVID: Oxford jab offers less S Africa variant protection’, BBC News. 7 February 2021. Available at: https://www.bbc.com/news/uk-55967767 (Accessed: 6 April 2021).
  33. Wise, J. (2021) ‘COVID-19: The E484K mutation and the risks it poses’, The BMJ, p. n359. doi: 10.1136/bmj.n359. Sample, I. (2021) ‘What do we know about the Indian coronavirus variant?’, The Guardian, 19 April 2021. Available at: https://www.theguardian.com/world/2021/apr/19/what-do-we-know-about-the-indian-coronavirus-variant (Accessed: 22 April 2021).
  34. World Health Organisation (2021) Coronavirus disease (COVID-19): Vaccines. Available at: https://www.who.int/news-room/q-a-detail/coronavirus-disease-(COVID-19)-vaccines (Accessed: 6 April 2021)
  35. ibid.
  36. The Royal Society provides a different categorisation, between measures demonstrating the subject is not infectious (PCR and Lateral Flow tests) and those suggesting the subject is immune and so will not become infectious (antibody tests and vaccination). Edgar Whitley, a member of our expert deliberative panel, distinguishes between ‘red light’ measures, which say a person is potentially infectious and should self-isolate, and ‘green light’ ones, which say a person tests negative and is not infectious.
  37. Asai, T. (2020) ‘COVID-19: accurate interpretation of diagnostic tests—a statistical point of view’, Journal of Anesthesia. doi: 10.1007/s00540-020-02875-8.
  38. Kucirka, L. M. et al. (2020) ‘Variation in False-Negative Rate of Reverse Transcriptase Polymerase Chain Reaction–Based SARS-CoV-2 Tests by Time Since Exposure’, Annals of Internal Medicine. doi: 10.7326/M2
  39. European Centre for Disease Prevention and Control (2021) Immune responses and immunity to SARS-CoV-2, European Centre for Disease Prevention and Control. Available at: https://www.ecdc.europa.eu/en/COVID-19/latest-evidence/immune-responses (Accessed: 10 February 2021).
  40. Ainsworth, M. et al. (2020) ‘Performance characteristics of five immunoassays for SARS-CoV-2: a head-to-head benchmark comparison’, The Lancet Infectious Diseases, 20(12), pp. 1390–1400. doi: 10.1016/S1473-3099(20)30634-4.
  41. European Centre for Disease Prevention and Control (2021) Immune responses and immunity to SARS-CoV-2, European Centre for Disease Prevention and Control. Available at: https://www.ecdc.europa.eu/en/COVID-19/latest-evidence/immune-responses (Accessed: 10 February 2021).
  42. Kellam, P. and Barclay, W. (2020) ‘The dynamics of humoral immune responses following SARS-CoV-2 infection and the potential for reinfection’, Journal of General Virology, 101(8), pp. 791–797. doi: 10.1099/jgv.0.001439.
  43. Drury, J. et al. (2021) Behavioural responses to Covid-19 health certification: A rapid review. 9 April 2021. Available at: https://www.medrxiv.org/content/10.1101/2021.04.07.21255072v1 (Accessed: 13 April 2021).
  44. ibid.
  45. Miller, B., Wain, R. and Alderman, G. (2021) ‘Introducing a Global COVID Travel Pass to Get the World Moving Again’, Tony Blair Institute for Global Change. Available at: https://institute.global/policy/introducing-global-COVID-travel-pass-get-world-moving-again (Accessed: 6 April 2021).
  46. World Health Organisation (2021) Interim position paper: considerations regarding proof of COVID-19 vaccination for international travellers. Available at: https://www.who.int/news-room/articles-detail/interim-position-paper-considerations-regarding-proof-of-COVID-19-vaccination-for-international-travellers (Accessed: 6 April 2021).
  47. World Health Organisation (2021) Call for public comments: Interim guidance for developing a Smart Vaccination Certificate – Release Candidate 1. Available at: https://www.who.int/news-room/articles-detail/call-for-public-comments-interim-guidance-for-developing-a-smart-vaccination-certificate-release-candidate-1 (Accessed: 6 April 2021).
  48. SPI-M-O (2020) Consensus statement on events and gatherings, 19 August 2020. Available at: https://www.gov.uk/government/publications/spi-m-o-consensus-statement-on-events-and-gatherings-19-august-2020 (Accessed: 13 April 2021)
  49. Patrick Gracey, Response to Ada Lovelace Institute call for evidence.
  50. Walker, P. (2021) ‘UK arts figures call for Covid certificates to revive industry’, The Guardian. 23 April 2021. Available at: http://www.theguardian.com/culture/2021/apr/23/uk-arts-figures-covid-certificates-revive-industry-letter (Accessed: 5 May 2021).
  51. Silverstone (2021) Summer sporting events support Covid certification, 9 April 2021. Available at: https://www.silverstone.co.uk/news/summer-sporting-events-support-covid-certification-review (Accessed: 22 April 2021).
  52. BBC News (2021) ‘Pimlico Plumbers to make workers get vaccinations’. BBC News. Available at: https://www.bbc.co.uk/news/business-55654229 (Accessed: 13 April 2021).
  53. Leadership and Worker Engagement Forum (2021) ‘Management of risk when planning work: The right priorities’, Leadership and worker involvement toolkit, p. 1. Available at: https://www.hse.gov.uk/construction/lwit/assets/downloads/hierarchy-risk-controls.pdf.
  54. Department of Health and Social Care (2021) ‘Consultation launched on staff COVID-19 vaccines in care homes with older adult residents’. GOV.UK. Available at: https://www.gov.uk/government/news/consultation-launched-on-staff-covid-19-vaccines-in-care-homes-with-older-adult-residents (Accessed: 14 April 2021)
  55. Full Fact (2021) Is there a precedent for mandatory vaccines for care home workers? Available at: https://fullfact.org/health/mandatory-vaccine-care-home-hepatitis-b/ (Accessed: 6 April 2021).
  56. House of Commons Work and Pensions Committee. (2021) Oral evidence: Health and Safety Executive HC 39. 17 March 2021. Available at: https://committees.parliament.uk/oralevidence/1910/pdf/ (Accessed: 6 April 2021). Q178
  57. Acas (2021) Getting the coronavirus (COVID-19) vaccine for work. Available at: https://www.acas.org.uk/working-safely-coronavirus/getting-the-coronavirus-vaccine-for-work (Accessed: 6 April 2021).
  58. Pakes, A. (2020) ‘Workplace digital monitoring and surveillance: what are my rights?’, Prospect. Available at: https://prospect.org.uk/news/workplace-digital-monitoring-and-surveillance-what-are-my-rights/ (Accessed: 6 April 2021).
  59. Allegretti, A. and Booth, R. (2021) ‘Covid-status certificate scheme could be unlawful discrimination, says EHRC’. The Guardian. 14 April 2021. Available at: https://www.theguardian.com/world/2021/apr/14/covid-status-certificates-may-cause-unlawful-discrimination-warns-ehrc (Accessed: 14 April 2021).
  60. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  61. European Court of Human Rights (2014) Case of Brincat and Others v. Malta. Available at: http://hudoc.echr.coe.int/eng?i=001-145790 (Accessed: 6 April 2021).
  62. Ministry of Health (2021) What is a Green Pass? Available at: https://corona.health.gov.il/en/directives/green-pass-info/ (Accessed: 6 April 2021). Ministry of Health (2021) Traffic Light App for Businesses. Available at: https://corona.health.gov.il/en/directives/biz-ramzor-app/ (Accessed: 8 April 2021).
  63. Prime Minister’s Office (2021) Rammeaftale om plan for genåbning af Danmark [Framework agreement on a plan for the reopening of Denmark]. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  64. Beduschi, A. (2020) Digital Health Passports for COVID-19: Data Privacy and Human Rights Law. University of Exeter. Available at: https://socialsciences.exeter.ac.uk/media/universityofexeter/collegeofsocialsciencesandinternationalstudies/lawimages/research/Policy_brief_-_Digital_Health_Passports_COVID-19_-_Beduschi.pdf (Accessed: 6 April 2021).
  65. Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence.
  66. ibid.
  67. Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence.
  68. Beduschi, A. (2020)
  69. European Court of Human Rights. (2020) Guide on Article 8 of the European Convention on Human Rights. Available at: https://www.echr.coe.int/documents/guide_art_8_eng.pdf (Accessed: 6 April 2021).
  70. Access Now, Response to Ada Lovelace Institute call for evidence.
  71. Privacy International (2020) “Anytime and anywhere”: Vaccination passports, immunity certificates, and the permanent pandemic. Available at: http://privacyinternational.org/long-read/4350/anytime-and-anywhere-vaccination-passports-immunity-certificates-and-permanent (Accessed: 26 April 2021).
  72. Douglas, T. (2021) ‘Cross Post: Vaccine Passports: Four Ethical Objections, and Replies’. Practical Ethics. Available at: http://blog.practicalethics.ox.ac.uk/2021/03/cross-post-vaccine-passports-four-ethical-objections-and-replies/ (Accessed: 8 April 2021).
  73. Brown, R. C. H. et al. (2020) ‘Passport to freedom? Immunity passports for COVID-19’, Journal of Medical Ethics, 46(10), pp. 652–659. doi: 10.1136/medethics-2020-106365.
  74. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence; Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence.
  75. Beduschi, A. (2020).
  76. Black, I. and Forsberg, L. (2021) ‘Inoculate to Imbibe? On the Pub Landlord Who Requires You to be Vaccinated against COVID’. Practical Ethics. Available at: http://blog.practicalethics.ox.ac.uk/2021/03/inoculate-to-imbibe/ (Accessed: 6 April 2021).
  77. Hindu Council UK (2021) Supporting Nationwide Vaccination Programme. 19 January 2021. Available at: http://www.hinducounciluk.org/2021/01/19/supporting-nationwide-vaccination-programme/ (Accessed: 6 April 2021); Ladaria Ferrer, L. and Morandi, G. (2020) ‘Note on the morality of using some anti-COVID-19 vaccines’. Vatican. Available at: https://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_con_cfaith_doc_20201221_nota-vaccini-antiCOVID_en.html (Accessed: 6 April 2021); Kadri, S. (2021) ‘For Muslims wary of the COVID vaccine: there’s every religious reason not to be’. The Guardian. 8 February 2021. Available at: http://www.theguardian.com/commentisfree/2021/feb/18/muslims-wary-COVID-vaccine-religious-reason (Accessed: 6 April 2021).
  78. Office for National Statistics (2021) Coronavirus and vaccination rates in people aged 50 years and over by socio-demographic characteristic, England: 8 December 2020 to 12 April 2021. 6 May 2021. Available via the Office for National Statistics website (ons.gov.uk).
  79. Schraer, R. (2021) ‘Covid: Black leaders fear racist past feeds mistrust in vaccine’. BBC News. 6 May 2021. Available at: https://www.bbc.co.uk/news/health-56813982 (Accessed: 7 May 2021).
  80. Allegretti, A. and Booth, R. (2021).
  81. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  82. Black, I. and Forsberg, L. (2021).
  83. Beduschi, A. (2020).
  84. Thomas, N. (2021) ‘Vaccine passports: path back to normality or problem in the making?’, Reuters, 5 February 2021. Available at: https://www.reuters.com/article/us-health-coronavirus-britain-vaccine-pa-idUSKBN2A4134 (Accessed: 6 April 2021).
  85. Buolamwini, J. and Gebru, T. (2018) ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’, in Conference on Fairness, Accountability and Transparency. PMLR, pp. 77–91. Available at: http://proceedings.mlr.press/v81/buolamwini18a.html (Accessed: 6 April 2021).
  86. Kofler, N. and Baylis, F. (2020) ‘Ten reasons why immunity passports are a bad idea’, Nature, 581(7809), pp. 379–381. doi: 10.1038/d41586-020-01451-0.
  87. ibid.
  88. Olivarius, K. (2019) ‘Immunity, Capital, and Power in Antebellum New Orleans’, The American Historical Review, 124(2), pp. 425–455. doi: 10.1093/ahr/rhz176.
  89. Access Now, Response to Ada Lovelace Institute call for evidence.
  90. Dr Btihaj Ajana, Response to Ada Lovelace Institute call for evidence.
  91. Pai, M. (2021) ‘How Vaccine Passports Will Worsen Inequities In Global Health’, Nature Portfolio Microbiology Community. Available at: http://naturemicrobiologycommunity.nature.com/posts/how-vaccine-passports-will-worsen-inequities-in-global-health (Accessed: 6 April 2021).
  92. Merrick, J. (2021) ‘New variants will “come back to haunt” the UK unless it helps tackle worldwide transmission’, iNews, 23 April 2021. Available at: https://inews.co.uk/news/politics/new-variants-will-come-back-to-haunt-the-uk-unless-it-helps-tackle-worldwide-transmission-971041 (Accessed: 5 May 2021).
  93. Kuchler, H. and Williams, A. (2021) ‘Vaccine makers say IP waiver could hand technology to China and Russia’, Financial Times, 25 April 2021. Available at: https://www.ft.com/content/fa1e0d22-71f2-401f-9971-fa27313570ab (Accessed: 5 May 2021).
  94. Digital, Culture, Media and Sport Committee Sub-Committee on Online Harms and Disinformation (2021). Oral evidence: Online harms and the ethics of data, HC 646. 26 January 2021. Available at: https://committees.parliament.uk/oralevidence/1586/html/ (Accessed: 9 April 2021).
  95. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  96. A principle that argues reforms should not be made until the reasoning behind the existing state of affairs is understood, inspired by a quote from G. K. Chesterton’s The Thing (1929), arguing that an intelligent reformer would not remove a fence until they know why it was put up in the first place.
  97. Pietropaoli, I. (2021) ‘Part 2: Getting Digital Health Passports Right? Legal, Ethical and Equality Considerations’. British Institute of International and Comparative Law. 1 April 2021. Available at: https://www.biicl.org/blog/23/part-2-getting-digital-health-passports-right-legal-ethical-and-equality-considerations (Accessed: 6 April 2021).
  98. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  99. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  100. Ada Lovelace Institute (2021) International monitor: vaccine passports and COVID status apps. Available at: https://www.adalovelaceinstitute.org/project/international-monitor-vaccine-passports-covid-status-apps/ (Accessed: 5 April 2021).
  101. Pew Research Center (2020) 8 charts on internet use around the world as countries grapple with COVID-19. Available at: https://www.pewresearch.org/fact-tank/2020/04/02/8-charts-on-internet-use-around-the-world-as-countries-grapple-with-covid-19/ (Accessed: 13 April 2021).
  102. Ada Lovelace Institute (2021) The data divide. Available at: https://www.adalovelaceinstitute.org/survey/data-divide/ (Accessed: 6 April 2021).
  103. Pew Research Center (2020).
  104. Electoral Commission (2015) Delivering and costing a proof of identity scheme for polling station voters in Great Britain. Available at: https://www.electoralcommission.org.uk/media/1825 (Accessed: 13 April 2021); Davies, C. (2021). ‘Number of young people with driving licence in Great Britain at lowest on record’, The Guardian. 5 April 2021. Available at: https://www.theguardian.com/money/2021/apr/05/number-of-young-people-with-driving-licence-in-great-britain-at-lowest-on-record (Accessed: 6 May 2021).
  105. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  106. NHS Digital. (2021) NHS e-Referral Service integrated into the NHS App to make managing referrals easier. Available at: https://digital.nhs.uk/news-and-events/latest-news/nhs-e-referral-service-integrated-into-the-nhs-app-to-make-managing-referrals-easier (Accessed: 28 April 2021).
  107. Access Now, Response to Ada Lovelace Institute call for evidence.
  108. For example, see: Mvine at Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021); evidence submitted to the Ada Lovelace Institute from Certus, IOTA, ZAKA, Tony Blair Institute for Global Change, SICPA, Yoti, Good Health Pass.
  109. Danish Government (2021) Rammeaftale om plan for genåbning af Danmark [Framework agreement on a plan for the reopening of Denmark]. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 13 April 2021).
  110. Danish Government (2021) Rammeaftale om plan for genåbning af Danmark [Framework agreement on a plan for the reopening of Denmark]. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 13 April 2021).
  111. Ada Lovelace Institute (2021) The Citizens’ Biometrics Council. Available at: https://www.adalovelaceinstitute.org/project/citizens-biometrics-council/ (Accessed: 13 April 2021)
  112. Whitley, E. (2021) ‘What must we consider if proof of Covid status is to help reopen the economy?’ LSE Department of Management blog. Available at: https://blogs.lse.ac.uk/management/2021/02/24/what-must-we-consider-if-proof-of-covid-status-is-to-help-reopen-the-economy/ (Accessed: 6 May 2021).
  113. Information Commissioner’s Office (2021) About the DPA 2018. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/introduction-to-data-protection/about-the-dpa-2018/ (Accessed: 6 April 2021).
  114. Beduschi, A. (2020).
  115. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  116. European Data Protection Board and European Data Protection Supervisor (2021), Joint Opinion 04/2021 on the Proposal for a Regulation of the European Parliament and of the Council on a framework for the issuance, verification and acceptance of interoperable certificates on vaccination, testing and recovery to facilitate free movement during the COVID-19 pandemic (Digital Green Certificate). Available at: https://edps.europa.eu/system/files/2021-04/21-03-31_edpb_edps_joint_opinion_digital_green_certificate_en_0.pdf (Accessed: 29 April 2021)
  117. Beduschi, A. (2020).
  118. ibid.
  119. Information Commissioner’s Office (2021) International transfers after the UK exit from the EU Implementation Period. ICO. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/international-transfers-after-uk-exit/ (Accessed: 5 May 2021).
  120. Global Privacy Assembly Executive Committee (2021).
  121. Beduschi, A. (2020).
  122. Global Privacy Assembly (2021) GPA Executive Committee joint statement on the use of health data for domestic or international travel purposes. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 13 April 2021).
  123. Information Commissioner’s Office (2021) Principle (c): Data minimisation. ICO. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/principles/data-minimisation/ (Accessed: 6 April 2021).
  124. Denham, E. (2021) ‘Blog: Data Protection law can help create public trust and confidence around COVID-status certification schemes’. ICO. Available at: https://ico.org.uk/about-the-ico/news-and-events/blog-data-protection-law-can-help-create-public-trust-and-confidence-around-COVID-status-certification-schemes/ (Accessed: 6 April 2021).
  125. Illmer, A. (2021) ‘Singapore reveals COVID privacy data available to police’, BBC News, 5 January 2021. Available at: https://www.bbc.com/news/world-asia-55541001 (Accessed: 6 April 2021). Gross, A. and Parker, G. (2020) Experts decry move to share COVID test and trace data with police, Financial Times. Available at: https://www.ft.com/content/d508d917-065c-448e-8232-416510592dd1 (Accessed: 6 April 2021).
  126. Halpin, H. (2020) ‘Vision: A Critique of Immunity Passports and W3C Decentralized Identifiers’, in van der Merwe, T., Mitchell, C., and Mehrnezhad, M. (eds) Security Standardisation Research. Cham: Springer International Publishing (Lecture Notes in Computer Science), pp. 148–168. doi: 10.1007/978-3-030-64357-7_7.
  127. FHIR (2019) HL7 FHIR Release 4. Available at: http://www.hl7.org/fhir/ (Accessed: 21 April 2021).
  128. Doteveryone (2019) Consequence scanning, an agile practice for responsible innovators. Available at: https://doteveryone.org.uk/project/consequence-scanning/ (Accessed: 21 April 2021)
  129. NHS Digital (2020) DCB3051 Identity Verification and Authentication Standard for Digital Health and Care Services. Available at: https://digital.nhs.uk/data-and-information/information-standards/information-standards-and-data-collections-including-extractions/publications-and-notifications/standards-and-collections/dcb3051-identity-verification-and-authentication-standard-for-digital-health-and-care-services (Accessed: 7 April 2021).
  130. Royal College of General Practitioners (2021) RCGP submission for the COVID-status Certification Review call for evidence. Available at: https://www.rcgp.org.uk/policy/rcgp-consultations/covid-status-certification-review.aspx (Accessed: 6 April 2021).
  131. Say, M. (2021) ‘Government gives Verify a stay of execution.’ UKAuthority. Available at: https://www.ukauthority.com/articles/government-gives-verify-a-stay-of-execution/ (Accessed: 5 May 2021).
  132. Cabinet Office and Lopez, J. (2021) ‘Julia Lopez speech to The Investing and Savings Alliance’. GOV.UK. Available at: https://www.gov.uk/government/speeches/julia-lopez-speech-to-the-investing-and-savings-alliance (Accessed: 6 April 2021).
  133. For more on digital identity during the pandemic see: Freeguard, G. and Shepheard, M. (2020) ‘Digital government during the coronavirus crisis’. Institute for Government. Available at: https://www.instituteforgovernment.org.uk/sites/default/files/publications/digital-government-coronavirus.pdf.
  134. Department for Digital, Culture, Media and Sport (2021) The UK digital identity and attributes trust framework, GOV.UK. Available at: https://www.gov.uk/government/publications/the-uk-digital-identity-and-attributes-trust-framework/the-uk-digital-identity-and-attributes-trust-framework (Accessed: 6 April 2021).
  135. Access Now, Response to Ada Lovelace Institute call for evidence.
  136. iProov (2021) Covid-19 Passport from iProov and Mvine Moves Into Trial Phase. Available at: https://www.iproov.com/press/uk-covid19-passport-moves-into-trial-phase (Accessed: 7 April 2021).
  137. Ada Lovelace Institute (2021) The socio-technical challenges of designing and building a vaccine passport system. Available at: https://www.youtube.com/watch?v=Md9CLWgdgO8&t=2s (Accessed: 7 April 2021).
  138. On general trust, polls include Ipsos MORI Veracity Index. On data trust, see RSS and ODI polling.
  139. Sommer, A. K. (2021) ‘Some foreigners in Israel are finally able to obtain COVID vaccine pass’. Haaretz.com. Available at: https://www.haaretz.com/israel-news/.premium-some-foreigners-in-israel-are-finally-able-to-obtain-COVID-19-green-passport-1.9683026 (Accessed: 8 April 2021).
  140. Cabinet Office (2020) ‘Ventilator Challenge hailed a success as UK production finishes’. GOV.UK. Available at: https://www.gov.uk/government/news/ventilator-challenge-hailed-a-success-as-uk-production-finishes (Accessed: 6 April 2021).
  141. For example, evidence received from techUK and World Health Pass.
  142. Our World in Data (2021) Coronavirus (COVID-19) Vaccinations. Available at: https://ourworldindata.org/covid-vaccinations (Accessed: 13 April 2021)
  143. FT Visual and Data Journalism team (2021) Covid-19 vaccine tracker: the global race to vaccinate. Financial Times. Available at: https://ig.ft.com/coronavirus-vaccine-tracker/ (Accessed: 13 April 2021)
  144. Full Fact. (2020) How does the new coronavirus compare to influenza? Available at: https://fullfact.org/health/coronavirus-compare-influenza/ (Accessed: 6 April 2021).
  145. BBC News (2021) ‘Coronavirus: Third wave will “wash up on our shores”, warns Johnson’. BBC News. 22 March 2021. Available at: https://www.bbc.com/news/uk-politics-56486067 (Accessed: 6 April 2021).
  146. Prime Minister’s Office (2021) Rammeaftale om plan for genåbning af Danmark [Framework agreement on a plan for the reopening of Denmark]. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  147. Tony Blair Institute for Global Change (2021) The New Necessary: How We Future-Proof for the Next Pandemic. Available at https://institute.global/policy/new-necessary-how-we-future-proof-next-pandemic (Accessed: 13 April 2021)
  148. Paton. G., (2021) ‘Cost of home Covid tests for travellers halved as companies accused of “profiteering”.’ The Times. 14 April 2021. Available at: https://www.thetimes.co.uk/article/cost-of-home-covid-tests-for-travellers-halved-as-companies-accused-of-profiteering-lh76wb585 (Accessed: 13 April 2021)
  149. Department of Health & Social Care (2021) ‘30 million people in UK receive first dose of coronavirus (COVID-19) vaccine’. GOV.UK. Available at: https://www.gov.uk/government/news/30-million-people-in-uk-receive-first-dose-of-coronavirus-COVID-19-vaccine (Accessed: 6 April 2021).
  150. Ipsos (2021) Global attitudes: COVID-19 vaccines. 9 February 2021. Available at: https://www.ipsos.com/en/global-attitudes-COVID-19-vaccine-january-2021 (Accessed: 6 April 2021).
  151. Reicher, S. and Drury, J. (2021) ‘How to lose friends and alienate people? On the problems of vaccine passports’, The BMJ, 1 April 2021. Available at: https://blogs.bmj.com/bmj/2021/04/01/how-to-lose-friends-and-alienate-people-on-the-problems-of-vaccine-passports/ (Accessed: 6 April 2021).
  152. Smith, M. (2021) ‘International study: How many people will take the COVID vaccine?’, YouGov, 15 January 2021. Available at: https://yougov.co.uk/topics/health/articles-reports/2021/01/15/international-study-how-many-people-will-take-covi (Accessed: 6 April 2021).
  153. Reicher, S. and Drury, J. (2021).
  154. Razai, M. S. et al. (2021) ‘COVID-19 vaccine hesitancy among ethnic minority groups’, The BMJ, 372, p. n513. doi: 10.1136/bmj.n513.
  155. Royal College of General Practitioners (2021) ‘RCGP submission for the COVID-status Certification Review call for evidence’., Royal College of General Practitioners. Available at: https://www.rcgp.org.uk/policy/rcgp-consultations/COVID-status-certification-review.aspx (Accessed: 6 April 2021).
  156. Access Now, Response to Ada Lovelace Institute call for evidence.
  157. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  158. ibid.
  159. ibid.
  160. ibid.
  161. Zimmer, C., Corum, J. and Wee, S.-L. (no date) ‘Coronavirus Vaccine Tracker’, The New York Times. Available at: https://www.nytimes.com/interactive/2020/science/coronavirus-vaccine-tracker.html (Accessed: 21 April 2021).
  162. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  163. Times of Israel Staff (2021) ‘Thousands reportedly attempt to obtain easily forged vaccinated certificate’. Times of Isreal. 18 February 2021. Available at: https://www.timesofisrael.com/thousands-reportedly-attempt-to-obtain-easily-forged-vaccinated-certificate/(Accessed: 6 April 2021).
  164. Senyor, E. (2021) ‘NIS 1,500 for Green Pass: Police arrest seller of illegal vaccine certificates’, ynetnews. 21 March 2021. Available at: https://www.ynetnews.com/article/Bk00wJ11B400 (Accessed: 6 April 2021).
  165. Europol (2021) ‘Early Warning Notification – The illicit sales of false negative COVID-19 test certificates’, Europol. 1 February 2021. Available at: https://www.europol.europa.eu/early-warning-notification-illicit-sales-of-false-negative-COVID-19-test-certificates (Accessed: 6 April 2021).
  166. Lewandowsky, S. et al. (2021) ‘Public acceptance of privacy-encroaching policies to address the COVID-19 pandemic in the United Kingdom’, PLOS ONE, 16(1), p. e0245740. doi: 10.1371/journal.pone.0245740.
  167. 165 Deltapoll (2021). Political Trackers and Lockdown. Available at: http://www.deltapoll.co.uk/polls/political-trackers-and-lockdown (Accessed: 7 April 2021).
  168. Ibbetson, C. (2021) ‘Most Britons support a COVID-19 vaccine passport system’. YouGov. Available at: https://yougov.co.uk/topics/health/articles-reports/2021/03/05/britons-support-COVID-19-vaccine-passport-system (Accessed: 7 April 2021).
  169. YouGov (2021). Daily Question | 02/03/2021 Available at: https://yougov.co.uk/topics/health/survey-results/daily/2021/03/02/9355e/2 (Accessed: 7 April 2021).
  170. Ipsos MORI. (2021) Majority of Britons support vaccine passports but recognise concerns in new Ipsos MORI UK KnowledgePanel poll. Available at: https://www.ipsos.com/ipsos-mori/en-uk/majority-britons-support-vaccine-passports-recognise-concerns-new-ipsos-mori-uk-knowledgepanel-poll (Accessed: 9 April 2021).
  171. King’s College London. (2021) Covid vaccines: passports, blood clots and changing trust in government. Available at: https://www.kcl.ac.uk/news/covid-vaccines-passports-blood-clots-and-changing-trust-in-government (Accessed: 9 April 2021).
  172. De Montfort University. (2021). Study shows UK punters see no need for pub vaccine passports. Available at: https://www.dmu.ac.uk/about-dmu/news/2021/march/-study-shows-uk-punters-see-no-need-for-pub-vaccine-passports.aspx (Accessed: 7 April 2021).
  173. Indigo (2021) Vaccine Passports – What do audiences think? Available at: https://www.indigo-ltd.com/blog/vaccine-passports-what-do-audiences-think (Accessed: 7 April 2021).
  174. Serco Institute (2021) Vaccine Passports & UK Public Opinion. Available at: https://www.sercoinstitute.com/news/2021/vaccine-passports-uk-public-opinion (Accessed: 7 April 2021).
  175. Studdert, M. H. and D. (2021) ‘Reaching agreement on COVID-19 immunity “passports” will be difficult’, Brookings, 27 January 2021. Available at: https://www.brookings.edu/blog/usc-brookings-schaeffer-on-health-policy/2021/01/27/reaching-agreement-on-COVID-19-immunity-passports-will-be-difficult/ (Accessed: 7 April 2021). ELABE (2021) Les Français et l’épidémie de COVID-19 – Vague 33. 3 March 2021. Available at: https://elabe.fr/epidemie-COVID-19-vague33/ (Accessed: 7 April 2021).
  176. Ada Lovelace Institute. (2021) The Citizens’ Biometrics Council. Available at: https://www.adalovelaceinstitute.org/report/citizens-biometrics-council/ (Accessed: 9 April 2021).
  177. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  178. Beacon, R. and Innes, K. (2021) The Case for Digital Health Passports. Tony Blair Institute for Global Change. Available at: https://institute.global/sites/default/files/inline-files/Tony%20Blair%20Institute%2C%20The%20Case%20for%20Digital%20Health%20Passports%2C%20February%202021_0_0.pdf (Accessed: 6 April 2021).
  179. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  180. Pietropaoli, I. (2021) Part 2: Getting Digital Health Passports Right? Legal, Ethical and Equality Considerations. Available at: https://www.biicl.org/blog/23/part-2-getting-digital-health-passports-right-legal-ethical-and-equality-considerations (Accessed: 6 April 2021).
  181. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  182. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  183. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  184. medConfidential, Response to Ada Lovelace Institute call for evidence
  185. Dr Btihaj Ajana, Response to Ada Lovelace Institute call for evidence
  186. Nuffield Council on Bioethics (2020) Rapid policy briefing: COVID-19 antibody testing and ‘immunity certification’. Available at: https://www.nuffieldbioethics.org/assets/pdfs/Immunity-certificates-rapid-policy-briefing.pdf (Accessed: 6 April 2021).
  187. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  188. ibid.

1–12 of 110

Skip to content

Letter from the working group co-chairs

 

This project, by an international and interdisciplinary working group of experts from academia, policy, law, technology and civil society, invited by the Ada Lovelace Institute, had a big ambition: to imagine rules and institutions that can shift power over data and make it benefit people and society.

 

We began this work in 2020, only a few months into the pandemic, at a time when public discourse was immersed in discussions about how technologies – like contact tracing apps – could be harnessed to help address this urgent and unprecedented global health crisis.

 

The potential power of data to effect positive change – to underpin public health policy, to support isolation, to assess infection risk – was perhaps more immediate than at any other time in our lives. At the same time, concerns such as data injustice and privacy remained.

 

It was in this climate that our working group sought to explore the relationship people have with data and technology, and to look towards a positive future that would centre governance, regulation and use of data on the needs of people and society, and contest the increasingly entrenched systems of digital power.

 

The working group discussions centred on questions about power over both data infrastructures, and over data itself. Where does power reside in the digital ecosystem, and what are the sources of this power? What are the most promising approaches and interventions that might distribute power more widely, and what might that rebalancing accomplish?

 

The group considered interventions ranging from developing public-service infrastructure to alternative business models, from fiduciary duties for data infrastructures to a new regime for data under a public-interest approach. Many were conceptually interesting but required more detailed thought to be put into practice.

 

Through a process of analysis and distillation, that broad landscape narrowed to four areas for change: infrastructure, governance, institutions and democratic participation in decisions over data processing, collection and use. We are happy that the group has endorsed a pathway towards transformation, identifying a shared vision and practical interventions to begin the work of changing the digital ecosystem.

 

Throughout this process, we wanted to free ourselves from the constraints of currently perceived models and norms, and go beyond existing debates around data policy. We did this intentionally, to extend the scope of what is politically thought to be possible, and to create space for big ideas to flourish and be discussed.

 

We see this work as part of one of the most challenging efforts we have to make as humans and as societies. Its ambitious aim is to bring to the table a richer set of possibilities for our digital future. We maintain that we need new imaginaries if we are to create a world where digital power is distributed among many and serves the public good, as defined in democracies.

 

We hope this report will serve not only as a provocation and a way to generate constructive criticism and mature ideas on how to transform digital ecosystems, but also as a call to action for those of you – our readers – who hold the power to make the interventions we describe into political and business realities.

 

Diane Coyle

Bennett Professor of Public Policy, University of Cambridge

 

Paul Nemitz

Principal Adviser on Justice Policy, European Commission and visiting Professor of Law at College of Europe

 

Co-chairs

Rethinking data working group

A call for a new vision

In 2020, the Ada Lovelace Institute characterised the digital ecosystem as:

  • Exploitative: Data practices are exploitative, and they fail to produce the potential social value of data, protect individual rights and serve communities.
  • Shortsighted: Political and administrative institutions have struggled to govern data in a way that enables effective enforcement and acknowledges its central role in data-driven systems.
  • Disempowering: Individuals lack agency over how their data is generated and used, and there are stark power imbalances between people, corporations and states.[footnote]Ada Lovelace Institute. (2020). Rethinking Data – Prospectus. Available at: https://www.adalovelaceinstitute.org/wp-content/uploads/2020/01/Rethinking-Data-Prospectus-Ada-Lovelace-Institute-January-2019.pdf[/footnote]

We recognised an urgent need for a comprehensive and transformative vision for data that can serve as a ‘North Star’, directing our efforts and encouraging us to think bigger and move further.

Our work to ‘rethink data’ began with a forward-looking question:

‘What is a more ambitious vision for data use and regulation that can deliver a positive shift in the digital ecosystem towards people and society?’

This drove the establishment of an expert working group, bringing together leading thinkers in privacy and data protection, public policy, law and economics from the technology sector, policy, academia and civil society across the UK, Europe, USA, Canada and Hong Kong.

This interdisciplinary group brought their perspectives and expertise to bear in understanding the current data ecosystem and making sense of the complexity that characterises data governance in the UK, across Europe and internationally. Their reflections on these challenges informed a holistic approach to the changes needed, which is highly relevant to the jurisdictions mentioned above, and which we hope will be of foundational interest to related work in other territories.

Understanding that shortsightedness limits creative thinking, we deliberately set the field of vision to the medium term, 2030 and beyond. We intended to escape the ‘policy weeds’ of unfolding developments in data and technology policy in the UK, EU or USA, and set our sights on the next generation of institutions, governance, infrastructure and regulations.

Using discussions, debates, commissioned pieces, futures-thinking workshops, speculative scenario building and horizon scanning, we have distilled a multitude of ideas, propositions and models. (For full details about our methodology, see ‘Final notes’.)

These processes and methods moved the scope of enquiry on from the original premise – to articulate a positive ambition for the use and regulation of data that recognised asymmetries of power and enabled social value – to seeking the most promising interventions that address the significant power imbalances that exist between large private platforms, and groups of people and individuals.

This report highlights and contextualises four cross-cutting interventions with a strong potential to reshape the digital ecosystem:

  1. Transforming infrastructure into open and interoperable ecosystems.
  2. Reclaiming control of data from dominant companies.
  3. Rebalancing the centres of power with new (non-commercial) institutions.
  4. Ensuring public participation as an essential component of technology policymaking.

The interventions are multidisciplinary and they integrate legal, technological, market and governance solutions. They offer a path towards addressing present digital challenges and the possibility for a new, healthy digital ecosystem to emerge.

What do we mean by a healthy digital ecosystem? One that privileges people over profit, communities over corporations, society over shareholders. And, most importantly, one where power is not held by a few large corporations, but is distributed among different and diverse models, alongside the people who are represented in, and affected by, the data those models use.

The digital ecosystem we propose is balanced, accountable and sustainable, and imagines new types of infrastructure, new institutions and new governance models that can make data work for people and society.

Some of these interventions can be located within (or built from) emerging or recently adopted policy initiatives, while others require the wholesale overhaul of regulatory regimes and markets. They are designed to spark ideas for political thinkers, forward-looking policymakers, researchers, civil society organisations, funders and ethical innovators in the private sector to consider and respond to when designing future regulations, policies or initiatives around data use and governance.

This report also acknowledges the need to prepare the ground for the more ambitious transformation of power relations in the digital ecosystem. Even a well-targeted intervention won’t change the system unless it is supported by relevant institutions and behavioural change.

In addition to targeted interventions, the report explains the preconditions that can support change:

  1. Effective regulatory enforcement.
  2. Legal action and representation.
  3. Removal of industry dependencies.

Reconceptualising the digital ecosystem will require sustained, collective and thorough efforts, and an understanding that elaborating on strategies for the future involves constant experimentation, adaptation and recalibration.

Through discussion of each intervention, the report brings an initial set of provocative ideas and concepts, to inspire a thoughtful debate about the transformative changes needed for the digital ecosystem to start evolving towards a people- and society-focused vision. These can help us think about potential ways forward, open up questions for debate instead of rushing to provide answers, and offer a starting point from which more fully fledged solutions for change can grow.

We hope that policymakers, researchers, civil society organisations, funders and ethical industry innovators will engage with – and, crucially, iterate on – these propositions in a collective effort to find solutions that lead to lasting change in data practices and policies.

Making data work for people and society

 

The building blocks for a people-first digital ecosystem start from repurposing data to respect individual agency and deliver societal benefits, and from addressing abuses that are well defined and understood today, and are likely to continue if they are not dealt with in a systemic way.

 

Making data work for people means protecting individuals and society from abuses caused by corporations’ or governments’ use of data and algorithms. This means fundamental rights such as privacy, data protection and non-discrimination are both protected in law and reflected in the design of computational processes that generate and capture personal data.

 

The requirement to protect people from harm does not only operate in the present: there is also a need to prevent harms from happening in the future, and to create resilient institutions that will operate effectively against future threats and potential impacts that can't be fully anticipated.

 

To produce long-lasting change, we will need to break structural dependencies and address the sources of power of big technology companies. To do this, one goal must be to create data governance models and new institutions that will balance power asymmetries. Another goal is to restructure economic, technical and legal tools and incentives, to move infrastructure control away from unaccountable organisations.

 

Finally, positive goals for society can emerge from data infrastructures and algorithmic models developed by private and/or public actors, if data serves both individual and societal goals, rather than just the interests of commerce or undemocratic regimes.

How to use this report

The report is written to be of particular use to policymakers, researchers, civil society organisations, funders and those working in data governance. To understand how and where you can take the ideas explored here forward, we recommend these approaches:

  • If you work on data policy decision-making, go through a brief overview of the sources of power in today’s digital ecosystem in Chapter 1, focus on ‘The vision’ subsections in Chapter 2 and answer the call to action in Chapter 3 by considering ways to translate the proposed interventions into policy action and help build the pathway towards a comprehensive and transformative vision for data.
  • If you are a researcher, focus on the ‘How to get from here to there’ and ‘Further considerations and provocative concepts’ subsections in Chapter 2 and answer the call to action in Chapter 3 by reflecting critically on the provocative concepts and help develop the propositions into more concrete solutions for change.
  • If you are a civil society organisation, focus on ‘How to get from here to there’ subsections in Chapter 2 and answer the call to action in Chapter 3 by engaging with the suggested transformations and build momentum to help visualise a positive future for data and society.
  • If you are a funder, go through an overview of the sources of power in today’s digital ecosystem in Chapter 1, focus on ‘The vision’ subsections in Chapter 2 and answer the call to action in Chapter 3 by supporting the development of a proactive policy agenda by civil society.
  • If you are working on data governance in industry, focus on sections 1 and 2 in Chapter 2, help design mechanisms for responsible generation and use of data, and answer the call to action in Chapter 3 by supporting the development of standards for open and rights enhancing systems.

Chapter 1: Understanding power in data-intensive digital ecosystems

1. Context setting

To understand why a transformation is needed in the way our digital ecosystem operates, it’s necessary to understand the dynamics and different facets of today’s data-intensive ecosystem.

In the last decade, there has been an exponential increase in the generation, collection and use of data. This upsurge is driven by an increasing datafication of everyday parts of our lives,[footnote]Ada Lovelace Institute. (2020). The data will see you now. Available at: https://www.adalovelaceinstitute.org/report/the-data-will-see-you-now/[/footnote] from work to social interactions to the provision of public services. The backbone of this change is the growth of digitally connected devices, data infrastructures and platforms, which enable new forms of data generation and extraction at an unprecedented scale.

Estimates suggest that the volume of data created and consumed grew from two zettabytes in 2010 to 64.2 zettabytes in 2020 (one zettabyte is a trillion gigabytes), and project that it will grow to more than 180 zettabytes by 2025.[footnote]Statista Research Department. (2022). Volume of data/information created, captured, copied, and consumed worldwide from 2010 to 2025. Available at: https://www.statista.com/statistics/871513/worldwide-data-created/[/footnote] These oft-cited figures disguise a range of further dynamics (such as the wider societal phenomena of discrimination and inequality that are captured and represented in these datasets), and the textured landscape of who and what is included in the datasets, what data quality means in practice, and whose objectives are represented in data processes and met through outcomes from data use.
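
To put these headline figures in perspective, the implied growth rates can be computed directly. A minimal sketch in Python, using only the Statista estimates cited above (the function name and rounding are ours):

  # Implied compound annual growth rates for the data-volume estimates cited
  # above: 2 ZB (2010) -> 64.2 ZB (2020), projected to exceed 180 ZB by 2025.

  def cagr(start: float, end: float, years: int) -> float:
      """Compound annual growth rate between two values, as a fraction."""
      return (end / start) ** (1 / years) - 1

  print(f"2010-2020: {cagr(2.0, 64.2, 10):.1%} per year")   # roughly 41% per year
  print(f"2020-2025: {cagr(64.2, 180.0, 5):.1%} per year")  # roughly 23% per year

Even on the more conservative projection, the volume of data roughly trebles again between 2020 and 2025.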

Data is often promised to be transformative, but there remains debate as to exactly what it transforms. On the one hand, data is recognised as an important economic opportunity and policy focus across the globe, and is believed to deliver significant societal benefits. On the other hand, increased datafication and calculability of human interactions can lead to human rights abuses and illegitimate public or private control. In between these opposing views are a variety of observations that reflect the myriad ways data and society interact, broadly considering the ways such practices reconfigure activities, structures and relationships.[footnote]Balayn, A. and Gürses, S. (2021). Beyond Debiasing, Regulating AI and its inequalities. European Digital Rights (EDRi). Available at: https://edri.org/wp-content/uploads/2021/09/EDRi_Beyond-Debiasing-Report_Online.pdf[/footnote]

According to scholars of surveillance and informational capitalism, today’s digital economy is built on deeply rooted, exploitative and extractive data practices.[footnote]Zuboff, S. (2019). The age of surveillance capitalism: the fight for a human future at the new frontier of power. New York: PublicAffairs and Cohen, J. E. (2019). Between truth and power: the legal constructions of informational capitalism. New York: Oxford University Press.[/footnote] These result in the accrual of immense surpluses of value to dominant technology corporations, and a role for the human participants enlisted in value creation for these big technology companies that has been described as a form of ‘data rentiership’.[footnote]Birch, K., Chiappetta, M. and Artyushina, A. (2020). ‘The problem of innovation in technoscientific capitalism: data rentiership and the policy implications of turning personal digital data into a private asset’. Policy Studies, 41(5), pp. 468–487. doi: 10.1080/01442872.2020.1748264[/footnote]

Commentators differ, however, on the real source of the value that is being extracted. Some consider that value comes from data’s predictive potential, while others emphasise that the economic arrangements in the data economy allow for huge profits to be made (largely through the advertising-based business model) even if predictions are much less effective than technology giants claim.[footnote]Hwang, T. (2020). Subprime attention crisis: advertising and the time bomb at the heart of the Internet. New York: FSG Originals.[/footnote]

In practice, only a few large technology corporations – Alphabet (Google), Amazon, Apple, Meta Platforms (Facebook) and Microsoft – have the data, processing abilities, engineering capacity, financial resources, user base and convenience appeal to provide a range of services that are both necessary to smaller players and desired by a wide base of individual users.

These corporations extract value from their large volumes of interactions and transactions, and process massive amounts of personal and non-personal data in order to optimise the service and experience of each business or individual user. Some platforms have the ability to simultaneously coordinate and orchestrate multiple sensors or computers in the network, like smartphones or connected objects. This drives the platform’s ability to innovate and offer services that seem either indispensable or unrivalled.

While there is still substantial innovation outside these closed ecosystems, the financial power of the platforms means that in practice they are able to either acquire or imitate (and further improve) innovations in the digital economy. Their efficiency in using this capacity enables them to leverage their dominance into new markets. The acquisition of open-source code platforms like GitHub by Microsoft in 2018 and RedHat by IBM in 2019 also points to a possibility that incumbents intend to extend their dominance to open-source software. The difficulty new players face in competing makes the largest technology players seem immovable and unchangeable.

Over time, access to large pools of personal data has allowed platforms to develop services that now form the infrastructure, or underlying basis, for many public and private services. Creating ever-more dependencies in both public and private spheres, large technology companies are extending their services to societally sensitive areas such as education and health.

This influence has become more obvious during the COVID-19 pandemic, when large companies formed contested public-private partnerships with public health authorities.[footnote]Fitzgerald M. and Crider C. (2020). ‘Under pressure, UK government releases NHS COVID data deals with big tech’. openDemocracy. Available at: https://www.opendemocracy.net/en/ournhs/under-pressure-uk-government-releases-nhs-covid-data-deals-big-tech/[/footnote] They also partnered among themselves to influence contact tracing in the pandemic response, by facilitating contact tracing technologies in ways that were favourable or unfavourable to particular nation states. This revealed the difficulty, even at state level, of engaging in advanced use of data without the cooperation of the corporations that control the software and hardware infrastructure. 

Focusing on data alone is insufficient to understand power in data-intensive digital systems. A vast number of interrelated factors consolidate both economic and societal power of particular digital platforms.[footnote]European Commission – Expert Group for the Observatory on the Online Platform Economy. (2021). Uncovering blindspots in the policy debate on platform power. Available at: https://www.sipotra.it/wp-content/uploads/2021/03/Uncovering-blindspots-in-the-policy-debate-on-platform-power.pdf[/footnote] These factors go beyond market power and consumer behaviour, and extend to societal and democratic influence (for example through algorithmic curation and controlling how human rights can be exercised).[footnote]European Commission – Expert Group for the Observatory on the Online Platform Economy. (2021).[/footnote]

Theorists of platform governance highlight the complex ways in which vertically integrated platforms make users interacting with them legible to computers, and extract value by intermediating access to them.[footnote]Cohen, J.E. (2019). Between Truth and Power: The Legal Constructions of Informational Capitalism. Oxford: Oxford University Press.[/footnote]

This makes it hard to understand power from data without understanding complex technological interactions up and down the whole technology ‘stack’, from the basic protocols and connectivity that underpin technologies, through hardware, and the software and cloud services that are built on them.[footnote]Andersdotter, A. and Stasi, I. Framework for studying technologies, competition and human rights. Available at: https://amelia.andersdotter.cc/framework_for_competition_technology_and_human_rights.html[/footnote]

Large platforms have become – as a result of laissez-faire policies (minimal government intervention in market and economic affairs) rather than by deliberate, democratic design – one of the building blocks for data governance in the real world, unilaterally defining the user experience and consumer rights. They have used a mix of law, technology and economic influence to place themselves in a position of power over users, governments, legislators and private-sector developers, and this has proved difficult to dislodge or alter.[footnote]Cohen, J. E. (2017). ‘Law for the Platform Economy’. U.C. Davis Law Review, 51, pp. 133–204. Available at: https://perma.cc/AW7P-EVLC[/footnote] 

2. Rethinking regulatory approaches in digital markets

There is a recent and growing appetite to regulate both data and platforms, using a variety of legal approaches that address market concentration, platforms as public spheres, and data and AI governance. The year 2021 alone marked a significant global uptick in proposals for the regulation of AI technologies, online markets, social media platforms and other digital technologies, with more still to come in 2022.[footnote]Mozur, P., Kang, C., Satariano, A. and McCabe, D. (2021). ‘A Global Tipping Point for Reining In Tech Has Arrived’. New York Times. Available at: https://www.nytimes.com/2021/04/20/technology/global-tipping-point-tech.html[/footnote]

A range of jurisdictions are reconsidering the regulation of digital platforms both as marketplaces and places of public speech and opinion building (‘public spheres’). Liability obligations are being reanalysed, including in bills around ‘online harms’ and content moderation. The Online Safety Act in Australia,[footnote]Australia’s Online Safety Act (2021). Available at: https://www.legislation.gov.au/Details/C2021A00076[/footnote] India’s Information Technology Rules,[footnote]Ministry of Electronics and Information Technology. (2021). The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. Available at: https://prsindia.org/billtrack/the-information-technology-intermediary-guidelines-and-digital-media-ethics-code-rules-2021[/footnote] the EU’s Digital Services Act[footnote]European Parliament. (2022). Legislative resolution of 5 July 2022 on the proposal for a regulation of the European Parliament and of the Council on a Single Market For Digital Services (Digital Services Act). Available at: https://www.europarl.europa.eu/doceo/document/TA-9-2022-0269_EN.html[/footnote] and the UK’s draft Online Safety Bill[footnote]Online Safety Bill. (2022-23). Parliament: House of Commons. Bill no. 121. London: Published by the authority of the House of Commons. Available at https://bills.parliament.uk/bills/3137[/footnote] are all pieces of legislation that seek to regulate more rigorously the content and practices of online social media and messaging platforms.

Steps are also being made to rethink the relationship between competition, data and platforms, and jurisdictions are using different approaches. In the UK, the Competition and Markets Authority launched the Digital Markets Unit, focusing on a more flexible approach, with targeted interventions in competition in digital markets and codes of conduct.[footnote]While statutory legislation will not be introduced in the 2022–23 Parliamentary session, the UK Government reconfirmed its intention to establish the Digital Market Unit’s statutory regime in legislation as soon as Parliamentary time allows. See: Hayter, W. (2022). ‘Digital markets and the new pro-competition regime’. Competition and Markets Authority. Available at: https://competitionandmarkets.blog.gov.uk/2022/05/10/digital-markets-and-the-new-pro-competition-regime/ and UK Government. (2021). ‘Digital Markets Unit’. Gov.uk. Available at https://www.gov.uk/government/collections/digital-markets-unit[/footnote] In the EU, the Digital Markets Act (DMA) takes a top-down approach and establishes general rules for large companies that prohibit certain practices up front, such as combining or cross-using personal data across services without users’ consent, or giving preference to their own services and products in rankings.[footnote]European Parliament and Council of the European Union. (2022). Regulation (EU) 2022/1925 of the European Parliament and of the Council of 14 September 2022 on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act), Article 5 (2) and Article 6 (5). Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv%3AOJ.L_.2022.265.01.0001.01.ENG&toc=OJ%3AL%3A2022%3A265%3ATOC[/footnote] India is also responding to domestic market capture and increased influence from large technology companies with initiatives such as the Open Network for Digital Commerce, which aims to create a decentralised and interoperable platform for direct exchange between buyers and sellers without intermediary services such as Amazon.[footnote]Ansari, A. A. (2022), ‘E-commerce is the latest target in India’s push for an open digital economy’. Atlantic Council. Available at: https://www.atlanticcouncil.org/blogs/southasiasource/e-commerce-is-the-latest-target-in-indias-push-for-an-open-digital-economy/[/footnote] At the same time, while the draft 2019 Indian Data Protection Bill is being withdrawn, a more comprehensive legal framework is expected in 2022 covering – alongside privacy and data protection – broader issues such as non-personal data, regulation of hardware and devices, data localisation requirements and rules to seek approval for international data transfers.[footnote]Aryan, A., Pinnu, S. and Agarwal, S. (2022). ‘Govt looks to table data bill soon, draft at advanced stage’. Economic Times. Available at: https://telecom.economictimes.indiatimes.com/news/govt-looks-to-table-data-bill-soon-draft-at-advanced-stage/93358857 and Raj, R. (2022). ‘Data protection: Four key clauses may go in new bill’. Financial Express. Available at: https://www.financialexpress.com/industry/technology/data-protection-four-key-clauses-may-go-in-new-bill/2618148/[/footnote]

Developments in data and AI policy

Around 145 countries now have some form of data privacy law, and many new additions or revisions are heavily influenced by legislative standards including the Council of Europe’s Convention 108 + and the EU General Data Protection Regulation (GDPR).[footnote]Greenleaf, G. (2021). ‘Global Data Privacy Laws 2021: Despite COVID Delays, 145 Laws Show GDPR Dominance’. Privacy Laws & Business International Report, 1, pp. 3–5.[/footnote]

The GDPR is a prime example of legislation aimed at curbing the worst excesses of exploitative data practices, and many of its foundational elements are still being developed and tested in the real world. Lessons learned from the GDPR show how vital it is to consider power within attempts to create more responsible data practices. This is because regulation is not just the result of legal design in isolation, but is also shaped by immense corporate lobbying,[footnote]Corporate Europe Observatory. (2021). The Lobby Network: Big Tech’s Web of Influence in the EU. Available at: https://corporateeurope.org/en/2021/08/lobby-network-big-techs-web-influence-eu[/footnote] applied within organisations via their internal culture and enforced in a legal environment that gives major corporations tools to stall or create disincentives to enforcement. 

In the United States, there have been multiple attempts at proposing privacy legislation,[footnote]Rich, J. (2021). ‘After 20 years of debate, it’s time for Congress to finally pass a baseline privacy law’. Brookings. Available at https://www.brookings.edu/blog/techtank/2021/01/14/after-20-years-of-debate-its-time-for-congress-to-finally-pass-a-baseline-privacy-law/ and Levine, A. S. (2021). ‘A U.S. privacy law seemed possible this Congress. Now, prospects are fading fast’. Politico. Available at: https://www.politico.com/news/2021/06/01/washington-plan-protect-american-data-silicon-valley-491405[/footnote] and there is growing momentum with privacy laws being adopted at the state level.[footnote]Zanfir-Fortuna, G. (2020). ‘America’s “privacy renaissance”: What to expect under a new presidency and Congress’. Ada Lovelace Institute. Available at https://www.adalovelaceinstitute.org/blog/americas-privacy-renaissance/[/footnote] A recent bipartisan privacy bill proposed in June 2022[footnote]American Data Privacy and Protection Act, discussion draft, 117th Cong. (2021). Available at: https://www.commerce.senate.gov/services/files/6CB3B500-3DB4-4FCC-BB15-9E6A52738B6C[/footnote] includes broad privacy provisions, with a focus on data minimisation, privacy by design and by default, loyalty duties to individuals and the introduction of a private right of action against companies. So far, the US regulatory approach to new market dynamics has been a suite of consumer protection, antitrust and privacy laws enforced under the umbrella of a single body, the Federal Trade Commission (FTC), which has a broad range of powers to protect consumers and investigate unethical business practices.[footnote]Hoofnagle, C. J., Hartzog, W. and Solove, D. J. (2019). ‘The FTC can rise to the privacy challenge, but not without help from Congress’. Brookings. Available at: https://www.brookings.edu/blog/techtank/2019/08/08/the-ftc-can-rise-to-the-privacy-challenge-but-not-without-help-from-congress/[/footnote]

Since the 1990s, with very few exceptions, the US technology and digital markets have been dominated by a minimal approach to antitrust intervention[footnote] Bietti, E. (2021). ‘Is the goal of antitrust enforcement a competitive digital economy or a different digital ecosystem?’. Ada Lovelace Institute. Available at: https://www.adalovelaceinstitute.org/blog/antitrust-enforcement-competitive-digital-economy-digital-ecosystem/[/footnote] (which is designed to promote competition and increase consumer welfare). Only recently has there been a revival of antitrust interventions in the US with a report on competition in the digital economy[footnote]House Judiciary Committee’s Antitrust Subcommittee. (2020). Investigation of Competition in the Digital Marketplace: Majority Staff Report and Recommendations. Available at: https://judiciary.house.gov/news/documentsingle.aspx?DocumentID=3429[/footnote] and cases launched against Facebook and Google.[footnote]In the case of Facebook, see the Federal Trade Commission and the State Advocate General cases: https://www.ftc.gov/enforcement/cases-proceedings/191-0134/facebook-inc-ftc-v and https://ag.ny.gov/sites/default/files/facebook_complaint_12.9.2020.pdf. In the case of Google, see the Department of Justice and the State Advocate General cases: https://www.justice.gov/opa/pr/justice-department-sues-monopolist-google-violating-antitrust-laws and https://coag.gov/app/uploads/2020/12/Colorado-et-al.-v.-Google-PUBLIC-REDACTED-Complaint.pdf[/footnote]

In the UK, a consultation launched in September 2021 proposed a number of routes to reform the Data Protection Act and the UK GDPR.[footnote]Ada Lovelace Institute. (2021). ‘Ada Lovelace Institute hosts “Taking back control of data: scrutinising the UK’s plans to reform the GDPR”‘. Available at: https://www.adalovelaceinstitute.org/news/data-uk-reform-gdpr/[/footnote] Political motivations to create a ‘post-Brexit’ approach to data protection may test ‘equivalence’ with the European Union, to the detriment of the benefits of coherence and seamless convergence of data rights and practices across borders.

There is also the risk that the UK lowers levels of data protection to try to increase investment, including by large technology companies operating in the UK, therefore reinforcing their market power. Recently released policy documents containing significant changes are the National Data and AI Strategies,[footnote]See: UK Government. (2021). National AI Strategy. Available at: https://www.gov.uk/government/publications/national-ai-strategy and UK Government. (2020). National Data Strategy. Available at: https://www.gov.uk/government/publications/uk-national-data-strategy/national-data-strategy[/footnote] and the Government’s response to the consultation on the reforms to the data protection framework,[footnote]UK Government. (2022). Data: a new direction – Government response to consultation. Available at: https://www.gov.uk/government/consultations/data-a-new-direction/outcome/data-a-new-direction-government-response-to-consultation[/footnote] followed by a draft bill published in July 2022.[footnote]Data Protection and Digital Information Bill. (2022-23). Parliament: House of Commons. Bill no. 143. London: Published by the authority of the House of Commons. Available at: https://bills.parliament.uk/bills/3322/publications[/footnote]

Joining the countries that have developed AI policies and national strategies,[footnote] Stanford University. (2021). Artificial Intelligence Index 2021, chapter 7. Available at https://aiindex.stanford.edu/wp-content/uploads/2021/03/2021-AI-Index-Report-_Chapter-7.pdf and OECD/European Commission. (2021). AI Policy Observatory. Available at: https://oecd.ai/en/dashboard[/footnote] Brazil,[footnote]Ministério da Ciência, Tecnologia e Inovações. (2021). Estratégia Brasileira de Inteligência Artificial. Available at: https://www.gov.br/mcti/pt-br/acompanhe-o-mcti/transformacaodigital/inteligencia-artificial[/footnote] the USA[footnote]See: National Artificial Intelligence Initiative Act, 116th Cong. (2020). Available at https://www.congress.gov/bill/116th-congress/house-bill/6216 and the establishment of the National Artificial Intelligence Research Resource Task Force: The White House. (2021). ‘The Biden Administration Launches the National Artificial Intelligence Research Resource Task Force’. Available at: https://www.whitehouse.gov/ostp/news-updates/2021/06/10/the-biden-administration-launches-the-national-artificial-intelligence-research-resource-task-force/[/footnote] and the UK[footnote]UK Government. (2021). National AI Strategy. Available at: https://www.gov.uk/government/publications/national-ai-strategy[/footnote] launched their own initiatives, with regulatory intentions ranging from developing ethical principles and guidelines for responsible use, to boosting research and innovation, to becoming a world leader, an ‘AI superpower’ and a global data hub. Many of these initiatives are industrial policy rather than regulatory frameworks, and focus on creating an enabling environment for the rapid development of AI markets, rather than mitigating risk and harms.[footnote]For concerns raised by the US National Artificial Intelligence Research Resource (NAIRR) see: AI Now and Data & Society’s joint comment. Available at https://ainowinstitute.org/AINow-DS-NAIRR-comment.pdf[/footnote]

In August 2021, China adopted its comprehensive data protection framework consisting of the Personal Information Protection Law,[footnote]For a detailed analysis, see: Dorwart, H., Zanfir-Fortuna, G. and Girot, C. (2021). ‘China’s New Comprehensive Data Protection Law: Context, Stated Objectives, Key Provisions’. Future of Privacy Forum. Available at https://fpf.org/blog/chinas-new-comprehensive-data-protection-law-context-stated-objectives-key-provisions/[/footnote] which is modelled on the GDPR, and the Data Security Law, which focuses on harm to national security and public interest from data-driven technologies.[footnote]Creemers, R. (2021). ‘China’s Emerging Data Protection Framework’. Social Science Research Network. Available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3964684[/footnote] Researchers argue that understanding this unique regulatory approach should not start from a comparative analysis (for example, with jurisdictions such as the EU, which focus on fundamental rights). They trace its roots to the Chinese understanding of cybersecurity, which aims to protect national polity, economy and society from data-enabled harms and defend against vulnerabilities.[footnote]Creemers, R. (2021).[/footnote]

While some of these recent initiatives have the potential to transform market dynamics towards less centralised and less exploitative practices, none of them meaningfully contest the dominant business model of online platforms or promote ethical alternatives. Legislators seem to choose to regulate through large actors as intermediaries, rather than by reimagining how regulation could support a more equal distribution of power. In particular, attention must be paid to the way many proposed solutions tacitly require ‘Big Tech’ to stay big.[footnote]Owen, T. (2020). ‘Doctorow versus Zuboff’. Centre for International Governance Innovation. Available at https://www.cigionline.org/articles/doctorow-versus-zuboff/[/footnote]

The EU’s approach to platform, data and AI regulation

 

In the EU, the Digital Services Act (DSA) and the Digital Markets Act (DMA) bring a proactive approach to platform regulation, by prohibiting certain practices up front and introducing a comprehensive package of obligations for online platforms.

 

The DSA sets clear obligations for online platforms to act against illegal content and disinformation, and prohibits some of the most harmful practices used by online platforms (such as using manipulative design techniques and targeted advertising based on exploiting sensitive data).

 

It mandates increased transparency and accountability for key platform services (such as providing the main parameters used by recommendation systems) and includes an obligation for large companies to perform systemic risk assessments. This is complemented with a mechanism for independent auditors and researchers to access the data underpinning the company’s risk assessment conclusions and scrutinise the companies’ mitigation decisions.
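
The DSA does not prescribe a format for these disclosures. Purely as an illustration of what disclosing the ‘main parameters’ of a recommender system could look like in practice, consider the sketch below; every service name, signal and option shown is hypothetical:

  # Hypothetical sketch of a 'main parameters' disclosure for a recommender
  # system. The DSA requires the disclosure but prescribes no format; all
  # names and values here are invented for illustration.
  recommender_disclosure = {
      "service": "example-video-feed",
      "main_parameters": [
          {"signal": "predicted_watch_time", "role": "primary ranking signal"},
          {"signal": "similarity_to_watch_history", "role": "candidate selection"},
          {"signal": "upload_recency", "role": "secondary boost"},
      ],
      # Very large platforms must additionally offer at least one
      # recommendation option that is not based on profiling.
      "non_profiling_option": "chronological feed",
  }

A machine-readable disclosure of this kind is one way the auditor and researcher access described above could be made practicable.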

 

While this is undoubtedly a positive shift, the impact of this legislation will depend substantially on online platforms’ readiness to comply with legal obligations, their interpretation of new legal obligations and effective enforcement (which has proved challenging in the past, for example with the GDPR).

 

The DMA addresses anticompetitive behaviour and unfair market practices of platforms that – according to this legislation – qualify as ‘gatekeepers’. Next to a number of prohibitions (such as combining or cross-using personal data without user consent), which are aimed at preventing the gatekeepers’ exploitative behaviour, the DMA contains obligations that – if enforced properly – will lead to more user choice and competition in the market for digital services.

 

These include basic interoperability requirements for instant messaging services, as well as interoperability with the gatekeepers’ operating system, hardware and software when the gatekeeper is providing complementary or supporting services.[footnote]European Parliament and Council of the European Union. (2022). Digital Markets Act, Article 7, Article 6 and Recital 57. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv%3AOJ.L_.2022.265.01.0001.01.ENG&toc=OJ%3AL%3A2022%3A265%3ATOC[/footnote] Another is the right for business users of the gatekeepers’ services to obtain free-of-charge, high-quality, continuous and real-time access to data (including personal data) provided or generated in connection with their use of the gatekeepers’ core service.[footnote]European Parliament and Council of the European Union. (2022). Article 6 (10).[/footnote] End users will also have the right to exercise the portability of their data, both provided and generated through their activity on core services such as marketplaces, app stores, search and social media.[footnote]European Parliament and Council of the European Union. (2022). Digital Markets Act, Article 6 (9). Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv%3AOJ.L_.2022.265.01.0001.01.ENG&toc=OJ%3AL%3A2022%3A265%3ATOC[/footnote]

 

The DMA and DSA do not go far enough in addressing deeply rooted challenges, such as supporting alternative business models that are not premised on data exploitation, or speaking to users’ expectations to be able to control algorithmic interfaces (such as the interface for content filtering/generating recommendations). Nor do they create a level playing field for new market players who would like to develop services that compete with the gatekeepers’ core services.

 

New approaches to data access and sharing are also seen with the adopted Data Governance Act (DGA)[footnote]European Parliament and Council of the European Union. (2022). Regulation (EU) 2022/868 of the European Parliament and of the Council of 30 May 2022 on European data governance and amending Regulation (EU) 2018/1724 (Data Governance Act). Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32022R0868&qid=1657887017015[/footnote] and the draft Data Act.[footnote]European Commission. (2021). Proposal for a Regulation on harmonised rules on fair access to and use of data (Data Act). Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2022%3A68%3AFIN[/footnote] The DGA introduces the concept of ‘data altruism’ (the possibility for individuals or companies to voluntarily share data for the public good), facilitates the re-use of data from public and private bodies, and creates rules for data intermediaries (providers of data sharing services that are free of conflicts of interests relating to the data they share).

 

Complementing this approach, the proposed Data Act aims at securing end users’ right to obtain all data (personal, non-personal, observed or provided) generated by their use of products such as wearable devices and related services. It also aims to develop a framework for interoperability and portability of data between cloud services, including requirements and technical standards enabling common European data spaces.

 

There is also an increased focus on regulating the design and use of data-driven technologies, such as those that use artificial intelligence (machine learning algorithms). The draft Artificial Intelligence Act (AI Act) follows a risk-based approach that is limited to regulating ‘unacceptable’ and high-risk AI systems, such as prohibiting AI uses that pose a risk to fundamental rights or imposing ex ante design obligations on providers of high-risk AI systems.[footnote]European Commission. (2021). Proposal for a Regulation laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206[/footnote]
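
Schematically, the draft’s risk-based structure can be pictured as a small set of tiers, each carrying different obligations. The sketch below paraphrases the tiers as proposed; the example uses and wording are ours, not the legal definitions:

  from enum import Enum

  class AIActRiskTier(Enum):
      """Schematic tiers of the draft AI Act as proposed (paraphrased, not legal text)."""
      UNACCEPTABLE = "prohibited outright, e.g. social scoring by public authorities"
      HIGH_RISK = "ex ante design and conformity obligations, e.g. credit scoring or recruitment tools"
      LIMITED_RISK = "transparency duties only, e.g. chatbots and deepfakes that might deceive users"
      MINIMAL_RISK = "no new obligations, where most B2C services would fall"

  # Most dominant consumer services (search, social media feeds) would sit in
  # the bottom two tiers - the accountability gap discussed below.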

 

Perhaps surprisingly, the AI Act, as proposed by the European Commission, does not impose any transparency or accountability requirements on systems that pose less than high risk (with the exception of AI systems that may deceive or confuse consumers), which include the dominant commercial business-to-consumer (B2C) services (e.g. search engines, social media, some recommendation systems, health monitoring apps, insurance and payment services).

 

Regardless of the type of risk (high-risk or limited-risk), this approach leaves a significant gap in accountability requirements for both large and small players that could be responsible for creating unfair AI systems. Responsibility measures should aim both at regulating the infrastructural power of the large technology companies that supply most of the tools for ‘building AI’ (such as large language models, cloud computing power, text and speech generation and translation), and at creating responsibility requirements for the smaller downstream providers who make use of these tools to construct their own services.

 

3. Weak enforcement response in digital markets

Large platforms are by nature multi-sided and multi-sectoral, and they operate globally. The regulation of their commercial practices cuts across many sectors, and they are overseen by multiple bodies in different jurisdictions with varying degrees of expertise and in-house knowledge about how platforms operate. These include consumer protection authorities, data protection and competition authorities, non-discrimination and equal opportunities bodies, and financial-markets, telecoms and media regulators.

It is well known that these regulatory bodies are frequently under-equipped for the task they are charged with, and there is an asymmetry between the resources available to them and the resources large corporations invest in neutralising enforcement efforts. For example, in the EU there is an acute lack of resources and institutional capacity: half the data protection authorities in the EU have an annual budget of €5 million or less, and 21 of the data protection authorities declare that their existing resources are not enough to operate effectively.[footnote]Ryan, J. and Toner, A. (2020). ‘Europe’s governments are failing the GDPR’. brave.com. Available at: https://brave.com/static-assets/files/Brave-2020-DPA-Report.pdf and European Data Protection Board (2020). Contribution of the EDPB to the evaluation of the GDPR under Article 97. Available at: https://edpb.europa.eu/sites/default/files/files/file1/edpb_contributiongdprevaluation_20200218.pdf[/footnote]

A bigger problem is the lack of regulatory response in general. Recent lessons from insufficient data-protection enforcement show that regulators need to shift towards a stronger, more proactive and collaborative approach to curbing exploitative and harmful activities and bringing down illegal practices.

For example, in 2018 the first complaints against the invasive practices of the online advertising industry (such as real-time bidding, an online ad auctioning system that broadcasts personal data to thousands of companies)[footnote]More details at Irish Council for Civil Liberties. See: https://www.iccl.ie/rtb-june-2021/[/footnote] were filed with the Irish Data Protection Commissioner (Irish DPC) and with the UK’s Information Commissioner’s Office (ICO),[footnote]Irish Council for Civil Liberties. (2018). Regulatory complaint concerning massive, web-wide data breach by Google and other ‘ad tech’ companies under Europe’s GDPR. Available at: https://www.iccl.ie/digital-data/regulatory-complaint-concerning-massive-web-wide-data-breach-by-google-and-other-ad-tech-companies-under-europes-gdpr/[/footnote] two of the better-resourced – but still insufficiently funded – authorities. Similar complaints followed across the EU.
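
To see what is at stake in real-time bidding, it helps to look at the shape of a bid request, which is broadcast to many potential bidders every time an ad slot loads. The sketch below is loosely modelled on the OpenRTB format used across the industry; all values are invented:

  # Schematic bid request, loosely modelled on the OpenRTB format; every value
  # is invented. Each request goes to many bidders at once, and a recipient
  # can retain the data whether or not it wins the auction.
  bid_request = {
      "id": "auction-7f3a",
      "site": {"page": "https://example.com/health/article"},  # browsing context
      "device": {
          "ip": "203.0.113.7",                  # coarse location via IP address
          "geo": {"lat": 51.50, "lon": -0.12},  # sometimes precise coordinates
          "ua": "Mozilla/5.0 (X11; Linux x86_64)",  # device and browser details
      },
      "user": {"id": "ab1f0c9e"},               # pseudonymous ID linkable across sites
  }

The complaints described above centre on this broadcast step: once the request is sent, the sender has no technical means of controlling what the thousands of recipients do with the data.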

After three years of inaction, civil society groups initiated court cases against the two regulators for lack of enforcement, as well as a lawsuit against major advertising and tracking companies.[footnote]See: Irish Council for Civil Liberties. (2022). ‘ICCL sues DPC over failure to act on massive Google data breach’. Available at: https://www.iccl.ie/news/iccl-sues-dpc-over-failure-to-act-on-massive-google-data-breach/; Irish Council for Civil Liberties. (2021). ‘ICCL lawsuit takes aim at Google, Facebook, Amazon, Twitter and the entire online advertising industry’. Available at: https://www.iccl.ie/news/press-announcement-rtb-lawsuit/; and Open Rights Group. Ending illegal online advertising. Available at: https://www.openrightsgroup.org/campaign/ending-adtech-abuse/[/footnote] It was a relatively small regulator, the Belgian Data Protection Authority, that confirmed in its 2022 decision that those ad tech practices are illegal, showing that a lack of resources is not the sole cause of regulatory inertia.[footnote]Belgian Data Protection Authority. (2022). ‘The BE DPA to restore order to the online advertising industry: IAB Europe held responsible for a mechanism that infringes the GDPR’. Available at: https://www.dataprotectionauthority.be/citizen/iab-europe-held-responsible-for-a-mechanism-that-infringes-the-gdpr[/footnote]

Some EU data protection authorities have been criticised for their reluctance to intervene in the technology sector. For example, it took the Irish regulator three years from launching its investigation to issue a relatively small fine against WhatsApp for failure to meet transparency requirements under the GDPR.[footnote]Data Protection Commission. (2021). ‘Data Protection Commission announces decision in WhatsApp inquiry’. Available at: https://www.dataprotection.ie/en/news-media/press-releases/data-protection-commission-announces-decision-whatsapp-inquiry[/footnote] The authority is perceived as a key ‘bottleneck’ to enforcement because of its delays in delivering enforcement decisions,[footnote]The European Parliament’s Committee on Civil Liberties, Justice and Home Affairs (LIBE Committee) also issued a draft motion in 2021 in relation to how the Irish DPC was handling the ‘Schrems II’ case and recommended the European Commission to start the infringement procedures against Ireland for not properly enforcing the GDPR.[/footnote] since many of the large US technology companies are established in Dublin and therefore fall under its remit.[footnote]Espinoza, J. (2021). ‘Fighting in Brussels bogs down plans to regulate Big Tech’. Financial Times. Available at: https://www.ft.com/content/7e8391c1-329e-4944-98a4-b72c4e6428d0[/footnote]

Some have suggested that ‘reform to centralise enforcement of the GDPR could help rein in powerful technology companies’.[footnote]Manancourt, V. (2021). ‘EU privacy law’s chief architect calls for its overhaul’. Politico. Available at: https://www.politico.eu/article/eu-privacy-laws-chief-architect-calls-for-its-overhaul/[/footnote]

The Digital Markets Act (DMA) gives the European Commission the role of sole enforcer against certain data-related practices performed by ‘gatekeeper’ companies (for example, the prohibition on combining and cross-using personal data from different services without consent). The DMA’s enforcement mechanism empowers the European Commission to target selected data practices that may also infringe rules typically governed by the GDPR.

In the UK, the ICO has been subject to criticism for its preference for dialogue with stakeholders over formal enforcement of the law. Members of Parliament as well as civil society organisations have increasingly voiced their disquiet over this approach,[footnote]Burgess, M. (2020). ‘MPs slam UK data regulator for failing to protect people’s rights’. Wired UK. Available at: https://www.wired.co.uk/article/ico-data-protection-gdpr-enforcement; Open Rights Group (2021). ‘Open Rights Group calls on the ICO to do its job and enforce the law’. Available at: https://www.openrightsgroup.org/press-releases/open-rights-group-calls-on-the-ico-to-do-its-job-and-enforce-the-law/[/footnote] while academics have queried how the ICO might be held accountable for its selective and discretionary application of the law.[footnote]Erdos, D. (2020). ‘Accountability and the UK Data Protection Authority: From Cause for Data Subject Complaint to a Model for Europe?’. Social Science Research Network. Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3521372[/footnote]

The 2021 public consultation led by the UK Government – Data: A New Direction – will do little to reassure those concerned, given the significant incursions into the ICO’s regulatory independence that it moots.[footnote]Lynskey, O. (2021). ‘EU-UK Data Flows: Does the “New Direction” lead to “Essentially Equivalent” Protection?’. The Brexit Institute. Available at: https://dcubrexitinstitute.eu/2021/09/eu-uk-data-new-direction/[/footnote] It remains to be seen whether subsequent consultations initiated by the ICO regarding its regulatory approach signal a shift from selective and discretionary application of the law towards formal enforcement action.[footnote]Erdos, D. (2022). ‘What Way Forward on Information Rights Regulation? The UK Information Commissioner’s Office Launches a Major Consultation’. Inforrm. Available at: https://inforrm.org/2022/01/21/what-way-forward-on-information-rights-regulation-the-uk-information-commissioners-office-launches-a-major-consultation-david-erdos/[/footnote]

The measures proposed for consultation go even further towards removing important requirements and guardrails against data abuses, which would in effect legitimise practices that have been declared illegal in the EU.[footnote]Delli Santi, M. (2022). ‘A day of reckoning for IAB and Adtech’. Open Rights Group. Available at: https://www.openrightsgroup.org/blog/a-day-of-reckoning-for-iab-and-adtech/[/footnote]

Recognising the need for cooperation among different regulators

Examinations of abuses, market failures, concentration tendencies in the digital economy and the market power of large platforms have become more prominent. Extensive reports were commissioned by governments in the UK,[footnote]Digital Competition Expert Panel. (2019). Unlocking digital competition. UK Government. Available at: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/785547/unlocking_digital_competition_furman_review_web.pdf[/footnote] Germany,[footnote]Schweitzer, H., Haucap, J., Kerber, W. and Welker, R. (2018). Modernisierung der Missbrauchsaufsicht für marktmächtige Unternehmen. Baden-Baden: Nomos. Available at: https://www.bmwk.de/Redaktion/DE/Publikationen/Wirtschaft/modernisierung-der-missbrauchsaufsicht-fuer-marktmaechtige-unternehmen.pdf?__blob=publicationFile&v=15. An executive summary in English is available at: https://ssrn.com/abstract=3250742[/footnote] the European Union,[footnote]Crémer, J., de Montjoye, Y-A. and Schweitzer, H. (2019). Competition policy for the digital era. European Commission. Available at: http://ec.europa.eu/competition/publications/reports/kd0419345enn.pdf[/footnote] Australia[footnote]Australian Competition and Consumer Commission (ACCC). (2019). Digital Platforms Inquiry – Final Report. Available at: https://www.accc.gov.au/system/files/Digital%20platforms%20inquiry%20-%20final%20report.pdf[/footnote] and beyond, asking what transformations are necessary in competition policy to address the challenges of the digital economy.

A comparison of these four reports highlights the problem of under-enforcement in competition policy and recommends a more active enforcement response.[footnote]Kerber, W. (2019). ‘Updating Competition Policy for the Digital Economy? An Analysis of Recent Reports in Germany, UK, EU, and Australia’. Social Science Research Network. Available at: https://ssrn.com/abstract=3469624[/footnote] It also underlines that all the reports analyse the important interplay between competition policy and other policies such as data protection and consumer protection law.

The Furman report in the UK recommended the creation of a new Digital Markets Unit that collaborates on enforcement with regulators in different sectors and draws on their experience to form a more robust approach to regulating digital markets.[footnote]Digital Competition Expert Panel. (2019). Unlocking digital competition. UK Government. Available at: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/785547/unlocking_digital_competition_furman_review_web.pdf[/footnote] In 2020, the UK Digital Regulation Cooperation Forum (DRCF) was established to enhance cooperation between the Competition and Markets Authority (CMA), the Information Commissioner’s Office (ICO), the Office of Communications (Ofcom) and the Financial Conduct Authority (FCA) and support a more coordinated regulatory approach.[footnote]Digital Regulation Cooperation Forum. Plan of work for 2021 to 2022. Ofcom. Available at: https://www.ofcom.org.uk/__data/assets/pdf_file/0017/215531/drcf-workplan.pdf[/footnote]

The need for more collaboration and joined-up thinking among regulators was highlighted by the European Data Protection Supervisor (EDPS) in 2014.[footnote]European Data Protection Supervisor. (2014). Privacy and Competitiveness in the Age of Big Data: The Interplay between Data Protection, Competition Law and Consumer Protection in the Digital Economy, Preliminary Opinion. Available at: https://edps.europa.eu/sites/edp/files/publication/14-03-26_competitition_law_big_data_en.pdf[/footnote] In 2016, the EDPS launched the Digital Clearinghouse initiative, an international voluntary network of enforcement bodies in different fields;[footnote]See: European Data Protection Supervisor 2016 initiative to create a network of data protection, consumer and competition regulators. Available at: https://www.digitalclearinghouse.org/[/footnote] however, its activity has been limited.

Today there is still limited collaboration between regulators across sectors and borders, because of the lack of a legal basis for effective cooperation and the exchange of information, including compelled and confidential information. Support for more proactive and coherent regulatory enforcement must increase substantially if regulators are to significantly limit the overwhelming power of large technology corporations in markets, over people and in democracy.

Chapter 2: Making data work for people and society

This chapter explores four cross-cutting interventions that have the potential to shift power in the digital ecosystem, especially if implemented in coordination with each other. These provocative ideas are offered with the aim of pushing forward thinking on existing data policy and practice.

Each intervention is capable of integrating legal, technological, market and governance solutions that could help transition the digital ecosystem towards a people-first vision. While there are many potential approaches, for the purposes of this report – for clarity and ease of understanding – one type of potential solution or remedy is the focus of each intervention.

Each intervention is interwoven with the others in a way that sets out a cross-cutting vision of an alternative data future, one that can frame forward-looking debates about data policy and practice. The vision these interventions offer will require social and political backing. Behind each intervention is the promise of a positive change that needs the support and collaboration of policymakers, researchers, civil society organisations and industry practitioners to become a reality.

1. Transforming infrastructure through open ecosystems

The vision

Imagine a world in which digital systems have been transformed, and control over technology infrastructure and algorithms no longer lies in the hands of a few large corporations.

Transforming infrastructure means that what was once a closed system of structural dependencies, which enabled large corporations to concentrate power, has been replaced by an open ecosystem where power imbalances are reduced and people can shape the digital experiences they want.

No single company or subset of companies controls the full technological stack of digital infrastructures and services. Users can exert meaningful control over the ways an operating system functions on phones and computers, and actions performed by web browsers and apps.

The incentive structures that drove technology companies to entrench power have been dismantled, and new business models are more clearly aligned with user interests and societal benefits. This means there are no more ‘lock in’ models, in which users find it burdensome to switch to another digital service provider, and fewer algorithmic systems that are optimised to attract clicks, prioritising advertising revenue over people’s needs and interests.

Instead, there is competition and a diversity of digital services for users to choose from, and these services use interoperable architectures that enable users to switch easily to other providers and mix-and-match services of their choice within the same platform. For example, third-party providers create products that enable users to communicate seamlessly on social media channels from a standalone app. Large platforms allow their users to replace the default content-curation algorithm with one of their choice.

Thanks to full horizontal and vertical interoperability, people using digital services are empowered to choose their favourite or trusted provider of infrastructure, content and interface. Rather than platforms setting rules and objectives that determine what information is surfaced by their recommender system, third-party providers, including reputable news organisations and non-profits, can build customised filters (operating on top of default recommender systems to modify the newsfeed) or design alternative recommender systems.

All digital platforms and service providers operate within high standards of security and protection, which are audited and enforced by national regulators. Following new regulatory requirements, large platforms operate under standard protocols that are designed to respect choices made by their users, including strict limitations on the use of their personal data.

How to get from here to there

In today’s digital markets, there is unprecedented consolidation of power in the hands of a few, large US and Chinese digital companies. This tendency towards centralised power is supported by the current abilities of platforms to:

  • process substantial quantities of personal and non-personal data, to optimise their services and the experience of each business or individual user
  • extract market-dominating value from large-volume interactions and transactions
  • use their financial power to either acquire or imitate (and further improve) innovations in the digital economy
  • use this capacity to leverage dominance into new markets
  • use financial power to influence legislation and stall enforcement through litigation.

The table in the text box below, ‘How to address sources of platform power? Possible remedies’, takes a more detailed look at some of the sources of power and possible remedies.

These dynamics reduce the possibility for new alternative services to be introduced and contribute to users’ inability to switch services and to make value-based decisions (for example, to choose a privacy-optimised social media application, or to determine what type of content is prioritised on their devices).[footnote]Brown, I. (2021). ‘From ‘walled gardens’ to open meadows’. Ada Lovelace Institute. Available at: https://www.adalovelaceinstitute.org/blog/walled-gardens-open-meadows/[/footnote] Instead, a few digital platforms have the ability to capture a large user base and extract value from attention-maximising algorithms and ‘dark patterns’ – deceptive design practices that influence users’ choices and encourage them to take actions that result in more profit for the corporation, often at the expense of the user’s rights and digital wellbeing.[footnote]See: Brown, I. (2021) and Norwegian Consumer Council. (2018). Deceived by Design. Available at: https://www.forbrukerradet.no/undersokelse/no-undersokelsekategori/deceived-by-design/[/footnote]

As discussed in Chapter 1, there is still much to explore when considering possible regulatory solutions, and there are many possible approaches to reducing concentration and market dominance. Conceptual discussions about regulating digital platforms that have been promoted in policy and academia range from ‘breaking up big tech’,[footnote]Warren, E. (2020). Break Up Big Tech. Available at: https://2020.elizabethwarren.com/toolkit/break-up-big-tech[/footnote] by separating the different services and products they control into separate companies, to nationalising and transforming platforms into public utilities or conceiving of them as universal digital services.[footnote]Muldoon, J. (2020). ‘Don’t Break Up Facebook — Make It a Public Utility’. Jacobin. Available at: https://www.jacobinmag.com/2020/12/facebook-big-tech-antitrust-social-network-data[/footnote] Alternative proposals suggest limiting the number of data-processing activities a company can perform concurrently, for example separating search activities from targeted advertising that exploits personal profiles.

There is a need to go further. The imaginary picture painted at the start of this section points towards an environment where there is competition and meaningful choice in the digital ecosystem, where rights are more rigorously upheld and where power over infrastructure is less centralised. This change in power dynamics would require, as one of the first steps, that digital infrastructure is transformed with full vertical and horizontal interoperability. The imagined ecosystem includes large online platforms, but in this scenario they find it much more difficult to maintain a position of dominance, thanks to real-time data portability, user mobility and requirements for interoperability stimulating real competition in digital services.

What is interoperability?

Interoperability is the ability of two or more systems to communicate and exchange information. It gives end users the ability to move data between services (data portability), and to access services across multiple providers.

 

How can interoperability be enabled?

Interoperability can be enabled by developing (formal or informal) standards that define a set of rules and specifications that, when implemented, allow different systems to communicate and work together. Open standards are created through the consensus of a group of interested parties and are openly accessible and usable by anyone.

This section explores a range of interoperability measures that can be introduced by national or European policy makers, and discusses further considerations to transform the current, closed platform infrastructure into an open ecosystem.

Introducing platform interoperability

Drawing on examples from other sectors that have historically operated in silos, mandatory interoperability measures are a potential tool that merits further exploration, as a way to create new opportunities for both companies and users.

Interoperability is a longstanding policy tool in EU legislation, and more recent digital competition reviews suggest it as a measure against highly concentrated digital markets.[footnote]Brown, I. (2020). ‘Interoperability as a tool for competition’. CyberBRICS. Available at: https://cyberbrics.info/wp-content/uploads/2020/08/Interoperability-as-a-tool-for-competition-regulation.pdf and Brown, I. (2021). ‘From ‘walled gardens’ to open meadows’. Ada Lovelace Institute. Available at: https://www.adalovelaceinstitute.org/blog/walled-gardens-open-meadows/[/footnote]

In telecommunications, interoperability measures make it possible to port phone numbers from one provider to another, and enable customers of one phone network to call and message customers on other networks, improving choice for consumers. In the banking sector, interoperability rules made it possible for third parties to facilitate account transfers from one bank to another, and to access data about account transactions to build new services. This opened up the banking market for new competitors and delivered new types of financial services for customers.

In the case of large digital platforms, introducing mandatory interoperability measures is one way to allow more choice of service (preventing both individual and business users from being trapped in one company’s products and services), and to re-establish the conditions to enable a competitive market for start-ups and small and medium-sized enterprises to thrive.[footnote]Brown, I. (2021).[/footnote]

While some elements of interoperability are present in existing or proposed EU legislation, this section explores a much wider scope of interoperability measures than those that have already been adopted. (For a more detailed discussion on ‘Possible interoperability mandates and their practical implications’, see the text box below.)

Some of these elements of interoperability in existing or proposed EU legislation are:[footnote]For a more comprehensive list, see: Brown, I. (2020). ‘Interoperability as a tool for competition’. CyberBRICS. Available at: https://cyberbrics.info/wp-content/uploads/2020/08/Interoperability-as-a-tool-for-competition-regulation.pdf[/footnote]

  • The Digital Markets Act enables interoperability requirements between instant messaging services, as well as with the gatekeepers’ operating system, hardware and software (when the gatekeeper is providing complementary or supporting services), and strengthens data portability rights.[footnote]European Parliament and Council of the European Union. (2022). Digital Markets Act, Recital 64, Article 6 (7), Recital 57, and Article 6 (9) and Recital 59. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv%3AOJ.L_.2022.265.01.0001.01.ENG&toc=OJ%3AL%3A2022%3A265%3ATOC[/footnote]
  • The Data Act proposal aims to enable switching between cloud providers.[footnote]European Commission. (2021). Proposal for a Regulation on harmonised rules on fair access to and use of data (Data Act). Available at https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2022%3A68%3AFIN[/footnote]
  • Regulation on promoting fairness and transparency for business users of online intermediation services (‘platform-to-business regulation’) gives business users the right to access data generated through the provision of online intermediation services.[footnote]European Parliament and European Council. Regulation 2019/1150 on promoting fairness and transparency for business users of online intermediation services. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32019R1150[/footnote]

These legislative measures address some aspects of interoperability, but place limited requirements on services other than instant messaging services, cloud providers and operating systems in certain situations.[footnote]Gatekeepers designated under the Digital Markets Act need to provide interoperability to their operating system, hardware or software features that are available or used by the gatekeeper in the provision of its own complementary or supporting services or hardware. See: European Parliament and Council of the European Union. (2022). Digital Markets Act, Recital 57. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv%3AOJ.L_.2022.265.01.0001.01.ENG&toc=OJ%3AL%3A2022%3A265%3ATOC[/footnote] They also do not articulate a process for creating technical standards around open protocols for other services. This is why there is a need to test more radical ideas, such as mandatory interoperability for large online platforms covering both access to data and platform functionality.

 

Possible interoperability mandates and their practical implications

Ian Brown

 

Interoperability in digital markets requires some combination of access to data and platform functionality.

 

Data interoperability

Data portability (Article 20 of the EU GDPR) is the right of a user to move their personal data from one company to another. (The Data Transfer Project, a collaboration between large technology companies, is slowly developing technical tools to support this.[footnote]The Data Transfer Project is a collaboration launched in 2017 between large companies such as Google, Facebook, Microsoft, Twitter, Apple to build a common framework with open-source code for data portability and interoperability between platforms. More information is available at: https://datatransferproject.dev/[/footnote]) This should help an individual switch from one company to another, including by giving price-comparison tools access to previous customer bills.

 

However, a wider range of uses could be enabled by real-time data mobility[footnote]Digital Competition Expert Panel. (2019). Unlocking digital competition. UK Government. Available at: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/785547/unlocking_digital_competition_furman_review_web.pdf[/footnote] or interoperability,[footnote]Kerber, W. and Schweitzer, H. (2017). ‘Interoperability in the Digital Economy’. JIPITEC, 8(1). Available at: https://www.jipitec.eu/issues/jipitec-8-1-2017/4531[/footnote] whereby an individual can give one company permission to access their data held by another, so that the data is updated whenever they use the second service. These remedies can stand alone, where the main objective is to enable individuals to give competitors access to their personal data held by an incumbent firm.
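To make the distinction concrete, the sketch below contrasts a one-off portability export with ongoing, permissioned access. It is purely illustrative: the service name incumbent.example and both endpoints are invented for this example and do not correspond to any real portability API.

    # Illustrative sketch only: 'incumbent.example' and both endpoints are
    # hypothetical, not a real portability or interoperability API.
    import json
    import urllib.request

    def export_my_data(access_token: str) -> dict:
        """One-off portability (GDPR Article 20 style): the user downloads a
        complete export and carries the file to a new provider."""
        req = urllib.request.Request(
            "https://incumbent.example/api/export",
            headers={"Authorization": f"Bearer {access_token}"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)  # e.g. {"contacts": [...], "posts": [...]}

    def fetch_latest_activity(access_token: str, since: str) -> dict:
        """Real-time data interoperability: a competitor the user has
        authorised pulls fresh data whenever it needs it."""
        req = urllib.request.Request(
            f"https://incumbent.example/api/activity?since={since}",
            headers={"Authorization": f"Bearer {access_token}"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

The first function would be exercised once, at the moment of switching; the second would be called by the new provider every time the individual uses its service.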

 

Scholars make an additional distinction between syntactic or technical interoperability – the ability of systems to connect and exchange data (often via Application Programming Interfaces, or ‘APIs’) – and semantic interoperability – the requirement that connected systems share a common understanding of the meaning of the data they exchange.[footnote]Kerber, W. and Schweitzer, H. (2017).[/footnote]

 

An important element of making both types of data-focused interoperability work is developing more data standardisation to require datasets to be structured, organised, stored and transmitted in more consistent ways across different devices, services and systems. Data standardisation creates common ontologies, or classifications, that specify the meaning of data.[footnote]Gal, M.S. and Rubinfeld, D. L. (2019), ‘Data Standardization’. NYU Law Review, 94, no. (4). Available at: https://www.nyulawreview.org/issues/volume-94-number-4/data-standardization/[/footnote]

 

For example, two different instant messaging services would benefit from a shared internal mapping of core concepts such as identity (phone number, nickname, email), rooms (public or private group chats, private messaging), reactions, attachments, etc. – these are concepts and categories that could be represented in a common ontology, to bridge functionality and transfer data across these services.[footnote]Matrix.org is a recent design of an open protocol for instant messaging service interoperability.[/footnote]
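As a minimal sketch of what such a common ontology might look like in code (the type and field names below are invented for illustration, and are not drawn from Matrix or any other real protocol):

    # Minimal sketch of a shared messaging ontology. All names are
    # illustrative, not taken from any real standard.
    from dataclasses import dataclass, field

    @dataclass
    class Identity:
        display_name: str
        address: str   # a phone number, nickname or email address
        scheme: str    # "tel", "nick" or "mailto"

    @dataclass
    class Room:
        room_id: str
        kind: str      # "public_group", "private_group" or "direct"

    @dataclass
    class Message:
        sender: Identity
        room: Room
        body: str
        attachments: list = field(default_factory=list)  # URLs of media
        reactions: dict = field(default_factory=dict)    # emoji -> count

    # Each service keeps its own internal format, but maps it onto the shared
    # ontology at the boundary, so a message survives the crossing intact.
    def to_common(service_msg: dict) -> Message:
        return Message(
            sender=Identity(service_msg["from_name"], service_msg["from_addr"], "mailto"),
            room=Room(service_msg["chat_id"], "private_group"),
            body=service_msg["text"],
            attachments=service_msg.get("files", []),
            reactions=service_msg.get("reactions", {}),
        )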

 

Data standardisation is an essential underpinning for all types of portability and interoperability and, just like the development of technical standards for protocols, it needs both industry collaboration and measures to ensure powerful companies do not hijack standards to their own benefit.

 

A further, optional interoperability measure is to require companies to support personal data stores (PDS), where users store and control data about themselves with a third-party provider and can make decisions about how it is used (e.g. the MyData model[footnote]Kuikkaniemi, K., Poikola, A. and Honko, H. (2015). MyData – A Nordic Model for Human-Centered Personal Data Management and Processing’. Ministry of Transport and Communications. Available at: https://julkaisut.valtioneuvosto.fi/bitstream/handle/10024/78439/MyData-nordic-model.pdf[/footnote] and Web inventor Tim Berners-Lee’s Solid project).

 

The data, or consent to access it, could be managed by regulated aggregators (major Indian banks are developing a model where licensed entities aggregate account data with users’ consent and therefore act as an interoperability bridge between multiple financial services),[footnote]Singh, M. (2021) ‘India’s Account Aggregator Aims to Bring Financial Services to Millions’. TechCrunch. Available at: https://social.techcrunch.com/2021/09/02/india-launches-account-aggregator-system-to-extend-financial-services-to-millions/[/footnote] or facilitated by user software through an open set of standards adopted by all service providers (as in the UK’s Open Banking). It is also possible for service providers to send privacy-protective queries or code to run on personal data stores inside a protected sandbox, limiting the service provider’s access to data (e.g. a mortgage provider could send code, checking an applicant’s monthly income was above a certain level, to their PDS or current account provider, without gaining access to all of their transaction data).[footnote]Yuchen, Z., Haddadi, H., Skillman, S., Enshaeifar, S., and Barnaghi, P. (2020) ‘Privacy-Preserving Activity and Health Monitoring on Databox’. EdgeSys ’20: Proceedings of the Third ACM International Workshop on Edge Systems, Analytics and Networking, pp. 49–54. Available at: https://doi.org/10.1145/3378679.3394529[/footnote]
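A schematic sketch of the mortgage example is given below. The PersonalDataStore class and its run_query interface are invented for illustration, and are far simpler than real systems such as Databox or Solid; the point is only that the provider’s query executes inside the store and may return a yes/no answer, never the raw records.

    # Schematic sketch: a privacy-protective query runs inside the personal
    # data store (PDS) sandbox; the interface is invented for illustration.
    from statistics import mean

    class PersonalDataStore:
        def __init__(self, transactions: list):
            self._transactions = transactions  # raw records never leave the store

        def run_query(self, query) -> bool:
            result = query(self._transactions)
            if not isinstance(result, bool):
                raise ValueError("only a yes/no answer may leave the sandbox")
            return result

    # The mortgage provider's query: is monthly salary income above £2,500?
    def income_check(transactions: list) -> bool:
        salaries = [t["amount"] for t in transactions if t["type"] == "salary"]
        return bool(salaries) and mean(salaries) > 2500

    pds = PersonalDataStore([
        {"type": "salary", "amount": 2800},
        {"type": "groceries", "amount": -95},
        {"type": "salary", "amount": 2800},
    ])
    print(pds.run_query(income_check))  # True - and no transactions disclosed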

 

The largest companies currently have a significant advantage in their access to very large quantities of user data, particularly when it comes to training machine-learning systems. Requiring access to statistical summaries of the data (e.g. the popularity of specific social media content and related parameters) may be sufficient, while limiting the privacy problems caused. Finally, firms could be required to share the (highly complex) details of machine-learning models, or to provide regulators and third parties with access to them to answer specific questions (such as the likelihood that a given piece of social media content is hate speech).
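A sketch of what such question-specific access could look like follows. The PlatformAccess interface, its method names and the placeholder values are all hypothetical; the design point is that the regulator or competitor receives only a summary statistic or a model score, never raw user data or the model itself.

    # Hypothetical sketch of mandated query access to platform statistics and
    # model functionality. Interface and values are invented for illustration.
    class PlatformAccess:
        def content_popularity(self, content_id: str) -> dict:
            # Returns statistical summaries only, never user-level records.
            return {"views": 1_240_000, "shares": 18_300}  # placeholder values

        def hate_speech_likelihood(self, text: str) -> float:
            # Exposes one narrow model capability via an API, instead of
            # handing over the (highly complex) model itself.
            return self._moderation_score(text)

        def _moderation_score(self, text: str) -> float:
            return 0.07  # placeholder for the platform's proprietary classifier

    access = PlatformAccess()
    print(access.hate_speech_likelihood("example post text"))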

 

The interoperability measures described above would enable a smoother transfer of data between digital services, and enable users to exert more control over what kind of data is shared and in what circumstances. This would make for a ‘cleaner’ data ecosystem, in which platforms and services are no longer incentivised to gather as much data as possible on every user.

 

Rather, users would have more power to determine how their data is collected and shared, and smaller services wouldn’t need to engage in extractive data practices to ‘catch up’ with larger platforms, as barriers to data access and transfer would be reduced. The overall impact on innovation would depend on whether increased competition resulting from data sharing at least counterbalanced these reduced incentives.

 

Functionality-oriented interoperability

Another form of interoperability relates to enabling digital services and platforms to work cross-functionally, which could improve user choice in the services they use and reduce the risk of ‘lock in’ to a particular service. Examples of functionality-oriented interoperability (sometimes referred to as protocol interoperability,[footnote]Crémer, J., de Montjoye, Y-A., and Schweitzer, H. (2019). Competition Policy for the Digital Era. European Commission. Available at https://data.europa.eu/doi/10.2763/407537[/footnote] or in telecoms regulation, interconnection of networks) include:

  • the ability for a user of one instant-messaging service to send a message to a user or group on a competing service
  • the ability for a user of one social media service to ‘follow’ a user on another service, and ‘like’ their shared content
  • the ability for a user of a financial services tool to initiate a payment from an account held with a second company
  • the ability for a user of one editing tool to collaboratively edit a document or media file with the user of a different tool, hosted on a third platform.

 

Services communicate with each other using open/publicly accessible APIs and/or standardised protocols. In Internet services, this generally takes the form of ‘decentralised’ network architectures, in which each provider runs its own servers and providers exchange traffic over a common protocol.
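A rough sketch of such a decentralised exchange is shown below: provider A delivers a message to a user on provider B over a shared open protocol, much as mail servers relay email. The endpoint and payload format are hypothetical, loosely inspired by federated protocols such as Matrix rather than copied from any real specification.

    # Hypothetical federation sketch: the endpoint and payload are invented.
    import json
    import urllib.request

    def deliver(sender: str, recipient: str, body: str) -> None:
        # As with email, the recipient address names its home server:
        # "bob@serviceB.example" resolves to serviceB's federation endpoint.
        _, home_server = recipient.split("@")
        payload = json.dumps({
            "protocol_version": "1.0",
            "sender": sender,
            "recipient": recipient,
            "body": body,
        }).encode()
        req = urllib.request.Request(
            f"https://{home_server}/federation/v1/message",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)  # serviceB validates and delivers locally

    deliver("alice@serviceA.example", "bob@serviceB.example", "Hello across services")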

The UK’s Open Banking Standard recommended: ‘The Open Banking API should be built as an open, federated and networked solution, as opposed to a centralised/hub-like approach. This echoes the design of the Web itself and enables far greater scope for innovation.’[footnote]Open Data Institute. (2016). The Open Banking Standard. Available at: http://theodi.org/wp-content/uploads/2020/03/298569302-The-Open-Banking-Standard-1.pdf[/footnote]

 

An extended version of functional interoperability would allow users to exercise other forms of control over the products and services they use, including:

  • signalling their preferences to platforms on profiling – the recording of data to assess or predict their preferences – using a tool such as the Global Privacy Control (see the sketch after this list), or expressing their preferred default services such as search
  • replacing core platform functionality, such as a timeline ranking algorithm or an operating system default mail client, with a preferred version from a competitor (known as modularity)[footnote]Farrell, J., and Weiser, P. (2003). ‘Modularity, Vertical Integration, and Open Access Policies: Towards a Convergence of Antitrust and Regulation in the Internet Age’. Harvard Journal of Law and Technology, 17(1). Available at: https://doi.org/10.2139/ssrn.452220[/footnote]
  • using their own choice of software to interact with the platform.
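To make the first item concrete: the Global Privacy Control signal is sent by the browser as the HTTP request header Sec-GPC: 1. The sketch below, using the Flask web framework, shows how a service might honour it server-side; the route and the two placeholder functions are invented for this example.

    # Minimal sketch of honouring the Global Privacy Control (GPC) signal,
    # which browsers send as the request header "Sec-GPC: 1". The route and
    # placeholder functions are invented for illustration.
    from flask import Flask, request

    app = Flask(__name__)

    def serve_without_third_party_sharing():
        pass  # placeholder: skip ad-tech data sharing for this request

    def serve_with_default_data_practices():
        pass  # placeholder: the service's default behaviour

    @app.route("/article")
    def article():
        if request.headers.get("Sec-GPC") == "1":
            # The user has signalled an opt-out: do not sell or share their data.
            serve_without_third_party_sharing()
        else:
            serve_with_default_data_practices()
        return "page content"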

 

Noted competition economist Cristina Caffarra has concluded: ‘We need wall-to-wall [i.e. near-universal] interoperability obligations at each pinch point and bottleneck: only if new entrants can connect and leverage existing platforms and user bases can they possibly stand a chance to develop critical mass.’[footnote]Caffarra, C. (2021). ‘What Are We Regulating For?’. VOX EU. Available at: https://cepr.org/voxeu/blogs-and-reviews/what-are-we-regulating[/footnote] Data portability alone is a marginal solution, and a limited remedy that GAFAM (Google, Apple, Facebook (now Meta Platforms), Amazon, Microsoft) can point to when those companies want to flag their good intentions.[footnote]Caffarra, C. (2021).[/footnote] A review of portability in the Internet of Things sector came to a similar conclusion.[footnote]Turner, S., Quintero, J. G., Turner, S., Lis, J. and Tanczer, L. M. (2020). ‘The Exercisability of the Right to Data Portability in the Emerging Internet of Things (IoT) Environment’. New Media & Society. Available at: https://doi.org/10.1177/1461444820934033[/footnote]

 

Further considerations and provocative concepts

Mandatory interoperability measures have the potential to transform digital infrastructure, and to enable innovative services and new experiences for users. However, they need to be supported by carefully considered additional regulatory measures, such as cybersecurity, data protection and related accountability frameworks. (See text box below on ‘How to address sources of platform power? Possible remedies’ for an overview of interoperability and data protection measures that could tackle some of the sources of power for platforms.)

Also, the development of technical standards for protocols and classification systems or ontologies specifying the meaning of data (see text box above on ‘Possible interoperability mandates and their practical implications’) is foundational to data and platform interoperability. However, careful consideration must be given to the design of new types of infrastructure, in order to prevent platforms from consolidating control. Examples from practice show that developing open standards and protocols is not enough on its own.

In connection with the example above on signalling preferences to platforms, open protocols such as the ‘Do Not Track’ header were meant to help users exercise their data rights more easily, by signalling an opt-out preference from website tracking.[footnote]Efforts to standardise the ‘Do Not Track’ header ended in 2019 and expressing tracking preferences at browser level is not currently a widely adopted practice. More information is available here: https://www.w3.org/TR/tracking-dnt/[/footnote] In this case, standardisation efforts stopped due to insufficient deployment,[footnote]See here: https://github.com/w3c/dnt/commit/5d85d6c3d116b5eb29fddc69352a77d87dfd2310[/footnote] demonstrating the significant challenge of obliging platforms to adopt standards in the services they deploy.

A final point relates to creating interoperable systems that do not overload users with too many choices. Already today it is difficult for users to manage all the permissions they give across all the services and platforms they use. Interoperability may offer solutions for users to share their preferences and permissions for how their data should be collected and used by platforms, without requiring recurring ‘cookie notice’-type requests to a user when using each service.
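Purely as a speculative illustration (no such cross-service standard currently exists), a shared permissions profile might be a small machine-readable document that compliant services consult once, instead of issuing repeated consent prompts:

    # Speculative sketch: a user-maintained permissions profile that any
    # compliant service could read, instead of re-asking via pop-ups.
    USER_PERMISSIONS = {
        "profile_version": "0.1",
        "analytics": "deny",
        "personalised_advertising": "deny",
        "content_recommendations": "allow",
        "third_party_data_sharing": "ask",  # only this case triggers a prompt
    }

    def is_allowed(purpose: str) -> bool:
        # Unknown purposes default to "ask" rather than silently allowing.
        return USER_PERMISSIONS.get(purpose, "ask") == "allow"

    print(is_allowed("personalised_advertising"))  # False
    print(is_allowed("content_recommendations"))   # True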

How to address sources of platform power? Possible remedies

Ian Brown

 

Interoperability and related remedies have the potential to address not only problems resulting from market dominance of a few large firms, but – more importantly – some of the sources of their market power. However, every deep transformation needs to be carefully designed to prevent unwanted effects. The challenges associated with designing market interventions based on interoperability mandates need to be identified early in the policy-making process so that problems can either be solved or accounted for.

 

The table below presents specific interoperability measures, classified by their potential to address various sources of large platforms’ power, next to problems that are likely to result from their implementation.

 

While much of the policy debate so far on interoperability remedies has taken place within a competition-law framework (including telecoms and banking competition), there are equally important issues to consider under data and consumer protection law, as well as useful ideas from media regulation. Competition-focused measures are generally applied only to the largest companies, while other measures can be more widely applied. In some circumstances these measures can be imposed under existing competition-law regimes on dominant companies in a market, although this approach can be extremely slow and resource-intensive for enforcement agencies.

 

The EU Digital Markets Act (DMA), and US proposals (such as the ACCESS Act and related bills), impose some of these measures up-front on the very largest ‘gatekeeper’ companies (as defined by the DMA). The European Commission has also introduced a Data Act that includes some of the access to data provisions below.[footnote]European Commission. (2021). Proposal for a Regulation on harmonised rules on fair access to and use of data (Data Act). Available at https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2022%3A68%3AFIN[/footnote] Under these measures, smaller companies are free to decide whether to make use of interoperability features that their largest competitors may be obliged to support.

Sources of market power for large firms/platforms | Proposed interoperability or related remedies | Potential problems

Source of power: Access to individual customer data (including cross-use of data from multiple services)
Proposed remedies:
  • Real-time and continuous user-controlled data portability/data interoperability
  • Requirement to support user data stores
  • (Much) stricter enforcement of data minimisation and purpose limitation requirements under data protection law, alongside meaningful transparency about reasons for data collection (or prohibiting certain data uses cross-platform)
Potential problems:
  • Need for multiple accounts with all services, and take-it-or-leave-it contract terms
  • Incentive for mass collection, processing and sharing of data, including profiling

Source of power: Access to large-scale raw customer data for analytics/product improvement
Proposed remedies:
  • Mandated competitor access to statistical data[footnote]For example, search query and clickstream data.[/footnote] (mandated competitor access to raw data is dismissed because of significant data protection issues)
Potential problems:
  • Reduced incentives for data collection

Source of power: Access to large-scale aggregate/statistical customer data
Proposed remedies:
  • Mandated competitor access to models, or specific functionality of models via APIs
Potential problems:
  • Reduced incentives for data collection and model training

Source of power: Ability to restrict competitor interaction with customers
Proposed remedies:
  • Requirement to support open/publicly accessible APIs or standardised communications protocols
Potential problems:
  • Complexity of designing APIs/standards, while preventing anticompetitive exclusion

Source of power: Availability and use of own core platform services to increase ‘stickiness’
Proposed remedies:
  • Government coordination and funding for development of open infrastructural standards and components
  • Requirement for platforms to support/integrate these standard components
Potential problems:
  • Technical complexity of full integration of standard/competitor components into services/design of APIs while preventing anticompetitive exclusion
  • Potential pressure for incorporation of government surveillance functionality in standards

Source of power: Ability to fully control the user interface, such as advertising, content recommendation, specific settings, or self-preferencing own services
Proposed remedies:
  • Requirement to support competitors’ monetisation and filtering/recommendation services via open APIs[footnote]Similar to ‘must carry’ obligations in media law, requiring, for example, a cable or satellite TV distributor to carry public service broadcasting channels.[/footnote]
  • Requirement to present competitors’ services to users on an equal basis[footnote]Requiring, for example, a cable or satellite TV distributor to show competitors’ channels equally prominently in Electronic Programme Guides as their own.[/footnote]
  • Requirement to recognise specific user signals
  • Open APIs to enable alternative software clients
Potential problems:
  • Technical complexity of full integration of competitor components into services/design of APIs while preventing anticompetitive exclusion

Food for thought

In the previous section strong data protection and security provisions were emphasised as essential for building an open ecosystem that enables more choice for users, respects individual rights and facilitates competition.

Going a step further, there is a discussion to be had about boundaries of system transformation that seem achievable with interoperability. What are the ‘border’ cases, where the cost of transformation outweighs its benefits? What immediate technical, societal and economic challenges can be identified, when imagining more radical implementations of interoperability than those that have already been tested or are being proposed in EU policy?

In order to trigger further discussion, a set of problems and questions is offered as provocations:

  1. Going further, imagine a fully interoperable ecosystem, where different platforms can talk to each other. What would it mean to apply a full interoperability mandate across different digital services and what opportunities would it bring? For example, provided that technical challenges are overcome, what new dynamics would emerge if a Meta Platforms (Facebook) user could exchange messages with Twitter, Reddit or TikTok users without leaving the platform?
  2. More modular and customisable platform functionalities may change dynamics between users and platforms and lead to new types of ecosystems. How would the data ecosystem evolve if core platform functionalities were opened up? For example, if users could choose to replace core functionalities such as content moderation or news feed curation algorithms with alternatives offered by independent service providers, would this bring more value for individual users and/or societal benefit, or further entrench the power of large platforms (becoming indispensable infrastructure)? What other policy measures or economic incentives can complement this approach in order to maximise its transformative potential and prevent harms?
  3. Interoperability measures have produced important effects in other sectors and present a great potential for digital markets. What lessons can be learned from introducing mandatory interoperability in the telecommunications and banking sectors? Is there a recipe for how to open up ecosystems with a ‘people-first’ approach that enables choice while preserving data privacy and security, and provides new opportunities and innovative services that benefit all?
  4. Interoperability rules need to be complemented and supported by measures that take into account inequalities and make sure that the more diverse portfolio of services that is created through interoperability is accessible to the less advantaged. Assuming more choice for consumers has already been achieved through interoperability mandates, what other measures need to be in place to reduce structural inequalities that are likely to keep less privileged consumers locked in the default service? Experience from the UK energy sector shows that it is often the consumers/users with the fewest resources who are least likely to switch services and benefit from the opportunity of choice (the ‘poverty premium’).[footnote]Davies, S. and Trend, L. (2020). The Poverty Premium: A Customer Perspective. University of Bristol Personal Finance Research Centre. Available at https://fairbydesign.com/wp-content/uploads/2020/11/The-poverty-premium-A-Customer-Perspective-Report.pdf[/footnote]

2. Reclaiming control of data from dominant companies

The vision

In this world, the primary purpose of generating, collecting, using, sharing and governing data is to create value for people and society. The power to make decisions about data has been removed from the few large technology companies who controlled the data ecosystem in the early twenty-first century, and is instead delegated to public institutions with civic engagement at a local and national level.

To ensure that data creates value for people and society, researchers and public-interest bodies oversee how data is generated, and are able to access and repurpose data that traditionally has been collected and held by private companies. This data can be used to shape economic and social policy, or to undertake research into social inequalities at the local and national level. Decisions around how this data is collected, shared and governed are overseen by independent data access boards.

The use of this data for societally beneficial purposes is also carefully monitored by regulators, who provide checks and balances on both private companies to share this data under high standards of security and privacy, and on public agencies and researchers to use that data responsibly.

In this world, positive results are emerging from practices that have become the norm, such as developers of data-driven systems making their systems more auditable and accessible to researchers and independent evaluators. Platforms are now fully transparent about their decisions around how their services are designed and used. Designers of recommendation systems publish essential information – such as the input variables and optimisation criteria used by algorithms, and the results of their impact assessments – which supports public scrutiny. Regulators, legislators, researchers, journalists and civil society organisations can easily interrogate algorithmic systems, and have a straightforward understanding of what decisions systems may be rendering and how those decisions affect people and society.

Finally, national governments have launched ‘public-interest data companies’, which collect and use data under strict requirements for objectives that are in the public interest. Determining ‘public interest’ is a question these organisations routinely return to through participatory exercises that empower different members of society.

The importance of data in digital markets raises the question of how control over data and algorithms can be shifted away from dominant platforms, to allow individuals and communities to be involved in decisions about how their data is used. The imaginary scenario above builds a picture of a world where data is used for public good, and not (only) for corporate gain.

Current exploitative data practices are based on access to large pools of personal and non-personal data and the capacity to efficiently use data to extract value by means of advanced analytics.[footnote]Ezrachi, A. and Reyna, A. (2019). ‘The role of competition policy in protecting consumers’ well-being in the digital era’. BEUC. Available at: https://www.beuc.eu/publications/beuc-x-2019-054_competition_policy_in_digital_markets.pdf[/footnote]

The insights into social patterns and trends that large companies gain by analysing vast datasets currently remain closed off and are used to extract and maximise commercial gains, even though they could have considerable social value.

Determining what constitutes uses of data for ‘societal benefit’ and ‘public interest’ is a political project that must be undertaken with due regard for transparency and accountability. Greater mandates to access and share data must be accompanied by strict regulatory oversight and community engagement to ensure these uses deliver actual benefit to individuals impacted by the use of this data.

The previous section discussed the need to transform infrastructure in order to rebalance power in the digital ecosystem. Another, interrelated area where more profound legal and institutional change is needed is control over data.

Why reclaim control over data?

 

For the purposes of this proposition, reflecting the focus on creating more societal benefit, the first goal of reclaiming control over data is to open up access to data and resources held by companies and to repurpose them for public-interest goals, such as developing public policies that take into consideration insights and patterns from large-scale datasets. A second purpose is to open up access to data and to machine-learning algorithms, in order to increase scrutiny, accountability and oversight of how proprietary algorithms function, and to understand their impact at the individual, collective and societal level.

How to get from here to there

Proprietary siloing of data is currently one of the core obstacles to using data in societally beneficial ways. But simply making data more shareable, without specific purposes and strong oversight, can lead to greater abuses rather than benefits. To counter this, there is a need for:

  • legal mandates that private companies make data and resources available for public interest purposes
  • regulatory safeguards to ensure this data is shared securely and with independent oversight.

Mandating companies share data and resources in the public interest

One way to reclaim control over data and repurpose it for societal benefits is to create legal mandates requiring companies to share data and resources that could be used in the public interest. For example:

  • Mandating the release by private companies of aggregate personal and non-personal data for public use (where aggregate data means individual data that has been combined and anonymised by eliminating personal information).[footnote]While there is an emerging field around ‘structured transparency’ that seeks to use privacy-preserving techniques to provide access to personal data without a privacy trade-off, these methods have not yet been proven in practice. For a discussion around structured transparency, see: Trask, A., Bluemke, E., Garfinkel, B., Cuervas-Mons, C. G. and Dafoe, A. (2020). ‘Beyond Privacy Trade-offs with Structured Transparency’. arXiv. Available at: https://arxiv.org/pdf/2012.08347.pdf[/footnote] These datasets would be used to inform public policies (e.g. using mobility patterns from a ride-sharing platform to develop better road infrastructure and traffic management).[footnote]In 2017, Uber launched the Uber Movement initiative, which releases free-of-charge aggregate datasets to help cities better understand traffic patterns and address transportation and infrastructure problems. See: https://movement.uber.com/[/footnote]
  • Requiring companies to create interfaces for running data queries on issues of public interest (for example public health, climate or pollution); a schematic sketch follows this list. This model relies on using the increased processing and analytics capabilities inside a company, instead of asking for access to large ‘data dumps’, which might prove difficult and resource-intensive for public authorities and researchers to process. Conditions need to be in place around what types of queries are allowed, who can run them, and what the company’s obligations are in providing responses.
  • Providing access for external researchers and regulators to machine learning models and core technical parameters of AI systems, which could enable evaluation of an AI system’s performance and real optimisation goals (for example checking the accuracy and performance of content moderation algorithms for hate speech).
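The sketch below illustrates the query-interface model from the second bullet, using a ride-sharing company as in the aggregate-data example above. The names, the approved query list and the returned value are all invented for illustration; real schemes would also need audit logging and privacy checks such as minimum aggregation thresholds.

    # Schematic sketch of a public-interest query interface run inside a
    # company. All names and values are invented for illustration.
    ACCREDITED_BODIES = {"city_transport_authority", "national_statistics_office"}
    ALLOWED_QUERIES = {"average_trip_speed", "trips_per_hour"}

    def run_public_interest_query(requester: str, query: str, district: str) -> float:
        # Only vetted public bodies may ask, and only from a fixed menu of
        # aggregate questions - raw trip records never leave the company.
        if requester not in ACCREDITED_BODIES:
            raise PermissionError("requester is not an accredited public body")
        if query not in ALLOWED_QUERIES:
            raise ValueError("query is not on the approved list")
        return _aggregate(query, district)  # computed on the company's systems

    def _aggregate(query: str, district: str) -> float:
        return 17.4  # placeholder aggregate value for illustration

    print(run_public_interest_query("city_transport_authority",
                                    "average_trip_speed", "district-9"))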

Some regulatory mechanisms are emerging at national and regional level in support of data access mandates. For example, in France, the 2016 Law for a Digital Republic (Loi pour une République numérique) introduces the notion of ‘data of general interest’, which includes access to data from private entities that have been delegated to run a public service (e.g. utilities or transportation), access to data from entities whose activities are subsidised by public authorities, and access to certain private databases for statistical purposes.[footnote]See: LOI n° 2016-1321 du 7 octobre 2016 pour une République numérique (1). Available at: https://www.legifrance.gouv.fr/jorf/id/JORFTEXT000033202746/[/footnote]

In Germany, the 2019 leader of the Social Democratic Party championed a ‘data for all’ law that advocated for a ‘data commons’ approach and breaking-up data monopolies through a data-sharing obligation for market-dominant companies.[footnote]Nahles, A. (2019). ‘Digital progress through a data-for-all law’. Social Democratic Party. Available at: https://www.spd.de/aktuelles/daten-fuer-alle-gesetz/[/footnote] In the UK, the Digital Economy Act provides a legal framework for the Office for National Statistics (ONS) to access data held within the public and private sectors in support of statutory functions to produce official statistics and statistical research.[footnote]See: Chapter 7 of Part 5 of the Digital Economy Act and UK Statistics Authority. ‘Digital Economy Act: Research and Statistics Powers’. Available at: https://uksa.statisticsauthority.gov.uk/digitaleconomyact-research-statistics/[/footnote]

The EU’s Digital Services Act (DSA) includes a provision on data access for independent researchers.[footnote]European Parliament. (2022). Digital Services Act, Article 31. Available at: https://www.europarl.europa.eu/doceo/document/TA-9-2022-0269_EN.html[/footnote]

Under the DSA, large companies will need to comply with a number of transparency obligations, such as creating a public database of targeted advertisements and providing more transparency around how recommender systems work. It also obliges large companies to perform systemic risk assessments and to implement steps to mitigate risk.

In order to ensure compliance with the transparency provisions in the regulation, the DSA grants independent auditors and vetted researchers access to the data that led to a company’s risk-assessment conclusions and mitigation decisions. This provision ensures oversight of the self-assessment (and of the independent audit) that companies are required to carry out, as well as scrutiny of the choices large companies make around their systems.

Other dimensions of access to data mandates can be found in the EU’s Data Act proposal, which introduces compulsory access to company data for public-sector bodies in exceptional situations (such as public emergencies or where it is needed to support public policies and services).[footnote]European Commission. (2021). Proposal for a Regulation on harmonised rules on fair access to and use of data (Data Act). Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2022%3A68%3AFIN[/footnote] The Data Act also provides for various data access rights, such as a right for individuals and businesses to access the data generated by the products or related services they use, and to share that data with a third party continuously and in real time[footnote]European Commission. (2021). Articles 4 and 5.[/footnote] (companies that fall under the category of ‘gatekeepers’ are not eligible to receive this data).[footnote]European Commission. (2021). Proposal for a Regulation on harmonised rules on fair access to and use of data (Data Act), Article 5 (2). Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2022%3A68%3AFIN[/footnote]

This forms part of the EU’s general governance framework for data sharing in business-to-consumer, business-to-business and business-to-government relationships created by the Data Act. It complements the recently adopted Data Governance Act (focusing on voluntary data sharing by individuals and businesses and creating common ‘data spaces’) and the Digital Markets Act (which strengthens access by individual and business users to data provided or generated through the use of core platform services such as marketplaces, app stores, search and social media).[footnote]European Parliament and Council of the European Union. (2022). Digital Markets Act, Article 6 (9) and (10). Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv%3AOJ.L_.2022.265.01.0001.01.ENG&toc=OJ%3AL%3A2022%3A265%3ATOC[/footnote]

Independent scrutiny of data sharing and AI systems

Sharing data for the ‘public interest’ will require novel forms of independent scrutiny and evaluation, to ensure such sharing is legitimate, safe, and has positive societal impact. In cases where access to data is involved, concerns around privacy and data security need to be acknowledged and accounted for.

In order to mitigate some of these risks, one recent model proposes a system of governance in which a new independent entity would assess researchers’ skills and their capacity to conduct research within ethical and privacy standards.[footnote]Benesch, S. (2021). ‘Nobody Can See Into Facebook’. The Atlantic. Available at: https://www.theatlantic.com/ideas/archive/2021/10/facebook-oversight-data-independent-research/620557/[/footnote] In this model, an independent ethics board would review the project proposal and the data protection practices for both the datasets and the people affected by the research. Companies would be required to ‘grant access to data, people, and relevant software code in the form researchers need’ and to refrain from influencing the outcomes of research or suppressing findings.[footnote]Benesch, S. (2021).[/footnote]

An existing model for gaining access to platform data is Harvard’s Social Science One project,[footnote]See: Harvard University. Social Science One. Available at: https://socialscience.one/[/footnote] which partnered with Meta Platforms (Facebook) in the wake of the Cambridge Analytica scandal to control access to a dataset containing public URLs shared and clicked by Facebook users globally, along with metadata including Facebook likes. Researchers’ requests for access to the dataset go to an academic advisory board that is independent from Facebook, and which reviews and approves applications.

While initiatives like Social Science One are promising, the project has faced criticism for failing to provide timely access to data,[footnote]Silverman, C. (2019). ‘Exclusive: Funders Have Given Facebook A Deadline To Share Data With Researchers Or They’re Pulling Out’. BuzzFeed. Available at: https://www.buzzfeednews.com/article/craigsilverman/funders-are-ready-to-pull-out-of-facebooks-academic-data[/footnote] and concerns have been raised that the dataset Meta Platforms (Facebook) shared has significant gaps.[footnote]Timberg, C. (2021). ‘Facebook made big mistake in data it provided to researchers, undermining academic work’. Washington Post. Available at: https://www.washingtonpost.com/technology/2021/09/10/facebook-error-data-social-scientists/[/footnote]

The programme also relies on the continued voluntary cooperation of Meta Platforms (Facebook), and therefore offers no guarantee that the corporation (or others like it) will provide this data in years to come. Future regulatory proposals should explore ways to create incentives for firms to share data in a privacy-preserving way, without allowing privacy protections to become shields or excuses that prevent algorithm inspection.

A related challenge is developing novel methods for ensuring external oversight and evaluation of AI systems and models that are trained on data shared in this way. Two approaches to holding platforms and digital services accountable to the users and communities they serve are algorithmic impact assessments, and algorithm auditing.

Algorithmic impact assessments look at how to identify possible societal impacts of a system before it is in use, and on an ongoing basis once it is. They have been proposed primarily in the public sector,[footnote]Ada Lovelace Institute. (2021). Algorithmic accountability for the public sector. Available at: https://www.adalovelaceinstitute.org/report/algorithmic-accountability-public-sector/[/footnote] with a focus on public participation in the identification of harms and publication of findings. Recent work has explored them in a data access context, making them a condition of access.[footnote]Ada Lovelace Institute. (2022). Algorithmic impact assessment: a case study in healthcare. Available at: https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/[/footnote]

Algorithm auditing involves looking at the behaviour of an algorithmic system (usually by examining inputs and outputs) to identify whether risks and potential harms are occurring, such as discriminatory outcomes,[footnote]A famous example is ProPublica’s bias audit of a criminal risk assessment algorithm. See: Angwin, J., Larson, J., Mattu, S. and Kirchner, L. (2016). ‘Machine Bias – There’s software used across the country to predict future criminals. And it’s biased against blacks’. ProPublica. Available at https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing[/footnote] or the prevalence of certain types of content.[footnote]A recent audit of Twitter looked at how its algorithm amplifies certain political opinions. See: Huszár, F., Ktena, S. I., O’Brien, C., Belli, L., Schlaikjer, A., and Hardt, M. (2021). ‘Algorithmic amplification of politics on Twitter’. Proceedings of the National Academy of Sciences of the United States of America, 119(1). Available at: https://www.pnas.org/doi/10.1073/pnas.2025334119[/footnote]
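
To make the input/output approach concrete, here is a minimal sketch of one common output-based check: computing group-level selection rates and a disparate impact ratio from observed (group, decision) pairs. All names and figures are invented for illustration, and real audits – such as ProPublica’s analysis cited above – rely on far richer data and statistical testing.

```python
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, decision) pairs reconstructed from a
    system's inputs and outputs; decision is True for a favourable outcome."""
    totals = defaultdict(int)
    favourable = defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        favourable[group] += int(decision)
    return {group: favourable[group] / totals[group] for group in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate. Values well
    below 1.0 flag potentially discriminatory outcomes for deeper statistical
    follow-up; the ratio alone is not a verdict."""
    return min(rates.values()) / max(rates.values())

# Invented observations, for illustration only.
observed = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]
rates = selection_rates(observed)
print(rates)                           # {'group_a': 0.667, 'group_b': 0.333}
print(disparate_impact_ratio(rates))   # 0.5
```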

The Ada Lovelace Institute’s work identified six technical inspection methods that could be applied in scrutinising social media platforms, each with its own limitations and challenges.[footnote]Ada Lovelace Institute. (2021). Technical methods for regulatory inspection of algorithmic systems in social media platforms. Available at: https://www.adalovelaceinstitute.org/report/technical-methods-regulatory-inspection/[/footnote] Depending on the method used, access to data is not always necessary; however, important elements for enabling auditing include access to documentation about the dataset’s structure and purpose, access to information about the system’s design and functionality, and access to interviews with the developers of that system.

In recent years, a number of academic and civil society initiatives to conduct third-party audits of platforms have been blocked because of barriers to accessing data held by private developers. This has led to repeated calls for increased transparency and access to the data that platforms hold.[footnote]Kayser-Bril, N. (2020). ‘AlgorithmWatch forced to shut down Instagram monitoring project after threats from Facebook’. AlgorithmWatch. Available at: https://algorithmwatch.org/en/instagram-research-shut-down-by-facebook/ and Albert, J., Michot, S., Mollen, A. and Müller, A. (2022). ‘Policy Brief: Our recommendations for strengthening data access for public interest research’. AlgorithmWatch. Available at: https://algorithmwatch.org/en/policy-brief-platforms-data-access/[/footnote] [footnote]Benesch, S. (2021). ‘Nobody Can See Into Facebook’. The Atlantic. Available at: https://www.theatlantic.com/ideas/archive/2021/10/facebook-oversight-data-independent-research/620557/[/footnote]

There is also growing interest in the role of regulators, who, in a number of jurisdictions, will be equipped with new inspection and information-gathering powers over social media and search platforms, and who could overcome the access challenges experienced by research communities.[footnote]Ada Lovelace Institute and Reset. (2021). Inspecting algorithms in social media platforms. Available at: https://www.adalovelaceinstitute.org/wp-content/uploads/2020/11/Inspecting-algorithms-in-social-media-platforms.pdf[/footnote] One way forward may be for regulators to have the power to issue ‘access to platform data’ mandates for independent researchers, who could collect and analyse data about potential harms or societal trends under strict data protection and security conditions – for example, minimising the types of data collected and setting a clear data retention policy.

Further considerations and provocative concepts

Beyond access to data: grappling with fundamental issues

Jathan Sadowski

 

To reclaim resources and rights currently controlled by corporate platforms and manage them in the public’s interests and for societally beneficial purposes, ‘a key enabler would be a legal framework mandating private companies to grant access to data of public interest to public actors under conditions specified in the law.’[footnote]Micheli, M., Ponti, M., Craglia, M. and Suman A.B. (2020). ‘Emerging models of data governance in the age of datafication’. Big Data & Society. doi: 10.1177/2053951720948087[/footnote]

 

One aspect that needs to be considered is whether such a law should require data collected by large companies to become part of the public domain after a reasonable number of years.

 

Another proposal suggests allowing companies to use the data that they gather only for a limited period (e.g. five years), after which it reverts to a ‘national charitable corporation that provides access to certified researchers, who would both be held to account and be subject to scrutiny to ensure the data is used for the common good’.[footnote]Shah, H. (2018) ‘Use our personal data for the common good’. Nature, 556(7699). doi: 10.1038/d41586-018-03912-z[/footnote]

 

These ideas will have to contend with various issues, such as the need to ensure that individuals’ data is not released into the public domain, and the fact that commercial competitors might see no benefit in using ‘old’ data. Nevertheless, we should draw inspiration from these efforts and seek to expand their purview.

 

To that point, policies aimed at making data held by private companies into a common resource should go further than simply allowing other companies to access data and build their own for-profit products from it.

To rein in the largely unaccountable power of big technology companies, which wield enormous, and often black-boxed, influence over people’s lives,[footnote]Martinez, M. and Kirchner, L. (2021). ‘The Secret Bias Hidden in Mortgage-Approval Algorithms’. The Markup. Available at https://themarkup.org/denied/2021/08/25/the-secret-bias-hidden-in-mortgage-approval-algorithms[/footnote] these policies must grapple with fundamental issues related to who gets to determine how data is made, what it means, and why it is used.

 

Furthermore, the same policies should extend their target beyond monopolistic digital platforms. Data created and controlled by, for example, transportation services, energy utilities and credit rating agencies ought also to be subjected to public scrutiny and democratic decisions about the most societally beneficial ways to use it or discard it.

Further to these considerations, this section shares provocative concepts that illustrate different implementation models which could be set up in practice to re-channel the use of companies’ data and resources towards societal good.

Public interest divisions with public oversight

Building on the Uber Movement model, which releases aggregate datasets on a restricted, non-commercial basis to help cities with urban planning,[footnote]See: Uber Movement initiative. Available at: https://movement.uber.com[/footnote] relevant companies could be obliged to form a well-resourced public interest division, running as part of the core organisational structure with full access to the company’s capabilities (such as computational infrastructure and machine learning models).

This division would be in charge of building aggregate datasets to support important public value. Key regulators could issue ‘data-sharing mandates’, to identify which types of datasets would be most valuable and run queries against them. Through this route, the computational resources and the highly skilled human resources of the company would be used for achieving societal benefits and informing public policy.

The aggregate datasets could be used to inform policymaking and public service innovation. Potential examples include food delivery apps informing health and nutrition policies, or ride-sharing apps informing street planning, traffic congestion, housing and environmental policies. There would be limits on use: for example, insights from social media companies could be used to identify the most pressing social issues in an area, but this information should not be exploited by politicians to gain insight for electoral campaigns or to win popularity.
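
As an illustration of how such a division might answer a regulator’s query without exposing individuals, below is a minimal sketch using a simple Laplace mechanism for differential privacy. The function names, the epsilon value and the example query are all hypothetical assumptions rather than part of any proposal described above; a production system would need calibrated privacy budgets and sensitivity analysis across many queries.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from Laplace(0, scale) by inverse-CDF sampling."""
    u = random.random()
    while u == 0.0:  # avoid log(0) in the (measure-zero) edge case
        u = random.random()
    u -= 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_count(true_count: int, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy: a counting query
    changes by at most 1 when one person is added or removed (sensitivity 1),
    so Laplace noise with scale 1/epsilon suffices for a single query."""
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical regulator query: late-night fast-food orders in one postcode
# area last month (the true figure here is invented).
print(round(noisy_count(true_count=1342, epsilon=0.5)))
```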

Publicly run corporations (the ‘BBC for data’)

Another promising avenue for repurposing data in the public interest and increasing accountability is to introduce a publicly run competitor to specific digital platforms (e.g. social media). This model could be established by mandating the sharing of data from particular companies operating in a given jurisdiction to a public entity, which uses the data for projects that are in service of the public good.[footnote]Coyle, D. (2022). ‘The Public Option’. Royal Institute of Philosophy Supplement, 91, pp. 39–52. doi:10.1017/S1358246121000394[/footnote]

The value proposition behind such an intervention in the digital market would be similar to the effect of the British Broadcasting Corporation (BBC) in the UK broadcast market, where it competes with other broadcasters. The introduction of the BBC supported competition in dimensions other than audience numbers, and provided a platform for more types of content providers (for example music and independent production) that otherwise may not have existed, or not at a scale enabling them to address global markets.

Operating as a publicly run corporation has the benefit of establishing a different type of incentive structure, one that is not narrowly focused on profit-making. This could avoid the more extractive, commercially oriented business models and practices that result from the need to generate profits for shareholders and demonstrate continuous growth.

One business model that dominates the digital ecosystem, and is the primary incentive for many of the largest technology companies, is online advertising. This model has underpinned the development of mature, developed platforms, which means that, while individuals may support the concept of a business model that does not rely on extractive practices, in practice it may be difficult to get users to switch to services that do not offer equivalent levels of convenience and functionality. The success of this model is dependent on the ‘BBC for data’ competitor offering broad appeal and well-designed, functional services, so it can scale to operate at a significant level in the market.

The introduction of a democratically accountable competitor alone would not be enough to shape new practices, or to establish political and public support. It would need committed investment in the performance of its services and in attracting users. Citizens should be engaged in shaping the practices of the new public competitor, and these should reflect – in market terms – what choices, services and approaches they expect.

Food for thought

As argued above, reclaiming control over data and resources for public authorities, researchers, civil society organisations and other bodies that work in the public interest has transformative potential. The premise of this belief is simple: if data is power, making data accessible to new actors, with non-commercial goals and agendas, will shift the power balance and change the dynamic within the data ecosystem. However, without deeper questioning, the array of practical problems and structural inequalities will not disappear with the arrival of new actors and their powers to access data.

Enabling data sharing is no simple feat – it will require extensive consideration of privacy and security issues, and oversight from regulators to prevent the misuse, abuse or concealing of data. The introduction of new actors and powers to access and use data will, inevitably, trigger other externalities and further considerations that are worthy of greater attention from civil society, policymakers and practitioners.

In order to trigger further discussion, a set of problems and questions are offered as provocations:

  1. Discussions around ‘public good’ need mechanisms to address questions of legitimacy and accountability in a participatory and inclusive way. Who should decide what constitutes the ‘public good’ or ‘societal benefit’, and which uses of data serve it? How can these decisions be reached justly, so that they maintain legitimacy and social accountability?
  2. Enabling data sharing and access needs to be accompanied by robust privacy and security measures. What legal requirements and conditions need to be designed for issuing ‘data sharing’ mandates from companies?
  3. Data sharing and data access mandates presuppose that large corporations remain in a strong position and continue to play a substantial role in the ecosystem. In what ways might data-sharing mandates entrench the power of large technology platforms, or exacerbate different kinds of harm? What externalities are likely to arise from mandating data sharing for public interest goals from private companies?
  4. The notion of ‘public good’ opens important questions about what type of ‘public’ is involved in discussions and who gets left out. How can determinations of public good be navigated in inclusive ways across different jurisdictions, and accounting for structural inequalities?

3. Rebalancing the centres of digital power with new (non-commercial) institutions

The vision

In this world, new forms of data governance institutions made up of collectives of citizens control how data is generated, collected, used and governed. These intermediaries, such as data trusts and data cooperatives, empower ‘stewards’ of data to collect and use data in ways that support their beneficiaries (those represented in and affected by that data).

These models of data governance have become commonplace, enabling people to be more aware and exert more control over who has access to their data, and engendering a greater sense of security and trust that their data will only be used for purposes that they approve.

Harmful uses of data are more easily identifiable and transparent, and efficient forms of legal redress are available in cases where a data intermediary acts against the interests of their beneficiary.

The increased power of data collectives balances the power of dominant platforms, and new governance architectures offer space for civil society organisations to hold to account any ungoverned or unregulated exercise of power, whether private or public.

There is a clear supervision and monitoring regime ensuring ‘alignment’ to the mandate that data intermediaries have been granted by their beneficiaries. Data intermediaries are discouraged and prevented from monetising data. Data markets have been prohibited by law, understanding that the commodification of data creates room for abuse and exploitation.

The creation and conceptualisation of new institutions that manage data for non-commercial purposes is necessary to reduce power and information asymmetries.

Large platforms and data brokers currently collect and store large pools of data, which they are incentivised to use for corporate rather than societal benefit. Decentring and redistributing the concentration of power away from large technology corporations and towards individuals and collectives requires explorations around new ways of governing and organising data (see the text box on ‘Alternative data governance models’ below).

Alternative data governance models could offer a promising pathway for ensuring that data subjects’ rights and preferences over how their data is used are enforced. If designed properly, these governance methods could also help to address structural power imbalances.

However, until power is shifted away from large companies, and market dynamics are redressed to allow more competition and choice, there is a high risk of data intermediaries being captured.

New vehicles representing collective power, such as data unions, data trusts, data cooperatives or data-sharing initiatives based on corporate or contractual mechanisms, could help individuals and organisations position themselves better in relation to more powerful private or public organisations, offering new possibilities for enabling choices related to how data is being used.[footnote]Ada Lovelace Institute. (2021). Exploring legal mechanisms for data stewardship. Available at: https://www.adalovelaceinstitute.org/report/legal-mechanisms-data-stewardship/[/footnote]

There are many ways in which these models can be set up. For example, some models put more emphasis on individual gains, such as a ‘data union’ or a data cooperative that works in the individual interest of its members (providing income streams for individuals who pool their personal data, which is generated through the services they use or available on their devices).

These structures can also work towards wider societal aspirations, when members see this as their priority. Another option might be for members to contribute device-generated data to a central database, with ethically minded entrepreneurs invited to build businesses on top of these databases, owned collectively by the ‘data commons’ and feeding its revenues back into the community, instead of to the individual members.

A detailed discussion on alternative data governance models is presented in the Ada Lovelace Institute report Exploring legal mechanisms for data stewardship, which discusses three legal mechanisms – data trusts, data cooperatives, and corporate and contractual mechanisms – that could help facilitate the responsible generation, collection, use and governance of data in a participatory and rights-preserving way.[footnote]Ada Lovelace Institute. (2021).[/footnote]

Alternative data governance models
  • Data trusts: stemming from the concept of UK trust law, individuals pool data rights (such as those provided by the GDPR) into an organisation – a trust – where the data trustees are tasked with exercising data rights under fiduciary obligations.
  • Data cooperatives: individuals voluntarily pool data together, and the benefits are shared by members of the cooperative. A data cooperative is distinct from a ‘data commons’ because a data cooperative grows or shrinks as resources are brought in or out (as members join or leave), whereas a ‘data commons’ implies a body of data whose growth or decline is independent of the membership base.
  • Corporate and contractual agreements: legally binding agreements between different organisations that facilitate data sharing for a defined set of aims or an agreed purpose.

Many of the proposed models for data intermediaries need to be tested and further explored to refine their practical implementation. The considerations below offer a more critical perspective, highlighting how the different transformations of the data ecosystem discussed in this chapter are interconnected, and how one institutional change (or failure) determines the conditions for change in another area.

Decentralised intermediaries need adequate political, economic, and infrastructural support, to fulfil their transformative function and deliver the value expected from them. The text box below, by exploring the shortcomings of existing data intermediaries, gives an idea of the economic and political conditions that would provide a more enabling environment.

Critical overview of existing data intermediary models

Jathan Sadowski

 

There are now a number of emerging proposals for alternative data intermediaries that seek to move away from the presently dominant, profit-driven model and towards varying degrees of individual ownership, legal oversight or social stewardship of data.[footnote]Ada Lovelace Institute. (2021). Exploring legal mechanisms for data stewardship. Available at: https://www.adalovelaceinstitute.org/report/legal-mechanisms-data-stewardship/ and Micheli, M., Ponti, M., Craglia, M. and Suman, A. B. (2020). ‘Emerging models of data governance in the age of datafication’. Big Data & Society. doi: 10.1177/2053951720948087[/footnote]

 

These proposals include relatively minor reforms to the status quo, such as legally requiring companies to act as ‘information fiduciaries’ and consider the interests of stakeholders who are affected by the company, alongside the interests of shareholders who have ownership in the company.

 

In a recent Harvard Law Review article, David Pozen and Lina Khan[footnote]Pozen, D. and Khan, L. (2019). ‘A Skeptical View of Information Fiduciaries’. Harvard Law Review, 133, pp. 497–541. Available at: https://harvardlawreview.org/2019/12/a-skeptical-view-of-information-fiduciaries/[/footnote] provide detailed arguments for why designating a company like Meta Platforms (Facebook) as ‘a loyal caretaker for the personal data of millions’ does not actually pose a serious challenge to the underlying business model or corporate practices. In fact, such reforms may even entrench the company’s position atop the economy. ‘Facebook-as-fiduciary is no longer a public problem to be solved, potentially through radical reform. It is a nexus of sensitive private relationships to be managed, nurtured, and sustained [by the government]’.[footnote]Pozen, D. and Khan, L. (2019).[/footnote]

 

Attempts to tweak monopolistic platforms, without fundamentally restructuring the institutions and distributions of economic power, are unlikely to produce – and may even impede – the meaningful changes needed.

 

Other models propose a more decentralised solution in the form of ‘data-sharing pools’[footnote]Shkabatur, J. (2018). ‘The Global Commons of Data’. Social Science Research Network. Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3263466[/footnote] and ‘data cooperatives’[footnote]Miller, K. (2021). ‘Radical Proposal: Data Cooperatives Could Give Us More Power Over Our Data’. Human-Centered Artificial Intelligence (HAI), Stanford University. Available at: https://hai.stanford.edu/news/radical-proposal-data-cooperatives-could-give-us-more-power-over-our-data[/footnote] that would create a vast new ecosystem of minor intermediaries for data subjects to choose from. As a different way of organising the data economy, this would be, in principle, a preferable democratic alternative to the extant arrangement.

 

However, in practical terms, this approach risks putting the cart before the horse, by acting as if the political, economic and infrastructural support for these decentralised intermediaries already existed. In fact, it does not: with private monopolies sucking all the oxygen out of the economy, there’s no space for an ecosystem of smaller alternatives to blossom. At least, that is, without relying on the permission and largesse of profit-driven giants.

 

Under present market conditions – where competition is low and capital is hoarded by a few – it seems much more likely that start-ups for democratic data governance would either fizzle/fail or be acquired/crushed.

How to get from here to there

The alternative data governance proposals listed above represent novel and unexplored models that require better understanding and testing to demonstrate proof of concept. Their success will require – aside from a fundamental re-conceptualisation of market power and from political, economic and infrastructural support (see more in the text box on ‘Paving the way for a new ecosystem of decentralised intermediaries’) – strong regulations and enforcement mechanisms, to ensure data is stewarded in the interests of its beneficiaries.

The roles, responsibilities and standards of practice for data intermediaries remain to be fully defined, and should include:

  • enforcing data rights and obligations (e.g. compliance with data protection legislation)
  • achieving a level of maturity of expertise and competence in the administration of a data intermediary, especially if its mission requires it to negotiate with large companies
  • establishing clear management decision-making around delegation and scrutiny, and setting out the overarching governance of the ‘data steward’, which could be a newly established professional role (a data trustee or capable managers and administrators in a data cooperative) or a governing board (for example formed by individuals that have shares in a cooperative based on the data contributed). The data contributed may define the role of an individual in the board and the decision-making power regarding data use.

Supportive regulatory conditions are needed, to ease the process of porting individual and collective data into alternative governance models, such as a cooperative. Today, it is a daunting – if not impossible – task to ask a person to move all their data over to a new body (data access requests can take a long time to be processed, and often the data received needs to be ‘cleaned’ and restructured in order to be used elsewhere).

Legal mechanisms and technical standards must evolve to make that process easier. Ideally, this would produce a process that cooperatives, trusts and data stewardship bodies could undertake on behalf of individuals (the service they provide could include collecting and pooling data; see below on the Data Governance Act). Data portability, as defined by the GDPR, is not sufficient as a legal basis because it covers only data provided by the data subject and relies heavily on individual agency, whereas in the current data ecosystem, the most valuable data is generated about individuals without their knowledge or control.

Alternative data governance models have already made their way into legislation. In particular, the recently adopted EU Data Governance Act (DGA) creates a framework for voluntary data sharing via data intermediation services, and a mechanism for sharing and pooling data for ‘data altruism’ purposes.[footnote]European Parliament and Council of the European Union. (2022). Regulation 2022/868 on European data governance (Data Governance Act). Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32022R0868&qid=1657575745441[/footnote] The DGA mentions a specific category of data intermediation services that could support data subjects in exercising their data rights under the GDPR; however, this possibility is only briefly offered in one of the recitals, and lacks detail as to its practical implementation.[footnote]European Parliament and Council of the European Union. (2022). Recital 30. For a more detailed discussion on the mandatability of data rights, see: Giannopoulou, A., Ausloos, J., Delacroix, S. and Janssen, H. (2022). ‘Mandating Data Rights Exercises’. Social Science Research Network. Available at: https://ssrn.com/abstract=4061726[/footnote]

The DGA also emphasises the importance of neutral and independent data-sharing intermediaries and sets out the criteria for entities that want to provide data-sharing services (organisations that provide only data intermediation services, and companies that offer data intermediation services in addition to other services, such as data marketplaces).[footnote]European Commission. (2022). Data Governance Act explained. Available at: https://digital-strategy.ec.europa.eu/en/policies/data-governance-act-explained[/footnote] One of the criteria is that service providers may not use the data for purposes other than to put it at the disposal of data users, and must structurally separate their data intermediation services from any other value-added services they provide. At the same time, data intermediaries will bear fiduciary duties towards individuals, to ensure that they act in the best interests of the data holders.[footnote]European Parliament and Council of the European Union. (2022). Data Governance Act, Recital 33. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32022R0868&qid=1657575745441[/footnote]

Today there is a basic legal framework for data portability under the GDPR, which has been complemented by new portability rules in legislation such as the DMA. More recently, a new framework has been adopted that encourages voluntary data sharing and defines the criteria and conditions for entities that want to serve as a data steward or data intermediary. What is still needed are the legal, technical and interoperability mechanisms that allow individuals and collectives to effectively reclaim their data (including behavioural observations and statistical patterns, which not only convey real economic value but can also serve individual and collective empowerment) from private entities, either directly or via trusted intermediaries. Also needed is a set of safeguards protecting these individuals and collectives from being, once again, exploited by another powerful agent – that is, making sure that a data intermediary remains independent and trustworthy, and is able to perform its mandate effectively in the wider data landscape.

Further considerations and provocative concepts

The risk of amplifying collective harm

Jef Ausloos, Alexandra Giannopoulou and Jill Toh

 

So-called ‘data intermediaries’ have been framed as one practical way through which the collective dimension of data rights could be given shape in practice.[footnote]For example, Workers Info Exchange’s plan to set up a ‘data trust’, to help workers access and gain insight from data collected from them at work. Available at: https://www.workerinfoexchange.org/. See more broadly: Ada Lovelace Institute. (2021). Exploring legal mechanisms for data stewardship. Available at: https://www.adalovelaceinstitute.org/report/legal-mechanisms-data-stewardship/ and Ada Lovelace Institute. (2021). Participatory data stewardship. Available at: https://www.adalovelaceinstitute.org/report/participatory-data-stewardship/[/footnote] While they show some promise for more effectively empowering people and curbing collective data harms,[footnote]See: MyData. Declaration of MyData Principles, Version 1.0. Available at: https://mydata.org/declaration/[/footnote] their growing popularity in policy circles mainly stems from their assumed economic potential.

 

Indeed, the political discourse at EU level, particularly in relation to the Data Governance Act (DGA), focuses on the economic objectives of data intermediaries, framing them in terms of their supposedly ‘facilitating role in the emergence of new data-driven ecosystems’.[footnote]European Parliament and Council of the European Union. (2022). Data Governance Act, Recital 27. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32022R0868&qid=1657575745441[/footnote] People’s rights, freedoms and interests are only considered to the extent that data intermediaries empower individual data subjects.

 

This focus on the (questionable) economic potential of data intermediaries and on the individual empowerment of data subjects raises significant concerns. Without clear constraints on the types of actors that can perform the role of intermediary, the model can easily be usurped by the interests of those with (economic and political) power, at the cost of both individual and collective rights, freedoms and interests. Moreover, the legal entrenchment of data intermediaries in EU law risks amplifying collective data-driven harms. Arguably, for ‘data intermediaries’ to contribute positively to curbing collective harm and constraining power asymmetries, it will be important to move beyond the dominant narrative’s focus on individual and economic potential. Clear legal and organisational support for exercising data rights in a coordinated manner is a vital step in this regard.

To begin charting the role of data intermediaries in the digital landscape, there is a need to explore questions such as: What are the first steps towards building alternative forms of data governance? How can the power of companies that now enclose and control the data lifecycle be undermined? What is the role of the public sector in reclaiming power over data? How can the legitimacy of new data governance institutions be ensured? The text below offers some food for thought by exploring these important questions.

Paving the way for a new ecosystem of decentralised intermediaries

Jathan Sadowski

 

Efforts to build alternative forms of data governance should focus on changing its political-economic foundations. Two related strategies for reform would pave the way for a new ecosystem of decentralised intermediaries.

 

The first strategy is to disintermediate the digital economy by limiting private intermediaries’ ability to enclose the data lifecycle – the different phases of data management, including construction, collection, storage, processing, analysis, use, sharing, maintenance, archiving and destruction.

The digital economy is currently hyper-intermediated. We tend to think of the handful of massive monopolistic platforms that have installed themselves as necessary middlemen in production, circulation, and consumption processes. But there is also an overabundance of smaller, yet powerful, companies that insert themselves into every technical, social and economic interaction to extract data and control access.

 

Disintermediation means investigating what kind of policy and regulatory tools can constrain and remove the vast majority of these intermediaries whose main purpose is to capture – often without creating – value.[footnote]Sadowski, J. (2020). ‘The Internet of Landlords: Digital Platforms and New Mechanisms of Rentier Capitalism’. Antipode, 52(2), pp.562–580.[/footnote] For example, disintermediation would require clamping down on the expansive secondary market for data, such as the one for location data,[footnote]Keegan, J. and Ng, A. (2021). ‘There’s a Multibillion-Dollar Market for Your Phone’s Location Data’. The Markup. Available at https://themarkup.org/privacy/2021/09/30/theres-a-multibillion-dollar-market-for-your-phones-location-data[/footnote] which incentivises many companies to engage in the collection and storage of all possible data, for the purposes of selling and sharing with, or servicing, third parties such as advertisers. 

 

Even more fundamental reforms could target the rights of control and access that companies possess over data assets and networked devices, which are designed to shut out regulators and researchers, competitors and consumers from understanding, challenging and governing the power of intermediaries. Establishing such limits is necessary for governing the lifecycle of data, while also making space for different forms of intermediaries designed with different purposes in mind.

 

In a recent example, after many years of fighting against lobbying by technology companies, the US Federal Trade Commission has voted to enforce ‘right to repair’ rules that grant users the ability to fix and modify technologies like smartphones, home appliances and vehicles without going through repair shops ‘authorised’ by the manufacturers.[footnote]Kavi, A. (2021). ‘The F.T.C. votes to use its leverage to make it easier for consumers to repair their phones’. The New York Times. Available at: https://www.nytimes.com/2021/07/21/us/politics/phones-right-to-repair-FTC.html[/footnote] This represents a crucial transfer of rights away from intermediaries and to the public.

 

The second strategy consists of the construction of new public institutions for democratic governance of data.

 

Achieving radical change requires advocating for forms of large-scale intervention that actively aim to undermine the current conditions of centralised control by corporations.  In addition to pushing to expand the enforcement of data rights and privacy protections, efforts should be directed at policies for reforming government procurement practices and expanding public capacities for data governance.

 

The political and financial resources already exist to create and fund democratic data intermediaries. But funds are currently directed at outsourcing government services to technology companies,  rather than insourcing the development of capacities through new and existing institutions. Corporate executives have been happy to cash the cheques of public investment, and a few large companies have managed to gain a substantial hold on public administration procurement worldwide.

 

Ultimately, strong legal and institutional interventions are needed in order to foundationally transform the existing arrangements of data control and value. Don’t think of alternative data intermediaries  (such as public data trusts in the model advocated for in this article)[footnote]Sadowski, J., Viljoen, S. and Whittaker, M. (2021). ‘Everyone Should Decide How Their Digital Data Are Used — Not Just Tech Companies’. Nature, 595, pp.169–171. Available at https://www.nature.com/articles/d41586-021-01812-3[/footnote] as an endpoint, but instead as the beginning for a new political economy of data – one that will allow and nurture the growth of more decentralised models of data stewardship. 

 

Public data trusts would be well positioned to provide alternative decentralised forms of data intermediaries with the critical resources they need – e.g. digital infrastructure, expert managers, financial backing, regulatory protections and political support – to first be feasible and then to flourish. Only then can we go beyond rethinking and begin rebuilding a political economy of data that works for everybody.[footnote]Sadowski, J. (2022). ‘The political economy of data intermediaries’. Ada Lovelace Institute. Available at https://www.adalovelaceinstitute.org/blog/political-economy-data-intermediaries/[/footnote]

Food for thought

In order to trigger further discussion, a set of problems and questions, which arise around alternative data governance institutions and the role they can play in generating transformative power shifts, are offered as provocations:

  1. Alternative data governance models can play a role at multiple levels. They can work both for members that have direct contributions (e.g. members pooling data in a data cooperative and being actively engaged in running the cooperative), as well as for indirect members (e.g. when the scope of a data cooperative is to have wider societal effects). This raises questions such as: How are ‘beneficiaries’ of data identified and determined? Who makes those determinations, and by what method?
  2. Given the challenges of the current landscape, there are questions about what is needed in order for data intermediaries to play an active and meaningful role that leads to responsible data use and management in practice. What would it take for these new governance models to actually increase control around the ways data is used currently (e.g. to forbid certain data uses)? Would organisations have to be mandated to deal with such new structures or adhere to their wishes even for data not pooled inside the model?
  3. In practice, there can be multiple types of data governance structures, potentially with competing interests. For example, some could be set up to restrict and protect data, while others could be set up to maximise members’ income streams from data use. If potential income streams are dependent on the use of data, what are the implications for privacy and data protection? How can potential conflicts between data intermediaries be addressed, and by whom? What kinds of incentive structures might arise, and what type of legal underpinnings do these alternative data governance models need to function correctly?
  4. The role of the specific parties involved in managing data intermediaries, their responsibilities and qualifications need to be considered and balanced. Under what decision-making and management models would these structures operate, and how are decisions being made in practice? If things go wrong, who is held responsible, and by what means?
  5. The particularities of different digital environments across the globe raise questions of applicability in different jurisdictions. Can these models be translated to, and work in, different regions around the world, including less-developed ones?
What about Web3?

 

Some readers might ask why this report does not discuss ‘Web3’ technologies – a term coined by Gavin Wood in his 2014 essay, which envisions a reconfiguration of the web’s technical, governance and payments/transactions infrastructure that moves away from ‘entrusting our information to arbitrary entities on the internet’.[footnote]Wood, G. (2014) ‘ĐApps: What Web 3.0 Looks Like’. Available at: http://gavwood.com/dappsweb3.html[/footnote]

 

The original vision of Web3 aimed to decentralise parts of the online web experience and remove middlemen and intermediaries. It proposed four core components for a Web 3.0 or a ‘post-Snowden’ web:

  • Content publication: a decentralised, encrypted information publication system that ensures the downloaded information has not been interfered with (a minimal sketch of this integrity principle follows the list). Such a system could be built using principles previously applied in technologies such as the BitTorrent[footnote]See: BitTorrent. Available at: https://www.bittorrent.com/[/footnote] protocol for peer-to-peer content distribution, and HTTPS for secure communication over a computer network.
  • Messaging: a messaging system that ensures communication is encrypted and traceable information is not revealed (e.g. IP addresses).
  • Trustless transactions: a means of agreeing the rules of interaction within a system and ensuring automatic enforcement of these rules. A consensus algorithm prevents powerful adversaries from derailing the system. Bitcoin is the most popular implementation of this technology and establishes a peer-to-peer system for validating transactions without a centralised authority. While blockchain technology is associated primarily with payment transactions, the emergence of smart contracts has extended the set of use cases to more complex financial arrangements and non-financial interactions such as voting, exchange, notarisation or providing evidence.
  • Integrated user interface: a browser or user interface that provides a similar experience to traditional web browsers, but uses a different technology for name resolution. In today’s internet, the domain name system (DNS) is controlled by the Internet Corporation for Assigned Names and Numbers (ICANN) and delegated registrars. This would be replaced by a decentralised, consensus-based system that allows users to navigate the internet pseudonymously, securely and trustlessly (an early example of this technology is Namecoin).
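
To illustrate the integrity property in the first bullet, the sketch below shows content addressing in its simplest form: deriving an identifier from the content itself, so that anything retrieved from untrusted peers can be verified. This is the principle used, in far more elaborate forms, by systems such as BitTorrent and IPFS; it is a simplification for illustration, not any project’s actual protocol.

```python
import hashlib

def content_id(data: bytes) -> str:
    """Derive an identifier from the content itself (content addressing)."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, expected_id: str) -> bool:
    """Check that retrieved bytes match the identifier they were requested by,
    making any tampering in transit or at rest detectable."""
    return content_id(data) == expected_id

# Publish: the identifier depends only on the bytes, not on who hosts them.
document = b"an example document"
cid = content_id(document)

# Retrieve from any untrusted peer: integrity holds iff the hashes match.
assert verify(document, cid)
assert not verify(b"a tampered copy", cid)
```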

 

Most elements of this initial Web3 vision are still in their technological infancy. Projects that focus on decentralised storage (for example BitTorrent, Swarm, IPFS) and computation (e.g. Golem, Ocean) face important challenges on multiple fronts – performance, confidentiality, security, reliability, regulation – and it is doubtful that the current generation of these technologies is able to provide a long-term, feasible alternative to existing centralised solutions for most practical use cases.

 

Bitcoin and subsequent advances in blockchain technology have achieved wider adoption and considerably more media awareness, although the space has been rife with various forms of scams and alarming business practices, due to rapid technological progress and lagging regulatory intervention.

 

Growing interest in blockchain networks has also contributed to the ‘Web3 vision’ being gradually co-opted by venture capital investors, to promote a particular niche of projects.  This has popularised Web3 as an umbrella term for alternative financial infrastructure – such as payments, collectibles (non-fungible tokens or NFTs) and decentralised finance (DeFi) – and encouraged an overly simplistic perception of decentralisation.[footnote]Aramonte, S., Huang, W. and Schrimpf, A. (2021). ‘DeFi risks and the decentralisation illusion.’ Bank for International Settlements. Available at: https://www.bis.org/publ/qtrpdf/r_qt2112b.pdf[/footnote] It is not often discussed nor widely acknowledged that the complex architecture of these systems can (and often does) lead to centralisation of power re-emerging in the operational, incentive, consensus, network and governance layers.[footnote]Sai, A. R., Buckley, J., Fitzgerald, B., Le Gear, A. (2021). ‘Taxonomy of centralization in public blockchain systems: A systematic literature review’. Information Processing & Management, 58(4). Available at: https://www.sciencedirect.com/science/article/pii/S0306457321000844?via%3Dihub[/footnote]

 

The promise of Web3 is that decentralisation of infrastructure will necessarily lead to decentralisation of digital power. There is value in this argument and undoubtedly some decentralised technologies, after they reach a certain level of maturity and if used in the right context, can offer benefits over existing centralised alternatives.

 

Acknowledging the current culture and state of development around Web3, at this stage there are few examples in this space where values such as decentralisation and power redistribution are front and centre. It remains to be seen whether progressive alternatives will deliver on their promise in the near to medium term and place these values at their core.

4. Ensuring public participation in technology policy making

The vision

This is a world in which everybody who wants to participate in decisions about data and its governance can do so – there are mechanisms for engagement that legitimise the needs and expectations of those affected by technology. Through a broad range of participatory approaches – from citizens’ councils and juries that directly inform local and national data policy and regulation, to public representation on technology company governance boards – people are better represented, more supported and empowered to make data systems and infrastructures work for them, and policymakers are better informed about what people expect and desire from data, technologies and their uses.

Through these mechanisms for participatory data and technology policymaking and stewardship, individuals who wish to be active citizens can participate directly in data governance and innovation, while those who prefer their interests to be represented can rely on members of their community or on organisations to voice their needs.

Policymakers are more empowered through the legitimacy of public voice to act to curb the power of large technology corporations, and equipped with credible evidence to underpin approaches to policy, regulation and governance.

Public participation, engagement and deliberation have emerged in recent years as fundamental components in shaping future approaches to regulation across a broad spectrum of policy domains.[footnote]OECD. (2020). Innovative Citizen Participation and New Democratic Institutions: Catching the Deliberative Wave. doi:10.1787/339306da-en[/footnote] However, despite their promising potential to facilitate more effective policymaking and regulation, the role of public participation in data and technology-related policy and practice remains remarkably underexplored, if compared – for example – to public participation in city planning and urban law. 

There is, however, a growing body of research that aims to understand the theoretical and practical value of public participation approaches for governing the use of data, which is described in our 2021 report, Participatory data stewardship.[footnote]Ada Lovelace Institute. (2021). Participatory data stewardship. Available at: https://www.adalovelaceinstitute.org/report/participatory-data-stewardship/[/footnote]

What is public participation?

Public participation describes a wide range of methods that bring members of the public’s voices, perspectives, experiences and representation to social and policy issues. From citizen panels to deliberative polls, surveys to community co-design, these methods have important benefits, including informing more effective and inclusive policymaking, increasing representation and accountability in decision making, and enabling more trustworthy governance and oversight.[footnote]Gastil, J. (ed.). (2005). The deliberative democracy handbook: strategies for effective civic engagement in the twenty-first century. Hoboken, N.J: Wiley.[/footnote]

 

Participation often involves providing members of the public with information about particular uses of data or technology, including access to experts, and time and space to reflect and develop informed opinions. Different forms of public participation are often described on a spectrum from ‘inform’, ‘consult’ and ‘involve’, through to ‘collaborate’ and ‘empower’.[footnote]IAP2 International Federation. (2018). Spectrum of Participation. Available at: https://www.iap2.org/page/pillars[/footnote] In our report Participatory data stewardship, the Ada Lovelace Institute places this spectrum into the context of responsible data use and management.

How to get from here to there

Public participation, when implemented meaningfully and effectively, ensures that the values, experiences and perspectives of those affected by data-driven technologies are represented and accounted for in policy and practices related to those technologies.

This has multiple positive impacts. Firstly, it offers a more robust evidence base for developing technology policies and practices that meet the needs of people and society, by building a better understanding of people’s lived experiences and helping to better align the development, deployment and oversight of technologies with societal values. Secondly, it provides policy and practice with greater legitimacy and accountability by ensuring those who are affected have their voices and perspectives taken into account.

Taken together, the evidence base and legitimacy offered by public participation can support a more responsible data and technology ecosystem that earns the trust of the public, rather than erodes and undermines it. Possible approaches to this include:

  1. Members of the public could be assigned by democratically representative random lottery to independent governance panels that provide oversight of dominant technology firms and public-interest alternatives. Those public representatives could be supported by a panel of civil society organisations that interact with governing boards and scrutinise the activity of different entities involved in data-driven decision-making processes.
  2. Panels or juries of citizens could be coordinated by specialised civil society organisations to provide input on the audit and assessment of datasets and algorithms that have significant societal impacts and effects.[footnote]Ada Lovelace Institute. (2022). Algorithmic impact assessment: a case study in healthcare. Available at: https://www.adalovelaceinstitute.org/wp-content/uploads/2022/02/Algorithmic-impact-assessment-a-case-study-in-healthcare.pdf[/footnote]
  3. Political institutions could conduct region-wide public deliberation exercises to gather public input and shape future regulation and enforcement of technology platforms. For example, a national or regional-wide public dialogue exercise could be conducted to consider how a novel technology application might be regulated, or to evaluate the implementation of different legislative proposals.
  4. Participatory co-design or deliberative assemblies could be used to help articulate what public interest data and technology corporations might look like (see the ‘BBC for Data’ above), as alternatives to privatised and multinational companies.

These four suggestions represent just a selection of provocations, and are far from exhaustive. The outcomes of public participation and deliberation can vary, from high-level sets of principles on how data is used, to detailed recommendations that policymakers are expected to implement. But in order to be successful, such initiatives need political will, support and buy-in, to ensure that their outcomes are acknowledged and adopted. Without this, participatory initiatives run the risk of ‘participation washing’, whereby public involvement is merely tokenistic.

Additionally, it is important to note that public participation is not about shifting responsibility back to people and civil society to decide on intricate matters, or to provide the justifications or ‘mandates’ for uses of data and technology that haven’t been ethically, legally or morally scrutinised. Rather, it is about ensuring that the institutions and organisations that develop, govern and regulate data and technology act in the best interests of the people affected by its use.

Further considerations and provocative concepts

Marginalised communities in democratic governance

Jef Ausloos, Alexandra Giannopoulou and Jill Toh

As Europe and other parts of the world set out plans to regulate AI and other technology services, it is more urgent than ever to reflect critically on the value and practical application of the legal mechanisms designed to protect the social groups and individuals affected by high-risk AI systems and other technologies. The question of who has access to decision-making processes, and how these decisions are made, is crucial to addressing the harms caused by technologies.

The #BrusselsSoWhite conversations (a social media hashtag highlighting the lack of racial diversity in EU policy conversations)[footnote]Islam, S. (2021). ‘“Brussels So White” Needs Action, Not Magical Thinking’. EU Observer. Available at: https://euobserver.com/opinion/153343 and Azimy, R. (2020). ‘Why Is Brussels so White?’. Euro Babble. Available at: https://euro-babble.eu/2020/01/22/dlaczego-bruksela-jest-taka-biala/[/footnote] have clearly shown the absence of marginalised people in discussions around European technology policymaking,[footnote]Çetin, R. B. (2021). ‘The Absence of Marginalised People in AI Policymaking’. Who Writes The Rules. Available at: https://www.whowritestherules.online/stories/cetin[/footnote] despite the EU expressing its commitment to anti-racism and inclusion.[footnote]European Commission. (2020). EU Anti-Racism Action Plan 2020-2025. Available at: https://ec.europa.eu/info/policies/justice-and-fundamental-rights/combatting-discrimination/racism-and-xenophobia/eu-anti-racism-action-plan-2020-2025_en[/footnote]

Meaningful inclusion requires moving beyond the rhetoric, performativity and tokenisation of marginalised people. It requires looking inwards to assess whether existing work environments, internal practices, and hiring and retention requirements act as barriers to entry or are exclusionary by design.[footnote]Çetin, R. B. (2021). ‘The Absence of Marginalised People in AI Policymaking’. Who Writes The Rules. Available at: https://www.whowritestherules.online/stories/cetin[/footnote] Mere representation, moreover, is insufficient: it also requires a shift towards recognising the value of different types of expertise, and seeing marginalised people’s experiences and knowledge as legitimate and equal.

There are a few essential considerations for achieving this.

Firstly, legislators and civil society – particularly those active in the field of ‘technology law’ – should consider the broader ambit of rights, freedoms and interests at stake, in order to capture the social rights and collective values generally left out of market-driven logics. This ought to be done by actively engaging with the communities affected, and by interfacing more thoroughly with the respective pre-existing legal frameworks and value systems.[footnote]Meyer, L. (2021). ‘Nothing About Us, Without Us: Introducing Digital Rights for All’. Digital Freedom Fund. Available at: https://digitalfreedomfund.org/nothing-about-us-without-us-introducing-digital-rights-for-all/; Niklas, J. and Dencik, L. (2021). ‘What rights matter? Examining the place of social rights in the EU’s artificial intelligence policy debate’. Internet Policy Review, 10(3). Available at: https://policyreview.info/articles/analysis/what-rights-matter-examining-place-social-rights-eus-artificial-intelligence; and Taylor, L. and Mukiri-Smith, H. (2021). ‘Human Rights, Technology and Poverty’. Research Handbook on Human Rights and Poverty. Available at: https://www.elgaronline.com/view/edcoll/9781788977500/9781788977500.00049.xml[/footnote]

Secondly, the dominant narrative in EU techno-policymaking frames fundamental rights and freedoms primarily in terms of protecting ‘the individual’ against ‘big tech’. This should be complemented with a wider concern for the substantial collective and societal harm generated and exacerbated by the development and use of data-driven technologies by private and public actors.

Thirdly, given the flurry of regulatory proposals, there should be more effective rules on lobbying, covering transparency requirements and the funding sources of thinktanks and other organisations. The revolving door between European institutions and technology companies remains highly problematic, and independent oversight with investigative powers is crucial.[footnote]Corporate Europe Observatory. (2021). The Lobby Network: Big Tech’s Web of Influence in the EU. Available at: https://corporateeurope.org/en/2021/08/lobby-network-big-techs-web-influence-eu[/footnote]

Lastly, more (law) is not always better. In particular, civil society and academia ought to think more creatively about how legal and non-legal approaches may prove productive in tackling the collective harms produced by (the actors controlling) data-driven technologies. Policymakers and enforcement agencies should proactively support such efforts.

Further to these considerations, one approach to embedding public participation into technology policymaking is to facilitate meaningful and diverse deliberation on the principles and values that should guide new legislation and inform technology design.

For example, to facilitate public deliberation on the rules governing how emerging technologies are developed, the governing institutions responsible for overseeing new technologies – be it local, national or supranational government – could establish a citizens’ assembly.[footnote]For more information about citizens’ assemblies see: Involve. (2018). Citizens’ Assembly. Available at: https://www.involve.org.uk/resources/methods/citizens-assembly. For an example of how public deliberation about complex technologies can work in practice, see: Ada Lovelace Institute. (2021). The Citizens’ Biometrics Council. Available at: https://www.adalovelaceinstitute.org/report/citizens-biometrics-council/[/footnote]

Citizens’ assemblies can take various forms, from small groups of citizens in a local community discussing a single issue over a few days, to many hundreds of citizens from across regions considering a complex topic across a series of weeks and months.

Citizens’ assemblies must include representation of a demographically diverse cross-section of people in the region. Those citizens should come together in a series of day-long workshops, hosted across a period of several months, and independently facilitated. During those workshops, the facilitators should provide objective and accessible information about the technological issue concerned and the objectives of legislative or technical frameworks.
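
To make this selection mechanism concrete, the short Python sketch below illustrates one common approach to recruiting such an assembly: a ‘civic lottery’, or stratified random sampling, in which seats are allocated per demographic stratum and filled at random from a volunteer pool. This is an illustrative sketch only, not a description of any specific assembly process: the attributes, quota figures and synthetic volunteer pool are all hypothetical, and real civic lotteries add further safeguards, such as weighting for response bias and maintaining replacement pools.

import itertools
import random

# Hypothetical demographic attributes; real lotteries typically also
# stratify by gender, ethnicity, education and attitudinal measures.
AGE_BANDS = ["18-34", "35-54", "55+"]
REGIONS = ["north", "midlands", "south"]

rng = random.Random(42)  # fixed seed so the draw is reproducible and auditable

# Stand-in for the pool of people who respond to a public invitation.
volunteers = [
    {"id": i, "age_band": rng.choice(AGE_BANDS), "region": rng.choice(REGIONS)}
    for i in range(5000)
]

# Seats per stratum; in practice these would be derived from census
# proportions, so that the assembly mirrors the wider population.
quotas = {combo: 5 for combo in itertools.product(AGE_BANDS, REGIONS)}

def civic_lottery(pool, quotas, rng):
    """Draw members at random within each stratum until its quota is met."""
    selected = []
    for (age_band, region), seats in quotas.items():
        stratum = [v for v in pool
                   if v["age_band"] == age_band and v["region"] == region]
        if len(stratum) < seats:
            raise ValueError(f"too few volunteers in stratum {(age_band, region)}")
        selected.extend(rng.sample(stratum, seats))
    return selected

assembly = civic_lottery(volunteers, quotas, rng)
print(len(assembly), "assembly members selected")  # 45: 5 seats x 9 strata

The point of stratifying the draw, rather than sampling the volunteer pool directly, is that volunteers are rarely representative: quotas guarantee that the seated assembly reflects the chosen demographic mix even when some groups are over- or under-represented among respondents.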

The assembly must be able to hear from, and put questions to, experts on the topic, representing a mix of independent professionals and those holding professional or official roles with associated parties – such as policymakers and technology developers.

At the end of their deliberations, the citizens in the assembly should be supported to develop a set of recommendations – free from the influence of any vested parties – with the expectation that these recommendations will be directly addressed or considered in the design of any legislative or technical frameworks. Such citizens’ assemblies can be an important tool, in addition to grassroots engagement in political parties and civil society, for bringing people into work on societal issues.

Food for thought

As policymakers around the world develop and implement novel data and technology regulations, it is essential that public participation forms a core part of this drafting process. At a time when trust in governments and technology companies is reaching record lows in many regions, policymakers must experiment with richer forms of public engagement beyond one-way consultations. By empowering members of the public to co-create the policy that impacts their lives, policymakers can create more representative and more legitimate laws and regulations around data.

In order to trigger further discussion, a set of questions is offered as provocations for thinking about how to implement public participation and deliberation mechanisms in practice:

  1. Public participation requires a mobilisation of resources and new processes throughout the cycle of technology policymaking. What incentives, resources and support do policymakers and governments need, to be able to undertake public engagement and participation in the development of data and AI policy?
  2. Public participation methods need strategic design, and their limits need to be taken into consideration. Given the ubiquitous and multi-use nature of data and AI, what discrete topics and cases can be meaningfully engaged with and deliberated on by members of the public?
  3. Inclusive public participation is essential to ensuring a representative public deliberation process that delivers outcomes for those affected by technology policymaking. Which communities and groups are most disproportionately harmed or affected by data and AI, and what mechanisms can ensure their experiences and voices are included in dialogue?
  4. It is important to make sure that public participation is not used as a ‘stamp of approval’ and does not become merely a tick-box exercise. To avoid ‘participation washing’, what will encourage governments, industry and other power holders to engage meaningfully with the public, whereby recommendations made by citizens are honoured and addressed?

Chapter 3: Conclusions and open questions

In this report, we started with two questions: What is a more ambitious vision for data use and regulation that can deliver a positive shift in the digital ecosystem? And what are the most promising interventions to create a more balanced system of power and a people-first approach for data?

In Chapter 1, we defined the central problem: that today’s digital economy is built on deep-rooted exploitative and extractive data practices and forms of ‘data rentiership,’ which have resulted in the accrual of vast amounts of power to a handful of large platforms.

We explained how this power imbalance has prevented benefits from flowing to people, who are largely unable to control how their data is collected and used, and are increasingly disempowered from engaging with, seeking redress for, or contesting data-driven decisions that affect their lives.

In Chapter 2 we outlined four cross-cutting interventions concerning infrastructure, data governance, institutions and participation that can help redress that power imbalance in the current digital ecosystem. We recognise that these interventions are not sufficient to solve the problems described above, but we propose them as a realistic first step towards a systemic change.

From interventions, framed as objectives for policy and institutional change, we moved to provocative concepts: more tangible examples of how changing the power balance could work in practice. While we acknowledge that, in the current conditions, these concepts raise more questions than they answer, we hope other researchers and civil society organisations will join us in an effort to build evidence that validates their usefulness or establishes its limits.

Before we continue the exploration of specific solutions (legal rules, institutional arrangements, technical standards) that have the potential to transform the current digital ecosystem towards what we have called ‘a people-first approach’, we reiterate how important it is to think about this change in a systemic way.

A systemic vision envisages all four interventions as interconnected, mutually reinforcing and dependent on one another. It also requires consideration of external ‘preconditions’ that could prevent or impede this systemic reform. We identify the preconditions for the interventions to deliver results as: effective and principled enforcement bodies, increased possibilities for individual and collective legal action, and reduced dependency of key political stakeholders on (the infrastructure and expertise of) large technology companies.

In this last chapter we not only acknowledge political, legal and market conditions that determine the possibilities for transformation of the digital ecosystem, but also propose questions to guide further discussion about these – very practical – challenges:

1. Effective regulatory enforcement

Increased regulatory enforcement, in the context of both national and international cooperation, is a necessary precondition to the success of the interventions described above. As described in Chapter 1, resolving the regulatory enforcement problem will help create meaningful safeguards and regulatory guardrails to support change.

An important aspect of regulatory enforcement and cooperation measures includes the ability of one authority to supply timely information to other authorities from different sectors and from different jurisdictions, subject to relevant procedural safeguards. Some models of this kind of regulatory cooperation already exist – in the UK, the Digital Regulation Cooperation Forum (DRCF) is a cross-regulatory body formed in 2020 by the Competition and Markets Authority (CMA), the Information Commissioner’s Office (ICO) and the Office of Communications (Ofcom), joined in 2021 by the Financial Conduct Authority (FCA).[footnote]Information Commissioner’s Office. (2020). ‘Digital Regulation Cooperation Forum’. Available at: https://ico.org.uk/about-the-ico/what-we-do/digital-regulation-cooperation-forum/[/footnote]

Where regulatory action is initiated against major platforms and global players, new measures should be considered as part of international regulators’ fora, providing the possibility to create ad hoc enforcement taskforces across sectors and geographic jurisdictions, and to institutionalise such bodies where necessary. The possibility of creating multi-sectoral and multi-geographic oversight and enforcement bodies focusing only on the biggest players in the global data and digital economy should be actively considered.

Moreover, it is necessary to create formal channels of communication between enforcement bodies, so that they can share sensitive information that might be needed in investigations. Currently, many enforcement authorities cannot share important information they have obtained in the course of their procedures with enforcement authorities that have a different area of competence or operate in a different jurisdiction. Given the way data and general-purpose technologies are currently used by large platforms, no single enforcement body can see the full picture of risks and harms, leading to suboptimal enforcement of platforms and data practices. Coherent and holistic enforcement is needed.

Questions that need to be addressed:

  • How would an integrated approach to regulation and enforcement – embedding data protection, consumer protection and competition law objectives and mechanisms – be constituted in practice?
  • How can we uphold procedural rights, such as the right to good administration and to effective judicial remedy, in the context of transnational and trans-sectoral disputes?
  • How can enforcement authorities be held accountable where they fail to enforce the law effectively?
  • How can we build more resilient enforcement structures that are less susceptible to corporate capture?

Taking into account collective harm

Jef Ausloos, Alexandra Giannopoulou and Jill Toh

Despite efforts to prevent it from becoming a mere checkbox exercise, GDPR compliance often suffers from a narrow framing, ignoring the multifarious issues that (can) arise in complex data-driven technologies and infrastructures. A meaningful appreciation of the broader context, and an evaluation of the potential impacts on (groups of) individuals and communities, is necessary in order to move from ‘compliance’ narratives to fairer data ecosystems that are continuously evaluated and confronted with the potential individual or collective harms caused by data-driven technologies.

Public decision-makers responsible for deploying new technologies should start by critically questioning the very reason for adopting a specific data-driven technology in the first place. These actors should fundamentally be able to first demonstrate the necessity of the system itself, before assessing what data collection and processing the respective system would require. For instance, in the example of Centaur, the migrant-monitoring system used in new refugee camps in Greece, authorities should first be able to demonstrate in general terms the necessity of a surveillance system, before assessing the data collection and processing that Centaur would inherently require and what could justify it as necessary.

This deliberation is a complex exercise. Where the GDPR requires a data protection impact assessment, the deliberation is left to data controllers, before it is subject to any questioning by the relevant authorities.

One problem is that data controllers often define the legitimacy of a chosen system by stretching the meaning of GDPR criteria, or by benefitting from the lack of strict compliance processes for principles (such as data minimisation and data protection by design and by default) in order to demonstrate compliance. This can lead to a narrow norm-setting environment: even when operating under rather flexible concepts (such as respect for the data protection principles set out in the GDPR), data controllers’ interpretation remains constricted in practice and neglects to consider new types of harms and impacts at a wider level.

While identifying and mitigating harms is the responsibility of the data controller, civil society organisations could play an important facilitating role (without bearing any formal burden to facilitate this process) in revealing the collective harms that complex data-driven technological systems are likely to inflict on specific communities and groups, as well as sector-specific or community-specific interpretations of these harms.[footnote]And formalised through GDPR mechanisms such as codes of conduct (Article 40) and certification mechanisms (Article 42).[/footnote]

In practice, accountability measures would then require that responsible actors demonstrate not only their consideration of these possible broader collective harms, but also the active measures and steps taken to prevent those harms from materialising.

Put briefly, both data protection authorities and those controlling impactful data-driven technologies need to recognise that they can be held accountable for, and have to address, complex harms and impacts on individuals and communities. For instance, from a legal perspective, and as recognised under the GDPR’s data protection by design and by default requirement,[footnote]Article 25 of the GDPR.[/footnote] this means that compliance ought not to be seen as a one-off effort at the start of any complex data-driven technological system, but rather as a continuous exercise that considers the broader implications of data infrastructures for everyone involved.

Perhaps more importantly, and because not all harms and impacts can be anticipated, robust mechanisms should be in place enabling and empowering affected individuals and communities to challenge (specific parts of) data-driven technologies. While the GDPR may offer some tools for empowering those affected (e.g. data rights), they cannot be seen as goals in themselves, but need to be interpreted and accommodated in light of the context in which, and interests for which, they are invoked.

2. Legal action and representation

Another way to support the interventions proposed in Chapter 2 in having their desired effect is to create more avenues for civil society organisations, groups and individuals to hold accountable both large platforms that abuse people’s data rights and state authorities that do not adequately fulfil their enforcement tasks.

Mandating the exercise of data rights to intermediary entities is being explored as a way to address information and power asymmetries and systemic data-driven injustices at a collective level.[footnote]Giannopoulou, A., Ausloos, J., Delacroix, S. and Janssen, H. (2022). ‘Mandating Data Rights Exercises’. Social Science Research Network. Available at https://ssrn.com/abstract=4061726[/footnote] The GDPR does not prevent the exercise of data rights through intermediaries, and rights delegation (as opposed to waiving the right to data protection, which is not possible under EU law since fundamental rights are inalienable) has started to be recognised in data protection legislation globally.

For example, in India[footnote]See: draft Indian Personal Data Protection Bill (2019). Available at: https://prsindia.org/files/bills_acts/bills_parliament/2019/Personal%20Data%20Protection%20Bill,%202019.pdf[/footnote] and Canada,[footnote]See: draft Canadian Digital Charter Implementation Act (2020). Available at: https://www.parl.ca/DocumentViewer/en/44-1/bill/C-11/first-reading[/footnote] draft data protection and privacy bills speak about intermediaries that can exercise the rights conferred by law. In the US, the California Consumer Privacy Act (CCPA)[footnote]See: California Consumer Privacy Act of 2018. Available at: https://leginfo.legislature.ca.gov/faces/codes_displayText.xhtml?division=3.&part=4.&lawCode=CIV&title=1.81.5[/footnote] and the California Privacy Rights Act (CPRA)[footnote]See: California Privacy Rights Act of 2020. Available at: https://iapp.org/resources/article/the-california-privacy-rights-act-of-2020[/footnote] – which amends and expands the CCPA – both mention ‘authorised agents’, and the South Korean Personal Information Protection Act[footnote]See: Article 38 of the South Korean Personal Information Protection Act of 2020. Available in English at: https://elaw.klri.re.kr/kor_service/lawView.do?hseq=53044&lang=ENG[/footnote] also talks about ‘representatives’ who can be authorised by the data subject to exercise rights.

Other legal tools enabling legal action for individuals and collectives are Article 79 of the GDPR, which allows data subjects to seek compliance orders before courts, and Article 80(2) of the GDPR, which allows representative bodies to bring collective actions without the explicit mandate of data subjects. Both mechanisms are underused and underenforced, receiving little court attention.

One step further would be to strengthen the capacity of civil society to pursue collective legal action for rights violations directly against the large players, or against state authorities that do not adequately fulfil their enforcement tasks. Reforming legal action and representation rules to make them more accessible to civil society actors and collectives needs to include measures to reduce the high costs of bringing court claims.[footnote]For example, in Lloyd v Google, the respondent is said to have secured £15.5m backing from Therium, a UK litigation funder, to cover legal costs. See: Thompson, B. (2017). ‘Google faces UK suit over alleged snooping on iPhone users’. Financial Times. Available at: https://www.ft.com/content/9d8c7136-d506-11e7-8c9a-d9c0a5c8d5c9. Lloyd v Google is a landmark case in the UK seeking collective claims on behalf of several millions of people against Google’s practices of tracking Apple iPhone users and collecting data for commercial purposes without the user’s knowledge or consent. The UK’s Supreme Court verdict was not to allow collective claims, which means that every individual would have to seek legal action independently and prove material damage or distress, bearing the full costs of litigation. The full judgement is available here: https://www.supremecourt.uk/cases/docs/uksc-2019-0213-judgment.pdf[/footnote] Potential solutions include cost-capping for certain general actions where the claimant cannot afford the case.

Questions that need to be addressed:

  • How can existing mechanisms for legal action and representation be made more accessible to civil society actors and collectives?
  • What new mechanisms and processes need to be designed for documenting abuses and proving harms, to address systemic data-driven injustices at a collective level?
  • How can cost barriers to legal action be reduced?

3. Removing industry dependencies

Finally, another way to ensure the interventions described above are successful is to lessen dependencies between regulators, civil society organisations and corporate actors. Industry dependencies can take many forms, including the sponsoring of major conferences for academia and civil society, and funding policy-oriented thinktanks that seek to advise regulators.[footnote]Solon, O. and Siddiqui, S. (2017). ‘Forget Wall Street – Silicon Valley is the new political power in Washington’. The Guardian. Available at: https://www.theguardian.com/technology/2017/sep/03/silicon-valley-politics-lobbying-washington[/footnote] [footnote]Stacey, K. and Gilbert, C. (2022). ‘Big Tech increases funding to US foreign policy think-tanks’. Financial Times. Available at https://www.ft.com/content/4e4ca1d2-2d80-4662-86d0-067a10aad50b[/footnote] While these dependencies do not necessarily lead to direct influence over research outputs or decisions, they do raise a risk of eroding independent critique and evaluation of large digital platforms. 

There are only a small number of specialist university faculties and research institutes working on data, digital and societal impacts that do not operate, in one way or another, with funding from large platforms.[footnote]Clarke, L., Williams, O. and Swindells, K. (2021). ‘How Google quietly funds Europe’s leading tech policy institutes’. The New Statesman. Available at: https://www.newstatesman.com/science-tech/big-tech/2021/07/how-google-quietly-funds-europe-s-leading-tech-policy-institutes[/footnote] This industry-resource dependency risks jeopardising academic independence. A recent report highlighted that ‘[b]ig tech’s control over AI resources made universities and other institutions dependent on these companies, creating a web of conflicted relationships that threaten academic freedom and our ability to understand and regulate these corporate technologies.’[footnote]Whittaker, M. (2021). ‘The steep cost of capture’. ACM Interactions. Available at: https://interactions.acm.org/archive/view/november-december-2021/the-steep-cost-of-capture[/footnote]

This points to the need for a more systematic approach to countering corporate dependencies. Civil society, academia and the media play an important role in counterbalancing the narratives and actions of large corporations. Appropriate public funding, statutory rights and protection are necessary for them to be able to fulfil their function as balancing actors, but also as visionaries for alternative and potentially better ecosystems.

Questions that need to be addressed:

  • How would alternative funding models (such as public or philanthropic funding) that remove dependencies on industry be constituted?
  • Could national research councils (such as UKRI) and public funding play a bigger role in creating dedicated funding streams to support universities, independent media and civil society organisations, to shield them from corporate financing?
  • What type of mechanisms and legal measures need to be put in place, to establish endowment funds for specific purposes, creating sufficient incentives for founding members, but without compromising governance? (For example, donors, including large companies, could benefit from specific tax deductions but wouldn’t have any rights or decision-making power in how an endowment is governed, and capital endowments would be allowed but not recurring operating support, as that creates dependency).

Open invitation and call to action

A complete overhaul of the existing data ecosystem cannot happen overnight. In this report, we acknowledge that a multifaceted approach is necessary for such a reform to be effective. Needless to say, there is no single, off-the-shelf solution that – on its own – will change the paradigm. Looking towards ideas that can produce substantial transformations can seem overwhelming, and it is also necessary to acknowledge and factor in the challenges of putting even less revolutionary ideas into practice.

Acknowledging that many instruments in existing legislation remain to be fully tested and understood, in this report we set out to develop the most promising tools for intervention that can take us towards a people-first digital ecosystem that’s fit for the middle of the twenty-first century.

In this intellectual journey, we explored a set of instruments that carry transformative potential, and divided them into four areas that reflect the biggest obstacles we will face when imagining a deep reform of the digital ecosystem: control over technology infrastructure, power over how data is purposed and governed, balancing asymmetries through new institutions, and greater social accountability through inclusive participation in policymaking.

We unpacked some of the complexity of these challenges, and asked questions that we deem critical for the success of this complex reform. With this opening, we hope to fuel a collective effort to articulate ambitious aspirations for data use and regulation that work for people and society.

Reinforcing our invitation in 2020 to ‘rethink data’, we call on policymakers, researchers, civil society organisations, funders and industry to build towards more radical transformations, reflecting critically, testing and further developing these proposed concepts for change.

Policymakers
  • Transpose the proposed interventions into policy action and help build the pathway towards a comprehensive and transformative vision for data
  • Ensure that impediments to effective enforcement of existing regulatory regimes are identified and removed
  • Use evidence of public opinion to proactively develop policy, governance and regulatory mechanisms that work for people and society.

Researchers
  • Reflect critically on the goals, strengths and weaknesses of the proposed concepts for change
  • Build on the proposed concepts for change with further research into potential solutions.

Civil society organisations
  • Analyse the proposed transformations and propose ways to build a proactive (instead of reactive) agenda in policy
  • Be ambitious and bold, visualise a positive future for data and society
  • Advocate for transformative changes in data policy and practice and make novel approaches possible.

Funders
  • Include exploration of the four proposed interventions in your annual funding agenda, or create a new funding stream for a more radical vision for data
  • Support researchers and civil society organisations to remain independent of government and industry
  • Fund efforts that work towards advancing concepts for systemic change.

Industry
  • Support the development and implementation of open standards in a more inclusive way (incorporating diverse perspectives)
  • Contribute to developing mechanisms for the responsible use of data for social benefit
  • Incorporate transparency into practices, including being open about internal processes and insights, and allowing researcher access and independent oversight.

Final notes

Context for our work

One of the core conundrums that motivated the establishment of the Ada Lovelace Institute by the Nuffield Foundation in 2018 was how to construct a system for data use and governance that engendered public trust, enabled the protection of individual rights and facilitated the use of data as a public good.

Even before the Ada Lovelace Institute was fully operational, Ada’s originating Board members (Sir Alan Wilson, Hetan Shah, Professor Helen Margetts, Azeem Azhar, Alix Dunn and Professor Huw Price) had begun work on a prospectus to establish a programme of work, guided by a working group, to look ‘beyond data ownership’ at future possibilities for overhauling data use and management. This programme built on the foundations of the Royal Society and British Academy’s 2017 report, Data management and use: Governance in the 21st century, and grew to become Rethinking Data.

Ada set out an ambitious vision for a research programme, to develop a countervailing vision for data, which could make the case for its social value, tackle asymmetries of power and data injustice, and promote and enable responsible and trustworthy use of data. Rethinking Data aimed to examine and reframe the kinds of language and narratives we use when talking about data, define what ‘good’ looks like in practice when data is collected, shared and used, and recommend changes in regulations so that data rights can be effectively exercised, and data responsibilities are clear.

There has been some progress in changing narratives, practices and regulations: popular culture (in the form of documentaries such as The Social Dilemma and Coded Bias), corporate product choices (like Apple’s decision to restrict tracking by default on iPhone apps) and high-profile news stories (such as the Ofqual algorithm fiasco, which saw students take to British streets to protest ‘F**k the algorithm’), have contributed to an evolving and more informed narrative about data.

The potential of data-driven technologies has been front and centre in public health messaging around the pandemic response, and debates around contact tracing apps have revealed a rich and nuanced spectrum of public attitudes to the trade-off between individual privacy and the public interest. The Ada Lovelace Institute’s own public deliberation research during the pandemic showed that the ‘privacy vs the pandemic’ arguments entrenched in media and policy narratives are contested by the public.[footnote]Ada Lovelace Institute. (2020). No green lights, no red lines – Public perspectives on COVID-19 technologies. Available at: https://www.adalovelaceinstitute.org/wp-content/uploads/2020/07/No-green-lights-no-red-lines-final.pdf and Parker, I. (2020). ‘It’s complicated: what the public thinks about COVID-19 technologies’. Ada Lovelace Institute. Available at: https://www.adalovelaceinstitute.org/blog/no-green-lights-no-red-lines/[/footnote]

There is now an emerging discourse around ‘data stewardship’, the responsible and trustworthy management of data in practice, to which the Ada Lovelace Institute has contributed via research which canvasses nascent legal mechanisms and participatory approaches for improving ethical data practices.[footnote]Ada Lovelace Institute. (2021). Exploring legal mechanisms for data stewardship. Available at: https://www.adalovelaceinstitute.org/report/legal-mechanisms-data-stewardship/ and Ada Lovelace Institute. (2021). Participatory data stewardship. Available at: https://www.adalovelaceinstitute.org/report/participatory-data-stewardship/[/footnote] The prospect of new institutions and mechanisms for empowering individuals in the governance of their data is gaining ground, and the role of new data intermediaries is being explored in legislative debates in Europe, India and Canada,[footnote]Data Trusts. (2020). International approaches to data trusts: recent policy developments from India, Canada and the EU. Available at: https://datatrusts.uk/blogs/international-policy-developments[/footnote] as well as in the data reform consultation in the UK.[footnote]See: Department for Digital, Culture, Media & Sport (DCMS). (2021). Data: A new direction, Section 7. Available at: https://www.gov.uk/government/consultations/data-a-new-direction[/footnote]

Methodology

The underlying research for this project was primarily informed by the range of expert perspectives in the Rethinking data working group. It was supplemented by established and emerging research in this landscape and refined by several research pieces commissioned from leading experts on data policy.

As with so much else, the COVID-19 pandemic made the task of the Rethinking data working group immensely more difficult, not least because we had envisaged that the deliberations of the group (which spans three continents) would take place in person. Despite this, the working group persisted and managed 10 meetings over a 12-month period.

To start with, the working group met to identify and analyse themes and tensions in the current data ecosystem. In the first stage of these deliberations, they singled out the key questions and challenges they felt were most important, such as questions around the infrastructure used to collect and store data, emerging regulatory proposals for markets and data-driven technologies, and the market landscape that major technology companies operate in.

Once these challenges were identified, the working group used a horizon-scanning methodology, to explore the underlying assumptions, power dynamics and tensions. To complement the key insights from the working group discussion, a landscape overview on ‘future technologies’ – such as privacy-enhancing techniques, edge computing, and others – was commissioned from the University of Cambridge.

The brief looked at emerging trends that present more pervasive, targeted or potentially intrusive data capture, focusing only on the more notable or growing models. The aim was to identify glimpses of how power will operate in new settings created by technology, and how the big business players’ approach to people and data might evolve as a result of these developments, without attempting to predict or forecast how trends will play out.

Having identified the power and centralisation of large technology companies as two of the major themes of concern, in the second stage of the deliberations the working group considered two major questions: What are the most important manifestations of power? And what are the most promising interventions to enable an ambitious vision for the future of data use and regulation?

Speculative thinking methodologies, such as speculative scenarios, were used as provocations for the working group to think beyond the current challenges, allowing different concepts for interventions to be discussed. The three scenarios developed highlighted potential tensions and warned about fallacies that could emerge if a simplistic view of regulation were adopted.

In the last stage of our process, the interventions suggested by the working group were mapped into an ecosystem of interventions that could support positive transformations to emerge. Commissioned experts were invited to surface further challenges, problems and open questions associated with different interventions.

Acknowledgements

This report was lead authored by Valentina Pavel, with substantive contributions from Carly Kind, Andrew Strait, Imogen Parker, Octavia Reeve, Aidan Peppin, Katarzyna Szymielewicz, Michael Veale, Raegan MacDonald, Orla Lynskey and Paul Nemitz.

Working group members

  • Diane Coyle (co-chair) – Bennett Professor of Public Policy, University of Cambridge
  • Paul Nemitz (co-chair) – Principal Adviser on Justice Policy, EU Commission; visiting Professor of Law at College of Europe
  • Amba Kak – Director of Global Policy & Programs, AI Now Institute
  • Amelia Andersdotter – Data Protection Technical Expert and Founder, Dataskydd
  • Anne Cheung – Professor of Law, University of Hong Kong
  • Martin Tisné – Managing Director, Luminate
  • Michael Veale – Lecturer in Digital Rights and Regulation, University College London
  • Natalie Hyacinth – Senior Research Associate, University of Bristol
  • Natasha McCarthy – Head of Policy, Data, The Royal Society
  • Katarzyna Szymielewicz – President, Panoptykon Foundation
  • Orla Lynskey – Associate Professor of Law, London School of Economics
  • Raegan MacDonald – Tech-policy expert
  • Rashida Richardson – Assistant Professor of Law and Political Science, Northeastern University School of Law & College of Social Sciences and Humanities
  • Ravi Naik – Legal Director, AWO
  • Steven Croft – Founding board member, Centre for Data Ethics and Innovation (CDEI)
  • Taylor Owen – Associate Professor, McGill University – Max Bell School of Public Policy

Commissioned experts

  • Ian Brown – Leading specialist on internet regulation and pro-competition mechanisms such as interoperability
  • Jathan Sadowski – Senior Research Fellow, Emerging Technologies Research Lab, Monash University
  • Jef Ausloos – Institute for Information Law (IViR), University of Amsterdam
  • Jill Toh – Institute for Information Law (IViR), University of Amsterdam
  • Alexandra Giannopoulou – Institute for Information Law (IViR), University of Amsterdam

External reviewers

  • Agustín Reyna – Director, Legal and Economic Affairs, BEUC
  • Jeni Tennison – Executive Director, Connected by data
  • Theresa Stadler – Doctoral Assistant, Security and Privacy Engineering Lab, Ecole Polytechnique Fédérale de Lausanne (EPFL)
  • Alek Tarkowski – Director of Strategy, Open Future Foundation

Throughout the working group deliberations we also received support from Annabel Manley, research assistant at the University of Cambridge, and Jovan Powar and Dr Jat Singh, Compliant & Accountable Systems Group at the University of Cambridge.

  72. Douglas, T. (2021) ‘Cross Post: Vaccine Passports: Four Ethical Objections, and Replies’. Practical Ethics. Available at: http://blog.practicalethics.ox.ac.uk/2021/03/cross-post-vaccine-passports-four-ethical-objections-and-replies/ (Accessed: 8 April 2021).
  73. Brown, R. C. H. et al. (2020) ‘Passport to freedom? Immunity passports for COVID-19’, Journal of Medical Ethics, 46(10), pp. 652–659. doi: 10.1136/medethics-2020-106365.
  74. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence; Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence
  75. Beduschi, A. (2020).
  76. Black, I. and Forsberg, L. (2021) ‘Inoculate to Imbibe? On the Pub Landlord Who Requires You to be Vaccinated against COVID’. Practical Ethics. Available at: http://blog.practicalethics.ox.ac.uk/2021/03/inoculate-to-imbibe/ (Accessed: 6 April 2021).
  77. Hindu Council UK (2021) Supporting Nationwide Vaccination Programme. 19 January 2021. Available at: http://www.hinducounciluk.org/2021/01/19/supporting-nationwide-vaccination-programme/ (Accessed: 6 April 2021); Ladaria Ferrer. L., and Giacomo Morandi. G. (2020) ‘Note on the morality of using some anti-COVID-19 vaccines’. Vatican. Available at: https://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_con_cfaith_doc_20201221_nota-vaccini-antiCOVID_en.html (Accessed: 6 April 2021); Sadakat Kadri (2021) ‘For Muslims wary of the COVID vaccine: there’s every religious reason not to be’. The Guardian. 8 February 2021. Available at: http://www.theguardian.com/commentisfree/2021/feb/18/muslims-wary-COVID-vaccine-religious-reason (Accessed: 6 April 2021).
  78. Office for National Statistics (2021) Coronavirus and vaccination rates in people aged 50 years and over by socio-demographic characteristic, England: 8 December 2020 to 12 April 2021. 6 May 2021. Available at: Coronavirus and vaccination rates in people aged 50 years and over by socio-demographic characteristic, England – Office for National Statistics (ons.gov.uk).
  79. Schraer. R., (2021) ‘Covid: Black leaders fear racist past feeds mistrust in vaccine’. BBC News. 6 May 2021. Available at: https://www.bbc.co.uk/news/health-56813982 (Accessed: 7 May 2021)
  80. Allegretti. A., and Booth. R., (2021).
  81. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  82. Black, I. and Forsberg, L. (2021).
  83. Beduschi, A. (2020).
  84. Thomas, N. (2021) ‘Vaccine passports: path back to normality or problem in the making?’, Reuters, 5 February 2021. Available at: https://www.reuters.com/article/us-health-coronavirus-britain-vaccine-pa-idUSKBN2A4134 (Accessed: 6 April 2021).
  85. Buolamwini, J. and Gebru, T. (2018) ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’, in Conference on Fairness, Accountability and Transparency. PMLR, pp. 77–91. Available at: http://proceedings.mlr.press/v81/buolamwini18a.html (Accessed: 6 April 2021).
  86. Kofler, N. and Baylis, F. (2020) ‘Ten reasons why immunity passports are a bad idea’, Nature, 581(7809), pp. 379–381. doi: 10.1038/d41586-020-01451-0.
  87. ibid.
  88. Olivarius, K. (2019) ‘Immunity, Capital, and Power in Antebellum New Orleans’, The American Historical Review, 124(2), pp. 425–455. doi: 10.1093/ahr/rhz176.
  89. Access Now, Response to Ada Lovelace Institute call for evidence.
  90. Dr Btihaj Ajana, Response to Ada Lovelace Institute call for evidence.
  91. Pai. M., (2021) ‘How Vaccine Passports Will Worsen Inequities In Global Health,’ Nature Portfolio Microbiology Community. Available at: http://naturemicrobiologycommunity.nature.com/posts/how-vaccine-passports-will-worsen-inequities-in-global-health (Accessed: 6 April 2021).
  92. Merrick. J., (2021) ‘New variants will “come back to haunt” the UK unless it helps tackle worldwide transmission’, iNews, 23 April 2021. Available at: https://inews.co.uk/news/politics/new-variants-will-come-back-to-haunt-the-uk-unless-it-helps-tackle-worldwide-transmission-971041 (Accessed: 5 May 2021).
  93. Kuchler, H. and Williams, A. (2021) ‘Vaccine makers say IP waiver could hand technology to China and Russia’, Financial Times, 25 April 2021. Available at: https://www.ft.com/content/fa1e0d22-71f2-401f-9971-fa27313570ab (Accessed: 5 May 2021).
  94. Digital, Culture, Media and Sport Committee Sub-Committee on Online Harms and Disinformation (2021). Oral evidence: Online harms and the ethics of data, HC 646. 26 January 2021. Available at: https://committees.parliament.uk/oralevidence/1586/html/ (Accessed: 9 April 2021).
  95. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  96. A principle that argues reforms should not be made until the reasoning behind the existing state of affairs is understood, inspired by a quote from G. K. Chesterton’s The Thing (1929), arguing that an intelligent reformer would not remove a fence until you know why it was put up in the first place.
  97. Pietropaoli, I. (2021) ‘Part 2: Getting Digital Health Passports Right? Legal, Ethical and Equality Considerations’. British Institute of International and Comparative Law. 1 April 2021. Available at: https://www.biicl.org/blog/23/part-2-getting-digital-health-passports-right-legal-ethical-and-equality-considerations (Accessed: 6 April 2021).
  98. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  99. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  100. Ada Lovelace Institute (2021) International monitor: vaccine passports and COVID status apps. Available at: https://www.adalovelaceinstitute.org/project/international-monitor-vaccine-passports-covid-status-apps/ (Accessed: 5 April 2021).
  101. Pew Research Center (2020) 8 charts on internet use around the world as countries grapple with COVID-19. Available at: https://www.pewresearch.org/fact-tank/2020/04/02/8-charts-on-internet-use-around-the-world-as-countries-grapple-with-covid-19/(Accessed: 13 April 2021).
  102. Ada Lovelace Institute (2021) The data divide. Available at: https://www.adalovelaceinstitute.org/survey/data-divide/ (Accessed: 6 April 2021).
  103. Pew Research Center (2020).
  104. Electoral Commission (2015) Delivering and costing a proof of identity scheme for polling station voters in Great Britain. Available at: https://www.electoralcommission.org.uk/media/1825 (Accessed: 13 April 2021); Davies, C. (2021). ‘Number of young people with driving licence in Great Britain at lowest on record’, The Guardian. 5 April 2021. Available at: https://www.theguardian.com/money/2021/apr/05/number-of-young-people-with-driving-licence-in-great-britain-at-lowest-on-record (Accessed: 6 May 2021).
  105. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  106. NHS Digital. (2021) NHS e-Referral Service integrated into the NHS App to make managing referrals easier. Available at: https://digital.nhs.uk/news-and-events/latest-news/nhs-e-referral-service-integrated-into-the-nhs-app-to-make-managing-referrals-easier (Accessed: 28 April 2021).
  107. Access Now, Response to Ada Lovelace Institute call for evidence.
  108. For example, see: Mvine at Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021); evidence submitted to the Ada Lovelace Institute from Certus, IOTA, ZAKA, Tony Blair Institute for Global Change, SICPA, Yoti, Good Health Pass.
  109. Danish Government (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 13 April 2021)
  110. Danish Government (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 13 April 2021)
  111. Ada Lovelace Institute (2021) The Citizens’ Biometrics Council. Available at: https://www.adalovelaceinstitute.org/project/citizens-biometrics-council/ (Accessed: 13 April 2021)
  112. Whitley, E. (2021) ‘What must we consider if proof of Covid status is to help reopen the economy?’ LSE Department of Management blog. Available at: https://blogs.lse.ac.uk/management/2021/02/24/what-must-we-consider-if-proof-of-covid-status-is-to-help-reopen-the-economy/ (Accessed: 6 May 2021).
  113. Information Commissioner’s Office (2021) About the DPA 2018. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/introduction-to-data-protection/about-the-dpa-2018/ (Accessed: 6 April 2021).
  114. Beduschi, A. (2020).
  115. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  116. European Data Protection Board and European Data Protection Supervisor (2021), Joint Opinion 04/2021 on the Proposal for a Regulation of the European Parliament and of the Council on a framework for the issuance, verification and acceptance of interoperable certificates on vaccination, testing and recovery to facilitate free movement during the COVID-19 pandemic (Digital Green Certificate). Available at: https://edps.europa.eu/system/files/2021-04/21-03-31_edpb_edps_joint_opinion_digital_green_certificate_en_0.pdf (Accessed: 29 April 2021)
  117. Beduschi, A. (2020).
  118. ibid.
  119. Information Commissioner’s Office (2021) International transfers after the UK exit from the EU Implementation Period. ICO. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/international-transfers-after-uk-exit/ (Accessed: 5 May 2021).
  120. Global Privacy Assembly Executive Committee (2021).
  121. Beduschi, A. (2020).
  122. Global Privacy Assembly (2021) GPA Executive Committee joint statement on the use of health data for domestic or international travel purposes. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 13 April 2021).
  123. Information Commissioner’s Office (2021) Principle (c): Data minimisation. ICO. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/principles/data-minimisation/ (Accessed: 6 April 2021).
  124. Denham. E., (2021) ‘Blog: Data Protection law can help create public trust and confidence around COVID-status certification schemes’. ICO. Available at: https://ico.org.uk/about-the-ico/news-and-events/blog-data-protection-law-can-help-create-public-trust-and-confidence-around-COVID-status-certification-schemes/ (Accessed: 6 April 2021).
  125. Illmer, A. (2021) ‘Singapore reveals COVID privacy data available to police’, BBC News, 5 January 2021. Available at: https://www.bbc.com/news/world-asia-55541001 (Accessed: 6 April 2021). Gross, A. and Parker, G. (2020) Experts decry move to share COVID test and trace data with police, Financial Times. Available at: https://www.ft.com/content/d508d917-065c-448e-8232-416510592dd1 (Accessed: 6 April 2021).
  126. Halpin, H. (2020) ‘Vision: A Critique of Immunity Passports and W3C Decentralized Identifiers’, in van der Merwe, T., Mitchell, C., and Mehrnezhad, M. (eds) Security Standardisation Research. Cham: Springer International Publishing (Lecture Notes in Computer Science), pp. 148–168. doi: 10.1007/978-3-030-64357-7_7.
  127. FHIR (2019) 2019 HL7 FHIR Release 4. Available at: http://www.hl7.org/fhir/ (Accessed: 21 April 2021).
  128. Doteveryone (2019) Consequence scanning, an agile practice for responsible innovators. Available at: https://doteveryone.org.uk/project/consequence-scanning/ (Accessed: 21 April 2021)
  129. NHS Digital (2020) DCB3051 Identity Verification and Authentication Standard for Digital Health and Care Services. Available at: https://digital.nhs.uk/data-and-information/information-standards/information-standards-and-data-collections-including-extractions/publications-and-notifications/standards-and-collections/dcb3051-identity-verification-and-authentication-standard-for-digital-health-and-care-services (Accessed: 7 April 2021).
  130. Royal College of General Practitioners (2021) RCGP submission for the COVID-status Certification Review call for evidence. Available at: https://www.rcgp.org.uk/policy/rcgp-consultations/covid-status-certification-review.aspx (Accessed: 6 April 2021).
  131. Say, M. (2021) ‘Government gives Verify a stay of execution.’ UKAuthority. Available at: https://www.ukauthority.com/articles/government-gives-verify-a-stay-of-execution/ (Accessed: 5 May 2021).
  132. Cabinet Office and Lopez. J., (2021) ‘Julia Lopez speech to The Investing and Savings Alliance’. GOV.UK. Available at: https://www.gov.uk/government/speeches/julia-lopez-speech-to-the-investing-and-savings-alliance (Accessed: 6 April 2021).
  133. For more on digital identity during the pandemic see: Freeguard, G. and Shepheard, M. (2020) ‘Digital government during the coronavirus crisis’. Institute for Government. Available at: https://www.instituteforgovernment.org.uk/sites/default/files/publications/digital-government-coronavirus.pdf.
  134. Department for Digital, Culture, Media and Sport (2021) The UK digital identity and attributes trust framework, GOV.UK. Available at: https://www.gov.uk/government/publications/the-uk-digital-identity-and-attributes-trust-framework/the-uk-digital-identity-and-attributes-trust-framework (Accessed: 6 April 2021).
  135. Access Now, Response to Ada Lovelace Institute call for evidence.
  136. iProov (2021) Covid-19 Passport from iProov and Mvine Moves Into Trial Phase. Available at: https://www.iproov.com/press/uk-covid19-passport-moves-into-trial-phase (Accessed: 7 April 2021).
  137. Ada Lovelace Institute (2021) The socio-technical challenges of designing and building a vaccine passport system. Available at: https://www.youtube.com/watch?v=Md9CLWgdgO8&t=2s (Accessed: 7 April 2021).
  138. On general trust, polls include Ipsos MORI Veracity Index. On data trust, see RSS and ODI polling.
  139. Sommer, A. K. (2021) ‘Some foreigners in Israel are finally able to obtain COVID vaccine pass’. Haaretz.com. Available at: https://www.haaretz.com/israel-news/.premium-some-foreigners-in-israel-are-finally-able-to-obtain-COVID-19-green-passport-1.9683026 (Accessed: 8 April 2021).
  140. Cabinet Office (2020) ‘Ventilator Challenge hailed a success as UK production finishes’. GOV.UK. Available at: https://www.gov.uk/government/news/ventilator-challenge-hailed-a-success-as-uk-production-finishes (Accessed: 6 April 2021).
  141. For example, evidence received from techUK and World Health Pass.
  142. Our World in Data (2021) Coronavirus (COVID-19) Vaccinations. Available at: https://ourworldindata.org/covid-vaccinations (Accessed: 13 April 2021)
  143. FT Visual and Data Journalism team (2021) Covid-19 vaccine tracker: the global race to vaccinate. Financial Times. Available at: https://ig.ft.com/coronavirus-vaccine-tracker/ (Accessed: 13 April 2021)
  144. Full Fact. (2020) How does the new coronavirus compare to influenza? Available at: https://fullfact.org/health/coronavirus-compare-influenza/ (Accessed: 6 April 2021).
  145. BBC News (2021) ‘Coronavirus: Third wave will “wash up on our shores”, warns Johnson’. BBC News. 22 March 2021. Available at: https://www.bbc.com/news/uk-politics-56486067 (Accessed: 6 April 2021).
  146. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  147. Tony Blair Institute for Global Change (2021) The New Necessary: How We Future-Proof for the Next Pandemic. Available at https://institute.global/policy/new-necessary-how-we-future-proof-next-pandemic (Accessed: 13 April 2021)
  148. Paton. G., (2021) ‘Cost of home Covid tests for travellers halved as companies accused of “profiteering”.’ The Times. 14 April 2021. Available at: https://www.thetimes.co.uk/article/cost-of-home-covid-tests-for-travellers-halved-as-companies-accused-of-profiteering-lh76wb585 (Accessed: 13 April 2021)
  149. Department of Health & Social Care (2021) ‘30 million people in UK receive first dose of coronavirus (COVID-19) vaccine’. GOV.UK. Available at: https://www.gov.uk/government/news/30-million-people-in-uk-receive-first-dose-of-coronavirus-COVID-19-vaccine (Accessed: 6 April 2021).
  150. Ipsos (2021) Global attitudes: COVID-19 vaccines. 9 February 2021. Available at: https://www.ipsos.com/en/global-attitudes-COVID-19-vaccine-january-2021 (Accessed: 6 April 2021).
  151. Reicher, S. and Drury, J. (2021) ‘How to lose friends and alienate people? On the problems of vaccine passports’, The BMJ, 1 April 2021. Available at: https://blogs.bmj.com/bmj/2021/04/01/how-to-lose-friends-and-alienate-people-on-the-problems-of-vaccine-passports/ (Accessed: 6 April 2021).
  152. Smith, M. (2021) ‘International study: How many people will take the COVID vaccine?’, YouGov, 15 January 2021. Available at: https://yougov.co.uk/topics/health/articles-reports/2021/01/15/international-study-how-many-people-will-take-covi (Accessed: 6 April 2021).
  153. Reicher, S. and Drury, J. (2021).
  154. Razai, M. S. et al. (2021) ‘COVID-19 vaccine hesitancy among ethnic minority groups’, The BMJ, 372, p. n513. doi: 10.1136/bmj.n513.
  155. Royal College of General Practitioners (2021) ‘RCGP submission for the COVID-status Certification Review call for evidence’., Royal College of General Practitioners. Available at: https://www.rcgp.org.uk/policy/rcgp-consultations/COVID-status-certification-review.aspx (Accessed: 6 April 2021).
  156. Access Now, Response to Ada Lovelace Institute call for evidence.
  157. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  158. ibid.
  159. ibid.
  160. ibid.
  161. Zimmer, C., Corum, J. and Wee, S.-L. (no date) ‘Coronavirus Vaccine Tracker’, The New York Times. Available at: https://www.nytimes.com/interactive/2020/science/coronavirus-vaccine-tracker.html (Accessed: 21 April 2021).
  162. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  163. Times of Israel Staff (2021) ‘Thousands reportedly attempt to obtain easily forged vaccinated certificate’. Times of Isreal. 18 February 2021. Available at: https://www.timesofisrael.com/thousands-reportedly-attempt-to-obtain-easily-forged-vaccinated-certificate/(Accessed: 6 April 2021).
  164. Senyor, E. (2021) ‘NIS 1,500 for Green Pass: Police arrest seller of illegal vaccine certificates’, ynetnews. 21 March 2021. Available at: https://www.ynetnews.com/article/Bk00wJ11B400 (Accessed: 6 April 2021).
  165. Europol (2021) ‘Early Warning Notification – The illicit sales of false negative COVID-19 test certificates’, Europol. 1 February 2021. Available at: https://www.europol.europa.eu/early-warning-notification-illicit-sales-of-false-negative-COVID-19-test-certificates (Accessed: 6 April 2021).
  166. Lewandowsky, S. et al. (2021) ‘Public acceptance of privacy-encroaching policies to address the COVID-19 pandemic in the United Kingdom’, PLOS ONE, 16(1), p. e0245740. doi: 10.1371/journal.pone.0245740.
  167. 165 Deltapoll (2021). Political Trackers and Lockdown. Available at: http://www.deltapoll.co.uk/polls/political-trackers-and-lockdown (Accessed: 7 April 2021).
  168. Ibbetson, C. (2021) ‘Most Britons support a COVID-19 vaccine passport system’. YouGov. Available at: https://yougov.co.uk/topics/health/articles-reports/2021/03/05/britons-support-COVID-19-vaccine-passport-system (Accessed: 7 April 2021).
  169. YouGov (2021). Daily Question | 02/03/2021 Available at: https://yougov.co.uk/topics/health/survey-results/daily/2021/03/02/9355e/2 (Accessed: 7 April 2021).
  170. Ipsos MORI. (2021) Majority of Britons support vaccine passports but recognise concerns in new Ipsos MORI UK KnowledgePanel poll. Available at: https://www.ipsos.com/ipsos-mori/en-uk/majority-britons-support-vaccine-passports-recognise-concerns-new-ipsos-mori-uk-knowledgepanel-poll (Accessed: 9 April 2021).
  171. King’s College London. (2021) Covid vaccines: passports, blood clots and changing trust in government. Available at: https://www.kcl.ac.uk/news/covid-vaccines-passports-blood-clots-and-changing-trust-in-government (Accessed: 9 April 2021).
  172. De Montfort University. (2021). Study shows UK punters see no need for pub vaccine passports. Available at: https://www.dmu.ac.uk/about-dmu/news/2021/march/-study-shows-uk-punters-see-no-need-for-pub-vaccine-passports.aspx (Accessed: 7 April 2021).
  173. Indigo (2021) Vaccine Passports – What do audiences think? Available at: https://www.indigo-ltd.com/blog/vaccine-passports-what-do-audiences-think (Accessed: 7 April 2021).
  174. Serco Institute (2021) Vaccine Passports & UK Public Opinion. Available at: https://www.sercoinstitute.com/news/2021/vaccine-passports-uk-public-opinion (Accessed: 7 April 2021).
  175. Studdert, M. H. and D. (2021) ‘Reaching agreement on COVID-19 immunity “passports” will be difficult’, Brookings, 27 January 2021. Available at: https://www.brookings.edu/blog/usc-brookings-schaeffer-on-health-policy/2021/01/27/reaching-agreement-on-COVID-19-immunity-passports-will-be-difficult/ (Accessed: 7 April 2021). ELABE (2021) Les Français et l’épidémie de COVID-19 – Vague 33. 3 March 2021. Available at: https://elabe.fr/epidemie-COVID-19-vague33/ (Accessed: 7 April 2021).
  176. Ada Lovelace Institute. (2021) The Citizens’ Biometrics Council. Available at: https://www.adalovelaceinstitute.org/report/citizens-biometrics-council/ (Accessed: 9 April 2021).
  177. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  178. Beacon, R. and Innes, K. (2021) The Case for Digital Health Passports. Tony Blair Institute for Global Change. Available at: https://institute.global/sites/default/files/inline-files/Tony%20Blair%20Institute%2C%20The%20Case%20for%20Digital%20Health%20Passports%2C%20February%202021_0_0.pdf (Accessed: 6 April 2021).
  179. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  180. Pietropaoli, I. (2021) Part 2: Getting Digital Health Passports Right? Legal, Ethical and Equality Considerations. Available at: https://www.biicl.org/blog/23/part-2-getting-digital-health-passports-right-legal-ethical-and-equality-considerations (Accessed: 6 April 2021).
  181. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  182. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  183. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  184. medConfidential, Response to Ada Lovelace Institute call for evidence
  185. Dr Btihaj Ajana, Response to Ada Lovelace Institute call for evidence
  186. Nuffield Council on Bioethics (2020) Rapid policy briefing: COVID-19 antibody testing and ‘immunity certification’. Available at: https://www.nuffieldbioethics.org/assets/pdfs/Immunity-certificates-rapid-policy-briefing.pdf (Accessed: 6 April 2021).
  187. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  188. ibid.


Executive summary

‘Where should I go for dinner? What should I read, watch or listen to next? What should I buy?’ To answer these questions, we might go with our gut and trust our intuition. We could ask our friends and family, or turn to expert reviews. Recommendations large and small can come from a variety of sources in our daily lives, but in the last decade there has been a critical change in where they come from and how they’re used.

Recommendations are now a pervasive feature of the digital products we use. We are increasingly living in a world of recommendation systems, a type of software designed to sift through vast quantities of data to guide users towards a narrower selection of material, according to a set of criteria chosen by their developers.

Examples of recommendation systems include Netflix’s ‘Watch next’ and Amazon’s ‘Other users also purchased’; TikTok’s recommendation system drives its main content feed.

But what is the risk of a recommendation? As recommendations become more automated and data-driven, the trade-offs in their design and use are becoming more important to understand and evaluate.

Background

This report explores the ethics of recommendation systems as used in public service media organisations. These independent organisations have a mission to inform, educate and entertain the public, and are often funded by and accountable to the public.

In media organisations, producers, editors and journalists have always made implicit and explicit decisions about what to give prominence to, both in terms of what stories to tell and what programmes to commission, but also in how those stories are presented. Deciding what makes the front page, what gets the primetime slot, what makes top billing on the evening news – these are all acts of recommendation. While private media organisations like Netflix primarily use these systems to drive user engagement with their content, public service media organisations, like the British Broadcasting Corporation (BBC) in the UK, operate with a different set of principles and values.

This report also explores how public service media organisations are addressing the challenge of designing and implementing recommendation systems within the parameters of their mission, and identifies areas for further research into how they can accomplish this goal.

While there is an extensive literature exploring public service values and a separate literature around the ethics and operational challenges of designing and implementing recommendation systems, there are still many gaps in the literature around how public service media organisations are designing and implementing these systems. Addressing these gaps can help ensure that public service media organisations are better able to design these systems. With this in mind, this project has explored the following questions:

  • What are the values that public service media organisations adhere to? How do these differ from the goals that private-sector organisations are incentivised to pursue?
  • In what contexts do public service media use recommendation systems?
  • What value can recommendation systems add for public service media and how do they square with public service values?
  • What are the ethical risks that recommendation systems might raise in those contexts? And what challenges should teams consider?
  • What are the mitigations that public service media can implement in the design, development and implementation of these systems?

In answering these questions, we focused on European public service media organisations and in particular on the BBC in the UK, who are project partners on this research.

The BBC is the world’s largest public service media organisation and has been at the forefront among public service broadcasters in exploring the use of recommendation systems. As the BBC has historically set precedents that other public service media have followed, it is valuable to understand its work in depth in order to draw wider lessons for the field.

In this report, we explore an in-depth snapshot of the BBC’s development and use of several recommendation systems from summer and autumn 2021, alongside an examination of the work of several other European public service media organisations. We place these examples in the broader context of debates around 21st century public service media and use them to explore the motivations, risks and evaluation of the use of recommendation systems by public service media and their use more broadly.

The evidence for this report stems from interviews with 11 current staff from editorial, product and engineering teams involved in recommendation systems at the BBC, along with interviews with representatives of six other European public service broadcasters that use recommendation systems. This report also draws on a review of the existing literature on public service media recommendation systems and on interviews with experts from academia, civil society and government.

Findings

Across these different public service media organisations, our research identified five key findings:

  1. The context in which public service media organisations operate is a major driver of their increasing use of recommendation systems. The last few decades have seen public service media organisations lose market share in news and entertainment to private providers, putting pressure on them to use recommendation systems to stay competitive.
  2. The values of public service media organisations create different objectives and practices to those in the private sector. While private-sector media organisations are primarily driven to maximise revenue and market share for their shareholders, with some consideration of social values, public service media organisations are legally mandated to operate with a particular set of public interest values at their core, including universality, independence, excellence, diversity, accountability and innovation.
  3. These value differences translate into different objectives for the use of recommendation systems. While private firms seek to maximise metrics like user engagement, ‘time on product’ and subscriber retention in the use of their recommendation systems, public service media organisations seek related but different objectives. For example, rather than maximising engagement with recommendation systems, our research found public service media providers want to broaden their reach to a more diverse set of audiences. Rather than maximising time on product, public service media organisations are more concerned with ensuring the product is useful for all members of society, in line with public interest values. A toy illustration of this difference in objectives is sketched after this list.
  4. Public service media recommendation systems can raise a range of well-documented ethical risks, but these will differ depending on the type of system and the context of its use. Our research found that public service media recognise a wide array of well-documented ethical risks of recommendation systems, including risks to personal autonomy and privacy, and risks of misinformation and fragmentation of the public sphere. However, the type and severity of the risks highlighted depended on which teams we spoke with, with audio-on-demand and video-on-demand teams raising somewhat different concerns to those working on news.
  5. Evaluating the risks and mitigations of recommendation systems must be done in the context of the wider product. Addressing the risks of public service media recommendation systems should not just focus on technical fixes. Aligning product goals and other product features with public service values is just as important in ensuring recommendation systems contribute positively to the experiences of audiences and to wider society.
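
To make finding 3 concrete, the sketch below shows the same ranking machinery pointed at two different objectives: one purely engagement-driven, and one that trades engagement off against reaching under-served audiences. This is a hypothetical illustration written for this report, not any broadcaster’s actual formula; the weights and feature names are assumptions.

  # Toy illustration of finding 3: identical ranking machinery,
  # different objectives. All weights and feature names are
  # illustrative assumptions, not a real broadcaster's formula.

  def commercial_score(item):
      # A private-sector style objective: predicted engagement only.
      return item["predicted_watch_time"]

  def public_service_score(item, audience_coverage, w=0.5):
      # A public-service style objective: engagement still counts, but is
      # traded off against reach. 'audience_coverage' stands in for how well
      # the item serves audiences the service currently reaches poorly
      # (0 = already well served, 1 = under-served).
      return (1 - w) * item["predicted_watch_time"] + w * audience_coverage

  item = {"predicted_watch_time": 0.8}
  print(commercial_score(item))                             # 0.8
  print(public_service_score(item, audience_coverage=0.9))  # 0.85

Under the blended objective, an item that reaches an under-served audience can outrank a marginally more engaging one, which is the behaviour public service media teams told us they want.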

Recommendations

Based on these key findings, we make nine recommendations for future research, experimentation and collaboration between public service media organisations, academics, funders and regulators:

  1. Define public service value for the digital age. Recommendation systems are designed to optimise against specific objectives. However, the development and implementation of recommendation systems is happening at a time when the concept of public service value and the role of public service media organisations are under question. Unless public service media organisations are clear about their own identities and purpose, it will be difficult for them to build effective recommendation systems. In the UK, significant work has already been done by Ofcom, as well as by the parliamentary Digital, Culture, Media and Sport Select Committee, to identify the challenges public service media face and offer new approaches to regulation. Their recommendations must be implemented so that public service media can operate within a paradigm appropriate to the digital age and build systems that address a relevant mission.
  2. Fund a public R&D hub for recommendation systems and responsible recommendation challenges. There is a real opportunity to create a hub for R&D of recommendation systems that is not tied to industry goals. This is especially important as recommendation systems are one of the prime use cases of behaviour-modification technology, yet research into them is impaired by a lack of access to interventional data. UKRI should therefore fund the development of a public research hub on recommendation technology as part of the National AI Research and Innovation (R&I) Programme set out in the UK AI Strategy.
  3. Publish research into audience expectations of personalisation. There was a striking consensus in our interviews with public service media teams working on recommendations that personalisation was both wanted and expected by the audience. However, there is limited publicly available evidence underlying this belief and more research is needed. Understanding audiences’ views on recommendation systems is an important part of ensuring those systems are acting in the public interest. Public service media organisations should not widely adopt recommendation systems without evidence that they are either wanted or needed by the public. Otherwise, public service media risk simply following a precedent set by commercial competitors, rather than defining a paradigm aligned to their own missions.
  4. Communicate and be transparent with audiences. Although most public service media organisations profess a commitment to transparency about their use of recommendation systems, in practice there is little effective communication with their audiences about where and how recommendation systems are being used. Public service media should invest time and research into understanding how to usefully and honestly articulate their use of recommendation systems in ways that are meaningful to their audiences. This communication must not be one way. There must be opportunities for audiences to give feedback and interrogate the use of the systems, and raise concerns.
  5. Balance user control with convenience. Transparency alone is not enough. Giving users agency over the recommendations they see is an important part of responsible recommendation. Simply giving users direct control over the recommendation system is an obvious and important first step, but it is not a universal solution. We recommend that public service media providers experiment with different kinds of options, including enabling algorithmic choice of recommendation systems and ‘joint’ recommendation profiles.
  6. Expand public participation. Beyond transparency or individual user choice and control over the parameters of the recommendation systems already deployed, users and wider society could also have greater input during the initial design of the recommendation systems and in the subsequent evaluations and iterations. This is particularly salient for public service media organisations as, unlike private companies, which are primarily accountable to their customers and shareholders, public service media organisations have an obligation to serve the interests of society. Therefore, even those who are not direct consumers of content should have a say in how public service media recommendations are shaped.
  7. Standardise metadata. Inconsistent, poor-quality metadata – an essential resource for training and developing recommendation systems – was consistently highlighted as a barrier to building recommendation systems in public service media, particularly the more novel approaches that go beyond user engagement and try to create diverse feeds of recommendations. Each public service media organisation should have a central function that standardises the format, creation and maintenance of metadata across the organisation. Institutionalising the collection of metadata and making access to it more transparent across each individual organisation is an important investment in public service media’s future capabilities. A hypothetical record format is sketched after this list.
  8. Create shared recommendation system resources. Given their limited resources and shared interests, public service media organisations should invest more heavily in creating common resources for evaluating and using recommendation systems. This could include a shared repository for evaluating recommendation systems on metrics valued by public service media, including libraries in common coding languages.
  9. Create and empower integrated teams. When developing and deploying recommendation systems, public service media organisations need to integrate editorial and development teams from the start. This ensures that the goals of the recommendation system are better aligned with the organisation’s goals as a whole and that the systems augment and complement existing editorial expertise.
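
Recommendation 7 is easiest to picture as a schema. The sketch below shows one hypothetical standardised metadata record of the kind a central function might enforce across an organisation; the field names and vocabulary are assumptions for illustration, not an actual BBC or EBU standard.

  # Hypothetical standardised metadata record; field names are
  # illustrative assumptions, not an actual public service media standard.
  from dataclasses import dataclass, field
  from datetime import date
  from typing import Optional

  @dataclass
  class ContentMetadata:
      content_id: str
      title: str
      content_type: str                 # e.g. 'news', 'audio-on-demand'
      topics: list = field(default_factory=list)   # from a controlled vocabulary
      language: str = "en"
      publication_date: Optional[date] = None
      regions: list = field(default_factory=list)  # supports universality goals

  record = ContentMetadata(
      content_id="p0abc123",
      title="Evening news bulletin",
      content_type="news",
      topics=["politics", "uk"],
      publication_date=date(2021, 9, 1),
  )

A consistent record like this, created and maintained centrally, is what makes it feasible to rank on properties such as topic diversity rather than engagement alone.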

How to read this report

This report examines how European public service media organisations think about using automated recommendation systems for content curation and delivery. It covers the context in which recommendation systems are being deployed, why that matters, the ethical risks and evaluation difficulties posed by these systems and how public service media are attempting to mitigate these risks. It also provides ideas for new approaches to evaluation that could enable better alignment of their systems with public service values.

If you need an introduction or refresher on what recommendation systems are, we recommend starting with the ‘Introducing recommendation systems’ chapter.

If you work for a public service media organisation

  • We recommend the chapters on ‘Stated goals and potential risks of using recommendation systems in public service media’ and ‘Evaluation of recommendation systems’.
  • For an understanding of how the BBC has deployed recommendation systems, see the case studies.
  • For ideas on how public service media organisations can advance their responsible use of recommendation systems, see the chapter on ‘Outstanding questions and areas for further research and experimentation’.

If you are a regulator of public service media

  • We recommend you pay particular attention to the section on ‘Stated goals and potential risks of using recommendation systems in public service media’ and ‘How do public service media evaluate their recommendation systems?’.
  • In addition, to understand the practices and initiatives that we believe should be encouraged within and experimented with by public service media organisations to ensure responsible and effective use of recommendation systems, see ‘Outstanding questions and areas for further research and experimentation’.

If you are a regulator of online platforms

  • If you need an introduction or refresher on what recommendation systems are, we recommend starting with the ‘Introducing recommendation systems’ chapter. Understanding this context can help disentangle the challenges in regulating recommendation systems, by highlighting where problems arise from the goals of public service media versus the process of recommendation itself.
  • To understand the issues faced by all deployers of recommendation systems, see the sections on the ‘Stated goals of recommendation systems’ and ‘Potential risks of using recommendation systems’.
  • To better understand how these risks change due to the context and choices of public service media, relative to other online platforms, and the difficulties even organisations explicitly oriented towards public value have in auditing their own recommendation systems to determine whether they are socially beneficial, beyond simple quantitative engagement metrics, see the section on ‘How these risks are viewed and addressed by public service media’ and the chapter on ‘Evaluation of recommendation systems’.

If you are a funder of research into recommendation systems or a researcher interested in recommendation systems

  • Public service media organisations, with mandates that emphasise social goals of universality, diversity and innovation over engagement and profit-maximising, can offer an important site of study and experimentation for new approaches to recommendation system design and evaluation. We recommend starting with the sections on ‘The context of public service values and public service media’ and ‘Why this matters’, to understand the different context within which public service media organisations operate.
  • Then, the sections on ‘How do public service media evaluate their recommendation systems?’ and ‘How could evaluations be done differently?’, followed by the chapter on ‘Outstanding questions and areas for further research and experimentation’, could provide inspiration for future research projects or pilots that you could undertake or fund.

Introduction

Scope

Recommendation systems are tools designed to sift through the vast quantities of data available online and use algorithms to guide users towards a narrower selection of material, according to a set of criteria chosen by their developers. Recommendation systems sit behind a vast array of digital experiences. ‘Other users also purchased…’ on Amazon or ‘Watch next’ on Netflix guide you to your next purchase or night on the sofa. Deliveroo will suggest what to eat, LinkedIn where to work and Facebook who your friends might be.
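
To make that process concrete, here is a deliberately minimal sketch in Python of the core loop of a recommendation system: score every unseen item in a catalogue against a user’s history, according to criteria fixed by the developer, and surface the top few. The catalogue, the topic-overlap criterion and all names are illustrative assumptions, not any particular organisation’s system.

  # Minimal sketch of a recommendation system: score every candidate item
  # against a user's history, then surface a narrower selection.
  # The data and the scoring criterion are illustrative assumptions.
  from collections import Counter

  CATALOGUE = {
      "doc1": {"topics": {"politics", "uk"}},
      "doc2": {"topics": {"science", "health"}},
      "doc3": {"topics": {"politics", "europe"}},
      "doc4": {"topics": {"sport"}},
  }

  def recommend(history, k=2):
      """Rank unseen items by overlap with topics the user has engaged with."""
      seen_topics = Counter()
      for item_id in history:
          seen_topics.update(CATALOGUE[item_id]["topics"])

      def score(item_id):
          # The 'criteria chosen by the developers': here, simple topic overlap.
          return sum(seen_topics[t] for t in CATALOGUE[item_id]["topics"])

      candidates = [i for i in CATALOGUE if i not in history]
      return sorted(candidates, key=score, reverse=True)[:k]

  print(recommend(["doc1"]))  # ['doc3', 'doc2']

Real systems replace the overlap count with learned models, but the structure is the same: a developer-chosen objective decides what is surfaced.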

These practices are credited with driving the success of companies like Netflix and Spotify. But they are also blamed for many of the harms associated with the internet, such as the amplification of harmful content, the polarisation of political viewpoints (although the evidence is mixed and inconclusive)[footnote]Cobbe, J. and Singh, J. (2019). ‘Regulating Recommending: Motivations, Considerations, and Principles’. European Journal of Law and Technology, 10(3), pp. 8–10. Available at: https://ejlt.org/index.php/ejlt/article/view/686; Steinhardt, J. (2021). ‘How Much Do Recommender Systems Drive Polarization?’. UC Berkeley. Available at: https://jsteinhardt.stat.berkeley.edu/blog/recsys-deepdive; Stray, J. (2021). ‘Designing Recommender Systems to Depolarize’, p. 2. arXiv. Available at: http://arxiv.org/abs/2107.04953[/footnote] and the entrenchment of inequalities.[footnote]Born, G. Morris, J. Diaz, F. and Anderson, A. (2021). Artificial intelligence, music recommendation, and the curation of culture: A white paper, pp. 10–13. Schwartz Reisman Institute for Technology and Society. Available at: https://static1.squarespace.com/static/5ef0b24bc96ec4739e7275d3/t/60b68ccb5a371a1bcdf79317/1622576334766/Born-Morris-etal-AI_Music_Recommendation_Culture.pdf[/footnote] Regulators and policymakers worldwide are paying increasing attention to the potential risks of recommendation systems, with proposals in China and Europe to regulate their design, features and uses.[footnote]See: European Union. (2022). Digital Services Act, Article 27. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=OJ:L:2022:277:TOC; For details of Article 17 of the Cybersecurity Administration of China (CAC)’s Internet Information Service Algorithm Recommendation Management Regulations, see: Huld, A. (2022). ‘China Passes Sweeping Recommendation Algorithm Regulations’. China Briefing News. Available at: https://www.china-briefing.com/news/china-passes-sweeping-recommendation-algorithm-regulations-effect-march-1-2022/[/footnote]

Public service media organisations are starting to follow the example of their commercial rivals and adopt recommendation systems. Like the big digital streaming service providers, they sit on huge catalogues of news and entertainment content, and can use recommendation systems to direct audiences to particular options.

But public service media organisations face specific challenges in deploying these technologies. Recommendation systems are designed to optimise for certain objectives: a hotel’s website is aiming for maximum bookings, Spotify and Netflix want you to renew your subscription.

Public service media serve many functions. They have a duty to serve the public interest, not the company bottom line. They are independently financed and are answerable to, if not controlled by, the public.[footnote]Conseil mondial de la radiotélévision. (2001). Public broadcasting: why? how? pp. 11–15. UNESCO Digital Library. Available at: https://unesdoc.unesco.org/ark:/48223/pf0000124058[/footnote] Their mission is to inform, educate and entertain. Public service media are committed to values including independence, excellence and diversity.[footnote]European Broadcasting Union. (2012). Empowering Society: A Declaration on the Core Values of Public Service Media. Available at: https://www.ebu.ch/files/live/sites/ebu/files/Publications/EBU-Empowering-Society_EN.pdf[/footnote] They must fulfil an array of duties and responsibilities set down in legislation that often predates the digital era. How do you optimise for all that?

Developing recommendation systems for public service media is not just about finding technical fixes. It requires an interrogation of the organisations’ role in democratic societies in the digital age. How do the public service values that have guided them for a century translate to a context where the internet has fragmented the public sphere and audiences are defecting to streaming services? And how can public service media use this technology in ways that serve the public interest?

These are questions that resonate beyond the specifics of public service media organisations. All public institutions that wish to use technologies for societal benefit must grapple with similar issues. And all organisations – public or private – have to deploy technologies in ways that align with their values. Asking these questions can be helpful to technologists more generally.

In a context where the negative impacts of recommendation systems are increasingly apparent, public service media must tread carefully when considering their use. But there is also an opportunity for public service media to do what, historically, it has excelled at – innovating in the public interest.

A public service approach to building recommendation systems that are both engaging and trustworthy could not only address the needs of public service media in the digital age, but provide a benchmark for scrutiny of systems more widely and create a challenge to the paradigm set by commercial operators’ practices.

In this report, we explore how public service media organisations are addressing the challenge of designing and implementing recommendation systems within the parameters of their organisational mission, and identify areas for further research into how they can accomplish this goal.

While there is an extensive literature exploring public service values and a separate literature around the ethics and operational challenges of designing and implementing recommendation systems, there are still many gaps in the literature around how public service media organisations are designing and implementing these systems. Addressing that gap can help ensure that public service media organisations are better able to design these systems. With that in mind, this report explores the following questions:

  • What are the values that public service media organisations adhere to? How do these differ from the goals that private-sector organisations are incentivised to pursue?
  • In what contexts do public service media use recommendation systems?
  • What value can recommendation systems add for public service media and how do they square with public service values?
  • What are the ethical risks that recommendation systems might raise in those contexts? And what challenges should different teams within public service media organisations (such as product, editorial, legal and engineering) consider?
  • What are the mitigations that public service media can implement in the design, development and implementation of these systems?

In answering these questions, this report:

  • provides greater clarity about the ethical challenges that developers of recommendation systems must consider when designing and maintaining these systems
  • explores the social benefit of recommendation systems by examining the trade-offs between their stated goals and their potential risks
  • provides examples of how public service broadcasters are grappling with these challenges, which can help inform the development of recommendation systems in other contexts.

This report focuses on European public service media organisations and in particular on the British Broadcasting Corporation (BBC) in the UK, who are project partners on this research. The BBC is the world’s largest public service media organisation and has been at the forefront among public service broadcasters in exploring the use of recommendation systems. As the BBC has historically set precedents that other public service media have followed, it is valuable to understand its work in depth in order to draw wider lessons for the field.

In this report, we explore an in-depth snapshot of the BBC’s development and use of several recommendation systems as it stood in 2021, alongside an examination of the work of several other European public service media organisations. We place these examples in the broader context of debates around 21st century public service media and use them to explore the motivations, risks and evaluation of the use of recommendation systems by public service media and their use more broadly.

The evidence for this report stems from interviews with 11 current staff from editorial, product and engineering teams involved in recommendation systems at the BBC, along with interviews with representatives of six other European public service broadcasters that use recommendation systems. This report also draws on a review of the existing literature on public service media recommendation systems and on interviews with experts from academia, civil society and regulatory bodies who work on the design, development and evaluation of recommendation systems.

Although a large amount of the academic literature focuses on the use of recommendations in news provision, we look at the full range of public service media content, as we found more of the advanced implementations of recommendation systems lie in other domains. We have drawn on published research about recommendation systems from commercial platforms; however, internal corporate studies are unavailable to independent researchers, and our requests to interview both researchers and corporate representatives of platforms were unsuccessful.

Background

In this chapter, we set out the context for the rest of the report. We outline the history and context of public service media organisations, what recommendation systems are and how they are approached by public service media organisations, and what external and internal processes and constraints govern their use.

The context of public service values and public service media

The use of recommendation systems in public service media is informed by their history, values and remit, their governance and the landscape in which they operate. In this section we situate the deployment of recommendation systems in this context.

Broadly, public service media are independent organisations that have a mission to inform, educate and entertain. Their values are rooted in the founding vision for public service media organisations a century ago and remain relevant today, codified into regulatory and governance frameworks at organisational, national and European levels. However, the values that public service media operate under are inherently qualitative and, even with the existence of extensive guidelines, are interpreted through the daily judgements of public service media staff and the mental models and institutional culture built up over time.

Although public service media have proved resilient through previous waves of change, they currently face a trio of challenges:

  1. Losing audiences to online digital content providers including Netflix, Amazon, YouTube and Spotify.
  2. Budget cuts and outdated regulation, framed around analogue broadcast commitments, hampering their ability to respond to technological change.
  3. Populist political movements undermining their independence.

Public service media are independent media organisations financed by and answerable to the publics they serve.[footnote]Conseil mondial de la radiotélévision. (2001). Public broadcasting: why? how? pp. 11–15. UNESCO Digital Library. Available at: https://unesdoc.unesco.org/ark:/48223/pf0000124058[/footnote] Their roots lie in the 1920s technological revolution of radio broadcasting when the BBC was established as the world’s first public service broadcaster, funded by a licence fee, and with the ambition to ‘bring the best of everything to the greatest number of homes’.[footnote]BBC. (2022). The BBC Story – 1920s factsheet. Available at: http://downloads.bbc.co.uk/historyofthebbc/1920s.pdf[/footnote] Other national broadcasters were soon founded across Europe and also adopted the BBC’s mission to ‘inform, educate and entertain’. Although there are now public service media organisations in almost every country in the world, this report focuses on European public service media, which share comparable social, political and regulatory developments and therefore a similar context when considering the implementation of recommendation systems.

Public service media organisations have come to play an important institutional role within democratic societies in Europe, creating a bulwark against the potential control of public opinion either by the state or by particular interest groups.[footnote]Tambini, D. (2021). ‘Public service media should be thinking long term when it comes to AI’. Media@LSE. Available at: https://blogs.lse.ac.uk/medialse/2021/05/12/public-service-media-should-be-thinking-long-term-when-it-comes-to-ai/[/footnote] The establishment of public service broadcasters for the first time created a universally accessible public sphere where, in the words of the BBC’s founding chairman Lord Reith, ‘the genius and the fool, the wealthy and the poor listen simultaneously’. They aimed to forge a collective experience, ‘making the nation as one man’.[footnote]Higgins, C. (2014). ‘What can the origins of the BBC tell us about its future?’. The Guardian. Available at: https://www.theguardian.com/media/2014/apr/15/bbc-origins-future[/footnote] At the same time public service media are expected to reflect the diversity of a nation, enabling the wide representation of perspectives in a democracy, as well as giving people sufficient information and understanding to make decisions on issues of public importance. These two functions create an inherent tension between public service media as an agonistic space where different viewpoints compete and a consensual forum where the nation comes together. 

Public service values

The founding vision for public service media has remained within the DNA of organisations as their public service values – often called Reithian principles, in reference to the influence of the BBC’s founding chairman.

The European Broadcasting Union (EBU), the membership organisation for public service media in Europe, has codified the public service mission into six core values: universality, independence, excellence, diversity, accountability and innovation. Member organisations commit to striving to uphold these in practice.[footnote]European Broadcasting Union. (2012). Empowering Society: A Declaration on the Core Values of Public Service Media. Available at: https://www.ebu.ch/files/live/sites/ebu/files/Publications/EBU-Empowering-Society_EN.pdf[/footnote]

 

The six core values, and what each means in practice:

Universality

  • reach all segments of society, with no-one excluded
  • share and express a plurality of views and ideas
  • create a public sphere, in which all citizens can form their own opinions and ideas, aiming for inclusion and social cohesion
  • multi-platform
  • accessible for everyone
  • enable audiences to engage and participate in a democratic society.

Independence

  • trustworthy content
  • act in the interest of audiences
  • completely impartial and independent from political, commercial and other influences and ideologies
  • autonomous in all aspects of the remit, such as programming, editorial decision-making and staffing
  • independence underpinned by safeguards in law.

Excellence

  • high standards of integrity, professionalism and quality; create benchmarks within the media industries
  • foster talent
  • empower, enable and enrich audiences
  • audiences are also participants.

Diversity

  • reflect the diversity of audiences by being diverse and pluralistic in the genres of programming, the views expressed and the people employed
  • support and seek to give voice to a plurality of competing views – from those with different backgrounds, histories and stories – to help build a more inclusive, less fragmented society.

Accountability

  • listen to audiences and engage in a permanent and meaningful debate
  • publish editorial guidelines; explain; correct mistakes; report on policies, budgets and editorial choices
  • be transparent and subject to constant public scrutiny
  • be efficient and managed according to the principles of good governance.

Innovation

  • enrich the media environment
  • be a driving force of innovation and creativity
  • develop new formats, new technologies and new ways of connecting with audiences
  • attract, retain and train staff so that they can participate in and shape the digital future, serving the public.

As well as signing up to these common values, each individual public service media organisation has its own articulation of its mission, purpose and values, often set out as part of its governance.[footnote]Statutory governance of public service media also varies from country to country and reflects national political and regulatory norms. The BBC is regulated by the independent broadcasting regulator Ofcom. The European Union’s revised Audio Visual Service Directive requires member states to have an independent regulator but this can take different forms. See: European Commission. (2018). Digital Single Market: updated audiovisual rules. Available at: https://ec.europa.eu/commission/presscorner/detail/en/MEMO_18_4093. For example, France has a central regulator, the Conseil Supérieur de l’Audiovisuel. But in Germany, although public service media objectives are defined in the constitution, oversight is provided by a regional broadcasting council, Rundfunkrat, reflecting the country’s federal structure. In Belgium too, regulation is devolved to two separate councils representing the country’s French and Flemish speaking regions.[/footnote] Ultimately these will align with those described by the EBU but may use different terms or have a different emphasis. Policymakers and practitioners operating at a national level are more likely to refer to these specific expressions of public values. The overarching EBU values are often referenced in academic literature as the theoretical benchmark for public service values. 

In the case of the BBC, the Royal Charter between the Government and the BBC is agreed for a 10-year period.[footnote]BBC. (2017). ‘Mission, values and public purposes’. Available at: https://www.bbc.com/aboutthebbc/governance/mission/. For comparison, ARD, the German public service media organisation articulates its values as: ‘Participation, Independence, Quality, Diversity, Localism, Innovation, Value Creation, Responsibility’. See: ARD. (2021). Die ARD – Unser Beitrag zum Gemeinwohl. Available at: https://www.ard.de/die-ard/was-wir-leisten/ARD-Unser-Beitrag-zum-Gemeinwohl-Public-Value-100[/footnote]

The BBC: governance and values

 

Mission: to act in the public interest, serving all audiences through the provision of impartial, high-quality and distinctive output and services which inform, educate and entertain.

 

Public purposes:

  1. To provide impartial news and information to help people understand and engage with the world around them.
  2. To support learning for people of all ages.
  3. To show the most creative, highest quality and distinctive output and services.
  4. To reflect, represent and serve the diverse communities of all of the United Kingdom’s nations and regions and, in doing so, support the creative economy across the United Kingdom.
  5. To reflect the United Kingdom, its culture and values to the world.

 

Additionally, the BBC has its own set of organisational values that are not part of the governance agreement but that ‘represent the expectations we have for ourselves and each other, they guide our day-to-day decisions and the way we behave’:

  • Trust: Trust is the foundation of the BBC – we’re independent, impartial and truthful.
  • Respect: We respect each other – we’re kind, and we champion inclusivity.
  • Creativity: Creativity is the lifeblood of our organisation.
  • Audiences: Audiences are at the heart of everything we do.
  • One BBC: We are One BBC – we collaborate, learn and grow together.
  • Accountability: We are accountable and deliver work of the highest quality.

These kinds of regulatory requirements and values are then operationalised internally through organisations’ editorial guidelines, which again vary from organisation to organisation, depending on the norms and expectations of their publics. Guidelines can be extensive, and their aim is to help teams put public service values into practice. For example, the current BBC guidelines run to 220 pages, covering everything from how to run a competition to reporting on wars and acts of terror.

Nonetheless, such guidelines leave a lot of room for interpretation. Public service values are, by their nature, qualitative and difficult to measure objectively. For instance, consider the BBC guidelines on impartiality – an obligation that all regulated broadcasters in the UK must uphold – and over which the BBC has faced intense scrutiny:

‘The BBC is committed to achieving due impartiality in all its output. This commitment is fundamental to our reputation, our values and the trust of audiences. The term “due” means that the impartiality must be adequate and appropriate to the output, taking account of the subject and nature of the content, the likely audience expectation and any signposting that may influence that expectation.’

‘Due impartiality usually involves more than a simple matter of ‘balance’ between opposing viewpoints. We must be inclusive, considering the broad perspective and ensuring that the existence of a range of views is appropriately reflected. It does not require absolute neutrality on every issue or detachment from fundamental democratic principles, such as the right to vote, freedom of expression and the rule of law. We are committed to reflecting a wide range of subject matter and perspectives across our output as a whole and over an appropriate timeframe so that no significant strand of thought is under-represented or omitted.’ 

It’s clear that impartiality is a question of judgement, and that it may be expressed not in a single piece of content but across the range of BBC output over a period of time. In practice, teams internalise these expectations and make decisions based on institutional culture and internal mental models of public service value, rather than continually checking the editorial guidelines or referencing any specific public values matrix.[footnote]Mazzucato, M., Conway, R., Mazzoli, E., Knoll E. and Albala, S. (2020). Creating and measuring dynamic public value at the BBC, p.22. UCL Institute for Innovation and Public Purpose. Available at: https://www.ucl.ac.uk/bartlett/public-purpose/sites/public-purpose/files/final-bbc-report-6_jan.pdf[/footnote]

How public service media differ from other media organisations

Public service media are answerable to the publics they serve.[footnote]Not all public service media are publicly funded. Channel 4 in the UK for example is financed through advertising but owned by the public (although the UK Government has opened a consultation on privatisation).[/footnote] They should be independent from both government influence and from the influence of commercial owners. They operate to serve the public interest.

Commercial media, however, serve the interests of their owners or shareholders. Success for Netflix, for example, is measured in numbers of subscribers, which then translate into revenue.[footnote]Circulation and profits for print media have declined in recent years, but some titles still promote their proprietors’ interests through political influence – for instance the Murdoch-owned Sun in the UK or the Axel Springer-owned Bild Zeitung in Germany.[/footnote]

The activities of commercial media are nonetheless limited by regulation. In the UK, the independent regulator Ofcom’s Broadcasting Code requires all broadcasters (not just public service media) to abide by principles such as fairness and impartiality.[footnote]Ofcom. (2020). The Ofcom Broadcasting Code (with the Cross-promotion Code and the On Demand Programme Service Rules). Available at: https://www.ofcom.org.uk/tv-radio-and-on-demand/broadcast-codes/broadcast-code[/footnote] Russia Today, for example, has been investigated for allegedly misleading reporting on the conflict in Ukraine.[footnote]Ofcom. (2022). ‘Ofcom launches 15 investigations into RT’. Available at: https://www.ofcom.org.uk/news-centre/2022/ofcom-launches-investigations-into-rt[/footnote] Streaming services are subject to more limited regulation, covering child protection, incitement to hatred and product placement,[footnote]Ofcom. (2021). Guide to video on demand. Available at: https://www.ofcom.org.uk/tv-radio-and-on-demand/advice-for-consumers/television/video-on-demand[/footnote] while the press – both online and in print – are largely self-regulated through the Independent Press Standards Organisation, with some publications regulated by IMPRESS.[footnote]Independent Press Standards Organisation (IPSO). (2022). ‘What we do’. Available at: https://www.ipso.co.uk/what-we-do/; IMPRESS. ‘Regulated Publications’. Available at: https://impress.press/regulated-publications/[/footnote]

However, public service media have extensive additional obligations, among them to ‘meet the needs and satisfy the interests of as many different audiences as practicable’ and to ‘reflect the lives and concerns of different communities and cultural interests and traditions within the United Kingdom, and locally in different parts of the United Kingdom’.[footnote]UK Government. Communications Act 2003, section 265. Available at: https://www.legislation.gov.uk/ukpga/2003/21/section/265[/footnote]

These regulatory systems vary from country to country but hold broadly the same characteristics. In all cases, the public service remit entails far greater duties than in the private sector and broadcasters are more heavily regulated than digital providers.

These obligations are also framed in terms of public or societal benefit. This means public service media strive to achieve societal goals that may not be aligned with pure profit maximisation, while commercial media pursue goals aligned with revenue and the interests of their shareholders.

Nonetheless, public service media face scrutiny about how well they meet their objectives and have had to create proxies for these intangible goals to demonstrate their value to society.

‘[Public service media] is fraught today with political contention. It must justify its existence and many of its efforts to governments that are sometimes quite hostile, and to special interest groups and even competitors. Measuring public value in economic terms is therefore a focus of existential importance; like it or not diverse accountability processes and assessment are a necessity.’[footnote]Lowe, G. and Martin, F. (eds.). (2014). The Value and Values of Public Service Media.[/footnote]

In practice this means public service media organisations measure their services against a range of hard metrics, such as audience reach and value for money, as well as softer measures like audience satisfaction surveys.[footnote]BBC. (2021). BBC Annual Plan 2021-22, Annex 1. Available at: http://downloads.bbc.co.uk/aboutthebbc/reports/annualplan/annual-plan-2021-22.pdf[/footnote] In the mid-2000s the BBC developed a public value test to inform strategic decisions that has since been adopted as a public interest test which remains part of the BBC’s governance. Similar processes have been created in other public service media systems, such as the ‘Three Step Test’ in German broadcasting.[footnote]The 12th Inter-State Broadcasting Treaty, the regulatory framework for public service and commercial broadcasting across Germany’s federal states, introduced a three-step test for assessing whether online services offered by public service broadcasters met their public service remit. Under the three-step test, the broadcaster needs to assess: first, whether a new or significantly amended digital service satisfies the democratic, social and cultural needs of society; second, whether it contributes to media competition from a qualitative point of view and; third, the associated financial cost. See: Institute for Media and Communication Policy. (2009). Drei-Stufen-Test. Available at: http://medienpolitik.eu/drei-stufen-test/[/footnote] These methods have their own limitations, drawing public media into a paradigm of cost-benefit analysis and market fixing, rather than articulating wider values to individuals, society and industry.[footnote]Mazzucato, M., Conway, R., Mazzoli, E., Knoll E. and Albala, S. (2020). Creating and measuring dynamic public value at the BBC, p.22. UCL Institute for Innovation and Public Purpose. Available at: https://www.ucl.ac.uk/bartlett/public-purpose/sites/public-purpose/files/final-bbc-report-6_jan.pdf[/footnote] 

This does not mean commercial media are devoid of values. Spotify for example says its mission ‘is to unlock the potential of human creativity—by giving a million creative artists the opportunity to live off their art and billions of fans the opportunity to enjoy and be inspired by it’,[footnote]Spotify. (2022). ‘About Spotify’. Available at: https://newsroom.spotify.com/company-info/[/footnote] while Netflix’s organisational values are judgment, communication, curiosity, courage, passion, selflessness, innovation, inclusion, integrity and impact.[footnote]Netflix. (2022). ‘Netflix Culture’. Available at: https://jobs.netflix.com/culture[/footnote] Commercial media are also sensitive to issues that present reputational risk, for instance the outcry over Joe Rogan’s Spotify podcast propagating disinformation about COVID-19 or Jimmy Carr’s joke about the Holocaust.[footnote]Silberling, A. (2022). ‘Spotify adds COVID-19 content advisory’. TechCrunch. Available at: https://social.techcrunch.com/2022/03/28/spotify-covid-19-content-advisory-joe-rogan/; Jackson, S. (2022). ‘Jimmy Carr condemned by Nadine Dorries for “shocking” Holocaust joke about travellers in Netflix special His Dark Material’. Sky News. Available at: https://news.sky.com/story/jimmy-carr-condemned-for-disturbing-holocaust-joke-about-travellers-in-netflix-special-his-dark-material-12533148[/footnote]

However, commercial media harness values in service of their business model, whereas for public service media the values themselves are the organisational objective. Therefore, while the ultimate goal of a commercial media organisation is quantitative (revenue) the ultimate goal of public service media is qualitative (public value) – even if this is converted into quantitative proxies.

This difference between public and private media companies is fundamental in how they adopt recommendation systems. We discuss this further later in the report when examining the objectives of using recommendation systems.

Current challenges for public service media

Since their inception, public service media and their values have been tested and reinterpreted in response to new technologies.

The introduction of the BBC Light Programme in 1945, a light entertainment alternative to the serious fare offered by the BBC Home Service, challenged the principle of universality (not everyone was listening to the same content at the same time) as well as the balance between the mission to inform, educate and entertain (should public service broadcasting give people what they want or what they need?). The arrival of the video recorder, and then new channels and platforms, gave audiences an option to opt out of the curated broadcast schedule, where editors determined what should be consumed. While this enabled more and more personalised and asynchronous listening and viewing, it potentially reduced exposure to the serendipitous and diverse content that is often considered vital to the public service remit.[footnote]van Es, K. F. (2017). ‘An Impending Crisis of Imagination : Data‐Driven Personalization in Public Service Broadcasters’. Media@LSE. Available at: https://dspace.library.uu.nl/handle/1874/358206[/footnote] The arrival and now dominance of digital technologies comes amid a collision of simultaneous challenges which, in combination, may be existential.

Audience

Public service media have always had a hybrid role. They are obliged to serve the public simultaneously as citizens and consumers.[footnote]BBC Trust. (2012). BBC Trust assessment processes Guidance document. Available at: http://downloads.bbc.co.uk/bbctrust/assets/files/pdf/about/how_we_govern/pvt/assessment_processes_guidance.pdf[/footnote]

Their public service mandate requires them to produce content and serve audiences that the commercial market does not provide for. At the same time, their duty to provide a universal service means they must aim to reach a sizeable mainstream audience and be active participants in the competitive commercial market.

Although people continue to use and value public service media, the arrival of streaming services such as Netflix, Amazon and Spotify, as well as the availability of content on YouTube, has had a massive impact on public service media audience share.

In the UK, the COVID-19 pandemic has seen people return to public service media as a source of trusted information, and with more time at home they have also consumed more public service content.[footnote]BBC. (2021). Annual Plan 2021-22. Available at: http://downloads.bbc.co.uk/aboutthebbc/reports/annualplan/annual-plan-2021-22.pdf[/footnote]

But lockdowns also supercharged the uptake of streaming. By September 2020, 60% of all UK households subscribed to an on-demand service, up from 49% a year earlier. Just under half (47%) of all adults who go online now consider online services to be their main way of watching TV and films, rising to around two-thirds (64%) among 18–24 year olds.[footnote]Ofcom. (2021). Small Screen: Big Debate – Recommendations to Government on the future of Public Service Media. Available at: https://www.smallscreenbigdebate.co.uk/__data/assets/pdf_file/0023/221954/statement-future-of-public-service-media.pdf[/footnote]

Public service media are particularly concerned about their failure to reach younger audiences.[footnote]Lowe, G.F. and Maijanen, P. (2019). ‘Making sense of the public service mission in media: youth audiences, competition, and strategic management’. Journal of Media Business Studies. doi: 10.1080/16522354.2018.1553279; Schulz, A., Levy, D. and Nielsen, R.K. (2019). ‘Old, Educated, and Politically Diverse: The Audience of Public Service News’, pp. 15–19, 29–30. Reuters Institute for the Study of Journalism. Available at: https://reutersinstitute.politics.ox.ac.uk/our-research/old-educated-and-politically-diverse-audience-public-service-news[/footnote] Although this group still encounters public service media content, they tend to do so on external services: younger viewers (16–34 year olds) are more likely to watch BBC content on subscription video-on-demand (SVoD) services rather than through BBC iPlayer (4.7 minutes per day on SVoD vs. 2.5 minutes per day on iPlayer).[footnote]Ofcom. (2021). Small Screen: Big Debate – Recommendations to Government on the future of Public Service Media. Available at: https://www.smallscreenbigdebate.co.uk/__data/assets/pdf_file/0023/221954/statement-future-of-public-service-media.pdf[/footnote] They are not necessarily aware of the source of the content and do not create an emotional connection with public service media as a trusted brand. Meanwhile, platforms gain valuable audience insight data through this consumption, which they do not pass on to the public service media organisations.[footnote]House of Commons Digital, Culture, Media and Sport Committee. (2021). The future of public service broadcasting, HC 156. Available at: https://publications.parliament.uk/pa/cm5801/cmselect/cmcumeds/156/156.pdf[/footnote]

Regulation

Legislation has not kept pace with the rate of technological change. Public service media are trying to grapple with the dynamics of the competitive digital landscape on stagnant or declining budgets, while continuing to meet their obligations to provide linear TV and radio broadcasting to a still substantial legacy audience.

The UK broadcasting regulator Ofcom published recommendations in 2021, repeating its previous demands for an urgent update to the public service media system to make it sustainable for the future. These include modernising the public service objectives, changing licences to apply across broadcast and online services and allowing greater flexibility in commissioning across platforms.[footnote]Ofcom. (2021). Small Screen: Big Debate – Recommendations to Government on the future of Public Service Media. Available at: https://www.smallscreenbigdebate.co.uk/__data/assets/pdf_file/0023/221954/statement-future-of-public-service-media.pdf[/footnote]

The Digital, Culture, Media and Sport Select Committee of the House of Commons has also demanded regulatory change. It warned that ‘hurdles such as the Public Interest Test inhibit the ability of [public service broadcasters] to be agile and innovate at speed in order to compete with other online services’ and that the core principle of universality would be threatened unless public service media were better able to attract younger audiences.[footnote]House of Commons Digital, Culture, Media and Sport Committee. (2021). The future of public service broadcasting, HC 156. Available at: https://publications.parliament.uk/pa/cm5801/cmselect/cmcumeds/156/156.pdf[/footnote]

Although there has been a great deal of activity around other elements of technology regulation, particularly the Online Safety Bill in the UK and the Digital Services Act in the European Union, the regulation of public service media has not been treated with the same urgency. There is so far no Government white paper for a promised Media Bill that would address this in the UK and the European Commission’s proposals for a European Media Freedom Act are in the early stages of consultation.[footnote]European Commission. (2022). ‘European Media Freedom Act: Commission launches public consultation’. Available at: https://ec.europa.eu/commission/presscorner/detail/en/ip_22_85[/footnote]

Political context

Public service media have always been a political battleground and have often had fractious relationships with the government of the day. But the rise of populist political movements and governments has created new fault lines and made public service media a battlefield in the culture wars. The Polish and Hungarian Governments have moved to undermine the independence of public service media, while the far-right AfD party in eastern Germany refused to approve funding for public broadcasting.[footnote]The Economist. (2021). ‘Populists are threatening Europe’s independent public broadcasters’. Available at: https://www.economist.com/europe/2021/04/08/populists-are-threatening-europes-independent-public-broadcasters[/footnote] In the UK, the Government has frozen the licence fee for two years and has said future funding arrangements are ‘up for discussion’. It has also been accused of trying to appoint an ideological ally to lead the independent media regulator Ofcom. Elsewhere in Europe, journalists from public service media have been attacked by anti-immigrant and COVID-denial protesters.[footnote]The Economist. (2021).[/footnote]

At the same time, public service media are criticised as unrepresentative of the publics they are supposed to serve. In the UK, both the BBC and Channel 4 have attempted to address this by moving parts of their workforce out of London.[footnote]The Sutton Trust. (2019). Elitist Britain, pp. 40–42. Available at: https://www.suttontrust.com/our-research/elitist-britain-2019/; Friedman, S. and Laurison, D. (2019). ‘The class pay gap: why it pays to be privileged’. The Guardian. Available at: https://www.theguardian.com/society/2019/feb/07/the-class-pay-gap-why-it-pays-to-be-privileged[/footnote] As social media has removed traditional gatekeepers to the public sphere, there is less acceptance of and deference towards the judgement of media decision-makers. In a fragmented public sphere, it becomes harder for public service media to ‘hold the ring’ – on issues like Brexit, COVID-19, race and transgender rights, public service media find themselves distrusted by both sides of the argument.

Although the provision of information and educational resources through the COVID-19 pandemic has given public service media a boost, both in audiences and in levels of trust, they can no longer take their societal value or even their continued existence for granted.[footnote]BBC. (2021). Annual Plan 2021-22. Available at: http://downloads.bbc.co.uk/aboutthebbc/reports/annualplan/annual-plan-2021-22.pdf[/footnote] Since the arrival of the internet, their monopoly on disseminating real-time information to a wide public has been broken and so their role in both the media and democratic landscape is up for grabs.[footnote]Interview with Jannick Kirk Sørensen, Associate Professor in Digital Media, Aalborg University (2021).[/footnote] For some, this means public service media is redundant.[footnote]Booth, P. (2020). New Vision: Transforming the BBC into a subscriber-owned mutual. Institute of Economic Affairs. Available at: https://iea.org.uk/publications/new-vision[/footnote] For others, its function should now be to uphold national culture and distinctiveness in the face of the global hegemony of US-owned platforms.[footnote]Department for Digital, Culture, Media & Sport and John Whittingdale OBE MP. (2021). John Whittingdale’s speech to the RTS Cambridge Convention 2021. UK Government. Available at: https://www.gov.uk/government/speeches/john-whittingdales-speech-to-the-rts-cambridge-convention-2021[/footnote]

The Institute for Innovation and Public Purpose has proposed reimagining the BBC as a ‘market shaper’ rather than a market fixer, based on a concept of dynamic public value,[footnote]Mazzucato, M., Conway, R., Mazzoli, E., Knoll E. and Albala, S. (2020). Creating and measuring dynamic public value at the BBC, p.22. UCL Institute for Innovation and Public Purpose. Available at: https://www.ucl.ac.uk/bartlett/public-purpose/sites/public-purpose/files/final-bbc-report-6_jan.pdf[/footnote] while the Media Reform Coalition calls for the creation of a Media Commons of independent, democratic and accountable media organisations, including a People’s BBC and Channel 4.[footnote]Grayson, D. (2021). Manifesto for a People’s Media. Media Reform Coalition. Available at: https://drive.google.com/file/u/1/d/1_6GeXiDR3DGh1sYjFI_hbgV9HfLWzhPi/view?usp=embed_facebook[/footnote] The wide range of ideas in play demonstrates how open the possible futures of public service media could be.

Introducing recommendation systems

The main steps in the development of a recommendation: user engagement with the platform, data gathering, algorithmic analysis and recommendation generation.

Day-to-day, we might turn to friends or family for their recommendations when it comes to decisions large and small, from dining out and entertainment to big purchases. We might also look at expert reviews. But in the last decade, there has been a critical change in where recommendations come from and how they’re used: recommendations have become a pervasive feature of the digital products we use.

Recommendation systems are a type of software that filters information based on contextual data and according to criteria set by their designers. In this section, we briefly outline how recommendation systems operate and how they are used in practice by European public service media. At least a quarter of European public service media have begun deploying recommendation systems. They are mainly used on video platforms, and only on small sections of services – the vast majority of public service content continues to be manually curated by editors.

In media organisations, producers, editors and journalists have always made implicit and explicit decisions about what to give prominence to, from what stories to tell and what programmes to commission, to – just as importantly – how those stories are presented. Deciding what makes the front page, what gets prime time, what makes top billing on the evening news – these are all acts of recommendation. For some, the entire institution is a system for recommending content to their audiences.

Public service media organisations are starting to automate these decisions by using recommendation systems.

Recommendation systems are context-driven information filtering systems. They don’t use explicit search queries from the user (unlike search engines) and instead rank content based only on contextual information.[footnote]Tennenholtz, M. and Kurland, O. (2019). ‘Rethinking Search Engines and Recommendation Systems: A Game Theoretic Perspective’. Communications of the ACM, December 2019, 62(12), pp. 66–75. Available at: https://cacm.acm.org/magazines/2019/12/241056-rethinking-search-engines-and-recommendation-systems/fulltext; Jannach, D. and Adomavicius, G. (2016), ‘Recommendations with a Purpose’. RecSys ’16: Proceedings of the 10th ACM Conference on Recommender Systems, pp7–10. Available at: https://doi.org/10.1145/2959100.2959186; Jannach, D., Zanker, M., Felfernig, and Friedrich, G. (2010). Recommender Systems: An Introduction. Cambridge University Press. doi: 10.1017/CBO9780511763113; Ricci, F., Rokach, L. and Shapira, B. (2015). Recommender Systems Handbook. Springer New York: New York. doi: 10.1007/978-1-4899-7637-6[/footnote]

This contextual information can include the following (a brief illustrative sketch in code follows this list):

  • the item being viewed, e.g. the current webpage, the article being read, the video that just finished playing etc.
  • the item being filtered and recommended, e.g. the length of the content, when the content was published, characteristics of the content, e.g. drama, sport, news – often described as metadata about the content
  • the users, e.g. their location or language preferences, their past interactions with the recommendation system etc.
  • the wider environment, e.g. the time of day.
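
To make these contextual signals concrete, here is a minimal, purely illustrative sketch of context-driven scoring. All names (Item, Context, score, recommend) and all weights are hypothetical assumptions for this example; they do not describe any system discussed in this report.

```python
# A minimal, purely illustrative sketch of context-driven scoring.
# All names and weights are hypothetical and do not correspond to any
# system discussed in this report.

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Item:
    item_id: str
    genre: str            # content metadata, e.g. 'drama', 'sport', 'news'
    language: str
    published: datetime

@dataclass
class Context:
    last_genre: str       # the item the user just viewed
    language: str         # a user preference
    seen: set = field(default_factory=set)               # past interactions
    now: datetime = field(default_factory=datetime.now)  # wider environment

def score(item: Item, ctx: Context) -> float:
    """Combine the contextual signals listed above into one relevance score."""
    if item.item_id in ctx.seen:                         # past interactions
        return 0.0
    s = 1.0 if item.genre == ctx.last_genre else 0.3     # item being viewed
    if item.language != ctx.language:                    # user preference
        s *= 0.1
    age_days = (ctx.now - item.published).days           # recency of the item
    return s * max(0.0, 1 - age_days / 30)

def recommend(pool: list[Item], ctx: Context, k: int = 3) -> list[Item]:
    """Rank the candidate pool by score and return the top k items."""
    return sorted(pool, key=lambda item: score(item, ctx), reverse=True)[:k]
```

Unlike a search engine, nothing here responds to a query: the ranking is driven entirely by metadata about the item, the user and the environment.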

Examples of well-known products utilising recommendation systems include:

  • Netflix’s homepage
  • Spotify’s auto-generated playlists and auto-play features
  • Facebook’s ‘People You May Know’ and ‘News Feed’
  • YouTube’s video recommendations
  • TikTok’s ‘For You’ page
  • Amazon’s ‘Recommended For You’, ‘Frequently Bought Together’, ‘Items Recently Viewed’, ‘Customers Who Bought This Item Also Bought’, ‘Best-Selling’ etc.[footnote]Singh, S. (2020). Why Am I Seeing This? – Case study: Amazon. New America. Available at: https://www.newamerica.org/oti/reports/why-am-i-seeing-this[/footnote]
  • Tinder’s swiping page[footnote]Liu, S. (2017). ‘Personalized Recommendations at Tinder’ [presentation]. Available at: https://www.slideshare.net/SessionsEvents/dr-steve-liu-chief-scientist-tinder-at-mlconf-sf-2017[/footnote]
  • LinkedIn’s ‘Recommend for you’ jobs page.
  • Deliveroo or UberEats’ ‘recommended’ sort for restaurants.

Recommendation systems and search engines

It is worth acknowledging the difference between recommendation systems and search engines, which can be thought of as query-driven information filtering systems. Search engines filter, rank and display webpages, images and other items primarily in response to a query from a user (such as searching Google for ‘restaurants near me’). This is then often combined with the contextual information mentioned above. Google Search is the archetypal search engine in most Western countries, but other widely used search engines include Yandex, Baidu and Yahoo. Many public service media organisations offer a query-driven search feature on their services that enables users to search for news stories or entertainment content.

In this report, we have chosen to focus on recommendation systems rather than search engines as the context-driven rather than query-driven approach of recommendation systems is much more analogous to traditional human editorial judgment and content curation.

Broadly speaking, recommendation systems take a series of inputs, filter and select which ones are most important, and produce an output (the recommendation). The inputs and outputs of recommendation systems are subject to content moderation (in which the pool of content is pre-screened and filtered) and curation (in which content is selected, organised and presented).

This starts by deciding what to input into the recommendation system. The pool of content to draw from is often dictated by the nature of the platform itself, such as activity from your friends, groups, events, etc. alongside adverts, as in the case of Facebook. In the case of public service media, the pool of content is often their back catalogue of audio, video or news content.

This content will have been moderated in some way before it reaches the recommendation system, either manually by human moderators or editors, or automatically through software tools. On Facebook, this means attempts to remove inappropriate user content, such as misinformation or hate speech, from the platform entirely, according to moderation guidelines. For a public service media organisation, this will happen in the commissioning and editing of articles, radio programmes and TV shows by producers and editorial teams.

The pool of content will then be further curated as it moves through the recommendation system, as certain pieces of content might be deemed appropriate to publish but not to recommend in a particular context, e.g. Facebook might want to avoid recommending posts in languages you don’t speak. In the case of public service media, this generally takes the form of business rules, which are editorial guidelines implemented directly into the recommendation system.

Some business rules apply equally across all users and further constrain the set of content that the system recommends content from, such as only selecting content from the past few weeks. Other rules apply after individual user recommendations have been generated and filter those recommendations based on specific information about the user’s context, such as not recommending content the user has already consumed.

For example, below are the business rules that were implemented in BBC Sounds’ Xantus recommendation system as of summer 2021, followed by a simplified code sketch of this two-stage filtering:[footnote]Note that the business rules are subject to change, and so the rules given here are intended to be an indicative example only, representing a snapshot of practice at one point in time. See: Al-Chueyr Martins, T. (2021). ‘From an idea to production: the journey of a recommendation engine’ [presentation recording]. MLOps London. Available at: https://www.youtube.com/watch?v=dFXKJZNVgw4[/footnote]

Non-personalised business rules:

  • Recency
  • Availability
  • Excluded ‘master brands’, e.g. particular radio channels[footnote]Smethurst, M. (2014). Designing a URL structure for BBC programmes. Available at: https://smethur.st/posts/176135860[/footnote]
  • Excluded genres
  • Diversification (1 episode per brand/series)

Personalised business rules:

  • Already seen items
  • Local radio (if not consumed previously)
  • Specific language (if not consumed previously)
  • Episode picking from a series
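
The sketch below illustrates this two-stage filtering, loosely modelled on the rules above. It is a minimal sketch under stated assumptions: the field names, the four-week recency window and the placeholder exclusion lists are hypothetical simplifications, not the Xantus implementation.

```python
# A minimal sketch of two-stage business-rule filtering, loosely modelled on
# the table above. Field names, the four-week recency window and the
# placeholder exclusion lists are hypothetical; this is not the Xantus code.

from datetime import datetime, timedelta

EXCLUDED_MASTER_BRANDS = {"excluded-brand"}   # placeholder values
EXCLUDED_GENRES = {"excluded-genre"}
DEFAULT_LANGUAGE = "en"                       # placeholder default language

def non_personalised_filter(pool: list[dict], now: datetime) -> list[dict]:
    """Stage 1: constrain the candidate pool in the same way for every user."""
    out, seen_series = [], set()
    for item in pool:                         # pool assumed ranked by relevance
        if now - item["published"] > timedelta(weeks=4):      # recency
            continue
        if item["available_until"] < now:                     # availability
            continue
        if item["master_brand"] in EXCLUDED_MASTER_BRANDS:    # excluded brands
            continue
        if item["genre"] in EXCLUDED_GENRES:                  # excluded genres
            continue
        if item["series"] in seen_series:                     # diversification:
            continue                                          # 1 per series
        seen_series.add(item["series"])
        out.append(item)
    return out

def personalised_filter(recs: list[dict], user: dict) -> list[dict]:
    """Stage 2: filter generated recommendations against one user's context.

    (Episode picking from a series is omitted here for brevity.)
    """
    out = []
    for item in recs:
        if item["id"] in user["already_seen"]:                # already seen
            continue
        if item["is_local_radio"] and not user["has_consumed_local_radio"]:
            continue                                          # local radio rule
        if (item["language"] != DEFAULT_LANGUAGE
                and item["language"] not in user["languages_consumed"]):
            continue                                          # specific language
        out.append(item)
    return out
```

The design point the two stages capture is that stage 1 can be computed once for everyone, while stage 2 must run per user, after individual recommendations have been generated.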

How different types of recommendation systems work

Not all recommendation systems are the same. One major difference relates to what categories of items a system is filtering and curating for. This can include, but isn’t limited to:

  • content, e.g. news articles, comments, user posts, podcasts, songs, short-form video, long-form video, movies, images etc. or any combination of these content types
  • people, e.g. dating app profiles, Facebook profiles, Twitter accounts etc.
  • metadata, e.g. the time, date, location, category etc. of a piece of content or the age, gender, location etc. of a person.

In this report, we mainly focus on:

  1. Media content recommendation systems: these systems rank and display pieces of media content, e.g. news articles, podcasts, short-form videos, radio shows, television shows, movies etc. to users of news websites, video-on-demand and streaming services, music and podcast apps etc.
  2. Media content metadata recommendation systems: these rank and display suggestions for information to classify pieces of media content, e.g. genre, people or places which appear in the piece of media, or other tags, to journalists, editors or other members of staff at media organisations.

Another important distinction between applications of recommendation systems is the role of the provider in choosing which set of items the recommendation system is applied to. There are three categories of use for recommendation systems:

  1. Open recommending: The recommendation system operates primarily on items that are generated by users of the platform, or otherwise indiscriminately automatically aggregated from other sources, without the platform curating or individually approving the items. Examples include YouTube, TikTok’s ‘For You’ page, Facebook’s ‘News Feed’ and many dating apps.
  2. Curated recommending: The recommendation system operates on items which are curated, approved or otherwise editorialised by the platform operating the recommendation system. These systems still primarily rely on items generated by external sources, sometimes blended with items produced by the platform. Often these external items will come in the form of licensed or syndicated content such as music, films, TV shows, etc. rather than user-generated items. Examples include Netflix, Spotify and Disney+.
  3. Closed recommending: The recommendation system operates exclusively on items generated or commissioned by the platform operating the recommendation system. Examples include most recommendation systems used on the website of news organisations.

Lastly, there are different types of technical approaches that a recommendation system may use to sort and filter content. The approaches detailed below are not mutually exclusive and can be combined in particular contexts (a toy worked example follows the table):

Collaborative filtering (example: ‘Customers Who Bought This Item Also Bought’ on Amazon). The system recommends items to users based on the past interactions and preferences of other users who are classified as having similar past interactions and preferences. These patterns of behaviour from other users are used to predict how the user seeing the recommendation would rate new items. Those rating predictions are used to generate recommendations of items that have a high level of similarity with content previously popular with similar users.

Matrix factorisation (example: Netflix’s ‘Watch Next’ feature). A subclass of collaborative filtering, this method codifies users and items into a small set of categories based on all the user ratings in a system. When Netflix recommends movies, a user may be codified by how much they like action, comedy, etc. and a movie might be codified by how much it fits into these genres. This codified representation can then be used to guess how much a user will like a movie they haven’t seen before, based on whether these codified summaries ‘match’.

Content-based filtering (example: Netflix’s ‘Action Movies’ list). These methods recommend items based on the codified properties of the item stored in the database. If the profile of items a user likes mostly consists of action films, the system will recommend other items that are tagged as action films. The system does not draw on user data or behaviour to make recommendations.
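
To make matrix factorisation less abstract, the toy example below factorises a small hand-written ratings matrix with plain stochastic gradient descent. It is a sketch under stated assumptions (a 4×4 matrix, two latent factors, arbitrary hyperparameters), not the method used by Netflix or by any broadcaster discussed in this report.

```python
# A toy matrix-factorisation recommender. The ratings matrix, number of
# latent factors and hyperparameters are arbitrary illustrations.

import numpy as np

rng = np.random.default_rng(0)

# Rows are users, columns are items; 0 marks an item the user has not rated.
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

n_users, n_items = R.shape
k = 2                                          # number of latent 'categories'
U = rng.normal(scale=0.1, size=(n_users, k))   # user factors
V = rng.normal(scale=0.1, size=(n_items, k))   # item factors

lr, reg = 0.005, 0.02
for _ in range(5000):
    for u, i in zip(*R.nonzero()):             # train on observed ratings only
        err = R[u, i] - U[u] @ V[i]            # prediction error for this pair
        u_row = U[u].copy()
        U[u] += lr * (err * V[i] - reg * U[u])
        V[i] += lr * (err * u_row - reg * V[i])

# Predict every user-item rating, then pick each user's best unseen item.
predictions = U @ V.T
best_unseen = np.where(R == 0, predictions, -np.inf).argmax(axis=1)
print(best_unseen)                             # top recommended item per user
```

A content-based system would instead compare an item’s tags (e.g. ‘action’) against the profile of items a user has previously liked, without drawing on other users’ behaviour.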

Of these typologies, the public service media that we surveyed use only closed recommendation systems, as they apply recommendations to content they have commissioned or produced themselves. However, we found examples of public service media using all of the filtering approaches described: collaborative filtering, content-based filtering and hybrid combinations of the two.

How do European public service media organisations use recommendation systems?

The use of recommendation systems is common but not ubiquitous among public service media organisations in Europe. As of 2021, at least a quarter of European Broadcasting Union (EBU) member organisations were using recommendation systems on at least one of their content delivery platforms.[footnote]See Annex 1 for more details.[/footnote] Video-on-demand platforms are the most common use case for recommendation systems, followed by audio-on-demand and news content. As well as these public-facing recommendation systems, some public service media also use recommendation systems for internal-only purposes, such as systems that assist journalists and producers with archival research.[footnote]Interview with Ben Fields, Lead Data Scientist, Digital Publishing, BBC (2021).[/footnote]

Figure 1: Recommendation system use by European public service media by platform (EBU, 2020)

Platforms on which public service media offer personalised recommendations, the number of European Broadcasting Union member organisations doing so, and examples:

  • Video-on-demand: at least 18 organisations (e.g. BBC iPlayer)
  • Audio-on-demand: at least 10 organisations (e.g. BBC Sounds, ARD Audiothek)
  • News content: at least 7 organisations (e.g. VRT NWS app)

Among the EBU member organisations which reported using recommendation systems in a 2020 survey, recommendations were displayed:

  • in a dedicated section on the on-demand homepage (by at least 16 organisations)
  • in the player as ‘play next’ suggestions (by at least 10 organisations)
  • as ‘top picks’ on the on-demand homepage (by at least 9 organisations).

Even among organisations that have adopted recommendation systems, their use remains very limited. NPO in the Netherlands was the only organisation we encountered that aims to have a fully algorithmically driven homepage on its main platform. In most cases, the vast majority of content remains under human editorial control, with only small sub-sections of the interface offering recommended content.

As editorial independence is a key public service value, as well as a differentiator of public service media from its private-sector competitors, it is likely most public service media will retain a significant element of curation. The requirement for universality also creates a strong incentive to ensure that there is a substantial foundation of shared information to which everyone in society should be exposed.

Recommendation systems in the BBC

The BBC is significantly larger in staff, output and audience than other European public service media organisations. It has a substantial research and development department and has been exploring the use of recommendation systems across a range of initiatives since 2008.[footnote]See Annex 2 for more details.[/footnote]

In 2017, the BBC Datalab was established with the aim of helping audiences discover relevant content by bringing together data from across the BBC, augmented machine learning and editorial expertise.[footnote]BBC. (2019). ‘Join the DataLab team at the BBC!’. BBC Careers. Available at: https://careerssearch.bbc.co.uk/jobs/job/Join-the-DataLab-team-at-the-BBC/40012; BBC Datalab. ‘Machine learning at the BBC’. Available at: https://datalab.rocks/[/footnote] It was envisioned as a central capability across the whole of the BBC (TV, radio, news and web) which would build a data platform for other BBC teams that would create consistent and relevant experiences for audiences across different products. In practice, this has meant collaborating with different product teams to develop recommendation systems.

The BBC now uses several recommendation systems, at different degrees of maturity, across different forms of media, including:

  • written content, e.g. the BBC News app and some international news services, such as the Spanish-language BBC Mundo, recommending additional news stories[footnote]McGovern, A. (2019). ‘Understanding public service curation: What do “good” recommendations look like?’. BBC. Available at: https://www.bbc.co.uk/blogs/internet/entries/887fd87e-1da7-45f3-9dc7-ce5956b790d2[/footnote]
  • audio-on-demand, e.g. BBC Sounds recommending radio programmes and music mixes a user might like
  • short-form video, e.g. BBC Sport and BBC+ (now discontinued) recommending videos the user might like
  • long-form video, e.g. BBC iPlayer recommending TV shows or films the user might like.

Approaches to the development of recommendation systems

Public service media organisations have the choice to buy an external ‘off the shelf’ recommendation system or to build one themselves.

The BBC initially used third-party providers of recommendation systems but, as part of a wider review of online services, began to test the pros and cons of bringing this function in-house. Building on years of their own R&D work, the BBC found they were able to build a recommendation system that not only matched but could outperform the bought-in systems. Once it was clear that personalisation would be central to the future strategy of the BBC, they decided to bring all systems in-house with the aim of being ‘in control of their destiny’.[footnote]Interview with Andrew McParland, Principal Engineer, BBC R&D (2021).[/footnote] The perceived benefits include building up technical capability and understanding within the organisation, better control and integration of editorial teams, better alignment with public service values and greater opportunity to experiment in the future.[footnote]Commercial (i.e. non public service) BBC services however still use external recommendation providers. See: Taboola. (2021). ‘BBC Global News Chooses Taboola as its Exclusive Content Recommendations Provider’. Available at: https://www.taboola.com/press-release/bbc-global-news-chooses-taboola-as-its-exclusive-content-recommendations-provider[/footnote]

The BBC has far greater budgets and expertise than most other public service media organisations to experiment with and develop recommendation systems. But many other organisations have also chosen to build their own products. Dutch broadcaster NPO has a small team of only four or five data scientists, focused on building ‘smart but simple’ recommendations in-house, having found third-party products did not cater to their needs. It is also important to them that they can safeguard their audience data and offer transparency to public stakeholders about the way their algorithms work, neither of which they felt confident about when using commercial providers.[footnote]Interview with Arno van Rijswijk, Head of Data & Personalization, and Sarah van der Land, Digital Innovation Advisor, Nederlandse Publieke Omroep (NPO) (2021).[/footnote]

Several public service media organisations have joined forces through the EBU to develop PEACH[footnote]European Broadcasting Union. PEACH. Available at: https://peach.ebu.io/[/footnote] – a personalisation system that can be adopted by individual organisations and adapted to their needs. The aim is to share technical expertise and capacity across the public service media ecosystem, enabling those without their own in-house development teams to still adopt recommendation systems and other data-driven approaches. Although some public service media feel this is still not sufficiently tailored to their work,[footnote]Interview with Arno van Rijswijk, Head of Data & Personalization, and Sarah van der Land, Digital Innovation Advisor, Nederlandse Publieke Omroep (NPO) (2021).[/footnote] others find it not only caters to their needs but that it embodies their public service mission through its collaborative approach.[footnote]Interview with Matthias Thar, Bayerische Rundfunk (2021).[/footnote]

Although we are aware that some public service media continue to use third-party systems, we did not manage to secure research interviews with any organisations that currently do so.

How are public service media recommendation systems currently governed and overseen?

The governance of recommendation systems in public service media is created through a combination of data protection legislation, media regulation and internal guidelines. In this section, we outline the present and future regulatory environment in the UK and EU, and how internal guidelines influence development in the BBC and other public service media. Some public service media have reinterpreted their existing guidelines for operationalising public service values to make them relevant to the use of recommendation systems.

The use of recommendation systems in public service media is not governed by any single piece of legislation or any single governance framework. Oversight is generated through a combination of the statutory governance of public service media, general data protection legislation, and internal frameworks and mechanisms. This complex and fragmented picture makes it difficult to assess the effectiveness of current governance arrangements.

External regulation

The structures that have been established to regulate public service media are based around analogue broadcast technologies. Many are ill-equipped to provide oversight of public service media’s digital platforms in general, let alone to specifically oversee the use of recommendation systems.

For instance, although Ofcom regulates all UK broadcasters, including the particular duties of public service media, its remit only covers the BBC’s online platforms and not, for example, the ITV Hub or All 4. Its approach to the oversight of BBC iPlayer is to set broad obligations rather than specific requirements, and it does not inspect the use of recommendation systems. Both the incentives and sanctions available to Ofcom are based around access to the broadcasting spectrum and so are not relevant to the digital dissemination of content. In practice this means that the use of recommendation systems within public service media is not subject to scrutiny by the communications regulator.

However, like all other organisations that process data, public service media within the European Union are required to comply with the General Data Protection Regulation (GDPR). The UK adopted this legislation before leaving the EU, though a draft Data Protection and Digital Information Bill (‘Data Reform Bill’) introduced in July 2022 includes a number of important changes, including removing the general prohibition on automated decision-making and retaining restrictions on automated decision-making only where special categories of data are involved. The draft bill also introduces a new ground to allow the processing of special categories of data for the purpose of monitoring and correcting algorithmic bias in AI systems. A separate set of provisions centred on fairness and explainability for AI systems is also expected as part of the Government’s upcoming white paper on AI governance.

The UK GDPR shapes the development and implementation of recommendation systems because it requires:

  • Consent: the UK GDPR requires that personal data be processed with the freely given, specific, informed and unambiguous consent of the individual. There are other lawful bases for processing personal data that do not require consent, including legal obligations, processing in a vital interest and processing for a ‘legitimate interest’ (a justification that public authorities cannot rely on when processing for their tasks as a public authority).
  • Data minimisation: under Article 5(1), the ‘data minimisation’ principle of the UK GDPR states that personal data should be ‘adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed’. Under Article 17 of the UK GDPR, the ‘right to erasure’ grants individuals the right to have personal data erased that is not necessary for the purposes of processing.
  • Automated decision-making, the right to be informed and explainability: under the UK GDPR, data subjects have a right not to be subject to solely automated decisions that do not involve human intervention, such as profiling.[footnote]The Article 29 Working Party defines profiling in this instance as ‘automated processing of data to analyze or to make predictions about individuals’.[/footnote] Where such automated decision-making occurs, meaningful information about the logic involved, the significance and the envisaged consequences of such processing needs to be provided to the data subject (Article 15(1)(h)). Separate guidance from the Information Commissioner’s Office also touches on making AI systems explainable for users.[footnote]Information Commissioner’s Office and The Alan Turing Institute. (2021). Explaining decisions made with AI. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/key-dp-themes/explaining-decisions-made-with-artificial-intelligence/[/footnote]
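
In practice, these obligations shape how interaction data is stored. The following is a minimal sketch, our own illustration rather than any broadcaster’s code, of how data minimisation and the Article 17 right to erasure might be reflected in the event log behind a recommendation system; the field names and the `erase_user` helper are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class InteractionEvent:
    """Data minimisation: keep only the fields needed to compute
    recommendations; no demographic or device data is stored."""
    user_id: str      # pseudonymous identifier, not a real name
    content_id: str
    event_type: str   # e.g. 'play', 'complete', 'skip'
    timestamp: float

@dataclass
class InteractionStore:
    events: Dict[str, List[InteractionEvent]] = field(default_factory=dict)

    def log(self, event: InteractionEvent) -> None:
        self.events.setdefault(event.user_id, []).append(event)

    def erase_user(self, user_id: str) -> int:
        """Right to erasure: drop every event held for this user and
        report how many records were removed."""
        return len(self.events.pop(user_id, []))
```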

Our interviews with practitioners indicated that GDPR compliance is foundational to their approach to recommendation systems, and that careful consideration must be paid to how personal data is collected and used. While the forthcoming Data Reform Bill makes several changes to the UK GDPR, under the bill’s current language most of the provisions that shape the development and implementation of recommendation systems would remain in place.

The GDPR regulates the use of data that a recommendation system draws on, but there is not currently any legislation that specifically regulates the ways in which recommendation systems are designed to operate on that data, although a number of proposals are in train at national and European levels.

In July 2022, the European Parliament adopted the Digital Services Act, which includes (in Article 24a) an obligation for all online platforms to explain, in their terms and conditions, the main parameters of their recommendation systems and the options for users to modify or influence those parameters. There are additional requirements imposed on very large online platforms (VLOPs) to provide at least one option for each of their recommendation systems which is not based on profiling (Article 29). There are also further obligations for VLOPs in Article 26 to perform systemic risk assessments, including taking into account the design of their recommendation systems (Article 26(2)(a)), and to implement steps to mitigate risk by testing and adapting their recommendation systems (Article 27(1)(ca)).

In order to ensure compliance with the transparency provisions in the regulation, the Digital Services Act includes a provision that enables independent auditors and vetted researchers to have access to the data that led to the company’s risk assessment conclusions and mitigation decisions (Article 31). This provision ensures oversight over the self-assessment (and over the independent audit) that companies are required to carry out, as well as scrutiny over the choices large companies make around their recommendation systems.

The draft AI Act proposed by the European Commission in 2021 also includes recommendation systems in its remit. The proposed rules require harm mitigations such as risk registers, data governance and human oversight, but only make these obligations mandatory for AI systems used in ‘high-risk’ applications. Public service media are not mentioned within this category, although given their democratic significance it is possible they could come into scope. Outside the high-risk categories, voluntary adoption is encouraged. These proposals are still at an early stage of development and negotiation and are unlikely to be adopted until at least 2023.

In another move, in January 2022 the European Commission launched a public consultation on a proposed European Media Freedom Act that aims to further increase the ‘transparency, independence and accountability of actions affecting media markets, freedom and pluralism within the EU’. The initiative is a response to populist governments, particularly in Poland and Hungary, attempting to control media outlets, as well as an attempt to bring media regulation up to speed with digital technologies. The proposals aim to secure ‘conditions for [media markets’] healthy functioning (e.g. exposure of the public to a plurality of views, media innovation in the EU market)’. Though there is little detail so far, this framing could allow for the regulation of recommendation systems within media organisations.

In the UK, public service media are excluded from the draft Online Safety Bill, which imposes responsibilities on platforms to safeguard users from harm. Ofcom, as well as the Digital, Culture, Media and Sport Select Committee, has called for urgent reform to regulation that would update the governance of public service media for the digital age. At the time of writing, there has been no sign of progress on a proposed Media Bill that would provide this guidance.

Internal oversight

Public service media have well-established practices for operationalising their mission and values through the editorial guidelines described earlier. But the introduction of recommendation systems has led many of them to reappraise these and, in some cases, introduce additional frameworks to translate these values for the new context.

The BBC has brought together teams from across the organisation to discuss and develop a set of Machine Learning Engine Principles, which they believe will uphold the Corporation’s mission and values:[footnote]Macgregor, M. (2021). Responsible AI at the BBC: Our Machine Learning Engine Principles. BBC Research and Development. Available at: https://www.bbc.co.uk/rd/publications/responsible-ai-at-the-bbc-our-machine-learning-engine-principles[/footnote]

  • Reflecting the BBC’s values of trust, diversity, quality, value for money and creativity.
  • Using machine learning to improve our audience’s experience of the BBC.
  • Carrying out regular review, ensuring data is handled securely and that algorithms serve our audiences equally and fairly.
  • Incorporating the BBC’s editorial values and seeking to broaden, rather than narrow, horizons.
  • Continued innovation and human-in-the-loop oversight.

These have then been adopted into a checklist for teams to use in practice:

‘The MLEP [Machine Learning Engine Principles] Checklist sections are designed to correspond to each stage of developing a ML project, and contain prompts which are specific and actionable. Not every question in the checklist will be relevant to every project, and teams can answer in as much detail as they think appropriate. We ask teams to agree and keep a record of the final checklist; this self-audit approach is intended to empower practitioners, prompting reflection and appropriate action.’[footnote]Macgregor, M. (2021).[/footnote]

Reflecting on putting this into practice, BBC staff members observed that ‘the MLEP approach is having real impact in bringing on board stakeholders from across the organisation, helping teams anticipate and tackle issues around transparency, diversity, and privacy in ML systems early in the development cycle’.[footnote]Boididou, C., Sheng, D., Moss, M. and Piscopo, A. (2021), ‘Building Public Service Recommenders: Logbook of a Journey’. RecSys ’21: Proceedings of the 15th ACM Conference on Recommender Systems, pp. 538–540. Available at: https://doi.org/10.1145/3460231.3474614[/footnote]

Other public service media organisations have developed similar frameworks. Bayerische Rundfunk, the public broadcaster for Bavaria in Germany, found that their existing values needed to be translated into practical guidelines for working with algorithmic systems and developed ten core principles.[footnote]Bedford-Strohm, J., Köppen, U. and Schneider, C. (2020). ‘Our AI Ethics Guidelines’. Bayerischer Rundfunk. https://www.br.de/extra/ai-automation-lab-english/ai-ethics100.html[/footnote] These align in many ways to the BBC principles but have additional elements, including a commitment to transparency and discourse, ‘strengthening open debate on the future role of public service media in a data society’, support for the regional innovation economy, engagement in collaboration and building diverse and skilled teams.[footnote]Bedford-Strohm, J., Köppen, U. and Schneider, C. (2020).[/footnote]

In the Netherlands, public service broadcaster NPO along with commercial media groups and the Netherlands Institute for Sound and Vision drew up a declaration of intent.[footnote]Media Perspectives. (2021). ‘Intentieverklaring voor verantwoord gebruik van KI in de media. [Letter of intent for responsible use of AI in the media]’. Available at: https://mediaperspectives.nl/intentieverklaring/[/footnote] Drawing on the European Union high-level expert group principles on ethics in AI, the declaration is a commitment to the responsible use of AI in the media sector. NPO are developing this into a ‘data promise’ that offers transparency to audiences about their practices.

Other stakeholders

Beyond these formal structures, the use of recommendation systems in public service media is shaped by these organisations’ accountability to, and scrutiny by, wider society.

All the public service media organisations we interviewed welcomed this scrutiny in principle and were committed to openness and transparency. Most publish regular blogposts, present at academic conferences and invite feedback about their work. These activities, however, reach a small and specialist audience.

There are limited opportunities for the broader public to understand and influence the use of recommendation systems. In practice, there is little accessible information about recommendation systems on most public service media platforms and even where it exists, teams admit that it is rarely read.

The Voice of the Listener and Viewer, a civil society group that represents audience interests in the UK, has raised concerns with the BBC about a lack of transparency in its approach to personalisation but has been dissatisfied with the response. The Media Reform Coalition has proposed that recommendation systems used in UK public service media should be co-designed with citizens’ media assemblies and that the underlying algorithms should be made public.[footnote]Grayson, D. (2021). Manifesto for a People’s Media. Media Reform Coalition. Available at: https://drive.google.com/file/u/1/d/1_6GeXiDR3DGh1sYjFI_hbgV9HfLWzhPi/view?usp=embed_facebook[/footnote]

Despite this low level of public engagement, public service media organisations were sensitive to external perceptions of their use of recommendation systems. Teams expected that, as public service media, they would be held to a higher standard than their commercial competitors. At the BBC in particular, staff frequently mentioned concerns about how their work might be seen by the press, the majority of which tends to take an anti-BBC stance. In practice, we have found little coverage of the BBC’s use of algorithms outside of specialist publications such as Wired.

Public service media have a dual role, both as innovators in the use of recommendation services and as scrutineers of the impacts of new technologies. The BBC believes it has a ‘critical contribution, as part of a mixed AI ecosystem, to the development of beneficial AI both technically, through the development of AI services, and editorially, by encouraging informed and balanced debate’.[footnote]BBC. (2017). Written evidence to the House of Lords Select Committee on Artificial Intelligence. Available at: https://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/artificial-intelligence-committee/artificial-intelligence/written/70493.html[/footnote] At Bayerische Rundfunk, this combined responsibility has been operationalised by integrating the product team and data investigations team into an AI and Automation Lab. However, we are not aware of any instances where public service media have reported on their own products and subjected them to critical scrutiny. 

Why this matters

The history of public service media, their current challenges and the systems for their governance are the framing context in which these organisations are developing and deploying recommendation systems. As with any technology, organisations must consider how the tool can be used in ways that are consistent with their values and culture and whether it can address the problems they face.

In his inaugural speech, BBC Director-General Tim Davie identified increased personalisation as a pillar of addressing the future role of public service media in a digital world:[footnote]BBC Media Centre. (2020). Tim Davie’s introductory speech as BBC Director-General. Available at: https://www.bbc.co.uk/mediacentre/speeches/2020/tim-davie-intro-speech[/footnote]

‘We will need to be cutting edge in our use of technology to join up the BBC, improving search, recommendations and access. And we must use the data we hold to create a closer relationship with those we serve. All this will drive love for the BBC as a whole and help make us an indispensable part of everyday life. And create a customer experience that delivers maximum value.’

But recommendation systems also crystallise the current existential dilemmas of public service media. The development of a technology whose aim is optimisation requires an organisation to be explicit about what and who it is optimising for. A data-driven system requires an institution to quantify those objectives and evaluate whether or not the tool is helping them to achieve them.

This can seem relatively straightforward when setting up a recommendation system for e-commerce, for example, where the goal is to sell more units. Other media organisations may also have clear metrics around time spent on a platform, advertising revenues or subscription renewals.

In this instance, the broadly framed public service values that have proven flexible to changing contexts in the past are a hindrance rather than a help. A concept like ‘diversity’ is hard to pin down and feed into a system.[footnote]Hildén, J. (2021). ‘The Public Service Approach to Recommender Systems: Filtering to Cultivate’. Television & New Media, 23(7). Available at: https://doi.org/10.1177/15274764211020106[/footnote] Organisations that are supposed to serve the public as both citizens and consumers must decide which role gets more weight.

Recommendation systems might offer an apparently obvious solution to the problem of falling public service media audience share – if you are able to better match the vast amount of content in public service media catalogues to listeners and viewers, you should be able to hold and grow your audience. But is universality achieved if you reach more people but they don’t share a common experience of a service? And how do you measure diversity and ensure personalised recommendations still offer a balance of content?

‘The introduction of algorithmic systems will force [public service media] to express its values and goals as measurable key performance indicators, which could be useful and perhaps even necessary. But this could also create existential threats to the institution by undermining the core principles and values that are essential for legitimacy.’[footnote]Sørensen, J.K. and Hutchinson, J. (2018). ‘Algorithms and Public Service Media’. Public Service Media in the Networked Society: RIPE@2017, pp.91–106. Available at: http://www.nordicom.gu.se/sites/default/files/publikationer-hela-pdf/public_service_media_in_the_networked_society_ripe_2017.pdf[/footnote]

Recommendation systems force product teams within public service media organisations to settle on an interpretation of public service values, at a time when the regulatory, social and political context makes them particularly unclear.

It also means that this interpretation will be both instantiated and then systematised in a way that has never previously occurred. As we saw with the example of the impartiality guidelines of the BBC, individuals and teams have historically made decisions under a broad governance framework and founded on editorial judgement. Inconsistencies in those judgements could be ironed out through the multiplicity of individual decisions, the diversity of contexts and the number of different decision-makers. Questions of balance could be considered over a wider period of time and breadth of output. Evolving societal norms could be adopted as audience expectations change.

However, building a decision-making system sets a standardised response to a set of questions and repeats it every time: it nails an organisation’s colours to one particular mast and replicates that approach at scale.

Stated goals and potential risks of using recommendation systems in public service media

Organisations deploy recommendation systems to address certain objectives. However, these systems also bring potential risks. In this chapter, we look at what public service media aim to achieve through deploying recommendation systems and the potential drawbacks.

Stated goals of recommendation systems

In this section, we look at the stated objectives for the use of recommendation systems and the degree to which public service media reference those objectives and motivations when justifying their own use of recommendation systems.

Recommendation systems bring several benefits to different actors, including users who access the recommendations (in the case of public service media, audiences), as well as the organisations and businesses that maintain the platforms on which recommendation systems operate. Some of the effects of recommendation systems are also of broader societal interest, especially where the recommendations interact with large numbers of users, with the potential to influence their behaviour. Because they serve the interests of multiple stakeholders,[footnote]Milano, S., Taddeo, M. and Floridi, L. (2021). ‘Ethical aspects of multi-stakeholder recommendation systems’. The Information Society, 37(1). Available at: https://doi.org/10.1080/01972243.2020.1832636; Abdollahpouri, H., Adomavicius, G., Burke, R., et al. (2020). ‘Multistakeholder recommendation: Survey and research directions’. User Modeling and User-Adapted Interaction, pp.127–158. Available at: https://doi.org/10.1007/s11257-019-09256-1[/footnote] recommendation systems support data-based value creation in multiple ways, which can pull in different directions.[footnote]Tempini, N. (2017). ‘Till data do us part: Understanding data-based value creation in data-intensive infrastructures’. Information and Organization, 27(4). Available at: http://dx.doi.org/10.1016/j.infoandorg.2017.08.001 [/footnote]

Four key areas of value creation are:

  1. Reducing information overload for the receivers of recommendations: It would be overwhelming for individuals to trawl the entire catalogue of Netflix or Spotify, for example. Their recommendation systems reduce the amount of content to a manageable number of choices for the audience. This creates value for users.
  2. Improved discoverability of items: E-commerce sites can recommend items they are particularly keen to sell, or direct people to niche products for which there is a specific customer base. This creates value for businesses and other actors that provide the items in the recommender’s catalogue. It can also be a source of societal value, for example where improved discoverability increases the diversity of news items that are accessed by the audience.
  3. Attention capture: Targeted recommendations which cater to users’ preferences encourage people to spend more time on services, generating revenue through subscriptions or advertising. This is a source of economic value for platform providers, who monetise attention via advertising revenue or paid subscriptions. But it can also be a source of societal value, if it means that people pay more attention to content that has public service value, in line with the mandate for universality.
  4. Data gathering to derive business insights and analysis: For example, platforms gain valuable insights into their audience through A/B testing which enables them to plan marketing campaigns or commission content. This is a source of economic value, when it is used to derive business insights. But under appropriate conditions, it could be a source of societal value, for example by enabling socially responsible scientific research (see our recommendations below).
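
The fourth area is the most mechanical, so a short sketch may help. The code below is a hypothetical illustration, not any platform’s implementation, of how users might be deterministically assigned to an A/B test of two recommendation strategies and how the resulting interventional data (which strategy was shown, and what the user did) is recorded for later analysis.

```python
import hashlib
from typing import List

def ab_bucket(user_id: str, experiment: str, n_buckets: int = 2) -> int:
    """Assign a user to a test bucket by hashing the user id with the
    experiment name, so assignment is stable without storing state."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % n_buckets

# Interventional data: the outcome is recorded together with the
# intervention (which strategy the user was shown), so the two
# strategies can be compared afterwards.
outcome_log: List[dict] = []

def record_outcome(user_id: str, clicked: bool) -> None:
    strategies = ["editorial", "collaborative"]  # hypothetical arms
    strategy = strategies[ab_bucket(user_id, "home-rail-test")]
    outcome_log.append({"user": user_id, "strategy": strategy, "clicked": clicked})
```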

We explored how these objectives map to the motivations articulated by public service media organisations for their use of recommendation systems.

1. Reducing information overload

‘Under conditions of information abundance and attention scarcity, the modern challenges to the realisation of media diversity as a policy goal lie less and less in guaranteeing a diversity of supply and more in the quest to create the conditions under which users can actually find and choose between diverse content.’[footnote]Helberger, N., Karppinen, K. and D’Acunto, L. (2018). ‘Exposure diversity as a design principle for recommender systems’. Information, Communication & Society, 21(2). Available at: https://doi.org/10.1080/1369118X.2016.1271900[/footnote]

We heard from David Graus: ‘So finding different ways to enable users to find content is core there. And in that context, I think recommender systems really serve to be able to surface content that users may not have found otherwise, or may surface content that users may not know they’re interested in.’

2. Improved discoverability

Public service media also deploy recommendation systems with the objective of showcasing much more of their vast libraries of content. BBC Sounds, for example, has more than 200,000 items available, of which only a tiny fraction can be surfaced either through broadcast schedules or an editorially curated platform. Recommendation systems can potentially unlock the long tail of rarely viewed content and allow individuals’ specific interests to be met.

They can also, in the view of some organisations, meet the public service obligation of diversity by exposing audiences to a greater variety of content.[footnote]Interview with David Graus, Lead Data Scientist, Randstad Groep Nederland (2021). This point was also captured in separate studies of public service media organisations – see: Hildén, J. (2021). ‘The Public Service Approach to Recommender Systems: Filtering to Cultivate’. Television & New Media, 23(7). Available at: https://doi.org/10.1177/15274764211020106[/footnote] Recommendation systems need not simply cater to, or replicate people’s existing interests but can actively push new and surprising content.

This approach is also deployed in commercial settings, notably in Spotify’s ‘Discover’ playlists, as novelty is also required for audience retention. Additionally, some public service media organisations, such as Swedish Radio and NPO, are experimenting with approaches that promote content they consider particularly high in public value.
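
One way to picture this is as a re-ranking step that blends predicted relevance with an editorially assigned public-value or novelty score, so that high-value or unfamiliar items can displace marginally more ‘relevant’ ones. The sketch below is our own assumption about how such a blend might look; the weighting and scores are illustrative, not any broadcaster’s production logic.

```python
from typing import List, Tuple

def rerank(candidates: List[Tuple[str, float, float]],
           value_weight: float = 0.3) -> List[str]:
    """candidates: (content_id, relevance, public_value) with scores in [0, 1].
    Orders items by a blend of predicted relevance and editorial value."""
    scored = [(cid, (1 - value_weight) * rel + value_weight * pv)
              for cid, rel, pv in candidates]
    return [cid for cid, _ in sorted(scored, key=lambda x: x[1], reverse=True)]

# A high-public-value documentary overtakes a slightly more 'relevant'
# panel show once editorial value carries 30% of the score.
print(rerank([("panel-show", 0.80, 0.10), ("documentary", 0.72, 0.90)]))
```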

Traditional broadcasting provides one-to-many communication. Through personalisation, platforms have created a new model of many-to-many communication, creating ‘fragmented user needs’.[footnote]Interview with Uli Köppen, Head of AI + Automation Lab, Co-Lead BR Data, Bayerische Rundfunk (2021).[/footnote] Public service media must now grapple with how they create their own way of engaging in this landscape. The BBC’s ambition for the iPlayer is to make output, ‘accessible to the audience wherever they are, whatever devices they are using, finding them at the right moments with the right content’.[footnote]BBC. (2021). BBC Annual Plan 2021-22. Available at: http://downloads.bbc.co.uk/aboutthebbc/reports/annualplan/annual-plan-2021-22.pdf[/footnote]

Jonas Schlatterbeck, ARD (German public broadcaster), takes a similar view:

‘We can’t actually serve majorities anymore with one content. It’s not like the one Saturday night show that will attract like half of the German population […] but more like tiny mosaic pieces of different content that are always available to pretty much everyone but that are actually more targeted.’[footnote]Interview with Jonas Schlatterbeck, Head of Content ARD Online & Leiter Programmplanung, ARD (2021).[/footnote]

3. Attention capture

The need to maintain audience reach in a fiercely competitive digital landscape was mentioned by almost every public service media organisation we spoke to.

Universality, the obligation to reach every section of society, is central to the public service remit.

And if public service media lose their audience to their digital competitors, they cannot deliver the other societal benefits within their mission. As Koen Muylaert of Belgian VRT said: ‘we want to inspire people, but we also know that you can only inspire people if they intensively use your products, so our goal is to increase the activity on our platform as well. Because we have to fight for market share’.[footnote]Interview with Koen Muylaert, Project Lead, VRT data platform and data science initiative, Vlaamse Radio- en Televisieomroeporganisatie (VRT) (2021).[/footnote]

The assumption among most public service media organisations is that recommendation systems improve engagement, although there is still little conclusive evidence of this in academic literature. The BBC has specific targets for 16-34 year-olds to use the iPlayer and BBC Sounds, and staff consider recommendations as a route to achieving those metrics.[footnote]BBC. (2021). BBC Annual Plan 2021-22. Available at: http://downloads.bbc.co.uk/aboutthebbc/reports/annualplan/annual-plan-2021-22.pdf[/footnote]

From our interview with David Caswell, Executive Product Manager, BBC News Labs:

‘We have seen that finding in our research on several occasions: there’s some transition that audiences, and particularly younger audiences, have gone through, where there’s an expectation of personalization. They don’t expect to be doing the same thing again and again, and in terms of active searching for things they expect a personalized experience… There isn’t a lot of tolerance, increasingly, with younger and digitally native audiences, for friction in the experience. And so personalization is a major technique for removing friction from the experience, because audience members don’t have to do all the work of discovery and selection and so on; they can have that done for them.’[footnote]Interview with David Caswell, Executive Product Manager, BBC News Labs (2021).[/footnote]

Across the teams we interviewed from European public service media organisations there was widespread consensus that audiences now expect content to be personalised. Netflix and Spotify’s use of recommendation systems was described as a ‘gold standard’ for public service media organisations to aspire to. But few of our interviewees offered evidence to support this view of audience expectations.

‘I see the risk that when we are compared with some of our competitors that are dabbling with a much more sophisticated personalisation, there is a big risk of our services being perceived as not adaptable and not relevant enough.’[footnote]Interview with Olle Zachrison, Deputy News Commissioner & Head of Digital News Strategy, Swedish Radio (2021).[/footnote]

4. Data gathering and behavioural interventions

Recommendation systems collect and analyse a wealth of data in order to serve personalised recommendations to their users. The data collected often pertains to user interactions with the system, including data that is produced as a result of interventions on the part of the system that are intended to influence user behaviour (interventional data).[footnote]Greene, T., Martens, D. and Shmueli, G. (2022) ‘Barriers to academic data science research in the new realm of algorithmic behaviour modification by digital platforms’. Nature Machine Intelligence, 4(4), pp. 323–330. Available at: https://doi.org/10.1038/s42256-022-00475-7[/footnote] For example, user data collected by a recommendation system may include data about how different users responded to A/B tests, so that the system developers can track the effectiveness of different designs or recommendation strategies in stimulating some desired user behaviour. 

Interventional data can thus be used to support targeted behavioural interventions, as well as scientific research into the mechanisms that underpin the effectiveness of recommendations. This marks recommendation systems as a key instrument of what Shoshana Zuboff has called a system of ‘surveillance capitalism’.[footnote]Zuboff, S. (2015). ‘Big other: Surveillance Capitalism and the Prospects of an Information Civilization’. Journal of Information Technology, 30(1). Available at: https://doi.org/10.1057/jit.2015.5[/footnote] In this system, platforms extract economic value from personal data, usually in the form of advertising revenue or subscriptions, at the expense of the individual autonomy afforded to individual users of the technology.

As access to the services provided by the platforms becomes essential to daily life, users increasingly find themselves tracked in all aspects of their online experience, without meaningful options to avoid it. The possibility of surveillance constitutes a grave risk associated with the use of recommendation systems.

Because recommendation systems have been mainly researched and developed in commercial settings, many of the techniques and types of data collected work within this logic of surveillance.[footnote]van Dijck, J. (2014). ‘Datafication, dataism and dataveillance: Big Data between scientific paradigm and ideology’. Surveillance & Society, 12(2). Available at: https://doi.org/10.24908/ss.v12i2.4776; Srnicek, N. (2017). Platform capitalism. Polity.[/footnote] However, it is also possible to envisage uses of recommendation systems that do not obey the same logic.[footnote]Lane, J. (2020). Democratizing Our Data: A Manifesto. MIT Press.[/footnote] Recommendation systems used by public service media are a case in point. Public service media organisations are in a position to decide which data to collect and use in the service of creating public value, scientific value and individual value for their audiences, instead of economic value that would be captured by shareholders.[footnote]Tempini, N. (2017). ‘Till data do us part: Understanding data-based value creation in data-intensive infrastructures’. Information and Organization, 27(4). Available at: http://dx.doi.org/10.1016/j.infoandorg.2017.08.001[/footnote]

Examples of public value that could be created from user data include insights into effective and impartial communication that serves the public interest and fosters community building. Social science research into the effectiveness of behavioural interventions, and basic research into the psychological mechanisms that underpin audiences’ trust in recommendations, would contribute to the creation of scientific value from behavioural data. From the perspective of the audience, value could be created by fostering user empowerment to learn more about their own interests and develop their tastes, letting users feel more in control and understand the value of the content that they can access.

We found little evidence of public service media deploying recommendation systems with the explicit aim of capturing data on their audiences and content or deriving greater insights. On the contrary, interviewees stressed the importance of data minimisation and privacy. At Bayerische Rundfunk for example, a product owner said that the collection of demographic data on the audience was a red line that they would not cross.[footnote]Interview with Matthias Thar, Bayerische Rundfunk (2021).[/footnote]

However, we did find that most public service media organisations introduced recommendation systems as part of a wider deployment of automated and data-driven approaches. In many cases, these are accompanied by significant organisational restructures to create new ways of working adapted to the technologies, as well as to respond to the budget cuts that almost all public service media are facing.

Public service media organisations are often fragmented, with teams separated by region and subject matter and with different systems for different channels and media that have evolved over time. The use of recommendation systems requires a consistent set of information about each item of content (commonly known as metadata). As a result, some public service media have started to better connect different services so that recommendation systems can draw on them.
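
As a rough illustration of what ‘consistent metadata’ means in practice, the sketch below defines a minimal, service-agnostic content record that a recommendation system could rely on across radio, TV and online output. The specific fields are hypothetical, chosen to reflect concerns raised elsewhere in this report (age-appropriateness, public-value scoring).

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ContentItem:
    """One description of an item of content, shared across services,
    so a single recommender can draw on radio, TV and online catalogues."""
    content_id: str
    title: str
    medium: str                    # 'audio', 'video' or 'text'
    genres: List[str] = field(default_factory=list)
    topics: List[str] = field(default_factory=list)
    duration_seconds: Optional[int] = None
    age_rating: Optional[str] = None            # supports child-safety filtering
    public_value_score: Optional[float] = None  # editorial rating, where used
```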

For instance, Swedish Radio has overhauled its entire news output to improve its digital service, creating standalone items of content that do not need to be slotted into a particular programme or schedule but can be presented in a variety of contexts. Alongside this, it has introduced a scoring system to rank its content against its own public values, prompting a rearticulation of those values as well as a renewed emphasis on their importance.

Bayerische Rundfunk (BR) is creating a new infrastructure for the consistent use of data as a foundation for the future use of recommendation systems. This already allows news stories to be automatically populated with data specific to different localities, as well as generating automated text for data-heavy stories such as sports results. This allows BR to cover a broader range of sports and cater to more specialist interests, as well as freeing editorial teams from mundane tasks.

While there is not a direct objective of behavioural intervention and data capture at present, the introduction of recommendation systems is part of a wider orientation towards data-driven practices across public service media organisations. This has the potential to enable wider data collection and analysis to generate business insights in the future.

Conclusion

We find that public service media organisations articulate similar objectives to the field more broadly in their motivations for deploying recommendation systems, although unlike commercial actors they do not currently use recommendations with the explicit aim of data capture and behavioural intervention. In some respects they reframe these established motivations to align with their public service mission and values.

Many staff across public service media organisations display a belief that because the organisation is motivated by public service values, and produces content that adheres to those values, the use of recommendation systems to filter that content is a furtherance of their mission.

This has meant that staff at public service media organisations have not always critically examined whether the recommendation system itself is operating in accordance with public service values.

However, public service media organisations have begun to put in place principles and governance mechanisms to encourage staff to explicitly and systematically consider how the development of their systems furthers their public service values. For example, the BBC published its Machine Learning Engine Principles in 2019 and has since continued to iterate on a checklist for project teams to put those principles into practice.[footnote]Macgregor, M. (2021). Responsible AI at the BBC: Our Machine Learning Engine Principles. BBC Research and Development. Available at: https://www.bbc.co.uk/rd/publications/responsible-ai-at-the-bbc-our-machine-learning-engine-principles[/footnote]

Public service media organisations are also in the early stages of developing new metrics and methods to measure the public service value of the outputs of the recommendation systems, both with explicit measures of ‘public service value’ and implicitly through evaluation by editorial staff. We explore these more in our chapter on evaluation and in our case studies on the BBC’s use of recommendation systems.

Additionally, we found that alongside these stated motivations, public service media interviewees had internalised a set of normative values around recommendation systems. When asked to define what a recommendation system is in their own terms, they spoke of systems helping users to find ‘relevant’, ‘useful’, ‘suitable’, ‘valuable’ or ‘good’ content.[footnote]This is not unique to the BBC, and many academic papers and industry publications also reflect a similar implicit normative framework in their definitions of recommendation systems.[/footnote]

This framing around user benefit obscures the fact that the systems are ultimately deployed to achieve organisations’ goals: if recommendations are ‘relevant’ or ‘useful’, it is because that helps achieve the organisations’ goals, not because of an inherent property of the system.[footnote]The organisations’ goals are not necessarily in tension with those of the users, e.g. helping audiences find more relevant content might help audiences get better value for money (which is a goal of many public service media organisations), but that is still a goal which shapes how the recommendation system is developed, rather than a necessary feature of the system.[/footnote] It also adopts the vocabulary of commercial recommendation systems (e.g. targeted advertising options encourage users to opt for more ‘relevant’ adverts), which the Competition and Markets Authority has identified as problematic. This indicates that public service media are essentially adopting the paradigm established by the use of commercial recommendation systems.

Potential risks from recommendation systems

In this section, we explore some of the ethical risks associated with the use of recommendation systems and how they might manifest in uses by public service media.

A review of the literature on recommendation systems helps identify some of the potential ethical and societal risks that have been raised in relation to their use beyond the specific context of public service media. Milano et al. highlight six areas of concern for recommendation systems in general:[footnote]Milano, S., Taddeo, M. and Floridi, L. (2020). ‘Recommender systems and their ethical challenges’. AI & Society, 35, pp.957–967. Available at: https://doi.org/10.1007/s00146-020-00950-y[/footnote]

  1. Privacy risks to users of a recommendation system: including direct risks from non-compliance with existing privacy regulations and/or malicious use of personal data, and indirect risks resulting from data leaks, deanonymisation of public datasets or unwanted exposure of inferred sensitive characteristics to third parties.
  2. Problematic or inappropriate content could be recommended and amplified by a recommendation system.
  3. Opacity in the operation of a recommendation system could lead to limited accountability and lower the trustworthiness of the recommendations.
  4. Autonomy: recommendations could limit users’ autonomy by manipulating their beliefs or values, and by unduly restricting the range of meaningful options that are available to them.
  5. Fairness constitutes a challenge for any algorithmic system that operates using human-generated data and is therefore liable to (re)produce social biases. Recommendation systems are no exception, and can exhibit unfair biases affecting a variety of stakeholders whose interests are tied to recommendations.
  6. Social externalities such as polarisation, the formation of echo chambers and epistemic fragmentation can result from the operation of recommendation systems that optimise for poorly defined objectives.

How these risks are viewed and addressed by public service media

In this section, we examine the extent to which ethical risks of recommendation systems, identified in the literature, are present in the development and use of recommendation systems in practice by public service media.

1. Privacy

The data gathering and operation of recommendation systems can pose direct and indirect privacy risks. Direct privacy risks come from how personal data is handled by the platform, as its collection, usage and storage need to follow procedures that ensure prior consent from individual users. In the context of EU law, these stages are covered by the General Data Protection Regulation (GDPR).

Indirect privacy risks arise when recommendation systems expose sensitive user data unintentionally. For instance, indirect privacy risks may come about as a result of unauthorised data breaches, or when a system reveals sensitive inferred characteristics about a user (e.g. targeted advertising for baby products could indicate a user is pregnant).

Privacy relates to a number of public service values: independence (act in the interest of audiences), excellence (high standards of integrity) and accountability (good governance).

Privacy was raised as a potential risk by every interviewee from a public service media organisation. Specifically, public service media were concerned about users’ consent to the use of their data, emphasising data security as a key concern for the responsible collection and use of user data.[footnote]Interview with Jonas Schlatterbeck, Head of Content ARD Online & Leiter Programmplanung, ARD (2021).[/footnote] Several interviewees stressed that public service media organisations do not generally require mandatory sign-in for certain key products, such as news. Other services focusing more on entertainment, such as BBC iPlayer, do require sign-in, but the amount of personal data collected is limited.

Sebastien Noir, Head of Software, Technology and Innovation at the European Broadcasting Union, emphasised how the need to comply with privacy regulations in practice means that projects have to jump through several hoops with legal teams before trials with user data are allowed. While this uses up time and resources in project development, it also means that robust measures are in place to protect users from direct threats to privacy. Koen Muylaert at Belgian VRT also spoke to us about how there is a distinction between personal data, which poses privacy risks, and behavioural data, which may be safer to use for public service media recommendation systems and which they actively monitor.[footnote]Interview with Koen Muylaert, Project Lead, VRT data platform and data science initiative, Vlaamse Radio- en Televisieomroeporganisatie (VRT) (2021).[/footnote]

None of the organisations that we interviewed spoke to us about indirect threats to privacy or ways to mitigate them.

2. Problematic or inappropriate content

Open recommendation systems on commercial platforms that host limitless, user-generated content have a high risk of recommending low quality or harmful content. This risk is lower for public service media that deploy closed recommendation systems to filter their own catalogue of content, which has already been extensively scrutinised for quality and adherence to editorial guidelines. Nonetheless, some risk may still exist for closed recommendation systems, such as the risk of recommending age-inappropriate content to younger users.

The risk of inappropriate content relates to the public service media values of excellence (high standards of integrity, professionalism and quality) and independence (completely impartial and independent from political, commercial and other influences and ideologies).

In interviews, many members of public service media staff were generally confident that recommendations would be of high quality and represent public service values because the content pool had already passed that test. Nonetheless, some staff identified a risk that the system could surface inappropriate content: for example, archive items that include sexist or racist language that is no longer acceptable, or juxtapositions of items that could be jarring.

However, a more commonly identified potential risk arises in connection to independence and impartiality. Many of the interviewees we spoke to mentioned that the algorithms used to generate user recommendations needed to be impartial. The BBC and other public service media organisations have traditionally operated a policy of ‘balance over time and output’, meaning a range of views on a subject or party political voices will be heard over a given period of programming on a specific channel. However, recommendation systems disrupt this. The audience is no longer exposed to a range of content broadcast through channels. Instead, individuals are served up specific items of content without the balancing context of other programming. In this way they may only encounter one side of an argument.

Therefore, some interviewees expressed that fine-tuning recommendations for balance is especially important in this context. This is an area where the close integration of editorial and technical teams was seen to be essential.
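
One hedged way to operationalise ‘balance over time and output’ at the level of an individual’s recommendations is to count the viewpoint labels attached to the items a user was served over a window and flag one-sided histories for editorial attention. The labels and threshold below are illustrative assumptions, not a method any broadcaster told us they use.

```python
from collections import Counter
from typing import Dict, List

def balance_report(served_viewpoints: List[str], threshold: float = 0.7) -> Dict:
    """served_viewpoints: an editorial viewpoint label for each item a
    user was recommended in the window. Flags the history as one-sided
    if any single label dominates beyond the threshold."""
    counts = Counter(served_viewpoints)
    total = sum(counts.values())
    shares = {label: n / total for label, n in counts.items()}
    return {"shares": shares, "one_sided": max(shares.values()) > threshold}

print(balance_report(["pro", "pro", "pro", "neutral"]))  # flagged: 75% 'pro'
```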

3. Opacity of the recommendation

Like many other algorithmic systems, many recommendation systems operate as black boxes whose internal workings are sometimes difficult to interpret, even for their developers. The process by which a recommendation is generated is often not transparent to individual users or other parties that interact with a recommendation system. This can have negative effects, by limiting the accountability of the system itself, and diminishing the trust that audiences put in the good operation of the service.

Opacity is a challenge to the public service media values of independence (autonomous in all aspects of the remit) and accountability (be transparent and subject to constant public scrutiny). The issue of opacity and the risks that it raises was touched upon in several of our interviews.

The necessity to exert more control over the data and algorithms used for building recommendation systems was among the motivations for the BBC in bringing their development in-house. The same is true of other public service media in Europe. While most European broadcasters did not choose to bring the development of recommendation systems in-house, many of them now rely on PEACH, a recommendation system developed collaboratively by several public service media organisations under the umbrella of the European Broadcasting Union (EBU).

Previously, the BBC as well as other public service media had relied on external commercial contractors to build the recommendation systems they used. This however meant that they could exert little control over the data and algorithms used, which represented a risk. In the words of Sebastien Noir, Head of Software, Technology and Innovation at the EBU:

‘As a broadcaster, you are defined by what you promote to the people, that’s your editorial line. This is, in a way, also your brand or your user experience. If you delegate that to a third party company, […] then you have a problem, because you have given your very identity, the way you are perceived by the people to a third party company […] No black box should be your editorial line.’[footnote]Interview with Sébastien Noir, Head of Software, Technology and Innovation, and Dmytro Petruk, Developer, European Broadcasting Union (2021).[/footnote]

But bringing the development of recommendation systems in-house does not solve all the issues connected with the opacity of these systems. Jannick Sørensen, Associate Professor in Digital Media at Aalborg University, summarised the concern:

‘I think the problem of the accountability, first within the public service institution, is that editors, they have no real chance to understand what data scientists are doing. And data scientists, neither they do. […] And so the dilemma here is that it requires a lot of specialised knowledge to understand what is going on inside this process of computing recommendation[s]. Right. And, I mean, with Machine Learning, it’s become literally impossible to follow.’[footnote]Interview with Jannick Kirk Sørensen, Associate Professor in Digital Media, Aalborg University (2021).[/footnote]

Sørensen highlighted how the issue of opacity arises both internally and externally for public service media.

Internally to the institution, the opacity of the systems utilised to produce recommendations hinders the collaboration of editorial and technical staff. Some public service media organisations, such as Swedish Radio, have tried to tackle this issue by explicitly having both a technical and an editorial project lead, while Bayerische Rundfunk have established an interdisciplinary team with their AI and Automation Lab.[footnote]We explore these examples in more detail later in the chapter.[/footnote]

Documentation is another approach taken by public service media organisations to reduce the opacity of the system. For example, the BBC’s Machine Learning Engine Principles checklist (as of version 2.0) explicitly asks teams to document what their model does and how it was created, e.g. via a data science decision log, and to create a Plain English explanation or visualisation of the model to communicate the model’s purpose and operation.
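
To make the idea of a decision log concrete, here is a minimal sketch of the kind of record such documentation might hold. The fields are our guess at what the checklist’s prompts imply, not the BBC’s actual format.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DecisionLogEntry:
    """One recorded choice in a model's development history, kept so that
    editorial and technical colleagues can retrace why it behaves as it does."""
    date: str
    decision: str            # what was decided
    rationale: str           # plain-English reason for the choice
    alternatives: List[str]  # options considered and rejected
    decided_by: str          # team or role, for accountability

entry = DecisionLogEntry(
    date="2021-06-01",
    decision="Capped any single genre at 40% of a recommendation rail",
    rationale="Supports the diversity obligation in the editorial guidelines",
    alternatives=["no cap", "per-user adaptive cap"],
    decided_by="recommendations product team",
)
```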

Externally, public service media struggle to provide effective explanations to audiences about the systems that they use. The absence of industry standards for explanation and transparency was identified as a risk. Olle Zachrison, Deputy News Commissioner & Head of Digital News Strategy, Swedish Radio, also expressed this worry:

‘One particular risk, I think, with all these kind of more automatic services, and especially with the introduction of […] AI powered services, is that the audience doesn’t understand what we’re doing. And […] I know that there’s a big discussion going on at the moment, for example, about Explainable AI. How should we explain in a better way what the services are doing? […] I think that there’s a very big need for kind of industry dialogue about setting standards here, you know.’[footnote]Interview with Olle Zachrison, Deputy News Commissioner & Head of Digital News Strategy, Swedish Radio (2021).[/footnote]

Other interviewees, however, highlighted that the use of explanations has limited efficacy in addressing the external opacity of individual recommendations, since users rarely pay attention to them. Sarah van der Land, Digital Innovation Advisor at NPO in the Netherlands, cited internally conducted consumer studies as evidence that audiences might not care about explanations:

‘Recently, we did some experiments also on data insight, into what extent our consumers want to have feedback on why they get a certain recommendation? And yeah, unfortunately, our research showed that a lot of consumers are not really interested in the why. […] Which was quite interesting for us, because we thought, yeah, of course, as a public value, we care about our consumers. We want to elaborate on why we do the things we do and why, based on which data, consumers get these recommendations. But yeah, they seem to be very little interested in that.’[footnote]Interview with Arno van Rijswijk, Head of Data & Personalization, and Sarah van der Land, Digital Innovation Advisor, Nederlandse Publieke Omroep (2021).[/footnote]

This finding indicates that pursuing this strategy has limited practical effects in improving the value of recommendations for audiences. David Graus, Lead Data Scientist, Randstad Groep Nederland, also told us that he is sceptical of the use of technical explanations, but that ‘what is more important is for people to understand what a recommender system is, and what it aims to do, and not how technically a recommendation was generated.’[footnote]Interview with David Graus, Lead Data Scientist, Randstad Groep Nederland (2021).[/footnote] This could be achieved by providing high-level explanations of the processes and data that were used to produce the recommendations, instead of technical details of limited interest to non-technical stakeholders.

4. Autonomy

Research on recommendation systems has highlighted how they could pose risks to user autonomy, by restricting people’s access to information and by potentially being used to shape preferences or emotions. Autonomy is a fundamental human value which ‘generally can be taken to refer to a person’s effective capacity for self-governance’.[footnote]Prunkl, C. (2022). ‘Human autonomy in the age of artificial intelligence’. Nature Machine Intelligence, 4, pp.99–101. Available at: doi: https://doi.org/10.1038/s42256-022-00449-9[/footnote] Writing on the concept of human autonomy in the age of AI, Prunkl distinguishes two dimensions of autonomy: one internal, relating to the authenticity of the beliefs and values of a person; and the other external, referring to the person’s ability to act, or the availability of meaningful options that enables them to express agency.

The risk to autonomy relates to the public service media value of universality (creating a public sphere, in which all citizens can form their own opinions and ideas, aiming for inclusion and social cohesion).

Public service media historically have made choices on behalf of their audiences in line with what the organisation has determined is in the public interest. In this sense audiences have limited autonomy due to public service media organisations restricting individuals’ access to information, albeit with good intentions.

The use of recommendation systems could, in one respect, be seen as increasing the autonomy of audiences. A more personalised experience, that is more tailored to the individual and their interests, could support the ‘internal’ dimension of autonomy, because it could enable a recommendation system to more accurately reflect the beliefs and values of an individual user, based on what other users of that demographic, region or age might like.

At the same time, public service media strive to ‘create a public sphere, in which all citizens can form their own opinions and ideas, aiming for inclusion and social cohesion’.[footnote]European Broadcasting Union. (2012). Empowering Society: A Declaration on the Core Values of Public Service Media, p. 4. Available at: https://www.ebu.ch/files/live/sites/ebu/files/Publications/EBU-Empowering-Society_EN.pdf[/footnote] There is a risk in using recommendation systems that public service media might filter information in such a way that they inhibit people’s autonomy to form their views independently.[footnote]Interview with David Caswell, Executive Product Manager, BBC News Labs (2021).[/footnote]

By design, recommendation systems tailor recommendations to a specific individual, often in such a way where these recommendations are not visible to other people. This means individual members of the audience may not share a common context or may be less aware of what information others have access to, a condition that Milano et al have called ‘epistemic fragmentation’.[footnote]Milano, S., Mittelstadt, B., Wachter, S. and Russell, C. (2021), ‘Epistemic fragmentation poses a threat to the governance of online targeting’. Nature Machine Intelligence. Available at: https://doi.org/10.1038/s42256-021-00358-3[/footnote] Coming to an informed opinion often requires being able to have meaningful conversations about a topic with other people. If recommendations isolate individuals from each other, then this may undermine the ability of audiences to form authentic beliefs and reason about their values. Since this ability is essential to having autonomy, epistemic fragmentation poses a risk.
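
Epistemic fragmentation can be given a rough operational measure. The sketch below, an assumption of ours rather than a published metric, computes the average pairwise overlap (Jaccard similarity) between users’ recommendation slates; values near zero indicate that users see almost entirely disjoint content and so share little common context.

```python
from itertools import combinations
from typing import Dict, Set

def mean_slate_overlap(slates: Dict[str, Set[str]]) -> float:
    """Average Jaccard similarity between every pair of users'
    recommendation slates; lower values mean a more fragmented audience."""
    pairs = list(combinations(slates.values(), 2))
    if not pairs:
        return 1.0
    return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)

slates = {"u1": {"a", "b", "c"}, "u2": {"a", "b", "d"}, "u3": {"e", "f", "g"}}
print(mean_slate_overlap(slates))  # ~0.17: little shared context
```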

Recommendations are also based on an assumption that there is such a thing as a single, legible individual for whom content can be personalised. In practice, people’s needs vary according to context and relationships. They may want different types of content at different times of day, whether they are watching videos with family or listening to the news in the car, for example. However, contextual information is difficult to factor into a recommendation, and doing so requires access to more user data, which could pose additional privacy risks. Moreover, recommendations are often delivered via a user’s account with a service that uses recommendation systems. However, some people may choose to share accounts, create a joint one or maintain multiple personal accounts to compartmentalise different aspects of their information needs and public presence.[footnote]Milano, S., Taddeo, M. and Floridi, L. (2021). ‘Ethical aspects of multi-stakeholder recommendation systems’. The Information Society, 37(1). Available at: https://doi.org/10.1080/01972243.2020.1832636[/footnote]

Finally, the use of recommendation systems by public service media can pose a risk to autonomy when the categories that are used to profile users are not accurate, not transparent or not easily accessible and modifiable by the users themselves. This concern is linked to the opacity of the system, but it was not addressed explicitly as a risk to user autonomy in our interviews.

As noted above, several interviewees highlighted that internal research indicates users do not want more explanations and control over the recommendation system when this comes at the cost of a frictionless experience. If so, public service media need to consider whether there is a trade-off between supporting autonomy and the ease of use of a recommendation system, and research alternative strategies to provide audiences with more meaningful opportunities to participate in the construction of their digital profiles.

5. Fairness

Researchers have documented how the use of machine learning and AI in applications ranging from credit scoring to facial recognition,[footnote]Buolamwini, J. and Gebru, T. (2018). ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’. Proceedings of the 1st Conference on Fairness, Accountability and Transparency. Conference on Fairness, Accountability and Transparency, PMLR, pp. 77–91. Available at: https://proceedings.mlr.press/v81/buolamwini18a.html[/footnote] medical triage to parole decisions,[footnote]Angwin, J., Larson, J., Mattu, S. and Kirchner, L. (2016). ‘Machine Bias’. ProPublica. Available at: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing[/footnote] advert delivery[footnote]Sweeney, L. (2013). ‘Discrimination in online ad delivery’. arXiv. Available at: https://doi.org/10.48550/arXiv.1301.6822[/footnote] to automatic text generation,[footnote]Noble, S. U. (2018). Algorithms of Oppression. New York: New York University Press; Bender, E.M., Gebru, T., McMillan-Major, A. and Shmitchell, S. (2021). ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’. FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pp.610–623. Available at: https://doi.org/10.1145/3442188.3445922[/footnote] among many others, often leads to unfair outcomes that perpetuate historical social biases or introduce new, machine-generated ones. Given the pervasiveness of these systems in our societies, this has given rise to increasing pressure to improve their fairness, which has contributed to a burgeoning area of research.

This risk relates to the public service media values of universality (reach all segments of society, with no-one excluded) and diversity (support and seek to give voice to a plurality of competing views – from those with different backgrounds, histories and stories – and help build a more inclusive, less fragmented society).

Developers of algorithmic systems today can draw on a growing array of technical approaches to addressing fairness issues; however, fairness remains a challenging issue that cannot be fully solved by technical fixes. Instead, as Wachter et al argue in the context of EU law, the best approach may be to recognise that algorithmic systems are inherently and inevitably biased, and to put in place accountability mechanisms that ensure biases do not perpetuate unfair discrimination but are, on the contrary, used to help redress historical injustices.[footnote]Wachter, S., Mittelstadt, B. and Russell, C. (2020). ‘Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI’. Computer Law & Security Review, 41. Available at: http://dx.doi.org/10.2139/ssrn.3547922[/footnote]

Recommendation systems are no exception. Biases in recommendation can arise at a variety of levels and for different stakeholders. From the perspective of users, a recommendation system could be unfair if the quality of the recommendations varies across users. For example, if a music recommendation system is much worse at predicting the tastes of and serving interesting recommendations to a minority group, this could be unfair.

Recommendations could also be unfair from a provider perspective. For instance, one recent study found a film recommendation system trained on a well-known dataset (MovieLens 10M), and designed to optimise for relevance to users, systematically underrepresented films by female directors.[footnote]Boratto, L., Fenu, G. and Marras, M. (2021). ‘Interplay between upsampling and regularization for provider fairness in recommender systems’. User Modeling and User-Adapted Interaction, 31(3), pp. 421–455. Available at: https://doi.org/10.1007/s11257-021-09294-8[/footnote] This example illustrates a more pervasive phenomenon. Since recommendation systems are primarily built to optimise for user relevance, provider-side unfairness has been observed to emerge in a variety of settings, ranging from content recommendations to employment websites.[footnote]Biega, A. J., Gummadi, K. P. and Weikum, G. (2018). ‘Equity of Attention: Amortizing Individual Fairness in Rankings’. SIGIR ’18: The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pp. 405–414. Available at: https://dl.acm.org/doi/10.1145/3209978.3210063[/footnote]

Because different categories of stakeholders derive different types of value from recommendation systems, issues of fairness can arise separately for each of them. In e-commerce applications, for example, users derive value from relevant recommendations for items that they might be interested in buying, while sellers derive value from their items being exposed to more potential buyers. Moreover, attempts to address unfair bias for one category of stakeholders might lead to making things worse for another category. In the case of e-commerce applications, for example, attempts to improve provider-side fairness could have negative effects on the relevance of recommendations for users. Bringing these competing interests together, comparing them and devising overarching fairness metrics remains an open challenge.[footnote]Abdollahpouri, H., Adomavicius, G., Burke, R., et al. (2020). ‘Multistakeholder recommendation: Survey and research directions’. User Modeling and User-Adapted Interaction, pp.127–158. Available at: https://doi.org/10.1007/s11257-019-09256-1[/footnote]

Issues of fairness were not prominently mentioned by our interview participants. When fairness was referenced, it was primarily with regard to fairness concerns for users and whether recommendation systems performed better for some demographics than others. However, the extent to which recommendation systems are currently used across the public service media organisations we spoke to was low enough that the risk generated little concern among staff. Sébastien Noir of the European Broadcasting Union said that ‘Recommendation appears, at least for the moment more than something like [the] cherry on the cake, it’s a little bit of a personalised touch on the world where everything is still pretty much broadcast content where everyone gets to receive the same content.’[footnote]Interview with Sébastien Noir, Head of Software, Technology and Innovation, and Dmytro Petruk, Developer, European Broadcasting Union (2021).[/footnote] Since, for now, recommendations represent a very small portion of the content that users access on these platforms, the risk that this poses to fairness was deemed to be very low.

However, if recommendations were to take a more prominent role in future, this would pose concerns that need to be addressed. Some of our BBC interviewees expressed a concern that some recommendations currently cater best to the interests of some demographics, while they work less well for others. Differential levels of accuracy and quality of experience across groups of users is a known issue in recommendation systems, although the way in which it manifests can be difficult to predict before the system is deployed.

In general, our respondents believed that ‘majority’ users, whose informational needs and preferences are closest to the average, and therefore more predictable, tend to be served best by a recommendation system – though many acknowledge this assertion has been difficult to empirically prove. If the majority of BBC users belong to a specific demographic, this could skew the system towards their interests and tastes, posing fairness issues with respect to other demographics. However, this can sometimes be reversed when other factors beyond user relevance, such as increasing the diversity of users and the diversity of content, are introduced. Therefore, the emerging patterns from recommendations are difficult to predict, but will need to be monitored on an ongoing basis. BBC interviewees reported that this issue is currently addressed by looping in more editorial oversight.

6. Social effects or externalities

One of the features of recommendation systems that has attracted most controversy in recent years is their apparent tendency to produce negative social effects. Social media networks that use recommendation systems to structure user feeds, for instance, have come under scrutiny for increasing polarisation by optimising for engagement. Other social networks have come under fire for facilitating the spread of disinformation.

The social externality risk relates to the public service media values of universality (create a public sphere, in which all citizens can form their own opinions and ideas, aiming for inclusion and social cohesion) and diversity (support and seek to give voice to a plurality of competing views – from those with different backgrounds, histories and stories – and help build a more inclusive, less fragmented society).

Pariser introduced the concept of a ‘filter bubble’, which can be understood as an informational ecosystem where individuals are only or predominantly exposed to certain types of content, while they never come into contact with other types.[footnote]Pariser, E. (2011). The filter bubble: what the Internet is hiding from you. Penguin Books.[/footnote] The philosopher C Thi Nguyen has offered an analysis of how filter bubbles might develop into echo chambers, where users’ beliefs are reflected at them and reinforced through interaction with media that validates them, leading to potentially dangerous escalation.[footnote]Nguyen, C. T. (2018). ‘Why it’s as hard to escape an echo chamber as it is to flee a cult’. Aeon. Available at: https://aeon.co/essays/why-its-as-hard-to-escape-an-echo-chamber-as-it-is-to-flee-a-cult[/footnote] However, some recent empirical research has cast doubt on the extent to which recommendation systems deployed on social media really give rise to filter bubbles and political polarisation in practice.[footnote]Arguedas, A. R., Robertson, C. T., Fletcher, R. and Nielsen R.K. (2022). ‘Echo chambers, filter bubbles, and polarisation: a literature review.’ Reuters Institute for the Study of Journalism. Available at: https://reutersinstitute.politics.ox.ac.uk/echo-chambers-filter-bubbles-and-polarisation-literature-review[/footnote]

One study observed that consuming news through social media increases the diversity of content consumed, with users engaging with a larger and more varied selection of news sources.[footnote]Scharkow, M., Mangold, F., Stier, S. and Breuer, J. (2020). ‘How social network sites and other online intermediaries increase exposure to news’. Proceedings of the National Academy of Sciences, 117(6), pp. 2761–2763. Available at: https://doi.org/10.1073/pnas.1918279117[/footnote] Such studies highlight how recommendation systems can be programmed to increase the diversity of exposure to varied sources of content.[footnote]A similar finding exists in other studies of public service media organisations – see: Hildén, J. (2021). ‘The Public Service Approach to Recommender Systems: Filtering to Cultivate’. Television & New Media, 23(7). Available at: https://doi.org/10.1177/15274764211020106[/footnote] However, they do not control for the quality of the sources or for individual reactions to the content (e.g. does the user pay attention, or merely scroll past some of the news items?). Without this information it is difficult to know what the effects of exposure to different types of sources are. More research is needed to probe the links between exposure to diverse sources and the influence this has on the evolution of political opinions.

Another known risk for recommendation systems is exposure to manipulation by external agents. Various states, for example Russia and China, have been documented engaging in what has been called ‘computational propaganda’. This type of propaganda exploits features of recommendation systems on social media to spread mis- or disinformation, with the aim of destabilising the political context of the targeted countries. State-sponsored ‘content farms’ have been documented producing content that is engineered to be picked up by recommendation systems and go viral. This kind of hostile strategy is made possible by the vulnerability of recommendation systems, especially open ones, because such systems are programmed to optimise for engagement.

The risk that the use of recommendation systems could increase polarisation and create filter bubbles was regarded as very low by our interviewees. Unlike social media that recommend content generated by users or other organisations, the BBC and other public service media that we spoke to operate closed content platforms. This means that all the content recommended on their platforms has already passed multiple editorial checks, including for balanced and truthful reporting.

The relatively minor role that recommendation systems play on the platform currently also means that they do not pose a risk of creating filter bubbles. Therefore, this was not recognised as a pressing concern.

However, many raised concerns that recommendation systems could undermine the principle of diversity by serving audiences homogenous content. Historically, programme schedulers have had mechanisms to expose audiences to content they might not choose of their own accord – for example by ‘hammocking’ programmes of high public value between more popular items on the schedule and relying on audiences not to switch channels. Interviewees also mentioned the importance of serendipity and surprise as part of the public service remit. This could be lost if audiences are only offered content based on their previous preferences. These concerns motivate ongoing research into new methods for producing more accurate and diversified recommendations.[footnote]Paudel, B., Christoffel, F., Newell, C. and Bernstein, A. (2017). ‘Updatable, Accurate, Diverse, and Scalable Recommendations for Interactive Applications’. ACM Transactions on Interactive Intelligent Systems, 7(1), pp.1–34. Available at: https://doi.org/10.1145/2955101[/footnote]

Conclusion

The categories of risk related to the use of recommendation systems, identified in the literature, can be applied to their use in the context of public service media. However, the way in which these risks manifest and the emphasis that organisations put on them can be quite different to a commercial context.

We found that public service media have, to a greater or lesser extent, mitigated their exposure to these risks through a number of factors such as the high quality of the content being recommended; the limited deployment of the systems; the substantial level of human curation; a move towards greater integration of technical and editorial teams; ethical principles; associated practice checklists and system documentation. It is not enough for public service media organisations to believe that having a public service mission will ensure that recommendation systems serve the public. If public service media are to use recommendation systems responsibly, they must interrogate and mitigate the potential risks.

We find these risks can also be seen in relation to the six core public service values of universality, independence, excellence, diversity, accountability and innovation.

We believe it is useful for public service media to consider both the known risks, as understood within the wider research field, as well as the risks in relation to public service values. By approaching the potential challenges of recommendation systems through this dual lens, public service media organisations should be able to develop and deploy systems in line with their public service remit.

An additional consideration, broader than any specific risk category, is that of audience trust in public service media. Trust doesn’t fall under any specific category because it is associated with the relationship between public service media and their audience more broadly. But failure to address the risks identified by the categories can negatively affect trust. All public service media organisations place trust as central to their mission. In the context of a fragmented digital media environment, their trustworthiness has taken on increased importance and is now a unique quality that distinguishes them from other media and which is pivotal to the argument in favour of sustaining public service media. Many public service media organisations are beginning to recognise and address the potential risks of recommendation systems and it is vital that this continues in order to retain audience trust.

Additional challenges for public service media

As well as the ethical risks described above, public service media face practical challenges in implementing recommendation systems that stem from their mission, the make-up of their teams and their organisational infrastructure.

Quantifying values

Recommendation systems filter content according to criteria laid down by the system developers. Public service media organisations that want to filter content in ways that prioritise public service values first need to translate these values into information that is legible to an algorithmic system. In other words, the values must be quantified as data.

However, as we noted above, public service values are fluid, can change over time and depend on context. And as well as the stated mission of public service media, laid down in charters, governance and guidelines, there are a set of cultural norms and individual gut instincts that determine day-to-day decision making and prioritisation in practice. Over time, public service media have developed a number of ways to measure public value, through systems such as the public value test assessment and with metrics such as audience reach, value for money and surveys of public sentiment (see section above). However, these only account for public value at a macro level. Recommendation systems that are filtering individual items of content require metrics that quantify values at a micro level.
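
By way of illustration, the sketch below shows one hypothetical way a value could be quantified as item-level data, so that a system can filter or rank on it. This is a minimal sketch, not any organisation’s actual schema: the field names, scores and threshold are all invented assumptions.

```python
# Hypothetical sketch: quantifying an editorial value as machine-readable,
# per-item data. Field names and scores are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class ContentItem:
    item_id: str
    title: str
    viewpoint_diversity: float  # 0.0-1.0, judged editorially per item

def meets_value_threshold(item: ContentItem, minimum: float = 0.5) -> bool:
    """Micro-level filter on a quantified editorial value."""
    return item.viewpoint_diversity >= minimum

items = [
    ContentItem("a1", "Panel debate on housing", 0.9),
    ContentItem("a2", "Celebrity gossip roundup", 0.1),
]
print([i.title for i in items if meets_value_threshold(i)])
# ['Panel debate on housing']
```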

Swedish Radio is a pioneer in attempting to do this work of translation. Olle Zachrison of Swedish Radio summarised it as: ‘we have central tenets to our public service mission stuff that we have been talking about for decades and also stuff that is in the kind of gut of the news editors. But in a way, we had to get them out there in an open way and into a system also, that we in a way could convert those kinds of editorial values that have been sitting in these kind of really wise news assessments for years, but to get them out there into a system that we also convert them into data.’[footnote]Interview with Olle Zachrison, Deputy News Commissioner & Head of Digital News Strategy, Swedish Radio (2021).[/footnote]

Working across different teams and different disciplines

The development and deployment of recommendation systems for public service media requires expertise in both technical development and content creation and curation. This proves challenging in a number of ways.

Firstly, technology talent is hard to come by, especially when public service media cannot offer anything near the salaries available at commercial rivals.[footnote]Interview with Dietmar Jannach, Professor, University of Klagenfurt (2021).[/footnote] Secondly, editorial teams often do not trust or value the role of technologists, especially when the two do not work closely with each other.[footnote]Interview with Nic Newman, Senior Research Associate, Reuters Institute for the Study of Journalism (2021).[/footnote] In some organisations, the introduction of recommendation systems stalls because it is perceived as a direct threat to editorial jobs and an attempt to replace journalists with algorithms.[footnote]Interview with Sébastien Noir, Head of Software, Technology and Innovation, and Dmytro Petruk, Developer, European Broadcasting Union (2021).[/footnote]

Success requires bridging this gap and coordinating between teams of experts in technical development, such as developers and data scientists, and experts in content creation and curation, the journalists and editors.[footnote]Boididou, C., Sheng, D., Moss, M. and Piscopo, A. (2021), ‘Building Public Service Recommenders: Logbook of a Journey’. RecSys ’21: Proceedings of the 15th ACM Conference on Recommender Systems, pp. 538–540. Available at: https://doi.org/10.1145/3460231.3474614[/footnote]

As Sørensen and Hutchinson note: ‘Data analysts and computer programmers (developers) now perform tasks that are key determinants for exposure to public service media content. Success is no longer only about making and scheduling programmes. This knowledge is difficult to communicate to journalists and editors, who typically don’t engage in these development projects […] Deep understanding of how a system recommends content is shared among a small group of experts’.[footnote]Sørensen, J.K. and Hutchinson, J. (2018). ‘Algorithms and Public Service Media’. Public Service Media in the Networked Society: RIPE@2017, pp.91–106. Available at: http://www.nordicom.gu.se/sites/default/files/publikationer-hela-pdf/public_service_media_in_the_networked_society_ripe_2017.pdf[/footnote]

Some, such as Swedish Radio and BBC News Labs, have tried to tackle this issue by explicitly having two project leads, one with an editorial background and one with a technical background, to emphasise the importance of working together and symbolically indicate that this was a joint process.[footnote]Interview with Olle Zachrison, Deputy News Commissioner & Head of Digital News Strategy, Swedish Radio (2021); BBC News Labs. ‘About’. Available at: https://bbcnewslabs.co.uk/about[/footnote] Swedish Radio’s Olle Zachrison noted that: 

‘We had a joint process from day one. And we also deliberately had kind of two project managers, one, clearly from the editorial side, like a very experienced local news editor. And the other guy was the product owner for our personalization team. So they were the symbols internally of this project […] that was so important for the, for the whole company to kind of team up behind this and also for the journalists and the product people to do it together.’

If this coordination fails, it can ‘weaken the organisation strategically and, on a practical level, create problems caused by failing to include or correctly mark the metadata that is essential for findability’.

Bayerischer Rundfunk has established a unique interdisciplinary team. The AI and Automation Lab has a remit not only to create products, but also to produce data-driven reporting and coverage of the impacts of artificial intelligence on society. Building from the existing data journalism unit, the Lab fully integrates the editorial and technical teams under the leadership of Director Uli Köppen. Although she recognises the challenges of bringing together people from different backgrounds, she believes the effort has paid off:

‘This technology is so new, and it’s so hard to persuade the experts to work in journalism. We had the data team up and running, these are journalists that are already in the mindset at this intersection of tech and journalism. And I had the hope that they are able to help people from other industries to dive into journalism, and it’s easier to have this kind of conversation with people who already did this cultural step in this hybrid world.

‘It was astonishing how those journalists helped the new people to onboard and understand what kind of product we are. And we are also reinventing our role as journalists in the product world. And this really worked out so I would say it’s worth the effort.’

Metadata, infrastructure and legacy systems

In order to filter content, recommendation systems require clear information about what that content is. For example, if a system is designed to show people who enjoyed soap operas other series that they might enjoy, individual items of content must be labelled as being soap operas in a machine-readable format. This kind of labelling is called metadata.
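
As a minimal illustration of the soap-opera example above, the hypothetical snippet below shows how machine-readable genre labels make content-to-content matching possible. The catalogue schema and titles are invented for the example.

```python
# Hypothetical sketch of machine-readable content metadata feeding a
# simple "more like this" lookup. Schema and titles are invented.
catalogue = [
    {"id": "p001", "title": "Riverside Lives", "genres": ["soap-opera", "drama"]},
    {"id": "p002", "title": "Night Shift",     "genres": ["documentary"]},
    {"id": "p003", "title": "Harbour Street",  "genres": ["soap-opera"]},
]

def similar_by_genre(seed_id: str, genre: str) -> list[str]:
    """Return titles sharing a genre label with the seed programme."""
    return [p["title"] for p in catalogue
            if genre in p["genres"] and p["id"] != seed_id]

print(similar_by_genre("p001", "soap-opera"))  # ['Harbour Street']
```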

However, public service media have developed their programming around the needs of individual channels and stations organised according to particular audiences and tastes (e.g. BBC Radio 1 is aimed at a younger audience around music, BBC Radio 4 at an older audience around speech content) or by a particular region (e.g. in Germany, Bayerischer Rundfunk serves Bavaria and WDR serves North Rhine-Westphalia, but both are members of ARD, the national consortium of regional public broadcasters). Each of these channels will have evolved its own protocols and systems and may label content differently – or not at all. This means the metadata to draw on for the deployment of recommendation systems is often sparse and low quality, and the metadata infrastructure is often disjointed and unsystematic.

We heard from many interviewees across public service media organisations that access to high-quality metadata was one of the most significant barriers to implementing recommendation systems. This was particularly an issue when they wanted to go beyond the most simplistic approaches and experiment with assigning public service value to pieces of content or measuring the diversity of recommended content.

Recommendation system projects often required months of setting up systems for data collection, then assessing and cleaning that data, before the primary work of building a recommendation system could begin. To achieve this requires a significant strategic and financial commitment on the part of the organisation, as well as buy-in from the editorial teams involved in labelling.

Evaluation of recommendation systems

We’ve explored the possible benefits and harms of recommendation systems, and how those benefits and harms might manifest in a public service media context. To try to understand whether and when those benefits and harms occur, developers of recommendation systems need to evaluate their systems. Conversely, looking at how developers and organisations evaluate their recommendation systems can tell us what benefits and harms, and to whom, they prioritise and optimise for in their work.[footnote]Evaluation of recommendation systems is not limited to the developers and deployers of those systems. Other stakeholders such as users, government, regulators, journalists and civil society organisations may all have their own goals for what they think a particular recommendation system should be optimising for. Here, however, we focus on evaluation as seen by the developer and deployer of the system, as this is where there is the tightest feedback loop between evaluation and changes to the system, and the developers and deployers generally have privileged access to information about the system and a unique ability to run tests and studies on the system. For more on how regulators (and others) can evaluate social media companies in an online-safety context, see: Ada Lovelace Institute. (2021). Technical methods for regulatory inspection of algorithmic systems. Available at: https://www.adalovelaceinstitute.org/report/technical-methods-regulatory-inspection/[/footnote]

In this chapter, we look at:

  • how recommendation systems can be evaluated
  • how public service media organisations evaluate their own recommendation systems
  • how evaluation might be done differently in future.

How recommendation systems are evaluated

In this section, we lay out a framework for understanding the evaluation of recommendation systems as a three-stage process of:

  1. Setting objectives.
  2. Identifying metrics.
  3. Selecting methods to measure those metrics.

This framework is informed by three aspects of evaluation (objectives, metrics and methods) as identified by Francesco Ricci, Professor of Computer Science at the Free University of Bozen-Bolzano.

Objectives

Evaluation is a process of determining how well a particular system achieves a particular set of goals or objectives. To evaluate a system, you need to know what goals you are evaluating against.[footnote]Interview with Francesco Ricci, Professor of Computer Science, Free University of Bozen-Bolzano (2021).[/footnote]

However, this is not a straightforward exercise. There is no singular goal for a recommendation system and different stakeholders will have different goals for the system. For example, on a privately-owned social media platform:

  • the engineering team’s goal might be to create a recommendation system that serves ‘relevant’ content to users
  • the CEO’s goal might be to maximise profit while minimising personal reputational risk
  • the audience’s goal may be to discover new and unexpected content (or just avoid boredom).

If a developer wants to take into account the goals of all the stakeholders in their evaluation, they will need to decide how to prioritise or weigh these different goals.

Balancing goals is ultimately a ‘political’ or ‘moral’ question, not a technical one, and there will never be a universal answer about how to weigh these different factors, or even who the relevant stakeholders whose goals should be weighted are.

Any process of evaluation ultimately needs a process to determine the relevant stakeholders for a recommendation system and how their priorities should be weighted.

This is made more difficult because people are often confused or uncertain about their goals, or have multiple competing goals, and so the process of evaluation will need to help people clarify their goals and their own internal weightings between those goals.[footnote]Interview with Francesco Ricci.[/footnote]

Metrics

Furthermore, goals are often quite general and whether they have been met cannot be directly observed.[footnote]Interview with Francesco Ricci, Professor of Computer Science, Free University of Bozen-Bolzano (2021).[/footnote] Therefore, once a goal has been decided, such as ‘relevance to the user’, the goal needs to be operationalised into a set of specific metrics to judge the recommendation system against.[footnote]Operationalising is a process of defining how a vague concept, which cannot be directly measured, can nevertheless be estimated by empirical measurement. This process inherently involves replacing one concept, such as ‘relevance’, with a proxy for that concept, such as ‘whether or not a user clicks on an item’ and thus will always involve some degree of error.[/footnote] These metrics can be quantitative, such as the number of users who click on an item, or qualitative, such as written feedback from users about how they feel about a set of recommendations.
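
As a minimal illustration of this operationalising step, the sketch below computes click-through rate as a quantitative proxy for ‘relevance to the user’. The interaction log format is an assumption made for the example, and the proxy carries the error inherent in any operationalisation.

```python
# Hypothetical sketch: operationalising the goal "relevance to the user"
# into a measurable proxy metric (click-through rate). Log format invented.
interactions = [
    {"user": "u1", "item": "a1", "shown": True, "clicked": True},
    {"user": "u1", "item": "a2", "shown": True, "clicked": False},
    {"user": "u2", "item": "a1", "shown": True, "clicked": True},
]

def click_through_rate(log: list[dict]) -> float:
    """CTR: clicks divided by impressions - a proxy, not relevance itself."""
    shown = [e for e in log if e["shown"]]
    return sum(e["clicked"] for e in shown) / len(shown) if shown else 0.0

print(f"CTR = {click_through_rate(interactions):.2f}")  # CTR = 0.67
```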

Whatever the metrics used, the choice of metrics is always a choice of a particular interpretation of the goal. The metric will always be a proxy for the goal, and determining a proxy is a political act that grants power to the evaluator to decide what metrics reflect their view of the problem to be solved and the goals to be achieved.[footnote]Beer, D. (2016). Metric Power. London: Palgrave Macmillan. Available at: https://doi.org/10.1057/978-1-137-55649-3[/footnote]

The people who define these metrics for the recommendation system are often the engineering or product teams. However, these teams are not always the same people who set the goals of an organisation. Furthermore, they may not directly interact with other stakeholders who have a role in setting the goals of the organisation or the goal of deploying the recommendation system.

Therefore, through misunderstanding, lack of knowledge or lack of engagement with others’ views, the engineering and product teams’ interpretation of the goal will likely never quite match the intention of the goal as envisioned by others.

Metrics will also always be a simplified vision of reality, summarising individual interactions with the recommendation system into a smaller set of numbers, scores or lines of feedback.[footnote]Raji, I. D., Bender, E. M., Paullada, A. et al. (2021). ‘AI and the Everything in the Whole Wide World Benchmark’, p2. arXiv. Available at: https://doi.org/10.48550/arXiv.2111.15366[/footnote] This does not mean metrics cannot be useful indicators of real performance; this very simplicity is what makes them useful in understanding the performance of the system. However, those creating the metrics need to be careful not to confuse the constructed metric with the reality underlying the interactions of people with the recommendation system. The metric is a measure of the interaction, not the interaction itself.

Methods

Evaluation is then the process of measuring these metrics for a particular recommendation system in a particular context, which requires gathering data about the system’s performance. Recommendation systems are evaluated in three main ways:[footnote]Gunawardana, A. and Shani, G. (2015). ‘Evaluating Recommender Systems’. Recommender Systems Handbook, pp. 257–297. Available at: https://doi.org/10.1007/978-0-387-85820-3_8[/footnote]

  1. Offline evaluations test recommendation systems without real users interacting with the system, for example by measuring recommendation system performance on historical user interaction data or in a synthetic environment with simulated users.
  2. User studies test recommendation systems against a small set of users in a controlled environment with the users being asked to interact with the system and then typically provide explicit feedback about their experience afterwards.
  3. Online evaluations test recommendation systems deployed in a live environment, where the performance of the recommendation system is measured against interactions with real users.

These methods of evaluation are not mutually exclusive and a recommendation system might be tested with each method sequentially, as it moves from design to development to deployment.

Offline evaluation has been a historically popular way to evaluate recommendation systems. It is comparatively easy to do, due to the lack of interaction with real users or a live platform. In principle, offline evaluations are reproducible by other evaluators, and allow standardised comparison of the results of different recommendation systems.[footnote]Jannach, D. and Jugovac, M. (2019), ‘Measuring the Business Value of Recommender Systems’. ACM Transactions on Management Information Systems, 10(4), pp. 1–23. Available at: https://doi.org/10.1145/3370082[/footnote]
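
A minimal sketch of such an offline evaluation is shown below: held-out historical interactions are replayed against the system under test, without any real users involved. The data and stand-in recommender are invented for illustration.

```python
# Hypothetical sketch of offline evaluation: score a recommender against
# held-out historical interactions. Data and recommender are toy stand-ins.
historical_views = {"u1": ["a1", "a3"], "u2": ["a2"]}  # held-out test views

def toy_recommender(user: str, k: int = 2) -> list[str]:
    """Stand-in for the system under test; a real one would be personalised."""
    return ["a1", "a2"][:k]

def hit_rate(test_set: dict[str, list[str]], k: int = 2) -> float:
    """Fraction of users for whom a held-out item appears in their top-k."""
    hits = sum(
        any(item in toy_recommender(user, k) for item in viewed)
        for user, viewed in test_set.items()
    )
    return hits / len(test_set)

print(f"hit rate @2 = {hit_rate(historical_views):.2f}")  # 1.00 on this toy data
```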

However, there is increasing concern that offline evaluation results based on historical interaction data do not translate well into real-world recommendation system performance. This is because the training data is based on a world without the new recommendation system in it, and evaluations therefore cannot account for how that system might itself shift wider aspects of the service like user preferences.[footnote]Rohde, D., Bonner, S., Dunlop, T., et al. (2018). ‘RecoGym: A Reinforcement Learning Environment for the problem of Product Recommendation in Online Advertising’. arXiv. Available at: https://doi.org/10.48550/arXiv.1808.00720; Beel, J. and Langer, S. (2015)., ‘A Comparison of Offline Evaluations, Online Evaluations, and User Studies in the Context of Research-Paper Recommender Systems’. Proceedings of the 19th International Conference on Theory and Practice of Digital Libraries (TPDL), pp.153-168. Available at: doi: 10.1007/978-3-319-24592-8_12; Jannach, D., Pu, P., Ricci, F. and Zanker, M. (2021). ‘Recommender Systems: Past, Present, Future’. AI Magazine, 42 (3). Available at: https://doi.org/10.1609/aimag.v42i3.18139[/footnote] This limits their usefulness in evaluating which recommendation system would actually be the best performing in the dynamic live environments most stakeholders are interested in, such as a video-sharing website with an ever-growing set of videos and ever-changing set of viewers and content creators.

Academics we spoke to in the field of recommendation systems identified user studies in labs and simulations as the state of the art in academic recommendation system evaluation, whereas in industry common practice is to use online evaluation via A/B testing to optimise key performance indicators.[footnote]Interview with Dietmar Jannach, Professor, University of Klagenfurt (2021).[/footnote]

How do public service media evaluate their recommendation systems?

In this section, we use the framework of objectives, metrics and methods to examine how public service media organisations evaluate their recommendation systems in practice.

Objectives

As we discussed in the previous chapter, recommendation systems are ultimately developed and deployed to serve the goals of the organisation using them; in this case, public service media organisations. In practice, however, the objectives that recommendation systems are evaluated against are often multiple levels of operationalisation and contextualisation down from the overarching public service values of the organisation.

For example, as discussed previously, the BBC Charter agreement sets out the mission and public purposes of the organisation for the following decade. These are derived from the public service values, but are also shaped by political pressures as the Charter is negotiated with the British Government of the time.

The BBC then publishes an annual plan setting out the organisation’s strategic priorities for that year, drawing explicitly on the Charter’s mission and purposes. These annual plans are equally shaped by political pressures, regulatory constraints and challenges from commercial providers. The plan also sets out how each product and service will contribute towards meeting those strategic priorities and purposes, setting the goals for each of the product teams.

For example, the goals of BBC Sounds as a product team in 2021 were to:

  1. Increase the audience size of BBC Sounds’ digital products.
  2. Increase the demographic breadth of consumption across BBC Sounds’ products, especially among the young.
  3. Convert ‘lighter users’ into regular users.
  4. Enable users to more easily discover content from the more than 50 hours of new audio produced by the BBC on an hourly basis.[footnote]According to David Jones (Executive Product Manager, BBC Sounds, interviewed in 2021), his top-line KPI is to reach 900,000 members of the British population who are under 35 by March 2022. These numbers are determined centrally by BBC senior managers based on the BBC’s Service Licence for BBC Online and Red Button. See: BBC Trust. (2016). BBC Online and Red Button Service Licence. Available at: http://downloads.bbc.co.uk/bbctrust/assets/files/pdf/regulatory_framework/service_licences/online/2016/online_red_button_may16.pdf[/footnote]

These objectives map onto the goals for using recommendation systems we discussed in the previous chapter. Specifically, the first three relate to capturing audience attention and the fourth relates to reducing information overload and improving discoverability for audiences.

These product goals then inform the objectives of the engineering and product teams in the development and deployment of a recommendation system, as a feature within the wider product.

At each stage, as the higher level objectives are interpreted and contextualised lower down, they may not always align with each other.

The objectives for the development and deployment of recommendation systems in public service media seem clearest for entertainment products, e.g. audio-on-demand and video-on-demand. Here, the goal of the system is clearly articulated as a combination of audience engagement, reach among underserved demographics and serving more diverse content. These are often explicitly linked by the development teams to achieving the public service values of diversity and a personalised version of universality, which they see as serving the needs of each and every group in society.

In these cases, public service media organisations seem better at articulating goals for recommendation systems when using them for similar purposes to private-sector commercial media organisations. This seems, in part, to be because there is greater existing knowledge of how to operationalise those objectives, and developers can draw on their own private-sector experience and existing industry practice, open-source libraries and similar resources.

However, when setting objectives that focus more on public service value, public service media organisations often seem less clear about the goals of the recommendation system within the wider product.

This seems partly because, in the domain of news for example, the use of recommendation systems by public service media is more experimental and at an earlier stage of maturity. Here, motivations diverge further from those of commercial providers, with public service media developers seemingly seeking to augment existing editorial capabilities with a recommendation system rather than to drive engagement with news content. This means public service media developers have fewer existing practices and resources to draw on when translating product goals and articulating recommendation system objectives in these domains.

In general, it seems that some public service values, such as diversity and universality, are easier to operationalise in the context of recommendation systems than others. These values get privileged over others, such as accountability, in the development of recommendation systems, as they are the easiest to translate from the overarching set of organisational values down to the product and feature objectives.

Metrics

Public service media organisations have struggled to operationalise their complex public service values into specific metrics. There seem to be three broad responses to this:

  1. Fall back on established engagement metrics, e.g. click-through rate and watch time, often with additional quantitative measures of the diversity of audience content consumption.
  2. The above approach combined with attempts to create crude numerical measures (e.g. a score from 1 to 5) of ‘public service value’ for pieces of content, often reducing complex values to a single number subjectively judged by journalists, then measuring the consumption of content with a ‘high’ public service value score.
  3. Try to indirectly optimise for public service value by making their metrics the satisfaction of editorial stakeholders, whose preferences are seen as the best ‘ground truth’ proxy for public service value. Then optimise for lists of recommendations which are seen to have high public service value by editorial stakeholders.

Karin van Es found that, as of 2017, the European Broadcasting Union and the Dutch public service media organisation NPO evaluated pilot algorithms using the same metrics found in commercial systems i.e. stream starts and average‐minute ratings.[footnote]van Es, K. F. (2017). ‘An Impending Crisis of Imagination : Data‐Driven Personalization in Public Service Broadcasters’. Media@LSE. Available at: https://dspace.library.uu.nl/handle/1874/358206[/footnote] As van Es notes, these metrics are a proxy for audience retention and even if serving diverse content was an explicit goal in designing the system, the chosen metrics reflect – and will ultimately lead to – a focus on engagement over diversity.

Therefore, despite different stated goals, the public service media use of recommendation systems ends up optimising for similar outcomes as private providers.

By now, most public service media organisations using recommendation systems also have explicit metrics for diversity, although there is no single shared definition of diversity across the different organisations, nor is there one single metric used to measure the concept.

However, most quantitative metrics for diversity in the evaluation of public service media recommendation systems focus on diversity in terms of audience exposure to unique pieces of content or to categories of content, rather than on the representation of demographic groups and viewpoints across the content audiences are exposed to.[footnote]This was generally attributed by interviewees to a combination of a lack of metadata to measure the representativeness within content and assumption that issues of representation within content were better dealt with at the point at which content is commissioned, so that the recommendation systems have diverse and representative content over which to recommend.[/footnote]

Some aspects of diversity, as Hildén observes, are easier to define and ‘to incorporate into a recommender system than others. For example, genres and themes are easy to determine at least on a general level, but questions of demographic representation and the diversity of ideas and viewpoints are far more difficult as they require quite detailed content tags in order to work. Tagging content and attributing these tags to users might also be politically sensitive especially within the context of news recommenders’.[footnote]Hildén, J. (2021). ‘The Public Service Approach to Recommender Systems: Filtering to Cultivate’. Television & New Media, 23(7). Available at: https://doi.org/10.1177/15274764211020106[/footnote]

Commonly used metrics for diversity include intra-list diversity, i.e. the average difference between each pair of items in a list of recommendations, and inter-list diversity, i.e. the ratio of unique items recommended to total items recommended across all the lists of recommendations.
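
A minimal sketch of these two metrics is shown below, using an invented pairwise distance over genre tags; real systems would typically use embedding distances or richer metadata.

```python
# Sketch of the two diversity metrics described above. The pairwise
# distance (1 - Jaccard similarity of genre tags) is a toy assumption.
from itertools import combinations

def pairwise_distance(a: set[str], b: set[str]) -> float:
    """Toy distance: 1 - Jaccard similarity of genre tags."""
    return 1 - len(a & b) / len(a | b)

def intra_list_diversity(items: list[set[str]]) -> float:
    """Average distance between each pair of items in one list."""
    pairs = list(combinations(items, 2))
    return sum(pairwise_distance(a, b) for a, b in pairs) / len(pairs)

def inter_list_diversity(all_lists: list[list[str]]) -> float:
    """Ratio of unique items recommended to total items across all lists."""
    flat = [item for lst in all_lists for item in lst]
    return len(set(flat)) / len(flat)

print(intra_list_diversity([{"news"}, {"sport"}, {"news", "sport"}]))  # 0.67
print(inter_list_diversity([["a1", "a2"], ["a1", "a3"]]))  # 0.75 (3 of 4 unique)
```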

Some public service media organisations are experimenting with more complex measures of exposure diversity. For example, Koen Muylaert at Belgian VRT explained how they measure an ‘affinity score’ for each user for each category of content, e.g. your affinity with documentaries or with comedy shows, which increases as you watch more pieces of content in that category.[footnote]Interview with Koen Muylaert, Project Lead, VRT data platform and data science initiative, Vlaamse Radio- en Televisieomroeporganisatie (VRT) (2021).[/footnote] VRT then measures the diversity of content that each user consumes by looking at the difference between a user’s affinity scores for different categories.[footnote]By measuring the entropy of the distribution of affinity scores across categories, and trying to improve diversity by increasing that entropy.[/footnote] VRT see this method of measuring diversity as valuable because they can explain it to others and measure it across users over time, to track how new iterations of their recommendation system increase users’ exposure to diverse content.
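
A minimal sketch of this entropy-based measure, assuming VRT-style per-category affinity scores, is shown below; the scores themselves are invented for the example.

```python
# Sketch of measuring consumption diversity as the Shannon entropy of a
# user's normalised affinity scores, as described in the footnote above.
import math

def affinity_entropy(affinities: dict[str, float]) -> float:
    """Entropy of normalised affinity scores; higher = more diverse."""
    total = sum(affinities.values())
    probs = [v / total for v in affinities.values() if v > 0]
    return -sum(p * math.log2(p) for p in probs)

focused = {"comedy": 9.0, "documentary": 0.5, "news": 0.5}
broad   = {"comedy": 3.0, "documentary": 3.5, "news": 3.5}
print(f"{affinity_entropy(focused):.2f} < {affinity_entropy(broad):.2f}")
# 0.57 < 1.58: the broader viewer's consumption has higher entropy
```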

To improve on this, some public service media organisations have tried to implement ‘public service value’ as an explicit metric in evaluating their recommendation systems. NPO, for example, ask a panel of 1,500 experts and ordinary citizens to assess the public value of each piece of content, including the diversity of actors and viewpoints represented in the content, and then ask those panellists to assign a single ‘public value’ from 1 to 100 to all pieces of content on their on-demand platform. They then calculate an average ‘public value’ score for the consumption history of each user. According to Sara van der Land, Digital Innovation Advisor at NPO, their target is to make sure that the average ‘public value’ score of every user rises over time.[footnote]Interview with Arno van Rijswijk, Head of Data & Personalization, and Sarah van der Land, Digital Innovation Advisor, Nederlandse Publieke Omroep (2021).[/footnote]
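
A minimal sketch of this kind of tracking is shown below; the panel scores and viewing histories are invented for illustration.

```python
# Sketch of NPO-style tracking: average the panel-assigned 'public value'
# (1-100) over each user's consumption history. All values invented.
panel_scores = {"doc1": 85, "quiz1": 40, "drama1": 60}

def avg_public_value(history: list[str]) -> float:
    """Mean public value score of a user's watched items."""
    return sum(panel_scores[item] for item in history) / len(history)

print(avg_public_value(["doc1", "quiz1"]))   # 62.5
print(avg_public_value(["doc1", "drama1"]))  # 72.5 - the target is a rising average
```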

At the moment, NPO is only optimising for that metric within a specific ‘public value’ recommendations section of its wider on-demand platform, which mixes recommendations based on user engagement with the ‘public value’ of the content. However, through experiments, NPO found there was a trade-off between optimising for ‘public value’ and viewership, as noted by Arno van Rijswijk, Head of Data & Personalization at NPO:

‘When we’re focusing too much on the public value, we see that the percentage of people that are watching the actual content from the recommender is way lower than when you’re using only the collaborative filtering algorithm […] So when you are focusing more on the relevance then people are willing to watch it. And when you’re adding too much weight on the public values, people are not willing to watch it anymore.’

This resulted in NPO choosing a ‘low ratio’ of public value content to engaging content, making explicit the trade-off that public service media organisations often face between audience retention and other public service values like diversity, at least over the short term that these metrics measure.

Others, when faced with the inadequacy of conventional engagement and diversity metrics, have tried to indirectly optimise for public service value by making their metrics the satisfaction of editorial stakeholders, whose preferences are seen as the best ‘ground truth’ proxy for public service value.

In the early stages of developing an article-to-article news recommendation system in 2018,[footnote]The Datalab team was experimenting with and evaluating a number of approaches using a combination of content and user interaction data, such as neural network approaches that combine both content and user data as well as collaborative filtering models based only on user interactions.[/footnote] the BBC Datalab initially used a number of quantitative metrics for its offline evaluation.[footnote]Panteli, M., Piscopo, A., Harland, A., Tutcher, J. and Moss, F. M. (2019). ‘Recommendation systems for news articles at the BBC’, p. 4. CEUR Workshop Proceedings. Available at: http://ceur-ws.org/Vol-2554/paper_07.pdf[/footnote]

They evaluated these using offline metrics, with proxies for engagement, diversity and relevance to audiences, including:

  • hit rate, i.e. whether the list of recommended articles includes an article a user did in fact view within 30 minutes of viewing the original article
  • normalised discounted cumulative gain, i.e. how relevant the recommended articles were assumed to be to the user, with a higher weighting for the relevance of articles higher up in the list of recommendations
  • intra-list diversity, i.e. the average difference between every pair of articles in a list of recommendations
  • inter-list diversity, i.e. the ratio of unique articles recommended to total articles recommended across all the lists of recommendations
  • popularity-based surprisal, i.e. how novel the articles recommended were
  • recency, i.e. how old the articles recommended were when shown to the user.
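
As one concrete example from this list, normalised discounted cumulative gain can be sketched as below; the graded relevance judgements are invented for the example.

```python
# Sketch of normalised discounted cumulative gain (NDCG) for one list of
# recommendations, with invented graded relevance judgements.
import math

def dcg(relevances: list[float]) -> float:
    """Discounted cumulative gain: lower-ranked positions are down-weighted."""
    return sum(rel / math.log2(pos + 2) for pos, rel in enumerate(relevances))

def ndcg(relevances: list[float]) -> float:
    """DCG normalised by the best possible ordering of the same items."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Ranking the most relevant article third rather than first lowers the score.
print(f"{ndcg([0, 1, 3]):.2f} vs {ndcg([3, 1, 0]):.2f}")  # 0.59 vs 1.00
```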

However, they found that performance on these metrics didn’t match the editorial teams’ priorities. When they instead tried to operationalise what public service value meant to the editors, existing quantitative metrics were unable to capture editorial preferences, and creating new ones was not straightforward. As Alessandro Piscopo, Principal Data Scientist, BBC Datalab, notes:[footnote]Interview with Alessandro Piscopo, Principal Data Scientist, BBC Datalab (2021).[/footnote]

‘We did notice that in some cases, one of the recommender prototypes was going higher in some metrics and went to editorial and [they would] say well we just didn’t like it […] Sometimes it was just comments from editorial world, we want to see more depth. We want to see more breadth. Then you have to interpret what that means.’

This difficulty in finding appropriate metrics led to the Datalab team changing their primary method of evaluation, from offline evaluation to user studies with BBC editorial staff, which they called ‘subjective evaluation’.[footnote]Piscopo, A. (2021). ‘Building public service recommenders: Logbook of a journey’ [presentation recording]. The Academic Fringe Festival. Available at: https://www.youtube.com/watch?v=Q2EYAxX5Pnk[/footnote]

In this approach, they asked editorial staff to score each list of articles generated by the recommendation systems as either unacceptable, inappropriate, satisfactory or appropriate. The editors were then prompted to describe what properties they considered in choosing how appropriate the recommendations were. The development team would then iterate the recommendation system based on the scoring and written feedback, along with discussions with editorial staff about the recommendations.

Early in the process, the Datalab team agreed with editorial what percentage of each grade they were aiming for, and so what would be a benchmark for success in creating a good recommendation system. In this case, the editorial team decided that they wanted:[footnote]Piscopo, A. (2021); Interview with Alessandro Piscopo, Principal Data Scientist, BBC Datalab (2021).[/footnote]

  1. No unacceptable recommendations, on the basis that any unacceptable recommendations would be detrimental to the reputation of the BBC.
  2. Maximum 10% inappropriate recommendations.

This change of metrics meant that the evaluation of the recommendation system, and the iteration of the system as a result, was optimising for the preferences of the editorial team, over imperfect measures of audience engagement, relevance and diversity. The editors are seen as the most reliable ‘source of truth’ for public service value, in lieu of better quantitative metrics.
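
A minimal sketch of how such a grade-distribution benchmark might be checked programmatically is shown below; the grades themselves are invented for the example.

```python
# Sketch of checking editorial grades against the agreed benchmark:
# no 'unacceptable' lists and at most 10% 'inappropriate'. Grades invented.
from collections import Counter

grades = ["appropriate", "satisfactory", "inappropriate", "appropriate",
          "satisfactory", "appropriate", "appropriate", "satisfactory",
          "appropriate", "appropriate"]

def meets_benchmark(grades: list[str]) -> bool:
    counts = Counter(grades)
    no_unacceptable = counts["unacceptable"] == 0
    inappropriate_share = counts["inappropriate"] / len(grades)
    return no_unacceptable and inappropriate_share <= 0.10

print(meets_benchmark(grades))  # True: 0 unacceptable, 10% inappropriate
```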

Methods

Public service media often rely on internal user studies with their own staff as an evaluation method during the pre-deployment stage of recommendation system development. For example, Greg Detre, ex-Chief Data Scientist at Channel 4, said that when developing a recommendation system for All 4 in 2016, they would ask staff to subjectively compare the output of two recommendation systems side by side, based on the staff’s understanding of Channel 4’s values:

‘So we’re making our recommendations algorithms fight, “Robot Wars” style, pick the one that you think […] understood this view of the best, good recommendations are relevant and interesting to the viewer. Great recommendations go beyond the obvious. Let’s throw in something a little unexpected, or showcase the Born Risky programming that we’re most proud of, [clicking the] prefer button next to the […]one you like best […] Born Risky, which was one of the kind of Channel Four cultural values for like, basically being a bit cheeky. Going beyond the mainstream, taking a chance. It was one of, I think, a handful of company values.’[footnote]Interview with Greg Detre, ex-Chief Data Scientist, Channel 4 (2021).[/footnote]

Similarly, when developing a recommendation system for BBC Sounds, the BBC Datalab decided to use a process of qualitative evaluation. BBC Sounds uses a factorisation machine approach, which is a mixture of content matching and collaborative filtering. This uses your listening history, metadata about the content and other users’ listening history to make recommendations in two ways:

  1. It recommends items that have similar metadata to items you have already listened to.
  2. It recommends items that have been listened to by people with otherwise similar listening histories.
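
A minimal sketch of the factorisation machine scoring function that underlies this kind of hybrid approach is shown below. The weights are random stand-ins; in a real system they are learned from interaction data, and the feature encoding is an assumption made for the example.

```python
# Sketch of a factorisation machine score: pairwise feature interactions are
# modelled through learned latent vectors, letting user-history features and
# content-metadata features interact. Weights here are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_features, k = 6, 3                   # e.g. one-hot user, item, genre features
w0 = 0.1                               # global bias
w = rng.normal(size=n_features)        # per-feature linear weights
V = rng.normal(size=(n_features, k))   # latent factor vector per feature

def fm_score(x: np.ndarray) -> float:
    """Factorisation machine: bias + linear terms + pairwise interactions."""
    linear = w @ x
    # O(n*k) identity for the sum over all pairs <v_i, v_j> * x_i * x_j
    interactions = 0.5 * np.sum((V.T @ x) ** 2 - (V.T ** 2) @ (x ** 2))
    return w0 + linear + interactions

x = np.array([1, 0, 0, 1, 0, 1.0])  # active features: this user, item, genre
print(f"predicted affinity = {fm_score(x):.3f}")
```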

When evaluating this approach, the BBC compared the new factorisation machine recommendation system head-to-head with the existing external provider’s recommendations.

They recruited 30 BBC staff members under the age of 35 to be test users.[footnote]Al-Chueyr Martins, T. (2021). ‘From an idea to production: the journey of a recommendation engine’ [presentation recording]. MLOps London. Available at: https://www.youtube.com/watch?v=dFXKJZNVgw4[/footnote] They then showed these test users two sets of nine recommendations side by side. One set was provided by the current external provider’s recommendation system, and the other set was provided by the team’s internal factorisation machine recommendation system. The users were not told which system had produced which set of recommendations, and had to choose whether they preferred ‘A’ or ‘B’, ‘both’ or ‘neither’, and then explain their decision in words.

Over 60% of test users preferred the recommendation sets provided by the internal factorisation machine.[footnote]Al-Chueyr Martins, T. (2021).[/footnote] This convinced the stakeholders that the system should move into production and A/B testing, and helped editorial teams get hands-on experience evaluating automated curations, increasing their confidence in the recommendation system.
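As a rough illustration of how the results of such a side-by-side preference study might be tallied and sanity-checked, consider the sketch below. The response counts are invented (chosen only to be consistent with the reported headline figure of ‘over 60%’), and the exact-binomial check is a generic statistical device, not a description of the BBC’s analysis.

```python
from collections import Counter
from math import comb

# Hypothetical responses from a side-by-side study: each of 30 test users
# picked 'A' (internal system), 'B' (external provider), 'both' or 'neither'.
# These counts are illustrative only.
responses = ["A"] * 19 + ["B"] * 6 + ["both"] * 3 + ["neither"] * 2

tally = Counter(responses)
n = len(responses)
print({choice: f"{count / n:.0%}" for choice, count in tally.items()})

# One-sided exact binomial test on the decisive responses only: under the
# null hypothesis, a decisive user prefers either system with probability 0.5.
a, b = tally["A"], tally["B"]
p_value = sum(comb(a + b, k) for k in range(a, a + b + 1)) / 2 ** (a + b)
print(f"p-value for a genuine preference for A: {p_value:.4f}")
```

With 30 participants, a check like this helps distinguish a genuine preference from noise before committing to production and A/B testing.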

Similarly, when later deploying the recommendation system to create a personalised sorting system for featured items, the Datalab team held a number of digital meetings with editorial staff, showing them the personalised and non-personalised featured items side by side. The Datalab then got feedback from the editors on which they preferred.[footnote]Interview with Alessandro Piscopo, Principal Data Scientist, BBC Datalab (2021).[/footnote] This approach allowed them to capture internal staff preferences more directly and to manually step towards meeting those preferences. However, the team acknowledged its limitations upfront, particularly in terms of scale.[footnote]Interview with Alessandro Piscopo.[/footnote] Editorial teams and other internal staff only have so much capacity to judge recommendations, and would struggle to assess every edge case, particularly if every recommendation changed depending on the demographics of the audience member viewing it.

Once a recommendation system is deployed to a live environment, i.e. accessible by audiences on a website or app, public service media all have some form of online evaluation in place, most commonly A/B testing, in which different groups of users are served recommendations from different versions of the system and their behaviour is compared.

Channel 4 used online evaluation in the form of A/B testing to evaluate the recommendation system used by its video-on-demand service, All 4. Greg Detre noted that:

‘We did A/B test it eventually. And it didn’t show a significant effect. That said [Channel 4] had an already somewhat good system in place. That was okay. And we were very constrained in terms of the technical solutions that we were allowed, there were only a very, very limited number of algorithms that we were able to implement, given the constraints that have already been agreed when I got there. And so as a result, the solution we came up with was, you know, efficient in terms of it was fast to compute in real time, and easy to sort of deploy, but it wasn’t that great… I think perhaps it didn’t create that much value.’[footnote]Interview with Greg Detre, ex-Chief Data Scientist, Channel 4 (2021).[/footnote]

BBC Datalab also used A/B testing in combination with continued user studies and behavioural testing. By April/May 2020, editorial had given sign-off and the recommendation system was deemed ready for initial deployment.[footnote]Piscopo, A. (2021). ‘Building public service recommenders: Logbook of a journey’ [presentation recording]. The Academic Fringe Festival. Available at: https://www.youtube.com/watch?v=Q2EYAxX5Pnk[/footnote]

During deployment, the team took a ‘failsafe approach’ with weekly monitoring of the live version of the recommendation system by editorial staff. This included further subjective evaluation described above and behavioural tests. In these behavioural tests, developers use a list of pairs of inputs and desired outputs, comparing the output of the recommendation system with the desired output for each given input.[footnote]See: BBC. RecList. GitHub. Available at: https://github.com/bbc/datalab-reclist; Tagliabue, J. (2022). ‘NDCG Is Not All You Need’. Towards Data Science. Available at: https://towardsdatascience.com/ndcg-is-not-all-you-need-24eb6d2f1227[/footnote]
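To make the behavioural-testing pattern concrete, here is a minimal sketch in the spirit of the approach described above (not the BBC’s actual RecList code); the `recommend` function and the item identifiers are assumptions for illustration.

```python
# Each case pairs an input item with items we expect to appear among its
# recommendations; identifiers here are purely illustrative.
behavioural_cases = [
    ("news/uk-budget-coverage", {"news/uk-economy-analysis"}),
    ("sounds/history-podcast-ep1", {"sounds/history-podcast-ep2"}),
]

def run_behavioural_tests(recommend, cases, k=4):
    """Return the cases where an expected item is missing from the top-k."""
    failures = []
    for input_item, expected in cases:
        recommended = set(recommend(input_item, k=k))
        missing = expected - recommended
        if missing:
            failures.append((input_item, missing))
    return failures

# Usage (assuming some recommender object exists):
# failures = run_behavioural_tests(my_recommender.recommend, behavioural_cases)
```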

After deployment, there was still a need to understand the effect and success of the recommendation systems. This took the form of A/B testing the live system, including measuring the click-through rate on the recommended articles. However, members of the development team noted that click-through rate was only a rough proxy for user satisfaction and were working to move beyond it.
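For intuition about what such an A/B test involves, a two-proportion z-test is one standard way to check whether a click-through-rate difference between two arms is more than noise. This is a generic statistical sketch with made-up traffic numbers, not the BBC’s pipeline.

```python
from math import sqrt
from statistics import NormalDist

def ab_ctr_z_test(clicks_a, views_a, clicks_b, views_b):
    """Two-proportion z-test comparing click-through rates of arms A and B."""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
    return p_a, p_b, z, p_value

# Example with invented traffic: arm B's CTR is higher; the p-value indicates
# how plausibly that gap could have arisen by chance.
print(ab_ctr_z_test(clicks_a=480, views_a=10_000, clicks_b=540, views_b=10_000))
```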

Ultimately, at the post-deployment stage, the success of the recommendation system is determined by the product teams, with input from development teams in identifying appropriate metrics. Editorial considerations are central to how product teams decide which metrics they are best suited to evaluate against.[footnote]Interview with Alessandro Piscopo, Principal Data Scientist, BBC Datalab (2021).[/footnote]

Once the system reaches the stage of online evaluation, these methods can only tell public service media whether the recommendation system was worthwhile after it has already been built, once the time and resources required to build it have been spent. The evaluation therefore becomes about whether to continue to use and maintain the system, given the operating costs versus the costs involved in removing or replacing it. This can mean that even systems that provide only limited value to the audience or to the public service media organisation will remain in use in this phase of evaluation.

How could evaluations be done differently?

In this section, we explore how the objectives, metrics and methods for evaluating recommendation systems could be done differently by public service media organisations.

Objectives

Some public service media organisations could benefit from more explicitly drawing a connection from their public service values to the organisational and product goals and finally to the recommendation system itself, showing how each level links to the next. This can help prevent value drift as goals go through several levels of interpretation and operationalisation, and help contextualise the role of the recommendation system in achieving public value within the wider process of content delivery.

More explicitly connecting these objectives can help organisations to recognise that, while a product as a whole should achieve public service objectives, a recommendation system doesn’t need to achieve every objective in isolation. While a recommendation system’s objectives should not be in conflict with the higher level objectives, they may only need to achieve some of those goals (e.g. its primary purpose might be to attract and engage younger audiences and thus promote diversity and universality). Therefore, its contribution to the product and organisational objectives should be seen in the context of the overall audience experience and the totality of the content an individual user interacts with. Evaluating against the recommendation system’s feature-level objectives alone is not enough to know whether a recommendation system is also consistent with product and organisational objectives.

Audience involvement in goal-setting

Another area worthy of further exploration is providing greater audience input into, and control over, the objectives and therefore the initial system design choices. This could involve eliciting individual preferences from a panel of audience members and then working with staff to collaboratively trade off and explicitly set different weightings for the different objectives of the system. This should take place as part of a broader co-design approach at the product level, because the evaluation process should retain the option of concluding that a recommendation system is not the most appropriate tool for achieving the product’s higher-level objectives and the outcomes that staff and audiences want, rather than constraining audiences to simply choose between different versions of a recommendation system.

Making safeguards an explicit objective in system evaluation

A final area worthy of exploration is building in system safeguards like accountability, transparency and interpretability as explicit objectives in the development of the system, rather than just as additional governance considerations. Some interviewees suggested making considerations such as interpretability a specific objective in evaluating recommendation systems. By explicitly weighing those considerations against other objectives and attempting to measure the degree of interpretability or transparency, it would ensure greater salience of those safeguards in the selection of systems.[footnote]Interview with Greg Detre, ex-Chief Data Scientist, Channel 4 (2021).[/footnote]

Metrics

More nuanced metrics for public service value

If public service media organisations want to move beyond optimising for a mix of engagement and exposure diversity in their recommendation systems, then they will need to develop better metrics to measure public service value. As we’ve seen above, some are already moving in this direction with varying degrees of success, but more experimentation and learning will be required.

When creating metrics for public service value, it will be important to disambiguate between different meanings of ‘public service value’. A public service media organisation cannot expect to have one quantitative measure of ‘public service value’, which conflates a number of priorities that can be in tension with one another.

One approach would be to explicitly break each public service value down into separate metrics for universality, independence, excellence, diversity, accountability and innovation, and most likely sub-values within those. This could help public service media developers to clearly articulate the components of each value and make it explicit how they are weighted against each other. However, quantifying concepts like accountability and independence can be challenging to do, and this approach may struggle to work in practice. More experimentation is needed.
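To make the mechanics of this approach concrete, a minimal sketch might score a candidate system against separate per-value metrics with explicit weights. All metric names, weights and scores below are illustrative assumptions; the hard part, deliberately left out here, is producing credible per-value numbers in the first place.

```python
# Illustrative weights over the six public service values named above; in
# practice these would be set (and contested) by editorial and product teams.
WEIGHTS = {
    "universality": 0.25, "independence": 0.10, "excellence": 0.20,
    "diversity": 0.25, "accountability": 0.10, "innovation": 0.10,
}

def public_service_score(metrics):
    """Weighted sum, assuming each metric is normalised to [0, 1]."""
    return sum(WEIGHTS[name] * metrics[name] for name in WEIGHTS)

# A hypothetical candidate system's per-value scores.
candidate_system = {
    "universality": 0.7, "independence": 0.9, "excellence": 0.6,
    "diversity": 0.5, "accountability": 0.4, "innovation": 0.3,
}
print(round(public_service_score(candidate_system), 3))  # 0.58
```

Writing the weights down like this at least makes the trade-offs between values explicit and auditable, even if the underlying quantification remains contestable.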

The most promising approach may be to adopt more subjective evaluations of recommendation systems. This approach recognises that ‘public service value’ is going to be inherently subjective and uses metrics which reflect that. Qualitative metrics based on feedback from individuals interacting with the recommendation system can let developers balance the tensions between different aspects of public service value. This places less of a burden on developers to weight those values themselves, which they might be poorly suited to, and can accommodate different conceptions of public service value from different stakeholders.

However, subjective evaluations do have their limits. They are only able to evaluate a tiny subset of the overall recommendations, and will only capture the subjective evaluation of features appearing in that subset. These evaluations may miss features that were not present in the content evaluated, or which can only be observed in aggregate over some wider set of recommendations. These challenges can be mitigated by broadening subjective evaluations to a more representative sample of the public, but that may raise other challenges around the costs of running these evaluations at that scale.

More specific metrics

In a related way, evaluation metrics could be improved by greater specificity and explicitness about what concept the metric is trying to measure and therefore explicitness about how different interpretations of the same high-level concept are weighted.[footnote]van Es, K. F. (2017). ‘An Impending Crisis of Imagination : Data‐Driven Personalization in Public Service Broadcasters’. Media@LSE. Available at: https://dspace.library.uu.nl/handle/1874/358206[/footnote] In particular, public service media organisations could be more explicit about the kind of diversity they want to optimise, e.g. unique content viewed, the balance of categories viewed or the representation of demographics and viewpoints across recommendations, and whether they care about each individual’s exposure or exposure across all users.
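To illustrate why this specificity matters, the sketch below computes one common reading of exposure diversity (Shannon entropy over content categories) and shows how per-user and aggregate figures can diverge. The data and category labels are invented for illustration.

```python
import math
from collections import Counter

def exposure_entropy(categories):
    """Shannon entropy (in bits) of the category mix in a list of exposures."""
    counts = Counter(categories)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Each user individually sees only one category: 0 bits of diversity each...
users = [["news"] * 6, ["sport"] * 6, ["drama"] * 6]
print([exposure_entropy(u) for u in users])

# ...yet exposure pooled across all users looks maximally diverse (~1.585 bits
# for three equally represented categories).
pooled = [c for user in users for c in user]
print(exposure_entropy(pooled))
```

The same system can therefore look highly diverse or not diverse at all depending on whether the metric is computed per user or across the whole audience, which is exactly the ambiguity that explicit metric definitions should resolve.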

Longer-term metrics

Another issue identified is that most metrics used in the evaluation of recommendation systems, within public service media and beyond, are short-term metrics, measured in days or weeks, rather than years. Yet at least some of the goals of stakeholders will be longer-term than the metrics used to approximate them. Users may be interested in both immediate satisfaction and in discovering new content so they continue to be informed and entertained in the future. Businesses may both be trying to maximise quarterly profits and also trying to retain users into the future to maximise profits in the quarters to come.

Short-term metrics are not entirely ineffective at predicting long-term outcomes. Better outcomes right now could mean better outcomes months or years down the road, so long as the context the recommendation system is operating in stays relatively stable and the recommendation system itself doesn’t change user behaviour in ways that lead to poorer long-term outcomes.

By definition, long-term consequences take a longer time to occur, and thus there is a longer waiting period between a change in the recommendation system and the resulting change in outcome. A longer period between action and evaluation also means a greater number of confounding variables which make it more challenging to assess the causal link between the change in the system and the change in outcomes.

Dietmar Jannach, Professor at the University of Klagenfurt, highlighted that this was a problem across academic and industry evaluations, and that ‘when Netflix changes the algorithms, they measure, let’s see, six weeks, two months to try out different things in parallel and look what happens. I’m not sure they know what happens in the long run.’[footnote]Interview with Dietmar Jannach, Professor, University of Klagenfurt (2021).[/footnote]

Methods

Simulation-based evaluation

One possible method to estimate long-term metrics is to use simulation-based offline evaluation approaches. In this approach, the developers use a virtual environment with a set of content which can be recommended, and a user model which simulates the expected preferences of users based on parameters selected by the developers (which could include interests, demographics, time already spent on the product, previous interactions with the product etc.).[footnote]Ie, E., Hsu, C., Mladenov, M. et al. (2019). ‘RecSim: A Configurable Simulation Platform for Recommender Systems’. arXiv. Available at: https://doi.org/10.48550/arXiv.1909.04847[/footnote] The recommendation system under evaluation then makes recommendations to the user model, which generates a simulated response to each recommendation. The user model can also update its preferences in response to the recommendations it has received (e.g. a simulated user might become more or less interested in a particular category of content) and can track the simulated users’ overall satisfaction with the recommendations over time.
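A toy version of such a simulation loop might look like the sketch below. All dynamics and parameter values are invented for illustration; real platforms such as RecSim are far richer.

```python
import random

CATEGORIES = ["news", "drama", "comedy", "music"]

class SimulatedUser:
    """User model whose interests drift in response to what is recommended."""
    def __init__(self, rng):
        self.rng = rng
        self.interests = {c: rng.random() for c in CATEGORIES}

    def respond(self, category):
        engaged = self.rng.random() < self.interests[category]
        # Assumed preference dynamics: exposure nudges interest up on
        # engagement and down otherwise.
        delta = 0.02 if engaged else -0.02
        self.interests[category] = min(1.0, max(0.0, self.interests[category] + delta))
        return engaged

def run_simulation(policy, steps=1_000, seed=0):
    rng = random.Random(seed)
    user = SimulatedUser(rng)
    engaged = sum(user.respond(policy(user.interests)) for _ in range(steps))
    return engaged / steps  # long-run engagement rate under this policy

# Example policy: always recommend the currently most-liked category. (A real
# recommender would only observe behaviour, never the user's true interests.)
print(run_simulation(lambda interests: max(interests, key=interests.get)))
```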

This provides some indication of how the dynamics of the recommendation system and changes to it might play out over a long period of time. It can evaluate how users respond to a series of recommendations over time and therefore whether a recommendation system could lead to audience satisfaction or diverse content exposure over a period longer than a single recommendation or user session. However, this approach still has many of the limitations of other kinds of offline evaluation. Historical user interaction data is still required to model the preferences of users, and that data is not neutral because it is itself the product of interaction with the previous system, including any previous recommendation system that was in place.

The user model is also only based on data from previous users, which might not generalise well to new users. Given that many of these recommendation systems are put in place to reach new audiences, specifically younger and more diverse audiences than those who currently use the service, the simulation-based evaluation might lead to unintentionally underserving those audiences and overfitting to existing user preferences.

Furthermore, the simulation can only model the impact of parameters coded into it by the developers. The simulation only reflects the world as a developer understands it, and may not reflect the real considerations users take into account in interacting with recommendation systems, nor the influences on user behaviour beyond the product.

This means that if there are unexpected shocks, exogenous to the recommendation system, that change user interaction behaviour to a significant degree, then the simulation will not take those factors into account. For example, a simulation of a news recommendation system’s behaviour in December 2019 would not be a good source of truth for a recommendation system in operation during the COVID-19 pandemic. The further the simulation tries to look ahead at outcomes, the more vulnerable it will be to changes in the environment that may invalidate its results.

User panels and retrospective feedback

After deployment, asking audiences for informed and retrospective feedback on their recommendations is a promising method for short-term and long-term recommendation system evaluation.[footnote]Stray, J., Adler, S. and Hadfield-Menell, D. (2020), ‘What are you optimizing for? Aligning Recommender Systems with Human Values’, pp. 4–5. Participatory Approaches to Machine Learning ICML 2020 Workshop (July 17). Available at: https://participatoryml.github.io/papers/2020/42.pdf[/footnote] This could involve asking the users to review, rate and provide feedback on a subsection of the recommendations they received over the previous month, in a similar manner to the subjective evaluations undertaken by the BBC Datalab. This would provide development and product teams with much more informative feedback than through A/B testing.

This could be particularly effective in the form of a representative longitudinal user panel, which returns to the same audience members at regular intervals to get their detailed feedback on recommendations.[footnote]Stray, J. (2021). ‘Beyond Engagement: Aligning Algorithmic Recommendations With Prosocial Goals’. Partnership on AI. Available at: https://www.partnershiponai.org/beyond-engagement-aligning-algorithmic-recommendations-with-prosocial-goals/[/footnote] Participants in these panels should be compensated for their participation, to recognise the contribution they are making to the improvement of the system and to ensure long-term retention of participants. This would allow development and product teams to gauge how audience responses change over time, for example by seeing how panellists react to the same recommendations months later, including in response to changes to the underlying system over longer periods.

Case studies

Through two case studies, we examine how the differing prioritisation of values in different forms of public service media, and the differing nature of the content itself, manifest in different approaches to recommendation systems. We will focus on the use of recommendation systems across BBC News for news content, and BBC Sounds for audio-on-demand.

Case study 1: BBC News

Introduction

BBC News is the UK’s dominant news provider and one of the world’s most influential news organisations.[footnote]This case study focuses on the parts of BBC News that function as a public service, rather than BBC Global News, the international commercial news division.[/footnote] It reaches 57% of UK adults every week and 456 million globally. Its news websites are the most-visited English-language news websites on the internet.[footnote]As of 2021, BBC News on TV and radio reaches 57% of UK adults every week and across all channels, BBC News globally reaches a weekly global audience of 456 million adults, see: BBC Media Centre. (2021). ‘BBC on track to reach half a billion people globally ahead of its centenary in 2022’. BBC Media Centre. Available at: https://www.bbc.co.uk/mediacentre/2021/bbc-reaches-record-global-audience; BBC News is equally influential globally within the domain of digital news. By one measure, the BBC News and BBC World News websites combined are the most-visited English-language news websites, receiving three to four times the website traffic of the New York Times, Daily Mail, or The Guardian, see: Majid, A. (2021). ‘Top 50 largest news websites in the world: Surge in traffic to Epoch Times and other right-wing sites’. Press Gazette. Available at: https://pressgazette.co.uk/top-50-largest-news-websites-in-the-world-right-wing-outlets-see-biggest-growth/; As of 2021, BBC News Online reaches 45% of UK adults every week, approximately triple the reach of its nearest competitors: The Guardian (17%), Sky News Online (14%) and the MailOnline (14%). Estimates of UK reach are based on a sample of 2,029 adults surveyed by YouGov (and their partners) using an online questionnaire at the end of January and beginning of February 2021. See: Reuters Institute for the Study of Journalism. Reuters Institute Digital News Report 2021, 10th Edition, p. 62. Available at: https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2021-06/Digital_News_Report_2021_FINAL.pdf[/footnote] For most of the time that BBC News has had an online presence, it has not used any recommendation systems on its platforms.

In recent years, BBC News has taken a more experimental approach to recommendation systems, with a number of different systems for recommending news content developed, piloted and deployed across the organisation.[footnote]The team initially developed an experimental recommendation system for BBC Mundo, the BBC World Service’s Spanish-language news website. See: Panteli, M., Piscopo, A., Harland, A., Tutcher, J. and Moss, F. M. (2019). ‘Recommendation systems for news articles at the BBC’, p.1. CEUR Workshop Proceedings. Available at: http://ceur-ws.org/Vol-2554/paper_07.pdf; These are also live on BBC World Service websites in Russian, Hindi and Arabic and in beta on the BBC News App. See: Piscopo, A. (2021). ‘Building public service recommenders: Logbook of a journey’ [presentation recording]. The Academic Fringe Festival. Available at: https://www.youtube.com/watch?v=Q2EYAxX5Pnk; Al-Chueyr Martins, T. (2019). ‘Responsible Machine Learning at the BBC’ [presentation]. Available at: https://www.slideshare.net/alchueyr/responsible-machine-learning-at-the-bbc-194466504[/footnote]

Goal

For editorial teams, the goal of adding recommendation systems to BBC News was to augment editorial curation and make it easier to scale on a more personalised level. This addresses challenges relating to editors facing an ‘information overload’ of content to recommend. Additionally, product teams at BBC believed this feature would improve the discoverability of news content for different users.[footnote]Panteli, M., Piscopo, A., Harland, A., Tutcher, J. and Moss, F. M. (2019). ‘Recommendation systems for news articles at the BBC’, p. 4. CEUR Workshop Proceedings. Available at: http://ceur-ws.org/Vol-2554/paper_07.pdf[/footnote]

What did they build?

From around 2019, a team (which later became part of BBC Datalab) collaborated with a team building out the BBC News app to develop a content-to-content recommendation system. This focused on ‘onward journeys’ from news articles. Partway through each article the recommendation system generated a section titled ‘You might be interested in’ (in the language relevant to that news website) that listed four recommended articles.[footnote]Interview with Alessandro Piscopo, Principal Data Scientist, BBC Datalab (2021).[/footnote]

Figure 2: BBC News ‘You might be interested in’ section (image courtesy of the BBC)

The recommendation system is combined with a set of business rules which constrain the set of articles that the system recommends content from. The rules aim to ensure ‘sufficient quality, breadth, and depth’ in the recommendations.[footnote]Piscopo, A. (2021). ‘Building public service recommenders: Logbook of a journey’ [presentation recording]. The Academic Fringe Festival. Available at: https://www.youtube.com/watch?v=Q2EYAxX5Pnk[/footnote]

For example, these included rules relating to:

  • recency, e.g. only selecting content from the past few weeks
  • unwanted content, e.g. content in the wrong language
  • contempt of court
  • elections
  • child-safe content.

In an earlier project, this team had developed an experimental recommendation system for BBC Mundo, the BBC World Service’s Spanish-language news website.[footnote]Panteli, M., Piscopo, A., Harland, A., Tutcher, J. and Moss, F. M. (2019). ‘Recommendation systems for news articles at the BBC’, p. 4. CEUR Workshop Proceedings. Available at: http://ceur-ws.org/Vol-2554/paper_07.pdf[/footnote] Similar recommendation systems are also live on BBC World Service websites in Russian, Hindi and Arabic and in beta on the BBC News App.[footnote]Piscopo, A. (2021). ‘Building public service recommenders: Logbook of a journey’ [presentation recording]. The Academic Fringe Festival. Available at: https://www.youtube.com/watch?v=Q2EYAxX5Pnk; Al-Chueyr Martins, T. (2019). ‘Responsible Machine Learning at the BBC’ [presentation]. Available at: https://www.slideshare.net/alchueyr/responsible-machine-learning-at-the-bbc-194466504[/footnote]

Figure 3: BBC Mundo recommendation system (image courtesy of the BBC)

Figure 4: Recommendation system on BBC World Service website in Hindi (image courtesy of the BBC)

Criteria (and how they relate to public service values)

The BBC News team eventually settled on a content-to-content recommendation system using a model (called ‘tf-idf’) that encoded article data (like text) and metadata (like the categorical tags that editorial teams gave the article) into vectors. Once articles were represented as vectors, additional metrics could be applied to measure the similarity between them. This also made it possible to penalise more popular content.[footnote]Crooks, M. (2019). ‘A Personalised Recommender from the BBC’. BBC Data Science. Available at: https://medium.com/bbc-data-science/a-personalised-recommender-from-the-bbc-237400178494[/footnote]
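As a minimal sketch of this kind of content-to-content approach, the snippet below uses off-the-shelf scikit-learn components and invented article text; the BBC’s actual pipeline and the precise form of its popularity penalty are not public in this detail.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Each 'document' is an article's text concatenated with its editorial tags;
# the texts here are invented placeholders.
articles = [
    "budget chancellor tax economy politics",
    "election poll politics westminster",
    "cup final football sport",
]

vectors = TfidfVectorizer().fit_transform(articles)   # articles as tf-idf vectors
similarity = cosine_similarity(vectors)               # pairwise similarity matrix

def similar_articles(index, k=2):
    scores = similarity[index].copy()
    scores[index] = -1.0          # never recommend the article to itself
    # A popularity penalty could be applied here, e.g. dividing scores by
    # log(1 + view_count) -- one assumed form, for illustration only.
    return scores.argsort()[::-1][:k]

print(similar_articles(0))        # the politics stories rank above sport
```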

The business rules the BBC used sought to ensure ‘sufficient quality, breadth, and depth’ in the recommendations, which aligns with the BBC’s values around universality and excellence.[footnote]Piscopo, A. (2021). ‘Building public service recommenders: Logbook of a journey’ [presentation recording]. The Academic Fringe Festival. Available at: https://www.youtube.com/watch?v=Q2EYAxX5Pnk[/footnote]

There was also an emphasis on the recommendation system needing to be easy to understand and explain. This can be attributed to BBC News being more risk-averse than other parts of the organisation.[footnote]Piscopo, A. (2021).[/footnote] Given the BBC’s mandate to be a ‘provider of accurate and unbiased information’, and that staff themselves identify BBC News as ‘the product that likely contributes most to its reputation as a trustworthy and authoritative media outlet’,[footnote]Panteli, M., Piscopo, A., Harland, A., Tutcher, J. and Moss, F. M. (2019). ‘Recommendation systems for news articles at the BBC’, p. 4. CEUR Workshop Proceedings. Available at: http://ceur-ws.org/Vol-2554/paper_07.pdf[/footnote] it is unsurprising that they would want to pre-empt any accusations of bias in an automated news recommendation system by making it understandable to audiences.

Evaluation

The Datalab team experimented with a number of approaches using a combination of content and user interaction data.

Initially, they found that a content-to-content approach to item recommendations was better suited to the editorial requirements for the product, and that user interaction data was therefore less relevant to the evaluation of the recommender, prompting a shift away from approaches built on interaction data.

As they began to compare different content-to-content approaches, they found that performance on quantitative metrics often didn’t match the editorial team’s priorities, and it was difficult to operationalise editorial judgement of public service value into metrics. As Alessandro Piscopo notes: ‘We did notice that in some cases, one of the recommender prototypes was going higher in some metrics and went to editorial and [they would] say well we just didn’t like it.’ And, ‘Sometimes it was just comments from editorial world, we want to see more depth. We want to see more breadth. Then you have to interpret what that means.’[footnote]Interview with Alessandro Piscopo, Principal Data Scientist, BBC Datalab (2021).[/footnote]

The Datalab team chose to take a subjective evaluation-first approach, whereby editors would directly compare and comment on the output of two recommendation systems. This approach allowed them to capture editorial preferences more directly and manually work towards meeting those preferences.

However, the team acknowledged its limitations upfront, particularly in terms of scale.[footnote]Interview with Alessandro Piscopo.[/footnote] They tried to pick articles that would bring up the most challenging cases. However, editorial teams only have so much capacity to judge recommendations, and thus would struggle to assess every edge case or judge every recommendation. This issue would be even more acute if in a future recommendation system, every article’s associated recommendations changed depending on the demographics of the audience member viewing it.

By May 2020, editorial had given sign-off and the recommendation system was deemed ready for initial deployment.[footnote]Piscopo, A. (2021). ‘Building public service recommenders: Logbook of a journey’ [presentation recording]. The Academic Fringe Festival. Available at: https://www.youtube.com/watch?v=Q2EYAxX5Pnk[/footnote] During deployment, the team took a ‘failsafe approach’ with weekly monitoring of the live version of the recommendation system by editorial staff, alongside A/B testing measuring the click-through rate on the recommended articles. However, members of the development team noted it was only a rough proxy for user satisfaction and were working to go beyond click-through rate.

Case Study 2: BBC Sounds

Introduction

BBC Sounds is the BBC’s audio streaming and download service for live radio, music, audio-on-demand and podcasts,[footnote]BBC. ‘What is BBC Sounds?’. Available at: https://www.bbc.co.uk/contact/questions/help-using-bbc-services/what-is-sounds[/footnote] replacing the BBC’s previous live and catch-up audio service, iPlayer Radio.[footnote]The BBC Sounds website replaced the iPlayer Radio website in October 2018; the BBC Sounds app was launched in beta in the United Kingdom in June 2018 and made available internationally in September 2020, with the iPlayer Radio app decommissioned for the United Kingdom in September 2019 and internationally in November 2020. See: BBC. (2018). ‘The next major update for BBC Sounds’. Available at: https://www.bbc.co.uk/blogs/aboutthebbc/entries/03e55526-e7b4-45de-b6f1-122697e129d9; BBC. (2018). ‘Introducing the first version of BBC Sounds’. Available at: https://www.bbc.co.uk/blogs/aboutthebbc/entries/bde59828-90ea-46ac-be5b-6926a07d93fb; BBC. (2020). ‘An international update on BBC Sounds and BBC iPlayer Radio’. Available at: https://www.bbc.co.uk/blogs/internet/entries/166dfcba-54ec-4a44-b550-385c2076b36b; BBC Sounds. ‘Why has the BBC closed the iPlayer Radio app?’. Available at: https://www.bbc.co.uk/sounds/help/questions/recent-changes-to-bbc-sounds/iplayer-radio-message[/footnote] A key difference between BBC Sounds and iPlayer Radio is that BBC Sounds was built with personalisation and recommendation as a core component of the product, rather than as a radio catch-up service.[footnote]In May 2019, six months after the launch of BBC Sounds, James Purnell, then Director of Radio & Education at the BBC, said that ‘The [BBC Sounds] app, for instance, is built for personalisation, but is not yet fully personalised. This means that right now a user sees programmes that have not been curated for them. That is changing, as of this month in fact. By the autumn, Sounds will be highly personalised.’ See: BBC Media Centre. (2019). ‘Changing to stay the same – Speech by James Purnell, Director, Radio & Education, at the Radio Festival 2019 in London.’ Available at: https://www.bbc.co.uk/mediacentre/speeches/2019/james-purnell-radio-festival/[/footnote]

Goal

The goals of the BBC Sounds product team are to:

  • increase the audience size of BBC Sounds’ digital products
  • increase the demographic breadth of consumption across BBC Sounds’ products, especially among the young[footnote]According to David Jones (Executive Product Manager, BBC Sounds, interviewed in 2021), his top-line KPI is to reach 900,000 members of the British population who are under 35 by March 2022. These numbers are determined centrally by BBC senior managers based on the BBC’s Service Licence for BBC Online and Red Button. See: BBC Trust. (2016). BBC Online and Red Button Service Licence. Available at: http://downloads.bbc.co.uk/bbctrust/assets/files/pdf/regulatory_framework/service_licences/online/2016/online_red_button_may16.pdf [/footnote]
  • convert ‘lighter users’ who only engage a certain number of times a week into regular users
  • enable users to more easily discover content from the more than 50 hours of new audio produced by the BBC on an hourly basis.

Product

BBC Sounds initially used an outsourced recommendation system from a third-party provider. The development team saw knowledge of the inner workings of the recommendation system, and the ability to iterate quickly, as valuable, and it proved challenging to request changes from the external provider. The BBC decided it wanted to own the technology and the experience as a whole, and believed it could achieve better value for money for TV licence-payers by bringing the system in-house. The BBC Datalab therefore developed a hybrid recommendation system named Xantus for BBC Sounds.

BBC Sounds uses a factorisation machine approach, which is a mixture of content matching and collaborative filtering. This uses your listening history, metadata about the content, and other users’ listening history to make recommendations in two ways (a simplified sketch of the underlying model follows the list below):

  1. It recommends items that have similar metadata to items you have already listened to.
  2. It recommends items that have been listened to by people with otherwise similar listening histories.
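For readers who want the mechanics, below is a minimal sketch of the factorisation machine scoring function in its textbook form (Rendle, 2010), where the feature vector concatenates user, item and metadata indicators so that a single model blends collaborative and content signals. This is a generic formulation, not the BBC’s actual Xantus code.

```python
import numpy as np

def fm_score(x, w0, w, V):
    """Factorisation machine prediction:
    y(x) = w0 + sum_i w_i * x_i + sum_{i<j} <v_i, v_j> * x_i * x_j
    x: (n,) feature vector; w0: bias; w: (n,) linear weights;
    V: (n, k) latent factor matrix. The pairwise interaction term is
    computed in O(nk) using Rendle's identity.
    """
    linear = w0 + w @ x
    interactions = 0.5 * np.sum((V.T @ x) ** 2 - (V.T ** 2) @ (x ** 2))
    return linear + interactions

# Toy example: one-hot user (slot 1), one-hot item (slot 7) and one metadata
# tag (slot 9); the weights are random stand-ins for learned parameters.
rng = np.random.default_rng(0)
n, k = 10, 4
x = np.zeros(n)
x[[1, 7, 9]] = 1.0
print(fm_score(x, 0.1, rng.normal(size=n), rng.normal(size=(n, k))))
```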

Figure 5: BBC Sounds’ ‘Recommended For You’ section (image courtesy of the BBC)

Figure 6: ‘Music Mixes’ on BBC Sounds (image courtesy of the BBC)

Criteria (and how they relate to public service media values)

On top of this factorisation machine approach are a number of business rules. Some rules apply equally across all users and constrain the set of content that the system recommends content from, e.g. only selecting content from the past few weeks. Other rules apply after individual user recommendations have been generated and filter the recommendations based on specific information about the user, e.g. not recommending content the user has already consumed.

As of summer 2021, the business rules used in the BBC Sounds’ Xantus recommendation system were:[footnote]Note that the business rules are subject to change, and so the rules given here are intended to be an indicative example only, representing a snapshot of practice at one point in time. See: Al-Chueyr Martins, T. (2021). ‘From an idea to production: the journey of a recommendation engine’ [presentation recording]. MLOps London. Available at: https://www.youtube.com/watch?v=dFXKJZNVgw4[/footnote]

Non-personalised business rules:

  • Recency
  • Availability
  • Excluded ‘master brands’, e.g. particular radio channels[footnote]Smethurst, M. (2014). Designing a URL structure for BBC programmes. Available at: https://smethur.st/posts/176135860[/footnote]
  • Excluded genres
  • Diversification (1 episode per brand/series)

Personalised business rules:

  • Already seen items
  • Local radio (if not consumed previously)
  • Specific language (if not consumed previously)
  • Episode picking from a series
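The sketch below illustrates how rules like these could be layered over a model’s ranked output as simple filters, with the non-personalised rules constraining the candidate pool and the personalised rules filtering each user’s ranked list. Field names, thresholds and the exact split of responsibilities are assumptions for illustration, not the Xantus implementation.

```python
from datetime import timedelta

EXCLUDED_BRANDS = {"example_excluded_brand"}   # placeholder values
EXCLUDED_GENRES = {"example_excluded_genre"}

def non_personalised_rules(candidates, now):
    """Constrain the candidate pool before any personalisation."""
    return [c for c in candidates
            if now - c["published"] <= timedelta(weeks=2)   # recency
            and c["available"]                              # availability
            and c["master_brand"] not in EXCLUDED_BRANDS
            and c["genre"] not in EXCLUDED_GENRES]

def personalised_rules(ranked, user):
    """Filter one user's ranked recommendations using their history."""
    seen, brands, out = set(user["history"]), set(), []
    for item in ranked:
        if item["id"] in seen:          # already seen items
            continue
        if item["brand"] in brands:     # diversification: 1 episode per brand
            continue
        brands.add(item["brand"])
        out.append(item)
    return out
```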

Governance

Editorial and others help define the business rules for Sounds.[footnote]Interview with Kate Goddard, Senior Product Manager, BBC Datalab (2021).[/footnote] The product team adopted the business rules from the incumbent system and then checked whether they made sense in the context of the new system. They constantly review the business rules. Kate Goddard, Senior Product Manager, BBC Datalab, noted that: 

‘Making sure you are involving [editorial values] at every stage and making sure there is strong collaboration between data scientists in order to define business rules to make sure we can find good items. For instance with BBC Sounds you wouldn’t want to be recommending news content to people that’s more than a day or two old and that would be an editorial decision along with UX research and data. So, it’s a combination of optimizing for engagement while making sure you are working collaboratively with editorial to make sure you have the right business rules in there.’

Evaluation

To decide whether to progress further with the prototype, the team decided to use a process of subjective evaluation. The Datalab team showed recommendations generated by the new factorisation machine recommendation system head-to-head with those from the existing external provider, and got feedback from the editors on which of the two they preferred.[footnote]Interview with Alessandro Piscopo, Principal Data Scientist, BBC Datalab (2021).[/footnote] The factorisation machine recommendation system was preferred by the editors and so was deployed into the live environment.

After deployment, UX testing, qualitative feedback and A/B testing were used to fine-tune the system. In their initial A/B tests, they were optimising for engagement, looking at click-throughs, play-throughs and play completes. In these tests, they were able to achieve:[footnote]Al-Chueyr Martins, T. (2021). ‘From an idea to production: the journey of a recommendation engine’ [presentation recording]. MLOps London. Available at: https://www.youtube.com/watch?v=dFXKJZNVgw4[/footnote]

  • 59% increase in interactions in the ‘Recommended for You’ rail
  • 103% increase in interactions for under-35s.


Outstanding questions and areas for further research and experimentation

Through this research we have built up an understanding of the use of recommendation systems in public service media at the BBC and across Europe, as well as the opportunities and challenges that arise. This section offers recommendations to address some of the issues that have been raised and indicates areas beyond the scope of this project that merit further research. These recommendations are directed at the research community, including funders, regulators and public service media organisations themselves.

There is an opportunity for public service media to define a new, responsible approach to the development of recommendation systems that work to the benefit of society as a whole and offer an alternative to the paradigm established by big technology platforms. Some initiatives that are already underway could underpin this, such as the BBC’s Databox project with the University of Nottingham and subsequent work on developing personal data stores.[footnote]Sharp, E. (2021). ‘Personal data stores: building and trialling trusted data services’. BBC Research & Development. Available at: https://www.bbc.co.uk/rd/blog/2021-09-personal-data-store-research; Leonard, M. and Thompson, B. (2020), ‘Putting audience data at the heart of the BBC’. BBC Research & Development. Available at: https://www.bbc.co.uk/rd/blog/2020-09-personal-data-store-privacy-services[/footnote] These personal data stores primarily aim to address issues around data ownership and portability, but could also act as a foundation for more holistic recommendations across platforms and greater user control over the data used in recommending them content.

But in making recommendations to public service media we recognise the pressures they face. In the course of this project, a real-terms cut to BBC funding has been announced and the corporation has said it will have to reduce the services it offers in response.[footnote]Hansard – Volume 707: debated on Monday 17 January 2022. ‘BBC Funding’. UK Parliament. Available at: https://hansard.parliament.uk//commons/2022-01-17/debates/7E590668-43C9-43D8-9C49-9D29B8530977/BBCFunding[/footnote] We acknowledge that, in the absence of new resources and faced with the reality of declining budgets, public service media organisations would have to cut other activities to carry out our suggestions. 

We therefore encourage both funders and regulators to support organisations to engage in public service innovation as they further explore the use of recommendation systems. Historically the BBC has set a precedent for using technology to serve the public good, and in doing so brought soft power benefits to the UK. As the UK implements its AI strategy, it should build on this strong track record and comparative advantage and invest in the research and implementation of responsible recommendation systems.

1. Define public service value for the digital age

Recommendation systems are designed to optimise against specific objectives. However, the development and implementation of recommendation systems is happening at a time when the concept of public service value and the role of public service media organisations in the wider media landscape is rapidly changing.

Although we make specific suggestions for approaches to these systems, unless public service media organisations are clear about their own identities and purpose, it will be difficult for them to build effective recommendation systems. It is essential that public service media revisit their values in the digital age, and articulate their role in the contemporary media ecosystem.

In the UK, significant work has already been done by Ofcom as well as the Digital, Culture, Media and Sport Select Committee to identify the challenges public service media face and offer new approaches to regulation. Their recommendations must be implemented so that public service media can operate within a paradigm appropriate to the digital age and build systems that address a relevant mission.

2. Fund a public R&D hub for recommendation systems and responsible recommendation challenges

There is a real opportunity to create a hub for the research and development of recommendation systems that are not tied to industry goals. This is especially important as recommendation systems are one of the prime use cases of behaviour modification technology, but research into it is impaired by lack of access to interventional data.[footnote]Greene, T., Martens, D. and Shmueli, G. (2022). ‘Barriers to academic data science research in the new realm of algorithmic behaviour modification by digital platforms’. Nature Machine Intelligence, 4, pp.323–330. Available at: https://www.nature.com/articles/s42256-022-00475-7[/footnote]

Existing academic work on responsible recommendations could be brought together into a public research hub on responsible recommendation technology, with the BBC as an industry partner. It could involve developing and deploying methods for democratic oversight of the objectives of recommendation systems and the creation and maintenance of useful datasets for researchers outside of private companies.

We recommend that the strategy for using recommendation systems in public service media should be integrated within a broader vision to make this part of a publicly accountable infrastructure for social scientific research.

Therefore, as part of the National AI Research and Innovation (R&I) Programme set out in the UK AI Strategy, UKRI should fund the development of a public research hub on recommendation technology. This programme could also connect with the European Broadcasting Union’s PEACH project, which has similar goals and aims.

Furthermore, one of the programme’s aims is to create challenge-driven AI research and innovation programmes for key UK priorities. The launch of the Netflix Prize in 2006 spurred the development of today’s recommendation systems. The UK could create new challenges to spur the development of responsible recommendation system approaches that encourage a better information environment. For example, the hub could release a dataset and benchmark for a challenge on generating automatic labels for a dataset of news items.

3. Publish research into audience expectations of personalisation

There was a striking consensus in our interviews with public service media teams working on recommendation systems that personalisation was both wanted and expected by the audience. However, we were offered little evidence to support this belief. Research in this area is essential for a number of reasons.

  1. Public service media exist to serve the public. They must not assume they are acting in the public interest without any evidence of their audience’s views towards recommendation systems.
  2. The adoption of recommendation systems without evidence that they are either wanted or needed by the public raises the risk that public service media are blindly following a precedent set by commercial competitors, rather than defining a paradigm aligned to their own missions.
  3. Public service media have limited resources and multiple demands. It is not strategic to invest heavily in the development and implementation of these systems without an evidence base to support their added value.

If research into user expectations of recommendation systems does exist, the BBC should strive to make this public.

4. Communicate and be transparent with audiences

Although most public service media organisations profess a commitment to transparency about their use of recommendation systems, in practice there is limited effective communication with their audiences about where and how recommendation systems are being used.

What communication there is tends to adopt the language of commercial services, for example talking about ‘relevance’. In our interviews, we found that within teams there was no clear responsibility for audience communication. Staff often assumed that few people would want to know more, and that any information provided would only be accessed by a niche group of users and researchers.

However, we argue that public service organisations have a responsibility to explain their practices clearly and accessibly and to put their values of transparency into practice. This should not only help retain public trust at a time when scandals from big technology companies have understandably made people view algorithmic systems with suspicion, but also develop a new, public service narrative around the use of these technologies.

Part of this task is to understand what a meaningful explanation of a recommendation system looks like. Describing the inner workings of algorithmic decision-making is not only unfeasible but probably unhelpful. However, public service media can educate audiences about the interactive nature of recommendation systems. They can make salient the idea that, when consuming content through a recommendation system, audiences are in effect ‘voting with their attention’: their viewing behaviour is private, but at the same time it affects what the system learns and what others will view.

Public service media should invest time and research into understanding how to usefully and honestly articulate their use of recommendation systems in ways that are meaningful to their audiences.

This communication must not be one-way. There must be opportunities for audience members to give feedback and interrogate the use of the systems, and raise concerns where things have gone wrong.

5. Balance user control with convenience

However, transparency alone is not enough. Giving users agency over the recommendations they see is an important part of responsible recommendation. Simply giving users direct control over the recommendation system is an obvious and important first step, but it is not a universal solution.

Some interviewees pointed to evidence that the majority of users do not choose to use these controls and instead opt for the default setting. But there is also evidence that younger users are beginning to use a variety of accounts, browsers and devices, with different privacy settings and aimed at ‘training’ the recommendation algorithm to serve different purposes.

Many public service media staff we spoke with described providing this level of control. Some challenges that were identified include the difficulty of measuring how well the recommendations meet specific targets, as well as risks relating to the potential degradation of the user experience.

Firstly, some of our interviewees noted how it would be more difficult to measure how well the recommendation system is performing on dimensions such as diversity of exposure, if individual users were accessing recommendations through multiple accounts. Secondly, it was highlighted how recommendation systems are trained on user behavioural data, and therefore giving more latitude to users to intentionally influence the recommendations may give rise to negative dynamics that degrade the overall experience for all users over the long run, or even expose the system to hostile manipulation attempts.

While these are valid concerns, we believe that there is some space for experimentation, between giving users no control and too much control. For example, users could be allowed to have different linked profiles, and key metrics could be adjusted to take into account the content that is accessed across these profiles. Users could be more explicitly shown how to interact with the system to obtain different styles of recommendations, making it easy to maintain different ‘internet personas’. Some form of ongoing monitoring for detecting adversarial attempts at influencing recommendation choices could also be explored. We encourage the BBC to experiment with these practices and publish research on their findings.

Another trial worth exploring is allowing ‘joint’ user recommendation profiles, where the recommendations are made based on multiple individuals’ aggregated interaction history and preferences, such as a couple, a group of friends or a whole community. This would allow users to create their own communities and ‘opt in’ to who and what influenced their recommendations in an intuitive way. This could be enabled by the kind of personal data stores being explored by the BBC and the Belgian broadcaster VRT.[footnote]Sharp, E. (2021). ‘Personal data stores: building and trialling trusted data services’. BBC Research & Development. Available at: https://www.bbc.co.uk/rd/blog/2021-09-personal-data-store-research[/footnote]

There are multiple interesting versions of this approach. In one version, you would see recommendations ‘meant’ for others and know it was a recommendation based on their preferences. In another version, users would simply be exposed to a set of unmarked recommendations based on all their combined preferences.

Another potential approach to pilot would be to create different recommendation systems that coexist, allowing users to choose which they want to use, or to offer different ones at different times of day or when significant events happen (e.g. switching to a different recommendation system in the run-up to an election, or overriding recommendations with breaking news). Such an approach might offer a chance to invite audiences to play a more active part in the formulation of recommendations, and open up opportunities for experimentation, which would need to be balanced against the additional operational costs that would be introduced.

6. Expand public participation

Beyond transparency or individual user choice and control over the parameters of the recommendation systems already deployed, users and wider society could also have greater input during the initial design of the recommendation systems and in the subsequent evaluations and iterations.

This is particularly salient for public service media organisations, as unlike private companies, which are primarily accountable to their customers and shareholders, public service media organisations see themselves as having a universal obligation to wider society. Therefore, even those who are not direct consumers of content should have a say in how public service media recommendations are shaped.

User panels

One approach to this, suggested by Jonathan Stray, is to create user panels that provide informed, retrospective feedback about live recommendation systems.[footnote]Stray, J. (2021). ‘Beyond Engagement: Aligning Algorithmic Recommendations With Prosocial Goals’. Partnership on AI. Available at: https://www.partnershiponai.org/beyond-engagement-aligning-algorithmic-recommendations-with-prosocial-goals/[/footnote] These would involve paying users for detailed, longitudinal data about their experiences with the recommendation system. 

This could involve daily questions about their satisfaction with their recommendations, or monthly reviews where users are shown a summary of their recommendations and their interaction with them. They could be asked how happy they are with the recommendations, how well their interests are served and how informed they feel.

This approach could provide new, richer and more detailed metrics for developers to optimise the recommendation systems against, which would potentially be more aligned with the interests of the audience. It might also open up the ability to try new approaches to recommendation, such as reinforcement learning techniques that optimise for positive responses to daily and monthly surveys.

Co-design

A more radical approach would be to involve audience communities directly in the design of the recommendation system. This could involve bringing together representative groups of citizens, analogous to citizens’ assemblies, which have direct input and oversight of the creation of public service media recommendation systems, creating a third core pillar in the design process, alongside editorial teams and developer teams. This is an approach that has been proposed by the Media Reform Coalition Manifesto for a People’s Media.[footnote]Grayson, D. (2021). Manifesto for a People’s Media. Media Reform Coalition. Available at: https://drive.google.com/file/u/1/d/1_6GeXiDR3DGh1sYjFI_hbgV9HfLWzhPi/view?usp=embed_facebook[/footnote]

These would allow citizens to ask questions of the editors and developers about how the system is intended to work, what kinds of data inform those systems and what alternative approaches exist (including not using recommendation systems at all). These groups could then set out their requirements for the system and iteratively provide feedback on versions of the system as it is developed, in the same way that editorial teams have, for example by providing qualitative feedback on recommendations produced by different systems.

7. Standardise metadata

Each public service media organisation should have a central function that standardises the format, creation and maintenance of metadata across the organisation.

Inconsistent, poor quality metadata was consistently highlighted as a barrier to developing recommendation systems in public service media, particularly in developing more novel approaches that go beyond user engagement and try to create diverse feeds of recommendations.

Institutionalising the collection of metadata and making access to it more transparent across each individual organisation is an important investment in public service media’s future capabilities.

We also think it’s worth exploring how much metadata can be standardised across European media organisations. The European Broadcasting Union (EBU)’s ‘A European Perspective’ project is already trialling bringing together content from across different European public service media organisations onto a single platform, underpinned by the EBU’s PEACH system for recommendations and the EuroVOX toolkit for automated language services. Further cross-border collaboration could be enabled by sharing best practices among member organisations.

8. Create shared recommendation system resources

Some public service media organisations have found it valuable to have access to recommendations-as-a-service provided by the European Broadcasting Union (EBU) through their PEACH platform. This reduces the upfront investment required to start using the recommendation system and provides a template for recommendations that have already been tested and improved upon by other public service media organisations.

One area identified as valuable for the future development of PEACH was greater flexibility and customisation. For example, some asked for the ability to incorporate different concepts of diversity into the system and control the relative weighting of diversity. Others would have found it valuable to be able to incorporate more information on the public service value of content into the recommendations directly.

We also heard from several interviewees that they would value a shared repository for evaluating recommendation systems on metrics valued by public service media, including libraries in common coding languages (e.g. Python) and a number of worked examples for measuring the quality of recommendations. Its development could be led by the EBU or by a single organisation such as the BBC.

This would help systematise the quantification of public service values and collate case studies of how those values are measured. It would be best built as an open-source repository that others outside public service media could learn from and draw on. Such a repository would:

  • lower costs, making investment easier to justify
  • reduce the technical burden, making it easier for newer and smaller teams to implement
  • show how the metrics are used elsewhere, reducing the burden of proof and making alternative approaches appear less risky
  • provide a source of existing ideas, so teams spend less time devising their own (which might prove suboptimal) or wading through the technical literature.

Future public service media recommendation systems projects, and responsible recommendation system development more broadly, could then more easily evaluate their system against more sophisticated metrics than just engagement.
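A sample of what such worked examples might look like – two simple beyond-engagement metrics, intra-list diversity and catalogue coverage, with toy data (both the metric choices and the data are ours, purely for illustration):

    from itertools import combinations

    def intra_list_diversity(items, distance):
        """Mean pairwise distance within a single recommendation list."""
        pairs = list(combinations(items, 2))
        if not pairs:
            return 0.0
        return sum(distance(a, b) for a, b in pairs) / len(pairs)

    def catalogue_coverage(all_lists, catalogue_size):
        """Share of the catalogue appearing in at least one recommendation list."""
        recommended = {item for lst in all_lists for item in lst}
        return len(recommended) / catalogue_size

    # Toy usage with a genre-based distance (1 if genres differ, else 0).
    genres = {'a': 'news', 'b': 'news', 'c': 'drama', 'd': 'sport'}
    dist = lambda x, y: 0.0 if genres[x] == genres[y] else 1.0
    print(intra_list_diversity(['a', 'b', 'c'], dist))  # ≈ 0.67
    print(catalogue_coverage([['a', 'b'], ['c']], 4))   # 0.75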

9. Create and empower integrated teams

When developing and deploying recommendation systems, public service media organisations need to integrate editorial and development teams from the start. This ensures that the goals of the recommendation system are better aligned with the organisation’s goals as a whole and ensures the systems augment and complement existing editorial expertise.

An approach that we have seen applied successfully is having two project leads, one with an editorial background and one with a technical development background, who are jointly responsible for the project.

Public service media organisations could also consider adopting a combined product and content team. This can give editorial and development staff a shared language and common context, reducing the burden of communication and helping staff feel they have a common purpose, rather than a sense of competition between teams.

Methodology

To investigate our research questions, we adopted two main methods:

  1. Literature review
  2. Semi-structured interviews

Our literature review surveyed current approaches to recommendation systems, the motivations and risks in using them, and existing approaches and challenges in evaluating them. We then focused on reviewing publicly available information on the operation of recommendation systems across European public service media, and existing theoretical work and case studies on the ethical implications of the use of those systems.

In order to situate the use of these systems, we also surveyed the history and context of public service media organisations, with a particular focus on previous technological innovations and attempts at measuring values.

We also undertook 29 semi-structured interviews: 11 with BBC staff members (8 current and 3 former) across engineering, product and editorial; 9 with current and former staff from other public service media organisations and the European Broadcasting Union; and 9 with external experts from academia, civil society and regulators.

Partner information and acknowledgements

This work was undertaken with support from the Arts and Humanities Research Council (AHRC).

This report was co-authored by Elliot Jones, Catherine Miller and Silvia Milano, with substantive contributions from Andrew Strait.

We would like to thank the BBC for their partnership on this project, and in particular, the following for their support, feedback and cooperation throughout the project:

  • Miranda Marcus, Acting Head, BBC News Labs
  • Tristan Ferne, Lead Producer, BBC R&D
  • George Wright, Head of Internet Research and Future Services, BBC R&D
  • Rhia Jones, Lead R&D Engineer for Responsible Data-Driven Innovation

We would like to thank the following colleagues for taking the time to be interviewed for this project:

  • Alessandro Piscopo, Principal Data Scientist, BBC Datalab
  • Anna McGovern, Editorial Lead for Recommendations and Personalisation, BBC
  • Arno van Rijswijk, Head of Data & Personalization, & Sarah van der Land, Digital Innovation Advisor, Nederlandse Publieke Omroep
  • Ben Clark, Senior Research Engineer, Internet Research & Future Services, BBC Research & Development
  • Ben Fields, Lead Data Scientist, Digital Publishing, BBC
  • David Caswell, Executive Product Manager, BBC News Labs
  • David Graus, Lead Data Scientist, Randstad Groep Nederland
  • David Jones, Executive Product Manager, BBC Sounds
  • Debs Grayson, Media Reform Coalition
  • Dietmar Jannach, Professor, University of Klagenfurt
  • Eleanora Mazzoli, PhD Researcher, London School of Economics
  • Francesco Ricci, Professor of Computer Science, Free University of Bozen-Bolzano
  • Greg Detre, Chief Product & Technology Officer, Filtered and former Chief Data Scientist, Channel 4
  • Jannick Kirk Sørensen, Associate Professor in Digital Media, Aalborg University
  • Jonas Schlatterbeck, Head of Content ARD Online & Leiter Programmplanung, ARD
  • Jonathan Stray, Visiting Scholar, Berkeley Center for Human-Compatible AI
  • Kate Goddard, Senior Product Manager, BBC Datalab
  • Koen Muylaert, Head of Data Platform, VRT
  • Matthias Thar, Bayerische Rundfunk
  • Myrna McGregor, BBC Lead, Responsible AI+ML
  • Natalie Fenton, Professor of Media and Communications, Goldsmiths, University of London
  • Nic Newman, Senior Research Associate, Reuters Institute for the Study of Journalism
  • Olle Zachrison, Deputy News Commissioner & Head of Digital News Strategy, Swedish Radio
  • Sébastien Noir, Head of Software, Technology and Innovation, European Broadcasting Union and Dmytro Petruk, Developer, European Broadcasting Union
  • Sophie Chalk, Policy Advisor, Voice of the Listener & Viewer
  • Uli Köppen, Head of AI + Automation Lab, Co-Lead BR Data, Bayerische Rundfunk

Understanding public attitudes towards artificial intelligence (AI), and how to involve people in decision-making about AI, is becoming ever-more urgent in the UK and internationally. As new technologies are developed and deployed, and governments move towards proposals for AI regulation, policymakers and industry practitioners are increasingly navigating complex trade-offs between opportunities, risks, benefits and harms.

Taking into account people’s perspectives and experiences in relation to AI – alongside expertise from policymakers and technology developers and deployers – is vital to ensure AI is aligned with societal values and needs, in ways that are legitimate, trustworthy and accountable.

As the UK Government and other jurisdictions consider AI governance and regulation, it is imperative that policymakers have a robust understanding of relevant public attitudes and how to involve people in decisions.

This rapid review is intended to support policymakers – in the context of the UK AI Safety Summit and afterwards – to build that understanding. It brings together a review of evidence about public attitudes towards AI that considers the question: ‘What do the public think about AI?’ In addition, it provides knowledge and methods to support policymakers to meaningfully involve the public in current and future decision-making around AI.

Introduction

Why is it important to understand what the public think about AI?

We are experiencing rapid development and deployment of AI technologies and heightened public discourse on their opportunities, benefits, risks and harms. This is accompanied by increasing interest in public engagement and participation in policy decision-making, described as a ‘participatory turn’ or ‘deliberative wave’.

However, there are doubts about the ability or willingness of policy professionals and governments to engage meaningfully with the outcomes of these processes, or to embed them in policy. Amid accelerated technological development and efforts to develop and coordinate policy, public voices are still frequently overlooked or absent.

The UK’s global AI Safety Summit in November 2023 invites ‘international governments, leading AI companies and experts in research’ to discuss how coordinated global action can help to mitigate the risks of ‘frontier AI’.[1] Making AI safe requires ‘urgent public debate’.[2] These discussions must include meaningful involvement of people affected by AI technologies.

The Ada Lovelace Institute was founded on the principle that discussions and decisions about AI cannot be made legitimately without the views and experiences of those most impacted by the technologies. The evidence from the public presented in this review demonstrates that people have nuanced views, which change in relation to perceived risks, benefits, harms, contexts and uses.

In addition, our analysis of existing research shows some consistent views:

  • People have positive attitudes about some uses of AI (for example, in health and science development).
  • There are concerns about AI for decision-making that affects people’s lives (for example, eligibility for welfare benefits).
  • There is strong support for the protection of fundamental rights (for example, privacy).
  • There is a belief that regulation is needed.

The Ada Lovelace Institute’s recent policy reports Regulating AI in the UK[3] and Foundation models in the public sector[4] have made the case for public participation and civil society involvement in the regulation of AI and governance of foundation models. Listening to and engaging the public is vital not only to make AI safe, but also to make sure it works for individual people and wider society.

Why is public involvement necessary in AI decision-making?

This rapid review of existing research with different publics, predominantly in the UK, shows consistency across a range of studies as to what the public think about different uses of AI, and provides calls to action for policymakers. It draws important insights from existing evidence that can help inform just and equitable approaches to developing, deploying and regulating AI.

This evidence must be taken into account in decision-making about the distribution of emerging opportunities and benefits of AI – such as the capability of systems to develop vaccines, identify symptoms of diseases like cancers and help humans adapt to the realities of climate change. It should also be considered in decision-making to support governance of AI-driven technologies that are already in use today in ways that permeate the everyday lives of individuals and communities, including people’s jobs and the provision of public services like healthcare, education or welfare.

This evidence review demonstrates that listening to the public is vital in order for AI technologies and uses to be trustworthy. It also evidences a need for more extensive and deeper research on the many uses and impacts of AI across different publics, societies and jurisdictions. Public views point towards ways to harness the benefits and address the challenges of AI technologies, as well as to the desire for diverse groups in society to be involved in how decisions are made.

In summary, the evidence that follows presents an opportunity for policymakers to listen to and engage with the views of the public, so that policy can navigate effectively the complex and fast-moving world of AI with legitimacy, trustworthiness and accountability in decision-making processes.

What this rapid evidence review does and does not do

This review brings together research conducted with different publics by academics, researchers in public institutions, and private companies, assessed against the methodological rigour of each research study. It addresses the following research questions:

  • What does the existing evidence say about people’s views on AI?
  • What methods of public engagement can be used by policymakers to involve the public meaningfully in decisions on AI?

As a rapid evidence review, this is not intended to be a comprehensive and systematic literature review of all available research. However, we identify clear and consistent attitudes, drawn from a range of research methods that should guide policymakers’ decision-making at this significant time for AI governance.

More detail is provided in the ‘Methodology’ section.

How to read this review

…if you’re a policymaker or regulator concerned with AI technologies:

The first part of this review summarises themes identified in our analysis of evidence relating to people’s views on AI technologies. The headings in this section synthesise the findings into areas that relate to current policy needs.

In the second part of the report, we build on the findings to offer evidence-based solutions for how to meaningfully include the views of the public in decision-making processes. The insights come from this review of evidence alongside research into public participation.

The review aims to support policymakers in understanding more about people’s views on AI, about different kinds of public engagement and in finding ways to involve the public in decisions on AI uses and regulation.

…if you’re a developer or designer building AI-driven technologies, or a deployer or organisation using them or planning to incorporate them:

Read Findings 1 to 5 to understand people’s expectations, hopes and concerns for how AI technologies need to be designed and deployed.

Findings 6 and 7 will support understanding of how to include people’s views in the design and evaluation of technologies, to make them safer before deployment.

…if you’re a researcher, civil society organisation, public participation practitioner or member of the public interested in technology and society:

We hope this review will be a resource to take stock of people’s views on AI from evidence across a range of research studies and methods.

In addition to pointing out what the evidence shows so far, Findings 1 to 6 also indicate gaps and omissions, intended to support the identification of further research questions to answer through research or public engagement.

Clarifying terms

The public

Our societies are diverse in many ways, and historic imbalances of power mean that some individuals and groups are more represented than others in both data and technology use, and more exposed than others to the opportunities, benefits, risks or harms of different AI uses.

There are therefore many publics whose views matter in the creation and regulation of AI. In this report, we refer to ‘the public’ to distinguish citizens and residents from other stakeholders, including the private sector, policy professionals and civil society organisations. We intentionally use the singular form of ‘public’ as a plural (‘the public think’), to reinforce the implicit acknowledgement that society includes many publics with different levels of power and lived experiences.

Safety

While the UK’s AI Safety Summit of 2023 has been framed around ‘safety’, there is no consensus definition of this term, and there are many ways of thinking about risks and harms from AI. The idea of ‘safety’ is employed in other important domains – like medicines, air travel and food – to ensure that systems and technologies enjoy public trust. As AI increasingly forms a core part of our digital infrastructure, our concept of AI safety will need to be similarly broad.[5]

The evidence in this report was not necessarily or explicitly framed by questions about ‘safety’. It surfaces people’s views about the potential or perceived opportunities, benefits, risks and harms presented by different uses of AI. People’s lived experience of, and views on, AI technologies are useful for understanding what safety might mean in this broader scope, and where policymakers’ attention – for example on national security – does not reflect diverse publics’ main concerns.

AI and AI systems

We use the UK Data Ethics Framework’s definition of AI systems, which it describes as technologies that ‘carry out tasks that are commonly thought to require human intelligence. [AI systems] deploy digital tools to find repetitive patterns in very large amounts of data and use them, in various ways, to perform tasks without the need for constant human supervision’.[6]

With this definition in mind, our analysis of attitudes to AI includes attitudes to data because data and data-driven technologies (like artificial intelligence and computer algorithms) are deeply intertwined, and AI technologies are underpinned by data collection, use, governance and deletion. In this review, we focus on research into public attitudes towards AI specifically, but draw on research about data more broadly where it is applicable, relevant and informative.

Expectations

Public attitudes research often describes public ‘expectations’. Where we report what the public ‘expect’ in this review, we mean what the public feel is required from AI practices and regulation. ‘Expectation’, in this sense, does not refer to what people predict will happen.

Summary of findings

What do the public think about AI?

  • 1.  Public attitudes research is consistent in showing what the public think about some aspects of AI, which the findings below identify. This evidence is an opportunity for policymakers to ensure the views of the public are included in next steps in policy and regulation.
  • 2. There isn’t one ‘AI’: the public have nuanced views and differentiate between benefits, opportunities, risks and harms of existing and potential uses of different technologies.
    • The public have nuanced views about different AI technologies.
    • Some concerns are associated with socio-demographic differences.
  • 3. The public welcome AI uses when they can make tasks efficient, accessible and supportive of public benefit, but they also have specific concerns about other uses and effects, especially when AI uses that replace human decision-making affect people’s lives.
    • The public recognise potential benefits of uses of AI relating to efficiency, accessibility and working for the public good.
    • The public are concerned about an overreliance on technology over professional judgement and human communication.
    • Public concerns relate to the impacts of uses of AI on jobs, privacy or societal inequalities.
    • In relation to foundation models: existing evidence from the public indicates that they have concerns about uses beyond mechanical, low-risk analysis tasks, and around their impact on jobs.
  • 4. Regulation and the way forward: people have clear views on how to make AI work for people and society.
    • The evidence is consistent in showing a demand for regulation of data and AI that is independent and has ‘teeth’.
    • The public are less trusting of private industry developing and regulating AI-driven technologies than other stakeholders.
    • The public are concerned about ethics, privacy, equity, inclusiveness, representativeness and non-discrimination. The use of data-driven technologies should not exacerbate unequal social stratification or create a two-tiered society.
    • Explainability of AI-driven decisions is important to the public.
    • The public want to be able to address and appeal decisions determined by AI.
  • 5. People’s involvement: people want to have a meaningful say over decisions that affect their everyday lives.
    • The public want their views and experiences to be included in decision-making processes.
    • The public expect to see diversity in the views that are included and heard.

How can involving the public meaningfully in decision-making support safer AI?

  • 6. There are important gaps in research with underrepresented groups, those impacted by specific AI uses, and in research from different countries.
    • Different people and groups, such as young people or people from minoritised ethnic communities, have distinct views about AI.
    • Some people, groups and parts of the world are underrepresented in the evidence.

  • 7. There is a significant body of evidence that demonstrates ways to meaningfully involve the public in decision-making, but making this happen requires a commitment from decision-makers to embed participatory processes.
    • Public attitudes research, engagement and participation involve distinct methods that deliver different types of evidence and outcomes.
    • Complex or contested topics need careful and deep public engagement.
    • Deliberative and participatory engagement can provide informed and reasoned policy insights from diverse publics.
    • Using participation as a consultative or tick-box exercise risks the trustworthiness, legitimacy and effectiveness of decision-making.
    • Empirical practices, evidence and research on embedding participatory and deliberative approaches can offer solutions to policymakers.

Different research methods, and the evidence they produce

There are three principal types of evidence in this review:

  1. Representative surveys, which give useful, population-level insights but can be consultative (meaning participants have low agency) for those involved.
  2. Deliberative research, which enables informed and reasoned policy conclusions from groups reflective of a population (meaning a diverse group of members of the public).
  3. Co-designed research, which can embed people’s lived experiences into research design and outputs, and make power dynamics (meaning knowledge and agency) between researchers and participants more equitable.

Different methodologies surface different types of evidence. Table 1 in the Appendix summarises some of the strengths of different research methods included in this evidence review.

Most of the evidence in this review is from representative surveys (14 studies), followed by deliberative processes (nine processes) and qualitative interviews and focus groups (six studies). In addition, there is one study involving peer research. This smaller number of deliberative studies compared with quantitative research, alongside evidence included in Finding 7, may indicate the need for more in-depth public engagement methods.

Detailed findings

What do the public think about AI?

Finding 1: Public attitudes research is consistent in showing what the public think about some aspects of AI, which the findings below identify.

This evidence is an opportunity for policymakers to ensure the views of the public are included in next steps in policy and regulation.

Our synthesis of evidence shows there is consistency in public attitudes to AI across studies using different methods.

These include positive attitudes about some uses of AI (for example, advancing science and some aspects of healthcare), concerns about AI making decisions that affect people’s lives (for example, assessing eligibility for welfare benefits), strong support for the protection of fundamental rights (for example, privacy) and a belief that regulation is needed.

The evidence is consistent in showing concerns with the impact of AI technologies in people’s everyday lives, especially when these technologies replace human judgement. This concern is particularly evident in decisions with substantial consequences on people’s lives, such as job recruitment and access to financial support; when AI technologies replace human compassion in contexts of care; or when they are used to make complex and moral judgements that require taking into account soft factors like trust or opportunity. People are also concerned about privacy and the normalisation of surveillance.

The evidence is consistent in showing a demand for public involvement and for diverse views to be meaningfully engaged in decision-making related to AI uses.

We develop these views in detail in the following findings and reference the studies that support them.

Finding 2: There isn’t one ‘AI’

The public have nuanced views and differentiate between benefits, opportunities, risks and harms of existing and potential uses of different technologies

The public have nuanced views about different AI technologies

  • The public see some uses of AI as clearly beneficial. This was an insight from the joint Ada Lovelace Institute and The Alan Turing Institute’s research report How do people feel about AI?, which asked about specific AI-driven technologies.[7] In the nationally representative survey of the British public, people identified 11 of the 17 technologies we asked about as either somewhat or very beneficial. The use of AI for detecting the risk of cancer was seen as beneficial by nine out of ten people.
  • The public see some uses of AI as concerning. The same survey found the public also felt other uses were more concerning than beneficial, like advanced robotics or targeted advertising. Uses in care were also viewed as concerning by around half of people, with 55% either somewhat or very concerned by virtual healthcare assistants, and 48% by robotic care assistants. In a separate qualitative study, members of the UK public suggested that ‘the use of care robots would be a sad reflection of a society that did not value care givers or older people’.[8]
  • Overall, the public can simultaneously perceive the benefits as well as the risks presented by most applications of AI. More importantly, the public identify concerns to be addressed across all technologies, even when seen as broadly beneficial, as found in How do people feel about AI? by the Ada Lovelace Institute and The Alan Turing Institute.[9] Similarly, a recent survey in the UK by the Office for National Statistics found that, when people were asked to rank whether AI would have a positive or negative impact on society, the most common response was neutral, sitting midway between the two ends of the scale.[10] In a recent qualitative study in the UK, USA and Germany, participants also ‘saw benefits and concerns in parallel: even if they had a concern about a particular AI use case, they could recognise the upsides, and vice versa’.[11] Other research, including both surveys and qualitative studies in the USA[12] [13] and Germany,[14] has also found mixed views depending on the application of AI.

This nuance in views, depending on the context in which a technology is used, is illustrated by one of the comments of a juror during the Citizens’ Biometrics Council:

‘Using it [biometric technology] for example to get your money out of the bank, is pretty uncontroversial. It’s when other people can use it to identify you in the street, for example the police using it for surveillance, that has another range of issues.’
– Juror, The Citizens’ Biometrics Council[15]

Some concerns are associated with socio-demographic differences

  • Higher awareness and higher levels of education and information are associated with more concern about some types of technologies. The 2023 survey of the British public How do people feel about AI? found that those who have a degree-level education and feel more informed about technology are less likely to think that technologies such as facial recognition, eligibility technologies and targeted advertising in social media are beneficial.[16] A prior BEIS Public Attitudes Tracker reported similar findings.[17]
  • In the USA, the Pew Research Center found in 2023 that ‘those who have heard a lot about AI are 16 points more likely now than they were in December 2022 to express greater concern than excitement about it.’[18] Similarly, existing evidence suggests that public concerns around data should not be dismissed as uninformed,[19] which goes against the assumption that the more people know about a technology, the more they will support it.

Finding 3: The public welcome AI uses that can make tasks efficient, accessible and supportive of public benefit

But they also have specific concerns about other uses and effects, especially when AI uses that replace human decision-making affect people’s lives.

The public recognise potential benefits of uses of AI relating to efficiency, accessibility and working for the public good

  • The public see the potential of AI-driven technologies in improving efficiency including speed, scale and cost-saving potential for some tasks and applications. They particularly welcome its use in mechanical tasks,[20] [21] in health, such as for early diagnosis, and in the scientific advancement of knowledge.[22] [23] [24] [25] For example, a public dialogue on health data by the NHS AI Lab found that the perceived benefits identified by participants included ‘increased precision, reliability, cost-effectiveness and time saving’ and that ‘through further discussion of case studies about different uses of health data in AI research, participants recognised additional benefits including improved efficiency and speed of diagnosis’.[26]
  • Improving accessibility is another perceived potential benefit of some AI uses, although other uses can also compromise it. For example, How do people feel about AI? by the Ada Lovelace Institute and The Alan Turing Institute found that accessibility was the most commonly selected benefit for robotic technologies that can make day-to-day activities easier for people.[27] These technologies included driverless cars and robotic vacuum cleaners. However, there is also a view that these benefits may be compromised due to digital divides and inequalities. For example, members of the Citizens’ Biometrics Council, who reconvened in November 2022 to consider the Information Commissioner’s Office (ICO)’s proposals for guidance on biometrics, raised concerns that while there is potential for biometrics to make services more accessible, an overreliance on poorly designed biometric technologies would create more barriers for people who are disabled or digitally excluded.[28]
  • For controversial uses of AI, such as certain uses of facial recognition or biometrics, there may be support when the public benefit is clear. The Citizens’ Biometrics Council that Ada convened in 2021 felt the use of biometrics was ‘more ok’ when it was in the interests of members of the public as a priority, such as in instances of public safety and health.[29] However, they concluded that the use of biometrics should not infringe people’s rights, such as the right to privacy. They also asked for safeguards related to regulation as described in Finding 4, such as independent oversight and transparency on how data is used, as well as addressing bias and discrimination or data management. The 2023 survey by the Ada Lovelace Institute and The Alan Turing Institute, How do people feel about AI?, found that speed was the main perceived benefit of facial recognition technologies, such as its use to unlock a phone, for policing and surveillance and at border control. But participants also raised concerns related to false accusations or accountability for mistakes.[30]

The public are concerned about an overreliance on technology over professional judgement and human communication

‘“Use data, use the tech to fix the problem.” I think that’s very indicative of where we’re at as a society at the moment […] I don’t think that’s a good modality for society. I don’t think we’re going down a good road with that.’
– Jury member, The rule of trust[31]

  • There are concerns in the evidence reviewed that an overreliance on data-driven systems will affect people’s agency and autonomy.[32] [33] [34] Relying on technology over professional judgement seems particularly concerning for people when AI is applied to eligibility, scoring or surveillance, because of the risk of discrimination and not being able to explain decisions that have high stakes, including those related to healthcare or jobs.[35] [36] [37] [38]
  • The nationally representative survey of the British public How do people feel about AI? found that not being able to account for individual circumstances was a concern related to this loss of agency. For example, almost two thirds (64%) of the British public were concerned that workplaces would rely too heavily on AI over professional judgement for recruitment.
  • Qualitative studies help to show that this concern relates to a fear of losing autonomy, as well as fairness, in important decisions, even when people can see the benefits of some uses. For example, in a series of workshops conducted in the USA, a participant said: ‘[To] have your destiny, or your destination in life, based on mathematics or something that you don’t put in for yourself… to have everything that you worked and planned for based on something that’s totally out of your control, it seems a little harsh. Because it’s like, this is what you’re sent to do, and because of [an] algorithm, it sets you back from doing just that. It’s not fair.’[39]
  • Autonomy remains important, even when technologies are broadly seen as beneficial. Research by the Centre for Data Ethics and Innovation (CDEI) found that, even when the benefits of AI were broadly seen to outweigh the risks in terms of improving efficiency, the risks are more front-of-mind, with strong concern about societal reliance on AI and where this may leave individuals and their autonomy.[40]
  • There is a concern that algorithm-based decisions are not appropriate for making complex and moral judgements, and that they will generate ‘false confidence in the quality, reliability and fairness of outputs’.[41] [42] A study involving workshops in Finland, Germany, the UK and the USA gave as examples of these complex or moral judgements those that ‘moved beyond assessment of intangibles like soft factors, to actions like considering extenuating circumstances, granting leniency for catastrophic events in people’s lives, “giving people a chance”, or taking into account personal trust’.[43] A participant from Finland said: ‘I don’t believe an artificial intelligence can know whether I’m suitable for some job or not.’[44]
  • Research with the public also shows concerns that an overreliance on technology will result in a loss of compassion and the human touch in important services like health care.[45] [46] This concern is also raised in relation to technologies using foundation models: ‘Imagine yourself on that call. You need the personal touch for difficult conversations.’[47]

Concerns also relate to the impacts of uses of AI on jobs, privacy or societal inequalities

  • Public attitudes research also finds some concern about job loss or reduced job opportunities for some applications of AI. For example, in a recent survey of the British public, the loss of jobs was identified as a concern by 46% of participants in relation to the use of robotic care assistants and by 47% in relation to facial recognition at border control as this would replace border staff.[48] Fear of the replacement or loss of some professions is also echoed in research from other countries in Europe[49] [50] and from the USA.[51] [52] For example, survey results from 2023 found that nearly two fifths of American workers are worried that AI might make some or all of their job duties obsolete.[53]
  • The public care about privacy and how people’s data is used, especially for the use of AI in everyday technologies such as smart speakers or for targeted advertising in social media.[54] [55] For example, the survey How do people feel about AI? found that over half (57%) of participants are concerned that smart speakers will gather personal information that could be shared with third parties, and that 68% are concerned about this for targeted social media adverts.[56] Similarly, the 2023 survey by the Pew Research Center in the USA found that people’s concerns about privacy in everyday uses of AI are growing, and that this increase relates to a perceived lack of control over people’s own personal information.[57]
  • The public have also raised concerns about how some AI uses can be a threat to people’s rights, including the normalisation of surveillance.[58] Jurors in a deliberation on governance during pandemics were concerned about whether data collected during a public health crisis – in this case, the COVID-19 pandemic – could subsequently be used to surveil, profile or target particular groups of people. In addition, survey findings from March 2021 showed that minority ethnic communities in the UK were more concerned than white respondents about legal and ethical issues around vaccine passports.[59] In the workplace, whether in an office or working remotely, over a third (34%) of American workers were worried that their ‘employer uses technology to spy on them during work hours’, regardless of whether or not they report knowing they were being monitored at work.[60]
  • The public also care about the risk that data-driven technologies exacerbate inequalities and biases. Deliberative engagements ask for proportionality and a context-specific approach to the use of AI and data-driven technologies.[61] [62] For example, bias and justice were core themes raised by the Citizens’ Biometrics Council that Ada convened in 2021. The members of the jury made six recommendations to address bias, discrimination and accuracy issues, such as ensuring technologies are accurate before they are deployed, fixing them to remove bias and taking them through an Ethics Committee.[63]

‘There is a stigma attached to my ethnic background as a young Black male. Is that stigma going to be incorporated in the way technology is used? And do the people using the technologies hold that same stigma? It’s almost reinforcing the fact that people like me get stopped for no reason.’
– Jury member, The Citizens’ Biometrics Council[64]

Foundation models: existing evidence from the public indicates that they have concerns about uses beyond mechanical, low-risk analysis tasks and around their impact on jobs

The evidence from the public so far on foundation models[65] is consistent with attitudes to other applications of AI. People can see both benefits and disadvantages relating to these technologies, some of which overlap with attitudes towards other applications of AI, while others are specific to foundation models. However, evidence from the public about these technologies is limited, and more public participation is needed to better understand how the public feel foundation models should be developed, deployed and governed. The evidence below is from a recent qualitative study by the Centre for Data Ethics and Innovation (CDEI).[66]

  • People see the role of foundation models as potentially beneficial in assisting and augmenting mechanical, low-stakes human capabilities, rather than replacing them.[67] For example, participants in this study saw foundation models as potentially beneficial when they were doing data synthesis or analysis tasks. This could include assisting policymaking by synthesising population data or advancing scientific research by speeding up analysis or finding new patterns in the data, which were some of the potential uses presented to participants in the study.

‘This is what these models are good at [synthesising large amounts of population data]… You don’t need an emotional side to it – it’s just raw data.’
– Interviewee, Public perceptions of foundation models[68]

  • Similar concerns around job losses found in relation to other applications of AI were raised by participants in the UK in relation to technologies built on foundation models.[69] There was concern that the replacement of some tasks by technologies based on foundation models would also mean workers lose the critical skills to judge whether a foundation model was doing a job well.
  • Concerns around bias extend to technologies based on foundation models. Bias and transparency were front of mind: ‘[I want the Government to consider] transparency – we should be declaring where AI has been applied. And it’s about where the information is coming from, ensuring it’s as correct as it can be and mitigating bias as much as possible.’ There was a view that bias could be mitigated by ensuring that the data training these models is cleaned so that it is accurate and representative.[70]
  • There are additional concerns about trade-offs between accuracy and speed when using foundation models. Inaccuracy of foundation models is a key concern among members of the public. This inaccuracy would require checks that may compromise potential benefits such as speed and make the process less efficient. As this participant working in education said: ‘I don’t see how I feed the piece of [homework] into the model. I don’t know if in the time that I have to set it up and feed it the objectives and then review afterwards, whether I could have just done the marking myself?’[71]
  • People are also concerned by the inability of foundation models to provide emotional intelligence. The lack of emotional intelligence and inability to communicate like a human, including understanding non-verbal cues and communication in context, was another concern raised in the study from the Centre for Data Ethics and Innovation, which meant participants did not see technologies based on foundation models as useful in decision-making.[72]

‘The emotional side of things… I would worry a lot as people call because they have issues. You need that bit of emotional caring to make decisions. I would worry about the coldness of it all.’
– Interviewee, Public perceptions of foundation models[73]


Finding 4: Regulation and the way forward: people have clear views on how to make AI work for people and society.

The evidence is consistent in showing a demand for regulation of data and AI that is independent and has ‘teeth’.

  • The public demand regulation around data and AI.[74] [75] [76] [77] Within the specific application of AI systems in biometric technologies, the Citizens’ Biometrics Council felt an independent body is needed to bring governance and oversight together in an otherwise crowded ecosystem of different bodies working towards the same goals.[78] The Council felt that regulation should also be able to enforce penalties for breaches in the law that were proportionate to the severity of such breaches, surfacing a desire for regulation with ‘teeth’. The Ada Lovelace Institute’s three-year project looking at COVID-19 technologies highlighted that governance and accountability measures are important for building public trust in data-driven systems.[79]
  • The public want regulation to represent their best interests. Deliberative research from the NHS AI Lab found that: ‘Participants wanted to see that patients’ and the public’s best interests were at the heart of decision-making and that there was some level of independent oversight of decisions made.’[80] Members of the Ada Lovelace Institute’s citizens’ jury on data governance during a pandemic echoed this desire for an independent regulatory body that can hold data-driven technology to account, adding that they would value citizen representation within such a body.[81]
  • Independence is important. The nationally representative public attitudes survey How do people feel about AI? revealed that more people felt an independent regulator was best placed to ensure AI is used safely than felt this about any other body, including private companies and the Government.[82] This may reflect differential relations of trust and trustworthiness between civil society and other stakeholders involved in data and AI, which we discuss in the next section.

The public are less trusting of private industry developing and regulating AI-driven technologies than they are of other stakeholders

  • Evidence from the UK and the USA finds that the public do not trust private companies as developers or regulators of AI-driven technologies, and instead hold higher trust in scientists and researchers or professionals and independent regulatory bodies, respectively.[83] [84] [85] [86] [87] For example, when asked how concerned they are with different stakeholders developing high-impact AI-driven technologies, such as systems that determine an individual’s eligibility for welfare benefits or their risk of developing cancer from a scan, survey results found that the public are most concerned by private companies being involved and least concerned by the involvement of researchers or universities.[88]
  • UK research also shows that the public do not trust private companies to act with safety or accountability in mind. The Centre for Data Ethics and Innovation’s public attitudes survey found that only 43% of people trusted big technology companies to take actions with data safely, effectively, transparently and with accountability, with this figure decreasing to 30% for social media companies specifically.[89]
  • The public are critical of the motivations of commercial organisations that develop and deploy AI systems in the public sector. Members of a public dialogue on data stewardship were sceptical of the involvement of commercial organisations in the use of health data.[90] Interviews with members of the UK public on data-driven healthcare technologies also revealed that many did not expect technology companies to act on anyone’s interests but their own.[91]

‘[On digital health services] I’m not sure that all of the information is kept just to making services better within the NHS. I think it’s used for [corporations] and large companies that do not have the patients’ best interests at heart, I don’t think.’

– Interviewee, Access Denied? Socioeconomic inequalities in digital health services [92]

The public are concerned about ethics, privacy, equity, inclusiveness, representativeness and non-discrimination, and about exacerbating unequal social stratification and creating a two-tiered society

  • The public support using and developing data-driven technologies when appropriate considerations and guardrails are in place. An earlier synthesis of public attitudes to data by the Ada Lovelace Institute shows support for the use of data-driven technologies when there is a clear benefit to society,[93] with public attitudes research into AI revealing broad positivity for applications of AI in areas like health, as described earlier in this report. Importantly, this positivity is paralleled by high expectations around ethics and responsibility to limit how and where these technologies can be used.[94] The public do not always see innovation and regulation as being at odds with each other: a USA participant from a qualitative study stated that ‘there can be a lot of innovation with guardrails’.[95]
  • There is a breadth of evidence highlighting that principles of equity, inclusion, fairness and transparency are important to the public:
    • The Ada Lovelace Institute’s deliberative research shows that the public believe equity, inclusiveness and non-discrimination need to be embedded into data governance during pandemics for governance to be considered trustworthy,[96] or before deploying biometric technologies.[97] The latter study highlighted that data-driven systems should not exacerbate societal inequalities or create a two-tiered society, with the public questioning the assumption that individuals have equal access to digital infrastructure and expressing concern around discriminatory consequences that may arise from applications of data-driven technology.[98]
    • Qualitative research in the UK found that members of the public feel that respecting privacy, transparency, fairness and accountability underpins good governance of AI.[99] Ethical principles such as fairness, privacy and security were also valued highly in an online survey of German participants evaluating the application of AI to decisions around tax fraud.[100] These participants valued a range of ethical principles equally, highlighting the importance of taking a holistic approach to the development of AI-driven systems. Among children aged 7–11 years in Scotland who took part in deliberative research, fairness was a key area of interest after they were introduced to real-life examples of uses of AI.[101]
    • The public also emphasise the importance of considering the context within which AI-driven technologies are applied. Qualitative research in the UK found that in high-risk applications of AI, such as mental health chatbots or HMRC fraud detection services, individuals expect more information to be provided on how the system has been designed and tested than for lower-risk applications of AI, such as music streaming recommendation systems.[102] As mentioned earlier in this report, members of the Ada Lovelace Institute’s Citizens’ Biometrics Council similarly emphasised proportionality in the use of biometric technology across different contexts, with use in contexts that could enforce social control deemed inappropriate, while other uses around crime prevention elicited mixed perspectives.[103]
  • Creating a trustworthy data ecosystem is seen as crucial in avoiding resistance or backlash to data-driven technologies.[104] [105] Building data ecosystems or data-driven technologies that are trustworthy is likely to improve public acceptance of these technologies. However, a previous analysis of public attitudes to data suggests that aims to build trust can often place the burden on the public to be more trusting rather than demand more trustworthy practices from other stakeholders.[106] Members of a citizens’ jury on data governance highlighted that trust in data-driven technologies is contingent on the trustworthiness of all stakeholders involved in the design, deployment and monitoring of these technologies.[107] These stakeholders include the developers building technologies, the data governance frameworks in place to oversee these technologies and the institutions tasked with commissioning or deploying these technologies.
  • Listening to the public is important in establishing trustworthiness. Trustworthy practices can include better consultation with, listening to, and communicating with people, as suggested by UK interviewees when reflecting on UK central Government deployment of pandemic contact tracing apps.[108] These participants felt that mistrust of central Government was in part related to feeling as though the views of citizens and experts had been ignored. Finding 5 further details public attitudes around participation in data-driven ecosystems.

‘The systems themselves are quite exclusionary, you know, because I work with people with experiences of multiple disadvantages and they’ve been heavily, heavily excluded because they say they have complex needs, but what it is, is that the system is unwilling to flex to provide what those people need to access those services appropriately.’

– Interviewee, Access Denied? Socioeconomic inequalities in digital health services [109]

Explainability of AI-driven decisions is important to the public

  • It is important to people to understand how AI-driven decisions are made, even if that reduces the accuracy of the decision, for reasons relating to fairness and accountability.[110] [111] [112] [113] [114] [115] The How do people feel about AI? survey of British public attitudes by the Ada Lovelace Institute and The Alan Turing Institute found that explainability was important because it helped with accountability and the need to consider individual differences in circumstance.[116] When balancing the accuracy of an AI-powered decision against an explanation of how that decision was made, or the possibility of humans making all decisions, most people in the survey preferred the latter two options. At the same time, a key concern across most AI technologies – such as virtual healthcare assistants and technologies that assess eligibility for welfare or loan repayment risk – was accountability for mistakes if things go wrong, and the need to consider individual and contextual circumstances in automated decision-making.
  • Exposing bias and supporting personal agency is also linked to support for explainability. In scenarios where biases could impact decisions, such as in job application screening decisions, participants from a series of qualitative workshops highlighted that explanations could be a mechanism to provide oversight and expose discrimination, as well as to support personal agency by allowing individuals to contest decisions and advocate for themselves: ‘A few participants also worried that it would be difficult to escape from an inaccurate decision once it had been made, as decisions might be shared across institutions, leaving them essentially powerless and without recourse.’[117]
  • However, there are trade-offs people make between explainability and accuracy depending on the context. The extent to which a decision is mechanical versus subjective, the gravity of the consequences of the decision, whether it is the only chance at a decision or whether information can help the recipient take meaningful action are some of the criteria identified in research with the public when favouring accuracy over explainability.[118]
  • The type of information people want from explanations behind AI-driven technologies also varies depending on context. A qualitative study involving focus groups in the UK, USA and Germany found that transparency and explainability were important, and that the method for providing this transparency depended on the type of AI technology, use and potential negative impact: ‘For AI products used in healthcare or finance, they wanted information about data use, decision-making criteria and how to make an appeal. For AI-generated content, visual labels were more important.’[119]

The public want to be able to address and appeal decisions determined by AI

  • It is important for the public that there are options for redress when mistakes have been made using AI-driven technologies.[120] [121] When asked what would make them more comfortable with the use of AI, the second most commonly chosen option by the public in the How do people feel about AI? attitudes survey was ‘procedures in place to appeal AI decisions’, selected by 59% of people; only ‘laws and regulation’ was selected by more people (62%).[122] In line with the value of explanations in providing accountability and transparency, as previously discussed, workshops with members of the general public across several countries also found that explanations accompanying AI-made decisions were seen as important, as they could support appeals to change decisions if mistakes were made.[123] For example, as part of a study commissioned by the Centre for Data Ethics and Innovation, participants were presented with a scenario in which AI was used to detect tax fraud. They concluded that they would want to understand what information is used, outside of the tax record, in order to identify someone’s profile as a risk. As the quote below shows, understanding the criteria was important to address a potential mistake with significant consequences:

‘I would like to know the criteria used that caused me to be flagged up [in tax fraud detection services using AI], so that I can make sure everything could be cleared up and clear my name.’
– Interviewee, AI Governance[124]

The public ask for agency, control and choice in involvement, as well as in processes of consent and opt-in for sharing data

  • The need for agency and control over data and how decisions are made was a recurrent theme in our rapid review of evidence. People are concerned that AI systems can take over people’s agency in high-stakes decisions that affect their lives. In the Ada Lovelace Institute’s and The Alan Turing Institute’s recent survey of the British public, people noted concerns about AI replacing professional judgements, not being able to account for individual circumstances and a lack of transparency and accountability in decision-making. For example, almost two thirds (64%) were concerned that workplaces would rely too heavily on AI for recruitment compared to professional judgements.[125]
  • The need for control is also mentioned in relation to consent. For example, the Ada Lovelace Institute’s previous review of evidence Who Cares what the Public Think? found that ‘people often want more specific, granular and accessible information about what data is collected, who it is used by, what it is used for and what rights data subjects have over that use.’[126] A juror from the Citizens’ Biometrics Council also referenced the importance of consent:

‘One of the things that really bugs me is this notion of consent: in reality [other] people determine how we give that consent, like you go into a space and by being there you’ve consented to this, this and this. So, consent is nothing when it’s determined how you provide it.’
– Jury member, The Citizens’ Biometrics Council[127]

  • Control also relates to privacy. A lack of privacy and control over the content people see on social media, and over the data that is extracted, was also identified as a consistent concern in the recent survey of the British public conducted by the Ada Lovelace Institute and The Alan Turing Institute.[128] In this study, 69% of people identified invasion of privacy as a concern around targeted consumer advertising and 50% were concerned about the security of their personal information.
  • Consent is particularly important in high-stakes uses of AI. Consent was also deemed important in a series of focus groups conducted in the UK, USA and Germany, especially ‘where the use of AI has more material consequences for someone affected, like a decision about a loan, participants thought that people deserved the right to consent every time’.[129] In the same study, participants noted consent is about informed choice, rather than just choosing yes or no.
  • The need for consent is ongoing and complicated by the pervasiveness of some technologies. Consent remained an issue for members of the Citizens’ Biometrics Council that the Information Commissioner’s Office (ICO) reconvened in November 2022. While some participants welcomed the inclusion of information on consent in the new guidance by the ICO, others remained concerned because of the increased pervasiveness of biometrics, which would make it more difficult for people to be able to consent.[130]
  • The demand for agency and control is also linked to demands for transparency in data-driven systems. For example, the citizens’ juries the Ada Lovelace Institute convened on health systems in 2022 found that ‘agency over personal data was seen as an extension of the need for transparency around data-driven systems. Where a person is individually affected by data, jurors felt it was important to have adequate choice and control over its use.’[131]

‘If we are giving up our data, we need to be able to have a control of that and be able to see what others are seeing about us. That’s a level of mutual respect that needs to be around personal data sharing.’
– Jury member, The Rule of Trust [132]

Finding 5: People’s involvement: people want to have a meaningful say over decisions that affect their everyday lives.

The public want their views and experiences to be included in decision-making processes.

  • There is a demand from participants in research for more meaningful involvement of the public and of lived experience in the development of, implementation of and policy decision-making on data-driven systems and AI. For example, in a public dialogue for the NHS AI Lab, participants ‘flagged that any decision-making approaches need to be inclusive, representative, and accessible to all’. The research showed that participants valued a range of expertise, including the lived experience of patients.[133]
  • The public want their views to be valued, not just heard.[134] In the Ada Lovelace Institute’s peer research study on digital health services, participants were concerned that they were not consulted or even informed about new digital health services.[135] The research from the NHS AI Lab also found that, at the very least, when involvement takes place, the public want their views to be given the same consideration as the views of other stakeholders.[136] The evidence also shows an expectation of inclusive engagement and multiple channels of participation.[137]

There needs to be diversity in the views that are included and heard

  • A diversity of views and public participation need to be part of legislative and oversight bodies and processes.[138] The Citizens’ Biometrics Council that the Ada Lovelace Institute convened in 2020 also suggested the need to include the public in a broad representative group of individuals charged with overseeing an ongoing framework for governance and a register on the use of biometric technologies.[139] Members of the Ada Lovelace Institute’s citizens’ jury on data governance during a pandemic advocated for public representation in any regulatory bodies overseeing AI-driven technologies.[140] Members of a public dialogue on data stewardship particularly stressed the importance of ensuring those that are likely to be affected by decisions are involved in the decision-making process.[141]

‘For me good governance might be a place where citizens […] have democratic parliament of technology, something to hold scrutiny.’
– Jury member, The Rule of Trust.[142]

  • This desire for involvement in decisions that affect them is felt even by children as young as 7–11 years old. Deliberative engagement with children in Scotland shows that they want agency over the data collected about them, and want to be consulted about the AI systems created with that data.[143] The young participants wanted to make sure that many children from different backgrounds would be consulted when data was gathered to create new systems, to ensure outcomes from these systems were equitable for all children.

‘We need to spend more time in a place to collect information about it and make sure we know what we are working with. We also need to talk to lots of different children at different ages.’
– Member of Children’s Parliament, Exploring Children’s Rights and AI[144]

How can involving the public meaningfully in decision-making support safer AI?

Finding 6: There are important gaps in research with underrepresented groups, those impacted by specific AI uses, and in research from different countries.

Different people and groups, like young people or people from minoritised ethnic communities, have distinct views about AI.

Evidence points to age and other socio-demographic differences as factors related to varying public attitudes to AI.[145] [146] [147]

  • Young people have different views on some aspects of AI. For example, the survey of British public attitudes How do people feel about AI? showed that the belief that the companies developing AI technologies should be responsible for the safety of those technologies was more common among people aged 18–24 years old than in older age groups. This suggests that younger people have high expectations of private companies and some degree of trust in them carrying out their corporate responsibilities.[148]
  • Specific concerns around technology may also relate to some socio-demographic characteristics. Polling from the USA suggests worries around job losses due to AI are associated with age (workers under 58 are more concerned than those over 58) and ethnicity (people from Asian, Black and Hispanic backgrounds are more concerned than those from white backgrounds).[149] And although global engagement on AI is limited, the available evidence suggests that there may be wide geographical differences in feelings about AI and fairness, and trust in both the companies using AI, and in the AI systems, to be fair.[150] [151]

Some people, groups and parts of the world are underrepresented in the evidence

  • Some publics are underrepresented in some of the evidence.[152] [153]
    • Sample size, recruitment and methods used for taking part in research, as well as other factors, can affect the quality of insights that research is able to represent across different publics. For example, the How do people feel about AI? survey of public attitudes is limited in its ability to represent the views of groups of people who are racially minoritised, such as Black or Asian populations, due to small sample sizes. This can be a methodological limitation of representative, quantitative research, and so is present in the research findings despite a recognition by researchers that these groups may be disproportionately affected by some of the technologies surveyed.[154] There is therefore a need for quantitative and qualitative research among those most impacted and least represented by some uses of AI, especially marginalised or minoritised groups and younger age groups.
  • There is an overrepresentation of Western-centric views:
    • The existing evidence identified comes from English-speaking Western countries, and much of it was conducted by ‘a small group of experts educated in Western Europe or North America’.[155] [156] This is also reflected in the gaps of this rapid review, and the Ada Lovelace Institute recognises that, as a predominantly UK-based organisation, it might face barriers to discovering and analysing evidence emerging from across the world. In the context of global summits and discussions on global governance, and particularly recognising that the AI supply chain transcends boundaries of nations and regions, there is a need for research and evidence that includes different contexts and political economies, where views and experiences may vary in different ways across AI uses.

Finding 7: There is a significant body of evidence that demonstrates ways to meaningfully involve the public in decision-making.

But making this happen requires a commitment from decision-makers to embed participatory processes.

As described in the findings above, the public want to be able to have a say in – and to have control over – decisions that impact their lives. They also think that members of the public should be involved in legislative and oversight processes. This section introduces some of the growing evidence on how to do this meaningfully.

Public attitudes research, engagement and participation involve distinct methods that deliver different evidence and outcomes 

Different methods of public engagement and participation produce different outcomes, and it is important to understand their relative strengths and limitations in order to use them effectively to inform policy (see Table 1).

Some methods are consultative, whereas others enable deeper involvement. According to the International Association for Public Participation (IAP2) framework, methods can embed the public deeper into decision-making to increase the impact they have on those decisions.[157] This framework has been further developed in the Ada Lovelace Institute’s report Participatory data stewardship, which sets out the relationships between different kinds of participatory practices.[158]

Surveys are quantitative methods of collecting data that capture immediate attitudes influenced by discourse, social norms and varied levels of knowledge and experience.[159] [160] They rely predominantly on closed questions that require direct responses. Analysis of survey results helps researchers and policymakers to understand the extent to which some views are held across populations, and to track changes over time. However, quantitative methods are less suited to answering ‘why’ or ‘how’ questions. In addition, they do not allow for an informed and reasoned process. As others have pointed out: ‘surveys treat citizens as subjects of research rather than participants in the process of acquiring knowledge or making judgements.’[161]

Some qualitative methods, such as focus groups or interviews, can provide important insight into people’s views and lived experience. However, there is a risk that participation remains at the consultative level, depending on how the research is designed and embedded in decision-making processes.

Public deliberation can enable deep insights and recommendations to inform policy through an informed, reasoned and deliberative process of engagement. Participants are usually randomly selected to reflect the diversity of a population or groups, in the context of a particular issue or question. They are provided with expert guidance and informed, balanced evidence, and given time to learn, understand and discuss. These processes can be widened through interconnected events to ensure civil society and the underrepresented or minoritised groups less likely to attend these deliberative processes are included in different and relevant ways.[162] There is a risk that the trust of participants in these processes is undermined if their contributions are not seriously considered and embedded in policies.

Complex or contested topics need careful and deep public engagement

We contend that there is both a role and a need for methods of participation that provide in-depth involvement. This is particularly important when what is at stake are not narrow technical questions but complex policy areas that permeate all aspects of people’s lives, as is the case with the many different uses of AI in society. The Ada Lovelace Institute’s Rethinking data report argued the following:

‘Through a broad range of participatory approaches – from citizens’ councils and juries that directly inform local and national data policy and regulation, to public representation on technology company governance boards – people are better represented, more supported and empowered to make data systems and infrastructures work for them, and policymakers are better informed about what people expect and desire from data, technologies and their uses.’[163]

Similar lessons have been learned from climate policymaking. The Global Science Partnership finds that: ‘Through our experience delivering pilots worldwide as part of the Global Science Partnership, we found that climate policy making can be more effective and impactful when combining the expertise of policymakers, experts and citizens at an early stage in its development, rather than through consulting on draft proposals.’[164]

Other research has also argued that some AI uses, in particular those that put civil and human rights at risk, stand in greater need of meaningful public participation. For example, a report by Data & Society finds that AI uses related to access to government services and benefits, retention of biometric or health data, surveillance, or uses that bring new ethical challenges, such as generative AI or self-driving cars, require in-depth public engagement.[165]

Deliberative and participatory engagement can provide informed and reasoned policy insights from diverse publics

Evidence about participatory and deliberative approaches shows their potential for enabling rigorous engagement processes, in which publics who are reflective of the diversity of views in the population are exposed to a range of knowledge and expertise. According to democratic theorists, inclusive deliberation is a key mechanism to enable collective decision-making.[166]

Through a shared process of considered deliberation and reasoned judgement with others, deliberative publics are able to meaningfully understand different data-driven technologies and the impact they are having or can have on different groups.[167] [168]

Research on these processes shows that ‘deliberating citizens can and do influence policies’, and that they are being implemented in parliamentary contexts by civil society, private companies and international institutions.[169]

Using participation as a tick-box exercise risks the trustworthiness, legitimacy and effectiveness of decision-making

Evidence from public participation research identifies the risk of using participation to simply tick a box to demonstrate public engagement, or as a stamp of approval for a decision that has already been substantially made. For example, participants in a deliberative study by the NHS AI Lab discussed the need for public engagement to be meaningful and impactful, and considered how lived experience would impact decision-making processes alongside the agendas of other stakeholders.[170]

There is a need to engage the public in in-depth processes that are consequential in their influence on government policy.[171] Our Rethinking data report also referred to this risk:

‘In order to be successful, such initiatives need political will, support and buy-in, to ensure that their outcomes are acknowledged and adopted. Without this, participatory initiatives run the risk of ‘participation washing’, whereby public involvement is merely tokenistic.’[172]

Lessons from public engagement in Colombia, Kenya and the Seychelles likewise point to the need for ‘deep engagement at all stages through the policymaking process’ to improve effectiveness, trust and transparency.[173]

Experiences and research on institutionalising participatory and deliberative approaches can offer solutions to policymakers

The use of participatory and deliberative approaches, and the evidence of their impact, is growing in the UK and many parts of the world,[174] in what has been described as a ‘turn toward deliberative systems’[175] or a ‘deliberative wave’.[176] [177] However, policy professionals and governments need to take the results from these processes seriously and embed them in policy.

Frameworks like the OECD’s ‘Institutionalising Public Deliberation’ provide a helpful summary of some of the ways in which this can happen, including examples like the Ostbelgien model, the city of Paris’ model and Bogotá’s itinerant assembly.[178]

Ireland’s experience running deliberative processes that culminated in policy change,[179] or the experience of the London Borough of Newham with its standing assembly, offer other lessons.

At a global level, the Global Assembly on the Climate and Ecological Crisis held in 2021 serves as a precedent for what a global assembly or similar permanent citizens’ body on AI could look like, including civil society and underrepresented communities.[180] An independent evaluation found that the Global Assembly ‘established itself as a potential player in global climate governance, but it also spotlighted the challenges of influencing global climate governance on the institutional level.’[181] This insight underlines that such processes need to be connected to decision-making bodies if they are to be consequential.

Deliberative and participatory processes have been used for decades in many areas of policymaking, but their use by governments to involve the public in decisions on AI remains surprisingly unexplored:

‘Despite their promising potential to facilitate more effective policymaking and regulation, the role of public participation in data and technology-related policy and practice remains remarkably underexplored, if compared – for example – to public participation in city planning and urban law.’[182]

This review of existing literature demonstrates ways to operationalise or institutionalise the involvement of the public in legislative processes, and lessons on how to avoid that involvement becoming a merely consultative exercise. We do not claim that all these processes and examples have always been successful, and point to evidence that a lack of commitment from governments to implement citizens’ recommendations is one of the reasons why they can fail.[183] We contend that there is currently a significant opportunity for governments to consider processes that can embed the participation of the public in meaningful and consequential ways – and that doing this will improve outcomes for people affected by technologies and for current and future societies.

 

Conclusions

This rapid review shows that public attitudes research is consistent in what it finds about the public’s views on the potential benefits of AI, their concerns and how they think AI should be regulated.

  • It is important for governments to listen to and act on this evidence, paying attention in particular to different AI uses and how they currently have impacts on people’s everyday lives. AI uses affecting decision-making around services and jobs, or affecting human and civil rights, require particular attention. The public do not see AI as just one thing and have nuanced views about its different uses, risks and impacts. AI uses in advancing science and improving health diagnosis are largely seen as positive, and so are its uses in tasks that can be made faster and more efficient. However, the public are concerned about relying on AI systems to make decisions that impact people’s lives, such as in job recruitment or accessing financial support, either through loans or welfare.
  • The public are also concerned with uses of AI that replace human judgement, communication and emotion, in aspects like care or decisions that need to account for context and personal circumstances.
  • There are also concerns about privacy, especially in relation to uses of AI in people’s everyday lives, like targeted advertising, robotic home assistants or surveillance.
  • There is emerging evidence that the public have similar concerns about the use of foundation models. While these models may be welcomed when they facilitate or augment mechanical, low-risk tasks or speed up data analysis, the public are concerned about trading off accuracy for speed. They are also concerned about AI uses replacing human judgement or emotion, and about their potential to amplify bias and discrimination.

Policymakers should use evidence from public attitudes research to strengthen regulation and independent oversight of AI design, development, deployment and uses and to meaningfully engage with diverse publics in the process.

  • Evidence from the public shows a preference for independent regulation with ‘teeth’ that demands transparency and includes mechanisms for assessing risk before deployment of technologies, as well as for accountability and redress.
  • The public want to maintain agency and control over how data is used and for what purposes.
  • Inclusion and non-discrimination are important for people. There is a concern that both the design and uses of AI technologies will amplify exclusion, bias or discrimination, and the public want regulatory frameworks that prevent this.
  • Trust in data-driven systems is contingent on the trustworthiness of all stakeholders involved. The public find researchers and academics more trustworthy than the private sector. Engaging the public in the design, deployment, regulation and monitoring of these systems is also important to avoid entrenching resistance.

Policymakers should use diverse methods and approaches to engage with diverse publics with different views and experiences, and in different contexts. Engaging the public in participatory and deliberative processes to inform policy requires embedded, institutional commitment so that the engagement is consequential rather than tokenistic.

  • The research indicates differences in attitudes across demographics, including age and socio-economic background, and there is a need for more evidence from underrepresented groups and specific publics impacted by specific AI uses.
  • There is also a need for evidence from different contexts across the globe, especially considering that the AI supply chain transcends political jurisdictions.
  • The public want to have a say in decisions that affect their lives, and want spaces for themselves and representative bodies to be part of legislative and monitoring processes.
  • Different research and public participation approaches result in different outcomes. While some methods are best suited to consulting the public on a particular issue, others enable them to be involved in decision-making. Participatory and deliberative methods enable convening publics that are reflective of diversity in the population to offer informed and reasoned conclusions that can inform practice and policy.
  • Evidence from deliberative approaches shows ways for policymakers to meaningfully include the public in decision-making processes at a local, national, regional and global level, such as through citizens’ assemblies or juries and working with civil society. These processes need political will to be consequential.

Methodology

To conduct this rapid evidence review, we combined public attitudes research carried out by the Ada Lovelace Institute with studies by other research organisations, assessed against criteria that gave a high level of confidence in the robustness of the research.

We used keyword-based online searches to identify evidence about public attitudes in addition to our own research. We also assessed the quality and relevance of recent studies encountered or received through professional networks that we had not identified through our search, and incorporated them in the review where that assessment gave us confidence in their methodology. A thematic analysis was conducted to categorise and identify recurrent themes. These themes have been developed and structured around a set of key findings that aim to speak directly to policy professionals.
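Purely as an illustration of the screening step, the sketch below filters candidate study records against a keyword list. The keywords and record fields are invented for the example and are not the search protocol actually used for this review.

```python
# Illustrative sketch only: keyword screening of candidate study records.
# The keywords and record fields are invented assumptions, not this
# review's actual search protocol.
KEYWORDS = {"public attitudes", "artificial intelligence", "deliberation", "survey"}

def matches(record: dict, keywords: set) -> bool:
    """True if any keyword appears in the record's title or abstract."""
    text = f"{record['title']} {record['abstract']}".lower()
    return any(keyword in text for keyword in keywords)

records = [
    {"title": "How do people feel about AI?",
     "abstract": "A nationally representative survey of public attitudes."},
    {"title": "Faster matrix multiplication",
     "abstract": "A new algorithm with improved asymptotic bounds."},
]

shortlist = [r for r in records if matches(r, KEYWORDS)]
print([r["title"] for r in shortlist])  # only the first record is shortlisted
```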

The evidence resulting from this process is largely from the UK, complemented by other research done predominantly in other English-speaking countries. This is a limitation for a global conversation on AI, and more research from across the globe and diverse publics and contexts is needed.

We focus on research conducted within recent years. The oldest evidence included dates from 2014, and the vast majority has been published since 2018. We chose this focus to ensure findings are relevant, given that events of recent years have had a profound influence on public attitudes towards technology, such as the Cambridge Analytica scandal,[184] the growth and prominence of large technology companies in society, the impacts of the COVID-19 pandemic and the popularisation of tools built on large language models, such as ChatGPT and Bard.

Various methodologies are used in the research we have cited, from national online surveys to deliberative dialogues, qualitative focus groups and more. Each of these methods has different strengths and limitations, and the strengths of one approach can complement the limitations of another.

Table 1: Research methodologies and the evidence they surface

Research method: Representative surveys
Type of evidence: Understanding the extent to which some views are held across some groups in the population; potential to track how views change over time or how they differ across groups in the population.
Level of citizen involvement[185]: Consultation

Research method: Deliberative processes like citizens’ juries or citizens’ assemblies
Type of evidence: Participants reflective of the diversity in a population reach conclusions and policy recommendations based on an informed and reasoned process that considers pros and cons, different expertise and lived experiences.
Level of citizen involvement: Involve, collaborate or empower (depending on the extent to which the process is embedded in decision-making and recommendations from participants are consequential)

Research method: Qualitative research like focus groups or in-depth interviews
Type of evidence: In-depth understanding of the types of views that exist on a topic, in a collective or individual setting; the contextual and socio-demographic reasons behind those views; and an understanding of the trade-offs people make in their thinking about a topic.
Level of citizen involvement: Consultation

Research method: Co-designed research
Type of evidence: Participants’ lived experience and knowledge are included in the research process from the start (including the problem that needs to be solved and how to approach the research), and power in how decisions are made is distributed.
Level of citizen involvement: Involve, collaborate or empower (depending on the extent to which power is shared across participants and researchers and the extent to which it has an impact on decision-making)

Acknowledgements

This report was co-authored by Dr Anna Colom, Roshni Modhvadia and Octavia Reeve.

We are grateful to the following colleagues for their review and comments on a draft of this paper:

  • Reema Patel, ESRC Digital Good Network Policy Lead
  • Ali Shah, Global Principal Director for Responsible AI at Accenture and advisory board member at the Ada Lovelace Institute
  • Professor Jack Stilgoe, Co-lead Policy and Public Engagement Strategy, Responsible AI UK

Bibliography

Ada Lovelace Institute, ‘The Citizens’ Biometrics Council. Recommendations and Findings of a Public Deliberation on Biometrics Technology, Policy and Governance’ (2021) <https://www.adalovelaceinstitute.org/project/citizens-biometrics-council/>

Ada Lovelace Institute, ‘The Citizens’ Biometrics Council’ (2021) <https://www.adalovelaceinstitute.org/report/citizens-biometrics-council/>

Ada Lovelace Institute, ‘Participatory Data Stewardship: A Framework for Involving People in the Use of Data’ (2021) <https://www.adalovelaceinstitute.org/report/participatory-data-stewardship/>

Ada Lovelace Institute, ‘Rethinking Data and Rebalancing Digital Power’ (2022) <https://www.adalovelaceinstitute.org/project/rethinking-data/>

Ada Lovelace Institute, ‘The Rule of Trust: Findings from Citizens’ Juries on the Good Governance of Data in Pandemics.’ (2022) <https://www.adalovelaceinstitute.org/wp-content/uploads/2022/07/The-rule-of-trust-Ada-Lovelace-Institute-July-2022.pdf>

Ada Lovelace Institute, ‘Who Cares What the Public Think?’ (2022) <https://www.adalovelaceinstitute.org/evidence-review/public-attitudes-data-regulation/>

Ada Lovelace Institute, ‘Access Denied? Socioeconomic Inequalities in Digital Health Services’ (2023) <https://www.adalovelaceinstitute.org/wp-content/uploads/2023/09/ADALOV1.pdf>

Ada Lovelace Institute, ‘Listening to the Public. Views from the Citizens’ Biometrics Council on the Information Commissioner’s Office’s Proposed Approach to Biometrics.’ (2023) <https://www.adalovelaceinstitute.org/report/listening-to-the-public/>

Ada Lovelace Institute, ‘Regulating AI in the UK’ (2023) <https://www.adalovelaceinstitute.org/report/regulating-ai-in-the-uk/> accessed 1 August 2023

Ada Lovelace Institute, ‘Foundation Models in the Public Sector: Key Considerations for Deploying Public-Sector Foundation Models’ (2023) Policy briefing <https://www.adalovelaceinstitute.org/policy-briefing/foundation-models-public-sector/>

Ada Lovelace Institute, ‘Lessons from the App Store’ (2023) <https://www.adalovelaceinstitute.org/wp-content/uploads/2023/06/Ada-Lovelace-Institute-Lessons-from-the-App-Store-June-2023.pdf> accessed 27 September 2023

Ada Lovelace Institute and Alan Turing Institute, ‘How Do People Feel about AI? A Nationally Representative Survey of Public Attitudes to Artificial Intelligence in Britain’ (2023) <https://www.adalovelaceinstitute.org/report/public-attitudes-ai/> accessed 6 June 2023

American Psychological Association, ‘2023 Work in America Survey: Artificial Intelligence, Monitoring Technology, and Psychological Well-Being’ (https://www.apa.org, 2023) <https://www.apa.org/pubs/reports/work-in-america/2023-work-america-ai-monitoring> accessed 26 September 2023

BEIS, ‘Public Attitudes to Science’ (Department for Business, Energy and Industrial Strategy/Kantar Public 2019) <https://www.kantar.com/uk-public-attitudes-to-science>

BEIS, ‘BEIS Public Attitudes Tracker: Artificial Intelligence Summer 2022, UK’ (Department for Business, Energy & Industrial Strategy 2022) <https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1105175/BEIS_PAT_Summer_2022_Artificial_Intelligence.pdf>

BritainThinks and Centre for Data Ethics and Innovation, ‘AI Governance’ (2022) <https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1146010/CDEI_AI_White_Paper_Final_report.pdf>

Budic M, ‘AI and Us: Ethical Concerns, Public Knowledge and Public Attitudes on Artificial Intelligence’, Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (ACM 2022) <https://dl.acm.org/doi/10.1145/3514094.3539518> accessed 22 August 2023

‘CDEI | AI Governance’ (BritainThinks 2022) <https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1177293/Britainthinks_Report_-_CDEI_AI_Governance.pdf> accessed 22 August 2023

Central Digital & Data Office, ‘Data Ethics Framework’ (GOV.UK, 16 September 2020) <https://www.gov.uk/government/publications/data-ethics-framework> accessed 23 May 2023

Centre for Data Ethics and Innovation, ‘Public Attitudes to Data and AI: Tracker Survey (Wave 2)’ (2022) <https://www.gov.uk/government/publications/public-attitudes-to-data-and-ai-tracker-survey-wave-2>

Children’s Parliament, Scottish AI Alliance and The Alan Turing Institute, ‘Exploring Children’s Rights and AI. Stage 1 (Summary Report)’ (2023) <https://www.turing.ac.uk/sites/default/files/2023-05/exploring_childrens_rights_and_ai.pdf>

Cohen K and Doubleday R (eds), Future Directions for Citizen Science and Public Policy (Centre for Science and Policy 2021)

Curato N, Deliberative Mini-Publics: Core Design Features (Bristol University Press 2021)

Curato N and others, ‘Global Assembly on the Climate and Ecological Crisis: Evaluation Report’ (2023) <https://eprints.ncl.ac.uk> accessed 26 October 2023

Curato N and others, ‘Twelve Key Findings in Deliberative Democracy Research’ (2017) 146 Daedalus 28 <https://direct.mit.edu/daed/article/146/3/28-38/27148> accessed 6 August 2021

Davies M and Birtwistle M, ‘Seizing the “AI Moment”: Making a Success of the AI Safety Summit’ (7 September 2023) <https://www.adalovelaceinstitute.org/blog/ai-safety-summit/>

Doteveryone, ‘People, Power and Technology: The 2020 Digital Attitudes Report’ (2020) <https://doteveryone.org.uk/wp-content/uploads/2020/05/PPT-2020_Soft-Copy.pdf> accessed 21 September 2023

Farbrace E, Warren J and Murphy R, ‘Understanding AI Uptake and Sentiment among People and Businesses in the UK’ (Office for National Statistics 2023)

Farrell DM and others, ‘When Mini-Publics and Maxi-Publics Coincide: Ireland’s National Debate on Abortion’ [2020] Representation 1 <https://www.tandfonline.com/doi/full/10.1080/00344893.2020.1804441> accessed 19 July 2021

Gilman M, ‘Democratizing AI: Principles for Meaningful Public Participation’ (Data & Society 2023) <https://datasociety.net/wp-content/uploads/2023/09/DS_Democratizing-AI-Public-Participation-Brief_9.2023.pdf> accessed 5 October 2023

Global Assembly Team, ‘Report of the 2021 Global Assembly on the Climate and Ecological Crisis’ (2022) <http://globalassembly.org>

Global Science Partnership, ‘The Inclusive Policymaking Toolkit for Climate Action’ (2023) <https://www.globalsciencepartnership.com/_files/ugd/b63d52_8b6b397c52b14b46a46c1f70e04839e1.pdf> accessed 3 October 2023

Goldberg S and Bächtiger A, ‘Catching the “Deliberative Wave”? How (Disaffected) Citizens Assess Deliberative Citizen Forums’ (2023) 53 British Journal of Political Science 239 <https://www.cambridge.org/core/product/identifier/S0007123422000059/type/journal_article> accessed 8 September 2023

González F and others, ‘Global Reactions to the Cambridge Analytica Scandal: A Cross-Language Social Media Study’ [2019] WWW ’19: Companion Proceedings of The 2019 World Wide Web Conference 799

Grönlund K, Bächtiger A and Setälä M, Deliberative Mini-Publics: Involving Citizens in the Democratic Process (ECPR Press 2014)

Hadlington L and others, ‘The Use of Artificial Intelligence in a Military Context: Development of the Attitudes toward AI in Defense (AAID) Scale’ (2023) 14 Frontiers in Psychology 1164810 <https://www.frontiersin.org/articles/10.3389/fpsyg.2023.1164810/full> accessed 24 August 2023

IAP2, ‘IAP2 Spectrum of Public Participation’ <https://iap2.org.au/wp-content/uploads/2020/01/2018_IAP2_Spectrum.pdf>

Ipsos, ‘Global Views on AI 2023: How People across the World Feel about Artificial Intelligence and Expect It Will Impact Their Life’ (2023) <https://www.ipsos.com/sites/default/files/ct/news/documents/2023-07/Ipsos%20Global%20AI%202023%20Report%20-%20NZ%20Release%2019.07.2023.pdf> accessed 3 October 2023

Ipsos MORI, Open Data Institute and Imperial College Health Partners, ‘NHS AI Lab Public Dialogue on Data Stewardship’ (NHS AI Lab 2022) <https://www.ipsos.com/en-uk/understanding-how-public-feel-decisions-should-be-made-about-access-their-personal-health-data-ai>

Kieslich K, Keller B and Starke C, ‘Artificial Intelligence Ethics by Design. Evaluating Public Perception on the Importance of Ethical Design Principles of Artificial Intelligence’ (2022) 9 Big Data & Society 205395172210929 <https://journals.sagepub.com/doi/10.1177/20539517221092956?icid=int.sj-full-text.similar-articles.3#:~:text=The%20results%20suggest%20that%20accountability,systems%20is%20slightly%20less%20important.> accessed 22 August 2023

Kieslich K, Lünich M and Došenović P, ‘Ever Heard of Ethical AI? Investigating the Salience of Ethical AI Issues among the German Population’ [2023] International Journal of Human–Computer Interaction 1 <http://arxiv.org/abs/2207.14086> accessed 22 August 2023

Landemore H, Democratic Reason: Politics, Collective Intelligence, and the Rule of the Many (2017)

Lazar S and Nelson A, ‘AI Safety on Whose Terms?’ (2023) 381 Science 138 <https://www.science.org/doi/10.1126/science.adi8982> accessed 13 October 2023

‘Majority of Britons Support Vaccine Passports but Recognise Concerns in New Ipsos UK KnowledgePanel Poll’ (Ipsos, 31 March 2021) <https://www.ipsos.com/en-uk/majority-britons-support-vaccine-passports-recognise-concerns-new-ipsos-uk-knowledgepanel-poll> accessed 27 September 2023

Mellier C and Wilson R, ‘Getting Real About Citizens’ Assemblies: A New Theory of Change for Citizens’ Assemblies’ (European Democracy Hub: Research, 10 October 2023)

Milltown Partners and Clifford Chance, ‘Responsible AI in Practice: Public Expectations of Approaches to Developing and Deploying AI’ (2023) <https://www.cliffordchance.com/content/dam/cliffordchance/hub/TechGroup/responsible-ai-in-practice-report-2023.pdf>

Nussberger A-M and others, ‘Public Attitudes Value Interpretability but Prioritize Accuracy in Artificial Intelligence’ (2022) 13 Nature Communications 5821 <https://www.nature.com/articles/s41467-022-33417-3> accessed 8 June 2023

OECD, ‘Innovative Citizen Participation and New Democratic Institutions: Catching the Deliberative Wave’ (OECD 2021) <https://www.oecd-ilibrary.org/governance/innovative-citizen-participation-and-new-democratic-institutions_339306da-en> accessed 5 January 2022

OECD, ‘Institutionalising Public Deliberation’ (OECD) <https://www.oecd.org/governance/innovative-citizen-participation/icp-institutionalising%20deliberation.pdf>

Rainie L and others, ‘AI and Human Enhancement: Americans´Openness Is Tempered by a Range of Concerns’ (Pew Research Center 2022) <https://www.pewresearch.org/internet/2022/03/17/how-americans-think-about-artificial-intelligence/>

Thinks Insights & Strategy and Centre for Data Ethics and Innovation, ‘Public Perceptions of Foundation Models’ (Centre for Data Ethics and Innovation 2023) <https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1184584/Thinks_CDEI_Public_perceptions_of_foundation_models.pdf>

Tyson A and Kikuchi E, ‘Growing Public Concern about the Role of Artificial Intelligence in Daily Life’ (Pew Research Center 2023) <https://www.pewresearch.org/short-reads/2023/08/28/growing-public-concern-about-the-role-of-artificial-intelligence-in-daily-life/>

UK Government, ‘Iconic Bletchley Park to Host UK AI Safety Summit in Early November’ <https://www.gov.uk/government/news/iconic-bletchley-park-to-host-uk-ai-safety-summit-in-early-november>

van der Veer SN and others, ‘Trading off Accuracy and Explainability in AI Decision-Making: Findings from 2 Citizens’ Juries’ (2021) 28 Journal of the American Medical Informatics Association 2128 <https://academic.oup.com/jamia/article/28/10/2128/6333351> accessed 3 May 2023

Woodruff A and others, ‘A Qualitative Exploration of Perceptions of Algorithmic Fairness’, Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (ACM 2018) <https://dl.acm.org/doi/10.1145/3173574.3174230> accessed 22 August 2023

Woodruff A and others, ‘“A Cold, Technical Decision-Maker”: Can AI Provide Explainability, Negotiability, and Humanity?’ (arXiv, 1 December 2020) <http://arxiv.org/abs/2012.00874> accessed 22 August 2023

Wright J and others, ‘Privacy, Agency and Trust in Human-AI Ecosystems: Interim Report (Short Version)’ (The Alan Turing Institute)

Zhang B and Dafoe A, ‘Artificial Intelligence: American Attitudes and Trends’ [2019] SSRN Electronic Journal <https://www.ssrn.com/abstract=3312874> accessed 22 August 2023


Footnotes

[1] UK Government, ‘Iconic Bletchley Park to Host UK AI Safety Summit in Early November’ <https://www.gov.uk/government/news/iconic-bletchley-park-to-host-uk-ai-safety-summit-in-early-november>.

[2] Seth Lazar and Alondra Nelson, ‘AI Safety on Whose Terms?’ (2023) 381 Science 138 <https://www.science.org/doi/10.1126/science.adi8982> accessed 13 October 2023.

[3] Ada Lovelace Institute, ‘Regulating AI in the UK’ (2023) <https://www.adalovelaceinstitute.org/report/regulating-ai-in-the-uk/> accessed 1 August 2023.

[4] Ada Lovelace Institute, ‘Foundation Models in the Public Sector: Key Considerations for Deploying Public-Sector Foundation Models’ (2023) Policy briefing <https://www.adalovelaceinstitute.org/policy-briefing/foundation-models-public-sector/>.

[5] Matt Davies and Michael Birtwistle, ‘Seizing the “AI Moment”: Making a Success of the AI Safety Summit’ (7 September 2023) <https://www.adalovelaceinstitute.org/blog/ai-safety-summit/>.

[6] Central Digital & Data Office, ‘Data Ethics Framework’ (GOV.UK, 16 September 2020) <https://www.gov.uk/government/publications/data-ethics-framework> accessed 23 May 2023.

[7] Ada Lovelace Institute and Alan Turing Institute, ‘How Do People Feel about AI? A Nationally Representative Survey of Public Attitudes to Artificial Intelligence in Britain’ (2023) <https://www.adalovelaceinstitute.org/report/public-attitudes-ai/> accessed 6 June 2023.

[8] James Wright and others, ‘Privacy, Agency and Trust in Human-AI Ecosystems: Interim Report (Short Version)’ (The Alan Turing Institute).

[9] Ada Lovelace Institute and Alan Turing Institute (n 7).

[10] Emily Farbrace, Jeni Warren and Rhian Murphy, ‘Understanding AI Uptake and Sentiment among People and Businesses in the UK’ (Office for National Statistics 2023).

[11] Milltown Partners and Clifford Chance, ‘Responsible AI in Practice: Public Expectations of Approaches to Developing and Deploying AI’ (2023) <https://www.cliffordchance.com/content/dam/cliffordchance/hub/TechGroup/responsible-ai-in-practice-report-2023.pdf>.

[12] Lee Rainie and others, ‘AI and Human Enhancement: Americans´ Openness Is Tempered by a Range of Concerns’ (Pew Research Center 2022) <https://www.pewresearch.org/internet/2022/03/17/how-americans-think-about-artificial-intelligence/>.

[13] Baobao Zhang and Allan Dafoe, ‘Artificial Intelligence: American Attitudes and Trends’ [2019] SSRN Electronic Journal <https://www.ssrn.com/abstract=3312874> accessed 22 August 2023.

[14] Kimon Kieslich, Marco Lünich and Pero Došenović, ‘Ever Heard of Ethical AI? Investigating the Salience of Ethical AI Issues among the German Population’ [2023] International Journal of Human–Computer Interaction 1 <http://arxiv.org/abs/2207.14086> accessed 22 August 2023.

[15] Ada Lovelace Institute, ‘The Citizens’ Biometrics Council. Recommendations and Findings of a Public Deliberation on Biometrics Technology, Policy and Governance’ (2021) <https://www.adalovelaceinstitute.org/project/citizens-biometrics-council/>.

[16] Ada Lovelace Institute and Alan Turing Institute (n 7).

[17] BEIS, ‘Public Attitudes to Science’ (Department for Business, Energy and Industrial Strategy/Kantar Public 2019) <https://www.kantar.com/uk-public-attitudes-to-science>.

[18] Alec Tyson and Emma Kikuchi, ‘Growing Public Concern about the Role of Artificial Intelligence in Daily Life’ (Pew Research Center 2023) <https://www.pewresearch.org/short-reads/2023/08/28/growing-public-concern-about-the-role-of-artificial-intelligence-in-daily-life/>.

[19] Ada Lovelace Institute, ‘Who Cares What the Public Think?’ (2022) <https://www.adalovelaceinstitute.org/evidence-review/public-attitudes-data-regulation/>.

[20] Thinks Insights & Strategy and Centre for Data Ethics and Innovation, ‘Public Perceptions of Foundation Models’ (Centre for Data Ethics and Innovation 2023) <https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1184584/Thinks_CDEI_Public_perceptions_of_foundation_models.pdf>.

[21] Allison Woodruff and others, ‘“A Cold, Technical Decision-Maker”: Can AI Provide Explainability, Negotiability, and Humanity?’ (arXiv, 1 December 2020) <http://arxiv.org/abs/2012.00874> accessed 22 August 2023.

[22] Ada Lovelace Institute and Alan Turing Institute (n 7).

[23] Ipsos MORI, Open Data Institute and Imperial College Health Partners, ‘NHS AI Lab Public Dialogue on Data Stewardship’ (NHS AI Lab 2022) <https://www.ipsos.com/en-uk/understanding-how-public-feel-decisions-should-be-made-about-access-their-personal-health-data-ai>.

[24] BEIS (n 17).

[25] Woodruff and others (n 21).

[26] Ipsos MORI, Open Data Institute and Imperial College Health Partners (n 23).

[27] Ada Lovelace Institute and Alan Turing Institute (n 7).

[28] Ada Lovelace Institute, ‘Listening to the Public. Views from the Citizens’ Biometrics Council on the Information Commissioner’s Office’s Proposed Approach to Biometrics.’ (2023) <https://www.adalovelaceinstitute.org/report/listening-to-the-public/>.

[29] Ada Lovelace Institute, ‘The Citizens’ Biometrics Council. Recommendations and Findings of a Public Deliberation on Biometrics Technology, Policy and Governance’ (n 15).

[30] Ada Lovelace Institute and Alan Turing Institute (n 7).

[31] Ada Lovelace Institute, ‘The Rule of Trust: Findings from Citizens’ Juries on the Good Governance of Data in Pandemics.’ (2022) <https://www.adalovelaceinstitute.org/wp-content/uploads/2022/07/The-rule-of-trust-Ada-Lovelace-Institute-July-2022.pdf>.

[32] BritainThinks and Centre for Data Ethics and Innovation, ‘AI Governance’ (2022) <https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1146010/CDEI_AI_White_Paper_Final_report.pdf>.

[33] Ada Lovelace Institute and Alan Turing Institute (n 7).

[34] Allison Woodruff and others, ‘A Qualitative Exploration of Perceptions of Algorithmic Fairness’, Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (ACM 2018) <https://dl.acm.org/doi/10.1145/3173574.3174230> accessed 22 August 2023.

[35] Ada Lovelace Institute and Alan Turing Institute (n 7).

[36] Woodruff and others (n 21).

[37] Rainie and others (n 12).

[38] Ipsos MORI, Open Data Institute and Imperial College Health Partners (n 23).

[39] Woodruff and others (n 34).

[40] BritainThinks and Centre for Data Ethics and Innovation (n 32).

[41] Ada Lovelace Institute, ‘The Rule of Trust: Findings from Citizens’ Juries on the Good Governance of Data in Pandemics.’ (n 31).

[42] Woodruff and others (n 21).

[43] ibid.

[44] ibid.

[45] ibid.

[46] Ada Lovelace Institute, ‘Access Denied? Socioeconomic Inequalities in Digital Health Services’ (2023) <https://www.adalovelaceinstitute.org/wp-content/uploads/2023/09/ADALOV1.pdf>.

[47] Thinks Insights & Strategy and Centre for Data Ethics and Innovation (n 20).

[48] Ada Lovelace Institute and Alan Turing Institute (n 7).

[49] Marina Budic, ‘AI and Us: Ethical Concerns, Public Knowledge and Public Attitudes on Artificial Intelligence’, Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (ACM 2022) <https://dl.acm.org/doi/10.1145/3514094.3539518> accessed 22 August 2023.

[50] Kieslich, Lünich and Došenović (n 14).

[51] Rainie and others (n 12).

[52] American Psychological Association, ‘2023 Work in America Survey: Artificial Intelligence, Monitoring Technology, and Psychological Well-Being’ (https://www.apa.org, 2023) <https://www.apa.org/pubs/reports/work-in-america/2023-work-america-ai-monitoring> accessed 26 September 2023.

[53] ibid.

[54] BEIS (n 17).

[55] Tyson and Kikuchi (n 18).

[56] Ada Lovelace Institute and Alan Turing Institute (n 7).

[57] Tyson and Kikuchi (n 18).

[58] Ada Lovelace Institute, ‘The Rule of Trust: Findings from Citizens’ Juries on the Good Governance of Data in Pandemics.’ (n 31).

[59] ‘Majority of Britons Support Vaccine Passports but Recognise Concerns in New Ipsos UK KnowledgePanel Poll’ (Ipsos, 31 March 2021) <https://www.ipsos.com/en-uk/majority-britons-support-vaccine-passports-recognise-concerns-new-ipsos-uk-knowledgepanel-poll> accessed 27 September 2023.

[60] American Psychological Association (n 52).

[61] Ada Lovelace Institute, ‘The Rule of Trust: Findings from Citizens’ Juries on the Good Governance of Data in Pandemics.’ (n 31).

[62] Ada Lovelace Institute, ‘The Citizens’ Biometrics Council. Recommendations and Findings of a Public Deliberation on Biometrics Technology, Policy and Governance’ (n 15).

[63] ibid.

[64] ibid.

[65] ‘Explainer: What Is a Foundation Model?’ <https://www.adalovelaceinstitute.org/resource/foundation-models-explainer/> accessed 26 October 2023.

[66] Thinks Insights & Strategy and Centre for Data Ethics and Innovation (n 20).

[67] Thinks Insights & Strategy and Centre for Data Ethics and Innovation (n 20).

[68] Thinks Insights & Strategy and Centre for Data Ethics and Innovation (n 20).

[69] Thinks Insights & Strategy and Centre for Data Ethics and Innovation (n 20).

[70] Thinks Insights & Strategy and Centre for Data Ethics and Innovation (n 20).

[71] Thinks Insights & Strategy and Centre for Data Ethics and Innovation (n 20).

[72] ibid.

[73] Thinks Insights & Strategy and Centre for Data Ethics and Innovation (n 20).

[74] Ada Lovelace Institute, ‘Who Cares What the Public Think?’ (n 19).

[75] Ada Lovelace Institute, ‘The Rule of Trust: Findings from Citizens’ Juries on the Good Governance of Data in Pandemics.’ (n 31).

[76] Ada Lovelace Institute, ‘The Citizens’ Biometrics Council. Recommendations and Findings of a Public Deliberation on Biometrics Technology, Policy and Governance’ (n 15).

[77] Doteveryone, ‘People, Power and Technology: The 2020 Digital Attitudes Report’ (2020) <https://doteveryone.org.uk/wp-content/uploads/2020/05/PPT-2020_Soft-Copy.pdf> accessed 21 September 2023.

[78] Ada Lovelace Institute, ‘The Citizens’ Biometrics Council. Recommendations and Findings of a Public Deliberation on Biometrics Technology, Policy and Governance’ (n 15).

[79] Ada Lovelace Institute, ‘Lessons from the App Store’ <https://www.adalovelaceinstitute.org/wp-content/uploads/2023/06/Ada-Lovelace-Institute-Lessons-from-the-App-Store-June-2023.pdf> accessed 27 September 2023.

[80] Ipsos MORI, Open Data Institute and Imperial College Health Partners (n 23).

[81] Ada Lovelace Institute, ‘The Rule of Trust: Findings from Citizens’ Juries on the Good Governance of Data in Pandemics.’ (n 31).

[82] Ada Lovelace Institute and Alan Turing Institute (n 7).

[83] Ipsos MORI, Open Data Institute and Imperial College Health Partners (n 23).

[84] Ada Lovelace Institute and Alan Turing Institute (n 7).

[85] Centre for Data Ethics and Innovation, ‘Public Attitudes to Data and AI: Tracker Survey (Wave 2)’ (2022) <https://www.gov.uk/government/publications/public-attitudes-to-data-and-ai-tracker-survey-wave-2>.

[86] Zhang and Dafoe (n 13).

[87] Ipsos MORI, Open Data Institute and Imperial College Health Partners (n 23).

[88] Ada Lovelace Institute and Alan Turing Institute (n 7).

[89] Centre for Data Ethics and Innovation (n 85) 2.

[90] Ipsos MORI, Open Data Institute and Imperial College Health Partners (n 23).

[91] Wright and others (n 8).

[92] Ada Lovelace Institute, ‘Access Denied? Socioeconomic Inequalities in Digital Health Services’ (n 46).

[93] Ada Lovelace Institute, ‘Who Cares What the Public Think?’ (n 19).

[94] ibid.

[95] Milltown Partners and Clifford Chance (n 11).

[96] Ada Lovelace Institute, ‘The Rule of Trust: Findings from Citizens’ Juries on the Good Governance of Data in Pandemics.’ (n 31).

[97] Ada Lovelace Institute, ‘The Citizens’ Biometrics Council’ (Ada Lovelace Institute 2021) <https://www.adalovelaceinstitute.org/report/citizens-biometrics-council/>.

[98] Ada Lovelace Institute, ‘The Rule of Trust: Findings from Citizens’ Juries on the Good Governance of Data in Pandemics.’ (n 31).

[99] ‘CDEI | AI Governance’ (BritainThinks 2022) <https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1177293/Britainthinks_Report_-_CDEI_AI_Governance.pdf> accessed 22 August 2023.

[100] Kimon Kieslich, Birte Keller and Christopher Starke, ‘Artificial Intelligence Ethics by Design. Evaluating Public Perception on the Importance of Ethical Design Principles of Artificial Intelligence’ (2022) 9 Big Data & Society 205395172210929 <https://journals.sagepub.com/doi/10.1177/20539517221092956?icid=int.sj-full-text.similar-articles.3#:~:text=The%20results%20suggest%20that%20accountability,systems%20is%20slightly%20less%20important.> accessed 22 August 2023.

[101] Children’s Parliament, Scottish AI Alliance and The Alan Turing Institute, ‘Exploring Children’s Rights and AI. Stage 1 (Summary Report)’ (2023) <https://www.turing.ac.uk/sites/default/files/2023-05/exploring_childrens_rights_and_ai.pdf>.

[102] ‘CDEI | AI Governance’ (n 99).

[103] Ada Lovelace Institute, ‘The Citizens’ Biometrics Council’ (n 97).

[104] Ada Lovelace Institute, ‘Who Cares What the Public Think?’ (n 19).

[105] Ada Lovelace Institute, ‘The Rule of Trust: Findings from Citizens’ Juries on the Good Governance of Data in Pandemics.’ (n 31).

[106] Ada Lovelace Institute, ‘Who Cares What the Public Think?’ (n 19).

[107] Ada Lovelace Institute, ‘The Rule of Trust: Findings from Citizens’ Juries on the Good Governance of Data in Pandemics.’ (n 31).

[108] Wright and others (n 8).

[109] Ada Lovelace Institute, ‘Access Denied? Socioeconomic Inequalities in Digital Health Services’ (n 46).

[110] Ada Lovelace Institute and Alan Turing Institute (n 7).

[111] Ada Lovelace Institute, ‘The Rule of Trust: Findings from Citizens’ Juries on the Good Governance of Data in Pandemics.’ (n 31).

[112] Sabine N van der Veer and others, ‘Trading off Accuracy and Explainability in AI Decision-Making: Findings from 2 Citizens’ Juries’ (2021) 28 Journal of the American Medical Informatics Association 2128 <https://academic.oup.com/jamia/article/28/10/2128/6333351> accessed 3 May 2023.

[113] Woodruff and others (n 21).

[114] Anne-Marie Nussberger and others, ‘Public Attitudes Value Interpretability but Prioritize Accuracy in Artificial Intelligence’ (2022) 13 Nature Communications 5821 <https://www.nature.com/articles/s41467-022-33417-3> accessed 8 June 2023.

[115] Woodruff and others (n 34).

[116] Ada Lovelace Institute and Alan Turing Institute (n 7).

[117] Woodruff and others (n 21).

[118] ibid.

[119] Milltown Partners and Clifford Chance (n 11).

[120]  Woodruff and others (n 21).

[121] Ada Lovelace Institute and Alan Turing Institute (n 7).

[122] ibid.

[123] Woodruff and others (n 21).

[124] BritainThinks and Centre for Data Ethics and Innovation (n 32).

[125] Ada Lovelace Institute and Alan Turing Institute (n 7).

[126] Ada Lovelace Institute, ‘Who Cares What the Public Think?’ (n 19).

[127] Ada Lovelace Institute, ‘The Citizens’ Biometrics Council. Recommendations and Findings of a Public Deliberation on Biometrics Technology, Policy and Governance’ (n 15).

[128] Ada Lovelace Institute and Alan Turing Institute (n 7).

[129] Milltown Partners and Clifford Chance (n 11).

[130] Ada Lovelace Institute, ‘Listening to the Public. Views from the Citizens’ Biometrics Council on the Information Commissioner’s Office’s Proposed Approach to Biometrics.’ (n 28).

[131] Ada Lovelace Institute, ‘The Rule of Trust: Findings from Citizens’ Juries on the Good Governance of Data in Pandemics.’ (n 31).

[132] ibid.

[133] Ipsos MORI, Open Data Institute and Imperial College Health Partners (n 23).

[134] Ipsos MORI, Open Data Institute and Imperial College Health Partners (n 23).

[135] Ada Lovelace Institute, ‘Access Denied? Socioeconomic Inequalities in Digital Health Services’ (n 46).

[136] Ipsos MORI, Open Data Institute and Imperial College Health Partners (n 23).

[137] ibid.

[138] Ada Lovelace Institute, ‘The Citizens’ Biometrics Council. Recommendations and Findings of a Public Deliberation on Biometrics Technology, Policy and Governance’ (n 15).

[139] Ada Lovelace Institute, ‘The Citizens’ Biometrics Council’ (n 97).

[140] Ada Lovelace Institute, ‘The Rule of Trust: Findings from Citizens’ Juries on the Good Governance of Data in Pandemics.’ (n 31).

[141] Ipsos MORI, Open Data Institute and Imperial College Health Partners (n 23).

[142] Ada Lovelace Institute, ‘The Rule of Trust: Findings from Citizens’ Juries on the Good Governance of Data in Pandemics.’ (n 31).

[143] Children’s Parliament, Scottish AI Alliance and The Alan Turing Institute (n 101).

[144] ibid.

[145] Ada Lovelace Institute and Alan Turing Institute (n 7).

[146] BEIS, ‘BEIS Public Attitudes Tracker: Artificial Intelligence Summer 2022, UK’ (Department for Business, Energy & Industrial Strategy 2022) <https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1105175/BEIS_PAT_Summer_2022_Artificial_Intelligence.pdf>.

[147] Tyson and Kikuchi (n 18).

[148] Ada Lovelace Institute and Alan Turing Institute (n 7).

[149] American Psychological Association (n 52).

[150] Ipsos, ‘Global Views on AI 2023: How People across the World Feel about Artificial Intelligence and Expect It Will Impact Their Life’ (2023) <https://www.ipsos.com/sites/default/files/ct/news/documents/2023-07/Ipsos%20Global%20AI%202023%20Report%20-%20NZ%20Release%2019.07.2023.pdf> accessed 3 October 2023.

[151] Tyson and Kikuchi (n 18).

[152] Ada Lovelace Institute and Alan Turing Institute (n 7).

[153] Rainie and others (n 12).

[154] Ada Lovelace Institute and Alan Turing Institute (n 7).

[155] Wright and others (n 8).

[156] Woodruff and others (n 21).

[157] IAP2, ‘IAP2 Spectrum of Public Participation’ <https://iap2.org.au/wp-content/uploads/2020/01/2018_IAP2_Spectrum.pdf>.

[158] Ada Lovelace Institute, ‘Participatory Data Stewardship: A Framework for Involving People in the Use of Data’ (2021) <https://www.adalovelaceinstitute.org/report/participatory-data-stewardship/>.

[159] Lee Hadlington and others, ‘The Use of Artificial Intelligence in a Military Context: Development of the Attitudes toward AI in Defense (AAID) Scale’ (2023) 14 Frontiers in Psychology 1164810 <https://www.frontiersin.org/articles/10.3389/fpsyg.2023.1164810/full> accessed 24 August 2023.

[160] Katie Cohen and Robert Doubleday (eds), Future Directions for Citizen Science and Public Policy (Centre for Science and Policy 2021).

[161] ibid.

[162] Claire Mellier and Rich Wilson, ‘Getting Real About Citizens’ Assemblies: A New Theory of Change for Citizens’ Assemblies’ (European Democracy Hub: Research, 10 October 2023).

[163] Ada Lovelace Institute, ‘Rethinking Data and Rebalancing Digital Power’ (2022) <https://www.adalovelaceinstitute.org/project/rethinking-data/>.

[164] Global Science Partnership, ‘The Inclusive Policymaking Toolkit for Climate Action’ (2023) <https://www.globalsciencepartnership.com/_files/ugd/b63d52_8b6b397c52b14b46a46c1f70e04839e1.pdf> accessed 3 October 2023.

[165] Michele Gilman, ‘Democratizing AI: Principles for Meaningful Public Participation’ (Data & Society 2023) <https://datasociety.net/wp-content/uploads/2023/09/DS_Democratizing-AI-Public-Participation-Brief_9.2023.pdf> accessed 5 October 2023.

[166] Hélène Landemore, Democratic Reason: Politics, Collective Intelligence, and the Rule of the Many (2017).

[167] Nicole Curato, Deliberative Mini-Publics: Core Design Features (Bristol University Press 2021).

[168] OECD, ‘Institutionalising Public Deliberation’ (OECD) <https://www.oecd.org/governance/innovative-citizen-participation/icp-institutionalising%20deliberation.pdf>.

[169] Nicole Curato and others, ‘Twelve Key Findings in Deliberative Democracy Research’ (2017) 146 Daedalus 28 <https://direct.mit.edu/daed/article/146/3/28-38/27148> accessed 6 August 2021.

[170] Ipsos MORI, Open Data Institute and Imperial College Health Partners (n 23).

[171] Ada Lovelace Institute, ‘Rethinking Data and Rebalancing Digital Power’ (n 163).

[172] ibid.

[173] Global Science Partnership (n 164).

[174] Kimmo Grönlund, André Bächtiger and Maija Setälä, Deliberative Mini-Publics: Involving Citizens in the Democratic Process (ECPR Press 2014).

[175] Curato and others (n 169).

[176] OECD, ‘Innovative Citizen Participation and New Democratic Institutions: Catching the Deliberative Wave’ (OECD 2021) <https://www.oecd-ilibrary.org/governance/innovative-citizen-participation-and-new-democratic-institutions_339306da-en> accessed 5 January 2022.

[177] Saskia Goldberg and André Bächtiger, ‘Catching the “Deliberative Wave”? How (Disaffected) Citizens Assess Deliberative Citizen Forums’ (2023) 53 British Journal of Political Science 239 <https://www.cambridge.org/core/product/identifier/S0007123422000059/type/journal_article> accessed 8 September 2023.

[178] OECD (n 168).

[179] David M Farrell and others, ‘When Mini-Publics and Maxi-Publics Coincide: Ireland’s National Debate on Abortion’ [2020] Representation 1 <https://www.tandfonline.com/doi/full/10.1080/00344893.2020.1804441> accessed 19 July 2021.

[180] Global Assembly Team, ‘Report of the 2021 Global Assembly on the Climate and Ecological Crisis’ (2022) <http://globalassembly.org>.

[181] Nicole Curato and others, ‘Global Assembly on the Climate and Ecological Crisis: Evaluation Report’ (2023).

[182] Ada Lovelace Institute, ‘Rethinking Data and Rebalancing Digital Power’ (n 163).

[183] Mellier and Wilson (n 162).

[184] Felipe González and others, ‘Global Reactions to the Cambridge Analytica Scandal: A Cross-Language Social Media Study’ [2019] WWW ’19: Companion Proceedings of The 2019 World Wide Web Conference 799.

[185] Ada Lovelace Institute, ‘Participatory Data Stewardship: A Framework for Involving People in the Use of Data’ (n 158).


Image credit: Kira Allman

  1. Hancock, A. and Steer, G. (2021) ‘Johnson backtracks on vaccine “passport for pubs” after backlash’, Financial Times, 25 March 2021. Available at: https://www.ft.com/content/aa5e8372-8cec-4b82-96d8-0019f2f24998 (Accessed: 5 April 2021).
  2. Ada Lovelace Institute (2021) International monitor: vaccine passports and COVID status apps. Available at: https://www.
    adalovelaceinstitute.org/project/international-monitor-vaccine-passports-covid-status-apps/ (Accessed: 5 April 2021)
  3. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence
  4. Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021).
  5. Olivarius, K. (2020) ‘The Dangerous History of Immunoprivilege’, The New York Times. 12 April 2020. Available at: https://www.nytimes.com/2020/04/12/opinion/coronavirus-immunity-passports.html (Accessed: 6 April 2021).
  6. World Health Organization (ed.) (2016) International health regulations (2005). Third edition. Geneva, Switzerland: World Health Organization.
  7. Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021).
  8. Ada Lovelace Institute (2021) The epidemiological and economic impact of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=KRUmM-_Jjk4 (Accessed: 7 April 2021).
  9. Wilson, K., Atkinson, K. M. and Bell, C. P. (2016) ‘Travel Vaccines Enter the Digital Age: Creating a Virtual Immunization Record’, The American Journal of Tropical Medicine and Hygiene, 94(3), pp. 485–488. doi: 10.4269/ajtmh.15-0510
  10. Kobie, N. (2020) ‘Plans for coronavirus immunity passports should worry us all’, Wired UK, 8 June 202. Available at: https://www.wired.
    co.uk/article/uk-immunity-passports-coronavirus (Accessed: 10 February 2021); Miller, J. (2020) ‘Armed with Roche antibody test, Germany faces immunity passport dilemma’, Reuters, 4 May 2020. Available at: https://www.reuters.com/article/health-coronavirusgermany-antibodies-idUSL1N2CM0WB (Accessed: 10 February 2021); Rayner, G. and Bodkin, H. (2020) ‘Government considering “health certificates” if proof of immunity established by new antibody test’, The Telegraph, 14 May 2020. Available at: https:// www.telegraph.co.uk/politics/2020/05/14/government-considering-health-certificates-proof-immunity-established/ (Accessed: 10 February 2021).
  11. World Health Organisation (2020) “Immunity passports” in the context of COVID-19. Scientific Brief. 24 April 2020. Available at: https://www.who.int/news-room/commentaries/detail/immunity-passports-in-the-context-of-covid-19 (Accessed: 10 February 2021).
  12. Ministry of Health (2021) What is a Green Pass? Available at: https://corona.health.gov.il/en/directives/green-pass-info/ (Accessed:
    6 April 2021).
  13. European Commission (2021) Coronavirus: Commission proposes a Digital Green Certificate, European Commission – European Commission. Available at: https://ec.europa.eu/commission/presscorner/detail/en/IP_21_1181 (Accessed: 6 April 2021).
  14. Prime Minister’s Office. (2021) Rammeaftale om plan for genåbning af Danmark. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021)
  15. World Health Organisation (2020) Estonia and WHO to jointly develop digital vaccine certificate to strengthen COVAX. Available at: https://www.who.int/news-room/feature-stories/detail/estonia-and-who-to-jointly-develop-digital-vaccine-certificate-to-strengthen-covax (Accessed: 6 April 2021). World Health Organisation (2020) World Health Organization open call for nomination of experts to contribute to the Smart Vaccination Certificate technical specifications and standards. Available at: https://www.who.int/news-room/articles-detail/world-health-organization-open-call-for-nomination-of-experts-to-contribute-to-the-smart-vaccination-certificate-technical-specifications-and-standards-application-deadline-14-december-2020 (Accessed: 6 April 2021). Reuters (2021), WHO does not back vaccination passports for now – spokeswoman. Available at: https://www.reuters.com/article/us-health-coronavirus-who-vaccines-idUKKBN2BT158 (Accessed: 13 April 2021)
  16. IBM (2021) Digital Health Pass – Overview. Available at: https://www.ibm.com/products/digital-health-pass (Accessed: 6 April 2021).
  17. Watson Health (2020) ‘IBM and Salesforce join forces to help deliver verifiable vaccine and health passes’, Watson Health Perspectives. Available at: https://www.ibm.com/blogs/watson-health/partnership-with-salesforce-verifiable-health-pass/(Accessed: 6 April 2021).
  18. New York State (2021) Excelsior Pass. Available at: https://covid19vaccine.health.ny.gov/excelsior-pass (Accessed: 6 April 2021).
  19. CommonPass (2021) CommonPass. Available at: https://commonpass.org (Accessed: 7 April 2021) IATA (2021). IATA Travel Pass Initiative. Available at: https://www.iata.org/en/programs/passenger/travel-pass/ (Accessed: 7 April 2021).
  20. COVID-19 Credentials Initiative (2021). COVID-19 Credentials Initiative. Available at: https://www.covidcreds.org/ (Accessed: 7 April 2021). VCI (2021). Available at: https://vci.org/ (Accessed: 7 April 2021).
  21. myGP (2020) ‘“myGP” to launch England’s first digital COVID-19 vaccination verification feature for smartphones.’ myGP. 9 December 2020. Available at: https://www.mygp.com/mygp-to-launch-englands-first-digital-covid-19-vaccination-verificationfeature-for-smartphones/ (Accessed: 7 April 2021). iProov (2021) Covid-19 Passport from iProov and Mvine Moves Into Trial Phase.
    Available at: https://www.iproov.com/press/uk-covid19-passport-moves-into-trial-phase (Accessed: 7 April 2021).
  22. BBC News (2020) ‘Covid-19: No plans for “vaccine passport” – Michael Gove’, BBC News. 1 December 2020. Available at: https://www.bbc.com/news/uk-55143484 (Accessed: 7 April 2021). BBC News (2021) ‘Covid: Minister rules out vaccine passports in UK’, BBC News. 7 February 2021. Available at: https://www.bbc.com/news/55970801 (Accessed: 7 April 2021).
  23. Sheridan, D. (2021) ‘Vaccine passports to enter shops, pubs and events “under consideration”’, The Telegraph, 14 February 2021.
    Available at: https://www.telegraph.co.uk/news/2021/02/14/vaccine-passports-enter-shops-pubs-events-consideration/ (Accessed:
    7 April 2021). Zeffman, H. and Dathan, M. (2021) ‘Boris Johnson sees Covid vaccine passport app as route to freedom’, The Times, 11 February 2021. Available at: https://www.thetimes.co.uk/article/boris-johnson-sees-covid-vaccine-passport-app-as-route-tofreedom-rt07g63xn (Accessed: 7 April 2021)
  24. Boland, H. (2021) ‘Government funds eight vaccine passport schemes despite “no plans” for rollout’, The Telegraph, 24 January 2021. Available at: https://www.telegraph.co.uk/technology/2021/01/24/government-funds-eight-vaccine-passport-schemes-despite-no-plans/ (Accessed: 7 April 2021). Department of Health and Social Care (2020) Covid-19 Certification/Passport MVP. Available at: https://www.contractsfinder.service.gov.uk/notice/bf6eef14-6345-429a-a4e7-df68a39bd135 (Accessed: 13 April 2021). Hymas, C. and Diver, T. (2021) ‘Vaccine certificates being developed to unlock international travel’, The Telegraph, 12 February 2021. Available at: https://www.telegraph.co.uk/politics/2021/02/12/government-develop-covid-vaccine-certificates-travel-abroad/ (Accessed: 7 April 2021).
  25. Cabinet Office (2021) COVID-19 Response – Spring 2021, GOV.UK. Available at: https://www.gov.uk/government/publications/covid-19-response-spring-2021/covid-19-response-spring-2021 (Accessed: 7 April 2021).
  26. Cabinet Office (2021) Roadmap Reviews: Update. Available at: https://www.gov.uk/government/publications/covid-19-response-spring-2021-reviews-terms-of-reference/roadmap-reviews-update.
  27. Scientific Advisory Group for Emergencies (2021) ‘SAGE 79 minutes: Coronavirus (COVID-19) response, 4 February 2021’, GOV.UK. 22 February 2021. Available at: https://www.gov.uk/government/publications/sage-79-minutes-coronavirus-covid-19-response-4-february-2021 (Accessed: 6 April 2021).
  28. Ada Lovelace Institute (2021) The epidemiological and economic impact of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=KRUmM-_Jjk4 (Accessed: 7 April 2021).
  29. European Centre for Disease Prevention and Control (2021) Risk of SARS-CoV-2 transmission from newly-infected individuals with documented previous infection or vaccination. Available at: https://www.ecdc.europa.eu/en/publications-data/sars-cov-2-transmission-newly-infected-individuals-previous-infection (Accessed: 13 April 2021). Science News (2021) Moderna and Pfizer COVID-19 vaccines may block infection as well as disease. Available at: https://www.sciencenews.org/article/coronavirus-covid-vaccine-moderna-pfizer-transmission-disease (Accessed: 13 April 2021).
  30. Bonnefoy, P. and Londoño, E. (2021) ‘Despite Chile’s Speedy COVID-19 Vaccination Drive, Cases Soar’, The New York Times, 30 March 2021. Available at: https://www.nytimes.com/2021/03/30/world/americas/chile-vaccination-cases-surge.html (Accessed: 6 April 2021).
  31. Zimmer, C., Corum, J. and Wee, S.-L. (no date) ‘Coronavirus Vaccine Tracker’, The New York Times. Available at: https://www.nytimes.com/interactive/2020/science/coronavirus-vaccine-tracker.html (Accessed: 21 April 2021). Parker et al. (2021) An interactive website tracking COVID-19 vaccine development. Available at: https://vac-lshtm.shinyapps.io/ncov_vaccine_landscape/ (Accessed: 21 April 2021).
  32. BBC News (2021) ‘COVID: Oxford jab offers less S Africa variant protection’, BBC News. 7 February 2021. Available at: https://www.bbc.com/news/uk-55967767 (Accessed: 6 April 2021).
  33. Wise, J. (2021) ‘COVID-19: The E484K mutation and the risks it poses’, The BMJ, p. n359. doi: 10.1136/bmj.n359. Sample, I. (2021) ‘What do we know about the Indian coronavirus variant?’, The Guardian, 19 April 2021. Available at: https://www.theguardian.com/world/2021/apr/19/what-do-we-know-about-the-indian-coronavirus-variant (Accessed: 22 April 2021).
  34. World Health Organisation (2021) Coronavirus disease (COVID-19): Vaccines. Available at: https://www.who.int/news-room/q-a-detail/coronavirus-disease-(covid-19)-vaccines (Accessed: 6 April 2021).
  35. ibid.
  36. The Royal Society provides a different categorisation, between measures demonstrating the subject is not infectious (PCR and lateral flow tests) and those suggesting the subject is immune and so will not become infectious (antibody tests and vaccination). Edgar Whitley, a member of our expert deliberative panel, distinguishes between ‘red light’ measures, which say a person is potentially infectious and should self-isolate, and ‘green light’ ones, which say a person has tested negative and is not infectious.
  37. Asai, T. (2020) ‘COVID-19: accurate interpretation of diagnostic tests—a statistical point of view’, Journal of Anesthesia. doi: 10.1007/s00540-020-02875-8.
  38. Kucirka, L. M. et al. (2020) ‘Variation in False-Negative Rate of Reverse Transcriptase Polymerase Chain Reaction–Based SARS-CoV-2 Tests by Time Since Exposure’, Annals of Internal Medicine. doi: 10.7326/M20-1495.
  39. European Centre for Disease Prevention and Control (2021) Immune responses and immunity to SARS-CoV-2, European Centre for Disease Prevention and Control. Available at: https://www.ecdc.europa.eu/en/covid-19/latest-evidence/immune-responses (Accessed: 10 February 2021).
  40. Ainsworth, M. et al. (2020) ‘Performance characteristics of five immunoassays for SARS-CoV-2: a head-to-head benchmark comparison’, The Lancet Infectious Diseases, 20(12), pp. 1390–1400. doi: 10.1016/S1473-3099(20)30634-4.
  41. European Centre for Disease Prevention and Control (2021) Immune responses and immunity to SARS-CoV-2, European Centre for Disease Prevention and Control. Available at: https://www.ecdc.europa.eu/en/covid-19/latest-evidence/immune-responses (Accessed: 10 February 2021).
  42. Kellam, P. and Barclay, W. (2020) ‘The dynamics of humoral immune responses following SARS-CoV-2 infection and the potential for reinfection’, Journal of General Virology, 101(8), pp. 791–797. doi: 10.1099/jgv.0.001439.
  43. Drury, J. et al. (2021) Behavioural responses to Covid-19 health certification: A rapid review. 9 April 2021. Available at: https://www.medrxiv.org/content/10.1101/2021.04.07.21255072v1 (Accessed: 13 April 2021).
  44. ibid.
  45. Miller, B., Wain, R. and Alderman, G. (2021) ‘Introducing a Global COVID Travel Pass to Get the World Moving Again’, Tony Blair Institute for Global Change. Available at: https://institute.global/policy/introducing-global-covid-travel-pass-get-world-moving-again (Accessed: 6 April 2021).
  46. World Health Organisation (2021) Interim position paper: considerations regarding proof of COVID-19 vaccination for international travellers. Available at: https://www.who.int/news-room/articles-detail/interim-position-paper-considerations-regarding-proof-of-covid-19-vaccination-for-international-travellers (Accessed: 6 April 2021).
  47. World Health Organisation (2021) Call for public comments: Interim guidance for developing a Smart Vaccination Certificate – Release Candidate 1. Available at: https://www.who.int/news-room/articles-detail/call-for-public-comments-interim-guidance-for-developing-a-smart-vaccination-certificate-release-candidate-1 (Accessed: 6 April 2021).
  48. SPI-M-O (2020) Consensus statement on events and gatherings, 19 August 2020. Available at: https://www.gov.uk/government/publications/spi-m-o-consensus-statement-on-events-and-gatherings-19-august-2020 (Accessed: 13 April 2021).
  49. Patrick Gracey, Response to Ada Lovelace Institute call for evidence.
  50. Walker, P. (2021) ‘UK arts figures call for Covid certificates to revive industry’, The Guardian. 23 April 2021. Available at: http://www.theguardian.com/culture/2021/apr/23/uk-arts-figures-covid-certificates-revive-industry-letter (Accessed: 5 May 2021).
  51. Silverstone (2021) Summer sporting events support Covid certification, 9 April 2021. Available at: https://www.silverstone.co.uk/news/summer-sporting-events-support-covid-certification-review (Accessed: 22 April 2021).
  52. BBC News (2021) ‘Pimlico Plumbers to make workers get vaccinations’. BBC News. Available at: https://www.bbc.co.uk/news/business-55654229 (Accessed: 13 April 2021).
  53. Leadership and Worker Engagement Forum (2021) ‘Management of risk when planning work: The right priorities’, Leadership and worker involvement toolkit, p. 1. Available at: https://www.hse.gov.uk/construction/lwit/assets/downloads/hierarchy-risk-controls.pdf.
  54. Department of Health and Social Care (2021) ‘Consultation launched on staff COVID-19 vaccines in care homes with older adult residents’. GOV.UK. Available at: https://www.gov.uk/government/news/consultation-launched-on-staff-covid-19-vaccines-in-care-homes-with-older-adult-residents (Accessed: 14 April 2021).
  55. Full Fact (2021) Is there a precedent for mandatory vaccines for care home workers? Available at: https://fullfact.org/health/mandatory-vaccine-care-home-hepatitis-b/ (Accessed: 6 April 2021).
  56. House of Commons Work and Pensions Committee. (2021) Oral evidence: Health and Safety Executive HC 39. 17 March 2021. Available at: https://committees.parliament.uk/oralevidence/1910/pdf/ (Accessed: 6 April 2021). Q178
  57. Acas (2021) Getting the coronavirus (COVID-19) vaccine for work. Available at: https://www.acas.org.uk/working-safely-coronavirus/getting-the-coronavirus-vaccine-for-work (Accessed: 6 April 2021).
  58. Pakes, A. (2020) ‘Workplace digital monitoring and surveillance: what are my rights?’, Prospect. Available at: https://prospect.org.uk/news/workplace-digital-monitoring-and-surveillance-what-are-my-rights/ (Accessed: 6 April 2021).
  59. Allegretti, A. and Booth, R. (2021) ‘Covid-status certificate scheme could be unlawful discrimination, says EHRC’, The Guardian. 14 April 2021. Available at: https://www.theguardian.com/world/2021/apr/14/covid-status-certificates-may-cause-unlawful-discrimination-warns-ehrc (Accessed: 14 April 2021).
  60. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  61. European Court of Human Rights (2014) Case of Brincat and Others v. Malta. Available at: http://hudoc.echr.coe.int/eng?i=001-145790 (Accessed: 6 April 2021).
  62. Ministry of Health (2021) What is a Green Pass? Available at: https://corona.health.gov.il/en/directives/green-pass-info/ (Accessed: 6 April 2021). Ministry of Health (2021) Traffic Light App for Businesses. Available at: https://corona.health.gov.il/en/directives/biz-ramzor-app/ (Accessed: 8 April 2021).
  63. Prime Minister’s Office (2021) Rammeaftale om plan for genåbning af Danmark [Framework agreement on a plan for the reopening of Denmark]. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  64. Beduschi, A. (2020) Digital Health Passports for COVID-19: Data Privacy and Human Rights Law. University of Exeter. Available at: https://socialsciences.exeter.ac.uk/media/universityofexeter/collegeofsocialsciencesandinternationalstudies/lawimages/research/Policy_brief_-_Digital_Health_Passports_COVID-19_-_Beduschi.pdf (Accessed: 6 April 2021).
  65. Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence.
  66. ibid.
  67. Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence.
  68. Beduschi, A. (2020).
  69. European Court of Human Rights (2020) Guide on Article 8 of the European Convention on Human Rights. Available at: https://www.echr.coe.int/documents/guide_art_8_eng.pdf (Accessed: 6 April 2021).
  70. Access Now, Response to Ada Lovelace Institute call for evidence.
  71. Privacy International (2020) “Anytime and anywhere”: Vaccination passports, immunity certificates, and the permanent pandemic. Available at: http://privacyinternational.org/long-read/4350/anytime-and-anywhere-vaccination-passports-immunity-certificates-and-permanent (Accessed: 26 April 2021).
  72. Douglas, T. (2021) ‘Cross Post: Vaccine Passports: Four Ethical Objections, and Replies’. Practical Ethics. Available at: http://blog.practicalethics.ox.ac.uk/2021/03/cross-post-vaccine-passports-four-ethical-objections-and-replies/ (Accessed: 8 April 2021).
  73. Brown, R. C. H. et al. (2020) ‘Passport to freedom? Immunity passports for COVID-19’, Journal of Medical Ethics, 46(10), pp. 652–659. doi: 10.1136/medethics-2020-106365.
  74. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence; Julian Savulescu and Rebecca Brown, Response to Ada Lovelace Institute call for evidence.
  75. Beduschi, A. (2020).
  76. Black, I. and Forsberg, L. (2021) ‘Inoculate to Imbibe? On the Pub Landlord Who Requires You to be Vaccinated against COVID’. Practical Ethics. Available at: http://blog.practicalethics.ox.ac.uk/2021/03/inoculate-to-imbibe/ (Accessed: 6 April 2021).
  77. Hindu Council UK (2021) Supporting Nationwide Vaccination Programme. 19 January 2021. Available at: http://www.hinducounciluk.org/2021/01/19/supporting-nationwide-vaccination-programme/ (Accessed: 6 April 2021); Ladaria Ferrer, L. and Morandi, G. (2020) ‘Note on the morality of using some anti-COVID-19 vaccines’. Vatican. Available at: https://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_con_cfaith_doc_20201221_nota-vaccini-antiCOVID_en.html (Accessed: 6 April 2021); Kadri, S. (2021) ‘For Muslims wary of the COVID vaccine: there’s every religious reason not to be’. The Guardian. 18 February 2021. Available at: http://www.theguardian.com/commentisfree/2021/feb/18/muslims-wary-covid-vaccine-religious-reason (Accessed: 6 April 2021).
  78. Office for National Statistics (2021) Coronavirus and vaccination rates in people aged 50 years and over by socio-demographic characteristic, England: 8 December 2020 to 12 April 2021. 6 May 2021. Available at: https://www.ons.gov.uk/.
  79. Schraer, R. (2021) ‘Covid: Black leaders fear racist past feeds mistrust in vaccine’. BBC News. 6 May 2021. Available at: https://www.bbc.co.uk/news/health-56813982 (Accessed: 7 May 2021).
  80. Allegretti, A. and Booth, R. (2021).
  81. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  82. Black, I. and Forsberg, L. (2021).
  83. Beduschi, A. (2020).
  84. Thomas, N. (2021) ‘Vaccine passports: path back to normality or problem in the making?’, Reuters, 5 February 2021. Available at: https://www.reuters.com/article/us-health-coronavirus-britain-vaccine-pa-idUSKBN2A4134 (Accessed: 6 April 2021).
  85. Buolamwini, J. and Gebru, T. (2018) ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’, in Conference on Fairness, Accountability and Transparency. PMLR, pp. 77–91. Available at: http://proceedings.mlr.press/v81/buolamwini18a.html (Accessed: 6 April 2021).
  86. Kofler, N. and Baylis, F. (2020) ‘Ten reasons why immunity passports are a bad idea’, Nature, 581(7809), pp. 379–381. doi: 10.1038/d41586-020-01451-0.
  87. ibid.
  88. Olivarius, K. (2019) ‘Immunity, Capital, and Power in Antebellum New Orleans’, The American Historical Review, 124(2), pp. 425–455. doi: 10.1093/ahr/rhz176.
  89. Access Now, Response to Ada Lovelace Institute call for evidence.
  90. Dr Btihaj Ajana, Response to Ada Lovelace Institute call for evidence.
  91. Pai, M. (2021) ‘How Vaccine Passports Will Worsen Inequities In Global Health’, Nature Portfolio Microbiology Community. Available at: http://naturemicrobiologycommunity.nature.com/posts/how-vaccine-passports-will-worsen-inequities-in-global-health (Accessed: 6 April 2021).
  92. Merrick, J. (2021) ‘New variants will “come back to haunt” the UK unless it helps tackle worldwide transmission’, iNews, 23 April 2021. Available at: https://inews.co.uk/news/politics/new-variants-will-come-back-to-haunt-the-uk-unless-it-helps-tackle-worldwide-transmission-971041 (Accessed: 5 May 2021).
  93. Kuchler, H. and Williams, A. (2021) ‘Vaccine makers say IP waiver could hand technology to China and Russia’, Financial Times, 25 April 2021. Available at: https://www.ft.com/content/fa1e0d22-71f2-401f-9971-fa27313570ab (Accessed: 5 May 2021).
  94. Digital, Culture, Media and Sport Committee Sub-Committee on Online Harms and Disinformation (2021). Oral evidence: Online harms and the ethics of data, HC 646. 26 January 2021. Available at: https://committees.parliament.uk/oralevidence/1586/html/ (Accessed: 9 April 2021).
  95. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  96. A principle holding that reforms should not be made until the reasoning behind the existing state of affairs is understood, inspired by G. K. Chesterton’s The Thing (1929), which argues that an intelligent reformer would not remove a fence until they know why it was put up in the first place.
  97. Pietropaoli, I. (2021) ‘Part 2: Getting Digital Health Passports Right? Legal, Ethical and Equality Considerations’. British Institute of International and Comparative Law. 1 April 2021. Available at: https://www.biicl.org/blog/23/part-2-getting-digital-health-passports-right-legal-ethical-and-equality-considerations (Accessed: 6 April 2021).
  98. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  99. ibid.
  100. Ada Lovelace Institute (2021) International monitor: vaccine passports and COVID status apps. Available at: https://www.adalovelaceinstitute.org/project/international-monitor-vaccine-passports-covid-status-apps/ (Accessed: 5 April 2021).
  101. Pew Research Center (2020) 8 charts on internet use around the world as countries grapple with COVID-19. Available at: https://www.pewresearch.org/fact-tank/2020/04/02/8-charts-on-internet-use-around-the-world-as-countries-grapple-with-covid-19/ (Accessed: 13 April 2021).
  102. Ada Lovelace Institute (2021) The data divide. Available at: https://www.adalovelaceinstitute.org/survey/data-divide/ (Accessed: 6 April 2021).
  103. Pew Research Center (2020).
  104. Electoral Commission (2015) Delivering and costing a proof of identity scheme for polling station voters in Great Britain. Available at: https://www.electoralcommission.org.uk/media/1825 (Accessed: 13 April 2021); Davies, C. (2021). ‘Number of young people with driving licence in Great Britain at lowest on record’, The Guardian. 5 April 2021. Available at: https://www.theguardian.com/money/2021/apr/05/number-of-young-people-with-driving-licence-in-great-britain-at-lowest-on-record (Accessed: 6 May 2021).
  105. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  106. NHS Digital (2021) NHS e-Referral Service integrated into the NHS App to make managing referrals easier. Available at: https://digital.nhs.uk/news-and-events/latest-news/nhs-e-referral-service-integrated-into-the-nhs-app-to-make-managing-referrals-easier (Accessed: 28 April 2021).
  107. Access Now, Response to Ada Lovelace Institute call for evidence.
  108. For example, see: Mvine at Ada Lovelace Institute (2021) The history and uses of vaccine passports and COVID status apps. Available at: https://www.youtube.com/watch?v=BL0vZeoWVKQ&t=213s (Accessed: 7 April 2021); evidence submitted to the Ada Lovelace Institute from Certus, IOTA, ZAKA, Tony Blair Institute for Global Change, SICPA, Yoti, Good Health Pass.
  109. Danish Government (2021) Rammeaftale om plan for genåbning af Danmark [Framework agreement on a plan for the reopening of Denmark]. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 13 April 2021).
  110. ibid.
  111. Ada Lovelace Institute (2021) The Citizens’ Biometrics Council. Available at: https://www.adalovelaceinstitute.org/project/citizens-biometrics-council/ (Accessed: 13 April 2021).
  112. Whitley, E. (2021) ‘What must we consider if proof of Covid status is to help reopen the economy?’ LSE Department of Management blog. Available at: https://blogs.lse.ac.uk/management/2021/02/24/what-must-we-consider-if-proof-of-covid-status-is-to-help-reopen-the-economy/ (Accessed: 6 May 2021).
  113. Information Commissioner’s Office (2021) About the DPA 2018. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/introduction-to-data-protection/about-the-dpa-2018/ (Accessed: 6 April 2021).
  114. Beduschi, A. (2020).
  115. Horizon Digital Economy Research Institute, Response to Ada Lovelace Institute call for evidence.
  116. European Data Protection Board and European Data Protection Supervisor (2021), Joint Opinion 04/2021 on the Proposal for a Regulation of the European Parliament and of the Council on a framework for the issuance, verification and acceptance of interoperable certificates on vaccination, testing and recovery to facilitate free movement during the COVID-19 pandemic (Digital Green Certificate). Available at: https://edps.europa.eu/system/files/2021-04/21-03-31_edpb_edps_joint_opinion_digital_green_certificate_en_0.pdf (Accessed: 29 April 2021).
  117. Beduschi, A. (2020).
  118. ibid.
  119. Information Commissioner’s Office (2021) International transfers after the UK exit from the EU Implementation Period. ICO. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/international-transfers-after-uk-exit/ (Accessed: 5 May 2021).
  120. Global Privacy Assembly Executive Committee (2021).
  121. Beduschi, A. (2020).
  122. Global Privacy Assembly (2021) GPA Executive Committee joint statement on the use of health data for domestic or international travel purposes. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 13 April 2021).
  123. Information Commissioner’s Office (2021) Principle (c): Data minimisation. ICO. Available at: https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/principles/data-minimisation/ (Accessed: 6 April 2021).
  124. Denham, E. (2021) ‘Blog: Data Protection law can help create public trust and confidence around COVID-status certification schemes’. ICO. Available at: https://ico.org.uk/about-the-ico/news-and-events/blog-data-protection-law-can-help-create-public-trust-and-confidence-around-covid-status-certification-schemes/ (Accessed: 6 April 2021).
  125. Illmer, A. (2021) ‘Singapore reveals COVID privacy data available to police’, BBC News, 5 January 2021. Available at: https://www.bbc.com/news/world-asia-55541001 (Accessed: 6 April 2021). Gross, A. and Parker, G. (2020) ‘Experts decry move to share COVID test and trace data with police’, Financial Times. Available at: https://www.ft.com/content/d508d917-065c-448e-8232-416510592dd1 (Accessed: 6 April 2021).
  126. Halpin, H. (2020) ‘Vision: A Critique of Immunity Passports and W3C Decentralized Identifiers’, in van der Merwe, T., Mitchell, C., and Mehrnezhad, M. (eds) Security Standardisation Research. Cham: Springer International Publishing (Lecture Notes in Computer Science), pp. 148–168. doi: 10.1007/978-3-030-64357-7_7.
  127. HL7 (2019) FHIR Release 4. Available at: http://www.hl7.org/fhir/ (Accessed: 21 April 2021).
  128. Doteveryone (2019) Consequence scanning, an agile practice for responsible innovators. Available at: https://doteveryone.org.uk/project/consequence-scanning/ (Accessed: 21 April 2021).
  129. NHS Digital (2020) DCB3051 Identity Verification and Authentication Standard for Digital Health and Care Services. Available at: https://digital.nhs.uk/data-and-information/information-standards/information-standards-and-data-collections-including-extractions/publications-and-notifications/standards-and-collections/dcb3051-identity-verification-and-authentication-standard-for-digital-health-and-care-services (Accessed: 7 April 2021).
  130. Royal College of General Practitioners (2021) RCGP submission for the COVID-status Certification Review call for evidence. Available at: https://www.rcgp.org.uk/policy/rcgp-consultations/covid-status-certification-review.aspx (Accessed: 6 April 2021).
  131. Say, M. (2021) ‘Government gives Verify a stay of execution.’ UKAuthority. Available at: https://www.ukauthority.com/articles/government-gives-verify-a-stay-of-execution/ (Accessed: 5 May 2021).
  132. Cabinet Office and Lopez, J. (2021) ‘Julia Lopez speech to The Investing and Savings Alliance’. GOV.UK. Available at: https://www.gov.uk/government/speeches/julia-lopez-speech-to-the-investing-and-savings-alliance (Accessed: 6 April 2021).
  133. For more on digital identity during the pandemic see: Freeguard, G. and Shepheard, M. (2020) ‘Digital government during the coronavirus crisis’. Institute for Government. Available at: https://www.instituteforgovernment.org.uk/sites/default/files/publications/digital-government-coronavirus.pdf.
  134. Department for Digital, Culture, Media and Sport (2021) The UK digital identity and attributes trust framework, GOV.UK. Available at: https://www.gov.uk/government/publications/the-uk-digital-identity-and-attributes-trust-framework/the-uk-digital-identity-and-attributes-trust-framework (Accessed: 6 April 2021).
  135. Access Now, Response to Ada Lovelace Institute call for evidence.
  136. iProov (2021) Covid-19 Passport from iProov and Mvine Moves Into Trial Phase. Available at: https://www.iproov.com/press/uk-covid19-passport-moves-into-trial-phase (Accessed: 7 April 2021).
  137. Ada Lovelace Institute (2021) The socio-technical challenges of designing and building a vaccine passport system. Available at: https://www.youtube.com/watch?v=Md9CLWgdgO8&t=2s (Accessed: 7 April 2021).
  138. On general trust, polls include the Ipsos MORI Veracity Index. On trust in data use, see polling by the Royal Statistical Society (RSS) and the Open Data Institute (ODI).
  139. Sommer, A. K. (2021) ‘Some foreigners in Israel are finally able to obtain COVID vaccine pass’. Haaretz.com. Available at: https://www.haaretz.com/israel-news/.premium-some-foreigners-in-israel-are-finally-able-to-obtain-COVID-19-green-passport-1.9683026 (Accessed: 8 April 2021).
  140. Cabinet Office (2020) ‘Ventilator Challenge hailed a success as UK production finishes’. GOV.UK. Available at: https://www.gov.uk/government/news/ventilator-challenge-hailed-a-success-as-uk-production-finishes (Accessed: 6 April 2021).
  141. For example, evidence received from techUK and World Health Pass.
  142. Our World in Data (2021) Coronavirus (COVID-19) Vaccinations. Available at: https://ourworldindata.org/covid-vaccinations (Accessed: 13 April 2021).
  143. FT Visual and Data Journalism team (2021) Covid-19 vaccine tracker: the global race to vaccinate. Financial Times. Available at: https://ig.ft.com/coronavirus-vaccine-tracker/ (Accessed: 13 April 2021).
  144. Full Fact (2020) How does the new coronavirus compare to influenza? Available at: https://fullfact.org/health/coronavirus-compare-influenza/ (Accessed: 6 April 2021).
  145. BBC News (2021) ‘Coronavirus: Third wave will “wash up on our shores”, warns Johnson’. BBC News. 22 March 2021. Available at: https://www.bbc.com/news/uk-politics-56486067 (Accessed: 6 April 2021).
  146. Prime Minister’s Office (2021) Rammeaftale om plan for genåbning af Danmark [Framework agreement on a plan for the reopening of Denmark]. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  147. Tony Blair Institute for Global Change (2021) The New Necessary: How We Future-Proof for the Next Pandemic. Available at: https://institute.global/policy/new-necessary-how-we-future-proof-next-pandemic (Accessed: 13 April 2021).
  148. Paton, G. (2021) ‘Cost of home Covid tests for travellers halved as companies accused of “profiteering”’, The Times. 14 April 2021. Available at: https://www.thetimes.co.uk/article/cost-of-home-covid-tests-for-travellers-halved-as-companies-accused-of-profiteering-lh76wb585 (Accessed: 13 April 2021).
  149. Department of Health and Social Care (2021) ‘30 million people in UK receive first dose of coronavirus (COVID-19) vaccine’. GOV.UK. Available at: https://www.gov.uk/government/news/30-million-people-in-uk-receive-first-dose-of-coronavirus-covid-19-vaccine (Accessed: 6 April 2021).
  150. Ipsos (2021) Global attitudes: COVID-19 vaccines. 9 February 2021. Available at: https://www.ipsos.com/en/global-attitudes-covid-19-vaccine-january-2021 (Accessed: 6 April 2021).
  151. Reicher, S. and Drury, J. (2021) ‘How to lose friends and alienate people? On the problems of vaccine passports’, The BMJ, 1 April 2021. Available at: https://blogs.bmj.com/bmj/2021/04/01/how-to-lose-friends-and-alienate-people-on-the-problems-of-vaccine-passports/ (Accessed: 6 April 2021).
  152. Smith, M. (2021) ‘International study: How many people will take the COVID vaccine?’, YouGov, 15 January 2021. Available at: https://yougov.co.uk/topics/health/articles-reports/2021/01/15/international-study-how-many-people-will-take-covi (Accessed: 6 April 2021).
  153. Reicher, S. and Drury, J. (2021).
  154. Razai, M. S. et al. (2021) ‘COVID-19 vaccine hesitancy among ethnic minority groups’, The BMJ, 372, p. n513. doi: 10.1136/bmj.n513.
  155. Royal College of General Practitioners (2021) RCGP submission for the COVID-status Certification Review call for evidence. Available at: https://www.rcgp.org.uk/policy/rcgp-consultations/covid-status-certification-review.aspx (Accessed: 6 April 2021).
  156. Access Now, Response to Ada Lovelace Institute call for evidence.
  157. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  158. ibid.
  159. ibid.
  160. ibid.
  161. Zimmer, C., Corum, J. and Wee, S.-L. (no date) ‘Coronavirus Vaccine Tracker’, The New York Times. Available at: https://www.nytimes.com/interactive/2020/science/coronavirus-vaccine-tracker.html (Accessed: 21 April 2021).
  162. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  163. Times of Israel Staff (2021) ‘Thousands reportedly attempt to obtain easily forged vaccinated certificate’. Times of Israel. 18 February 2021. Available at: https://www.timesofisrael.com/thousands-reportedly-attempt-to-obtain-easily-forged-vaccinated-certificate/ (Accessed: 6 April 2021).
  164. Senyor, E. (2021) ‘NIS 1,500 for Green Pass: Police arrest seller of illegal vaccine certificates’, ynetnews. 21 March 2021. Available at: https://www.ynetnews.com/article/Bk00wJ11B400 (Accessed: 6 April 2021).
  165. Europol (2021) ‘Early Warning Notification – The illicit sales of false negative COVID-19 test certificates’, Europol. 1 February 2021. Available at: https://www.europol.europa.eu/early-warning-notification-illicit-sales-of-false-negative-COVID-19-test-certificates (Accessed: 6 April 2021).
  166. Lewandowsky, S. et al. (2021) ‘Public acceptance of privacy-encroaching policies to address the COVID-19 pandemic in the United Kingdom’, PLOS ONE, 16(1), p. e0245740. doi: 10.1371/journal.pone.0245740.
  167. Deltapoll (2021) Political Trackers and Lockdown. Available at: http://www.deltapoll.co.uk/polls/political-trackers-and-lockdown (Accessed: 7 April 2021).
  168. Ibbetson, C. (2021) ‘Most Britons support a COVID-19 vaccine passport system’. YouGov. Available at: https://yougov.co.uk/topics/health/articles-reports/2021/03/05/britons-support-covid-19-vaccine-passport-system (Accessed: 7 April 2021).
  169. YouGov (2021) Daily Question, 2 March 2021. Available at: https://yougov.co.uk/topics/health/survey-results/daily/2021/03/02/9355e/2 (Accessed: 7 April 2021).
  170. Ipsos MORI (2021) Majority of Britons support vaccine passports but recognise concerns in new Ipsos MORI UK KnowledgePanel poll. Available at: https://www.ipsos.com/ipsos-mori/en-uk/majority-britons-support-vaccine-passports-recognise-concerns-new-ipsos-mori-uk-knowledgepanel-poll (Accessed: 9 April 2021).
  171. King’s College London (2021) Covid vaccines: passports, blood clots and changing trust in government. Available at: https://www.kcl.ac.uk/news/covid-vaccines-passports-blood-clots-and-changing-trust-in-government (Accessed: 9 April 2021).
  172. De Montfort University (2021) Study shows UK punters see no need for pub vaccine passports. Available at: https://www.dmu.ac.uk/about-dmu/news/2021/march/-study-shows-uk-punters-see-no-need-for-pub-vaccine-passports.aspx (Accessed: 7 April 2021).
  173. Indigo (2021) Vaccine Passports – What do audiences think? Available at: https://www.indigo-ltd.com/blog/vaccine-passports-what-do-audiences-think (Accessed: 7 April 2021).
  174. Serco Institute (2021) Vaccine Passports & UK Public Opinion. Available at: https://www.sercoinstitute.com/news/2021/vaccine-passports-uk-public-opinion (Accessed: 7 April 2021).
  175. Hall, M. A. and Studdert, D. M. (2021) ‘Reaching agreement on COVID-19 immunity “passports” will be difficult’, Brookings, 27 January 2021. Available at: https://www.brookings.edu/blog/usc-brookings-schaeffer-on-health-policy/2021/01/27/reaching-agreement-on-covid-19-immunity-passports-will-be-difficult/ (Accessed: 7 April 2021). ELABE (2021) Les Français et l’épidémie de COVID-19 – Vague 33 [The French and the COVID-19 epidemic – Wave 33]. 3 March 2021. Available at: https://elabe.fr/epidemie-covid-19-vague33/ (Accessed: 7 April 2021).
  176. Ada Lovelace Institute (2021) The Citizens’ Biometrics Council. Available at: https://www.adalovelaceinstitute.org/report/citizens-biometrics-council/ (Accessed: 9 April 2021).
  177. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  178. Beacon, R. and Innes, K. (2021) The Case for Digital Health Passports. Tony Blair Institute for Global Change. Available at: https://institute.global/sites/default/files/inline-files/Tony%20Blair%20Institute%2C%20The%20Case%20for%20Digital%20Health%20Passports%2C%20February%202021_0_0.pdf (Accessed: 6 April 2021).
  179. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  180. Pietropaoli, I. (2021) Part 2: Getting Digital Health Passports Right? Legal, Ethical and Equality Considerations. Available at: https://www.biicl.org/blog/23/part-2-getting-digital-health-passports-right-legal-ethical-and-equality-considerations (Accessed: 6 April 2021).
  181. Prime Minister’s Office (2021) Rammeaftale om plan for genåbning af Danmark [Framework agreement on a plan for the reopening of Denmark]. 22 March 2021. Available at: https://www.stm.dk/media/10258/rammeaftale-om-plan-for-genaabning-af-danmark.pdf (Accessed: 6 April 2021).
  182. Global Privacy Assembly Executive Committee (2021) Global Privacy Assembly Executive Committee joint statement on the importance of privacy by design in the sharing of health data for domestic or international travel requirements during the COVID-19 pandemic. 31 March 2021. Available at: https://globalprivacyassembly.org/gpa-executive-committee-joint-statement-on-the-use-of-health-data-for-domestic-or-international-travel-purposes/ (Accessed: 6 April 2021).
  183. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  184. medConfidential, Response to Ada Lovelace Institute call for evidence.
  185. Dr Btihaj Ajana, Response to Ada Lovelace Institute call for evidence.
  186. Nuffield Council on Bioethics (2020) Rapid policy briefing: COVID-19 antibody testing and ‘immunity certification’. Available at: https://www.nuffieldbioethics.org/assets/pdfs/Immunity-certificates-rapid-policy-briefing.pdf (Accessed: 6 April 2021).
  187. UK Ethics Accelerator, Response to Ada Lovelace Institute call for evidence.
  188. ibid.
