
Law & Policy

Interrogating how existing and emerging AI and data law, regulation, governance and policy meet the needs of people and society.

Who we are and what we do

In the Law & Policy team, we bring together policy expertise and research to interrogate how existing and emerging AI and data law, regulation, governance and policy meet the needs of people and society.

We work with policymakers and decision-makers – across government, regulators, industry and civil society – to influence and inform AI and data policy and practice. We seek to support local, national and international policymaking by shaping debate: connecting decision-makers to research and academia, exploring areas of disagreement or trade-off, engaging with challenging perspectives and amplifying often-overlooked viewpoints. We are non-partisan and work with policymakers and stakeholders across the political spectrum.

Our evidence-based policy positions and recommendations are grounded in research, synthesis, analysis and translation; convening disciplinary and sectoral expertise; and public attitudes and participatory research with people affected by technologies.

What we are working on

We are currently working on the following projects:

Our impact

We want to see a world where effective AI and data governance allows people and society to enjoy the benefits and avoid the harms of AI and data-driven technologies. Below are some ways in which our recent work has helped to move in that direction.

In March 2023, the UK Government published a white paper on AI regulation proposing a ‘contextual, sector-based regulatory framework’. Our blog post on the value chain of general-purpose AI was cited three times in the paper. Our initial response – welcoming the engagement with AI harms but highlighting limitations of the proposals, which failed to include legislation on the use of AI technologies – was covered by BBC News, The Times and The Guardian, and we followed this up with a longer opinion piece in the New Statesman. Our analysis of how the value chains of general-purpose AI systems create governance challenges for sectoral regulation was reproduced in the Government’s response to its AI White Paper consultation.

Our evidence to the Commons Science and Technology Select Committee inquiry into the governance of AI – on privacy, biometrics and the importance of public trust – was referenced multiple times in the Committee’s report, which called for ‘greater urgency in enacting the legislative powers’ needed to regulate AI. We also gave oral evidence to the Lords Communications and Digital Select Committee’s inquiry into large language models. The Committee’s report cited Ada’s evidence, as well as our research on foundation models and AI regulation.

2023’s highest-profile forum for AI policy was the UK’s AI Safety Summit, held at Bletchley Park in November and attended by representatives from international governments, industry, civil society and academia. In the run-up to the Summit, we published a blog post with suggestions for how to make the Summit meaningful – recommending that it expand its focus beyond ‘frontier’ AI and centre the perspectives of people – as well as a policy briefing exploring examples of other sectors with safety-based regulation. We used the briefing to facilitate discussions with policymakers about the importance of effective AI governance, including Peter Kyle MP, Shadow Secretary of State for Science, Innovation and Technology.

In response to the Government’s deregulatory changes to UK data protection law, we made the Data Protection and Digital Information Bill a focus for our parliamentary engagement in 2023. We gave evidence to the Commons Public Bill Committee and briefed MPs and peers on proposals for amendments based on our research. The Institute was mentioned over a dozen times in parliamentary debates relating to the Bill. As a result of our engagement, the Labour frontbench tabled several of our drafted Bill amendments on biometrics, which were debated in Committee stage. These were not passed (as expected), but provided a platform for cross-party engagement and built momentum for further work in the Lords and the next Parliament.

In Brussels, we engaged extensively to influence the EU’s AI Act – the world’s first example of comprehensive legislation to regulate AI – and published a briefing for the ‘trilogues’, a critical stage in the EU institutions’ legislative negotiations. The trilogues concluded successfully with a political agreement in December 2023, with over half of our 18 recommendations implemented in some form. The inclusion of ‘affected persons’ as a legally significant category is something that the Institute has been advocating for since 2021. We also saw our recommendations reflected in the AI Act’s approach to the AI value chain, with specific obligations for developers of general-purpose AI, and post-market monitoring and enforcement through the establishment of an EU AI Office.