An infrastructure for safety and trust in European AI
Mandating independent safety assessment
Safe beyond sale: post-deployment monitoring of AI
Building the information infrastructure to improve safe AI use
Critical analytics?
Learning from the early adoption of data analytics for local authority service delivery
Code & conduct
How to create third-party auditing regimes for AI systems
Social and Economic Policy
Shaping how political choices and public services will be designed in the information age.
Public Participation & Research
Ensuring that the voices of people affected by data and AI contribute to building and shaping evidence, research, policy and practices.
Law & Policy
Interrogating how existing and emerging AI and data law, regulation, governance and policy meet the needs of people and society.
Emerging Technology & Industry Practice
Ensuring that the benefits of new technologies are distributed equitably and that potential harms are prevented.
How we work
We explore how data and AI interact with society; examine which governance models work in the public interest; and interrogate power imbalances.
Safety first?
Reimagining the role of the UK AI Safety Institute in a wider UK governance framework
The role of public compute
How can we realise the societal benefits of AI with a market-shaping approach?
Mobilising publics and grassroots organisations to impact AI policy
What can we learn from network-building initiatives in Spain?