Ada Lovelace Institute statement on the EU AI Act
Michael Birtwistle, Associate Director (Law & policy), encourages EU capitals and the European Parliament to approve the AI Act.
The Ada Lovelace Institute in 2023
Reflections on the past year and a look ahead to 2024 from Ada’s outgoing Director Carly Kind and Interim Director Fran Bennett
Evaluation of foundation models
What kinds of evaluation methods exist for foundation models, and what are their potential limitations?
Safe before sale
Learnings from the FDA’s model of life sciences oversight for foundation models
Going public
Exploring public participation in commercial AI labs
Genomics, AI and the politics of a data-first approach to medical evidence
What are the consequences of new forms of data-driven methods for health outcomes?
Carly Kind to step down as Director of the Ada Lovelace Institute
Carly Kind, Director of the Ada Lovelace Institute since 2019, will be leaving the Institute in February 2024.
Post-Summit civil society communique
Civil society attendees of the AI Safety Summit urge prioritising regulation to address well-established harms
Emerging processes for frontier AI safety
The UK Government has published a series of voluntary safety practices for companies developing frontier AI models
What do the public think about AI?
Understanding public attitudes and how to involve the public in decision-making about AI
New, independent evidence review helps policymakers understand public attitudes about AI and how to involve the public in AI decision-making
The Ada Lovelace Institute has published a new rapid review of evidence on public attitudes about AI and how to involve the public in AI policy.
Foundation models in the public sector
AI foundation models are integrated into commonly used applications and are used informally in the public sector