Post-Summit civil society communiqué
Civil society attendees of the AI Safety Summit urge prioritising regulation to address well-established harms
Emerging processes for frontier AI safety
The UK Government has published a series of voluntary safety practices for companies developing frontier AI models.
What do the public think about AI?
Understanding public attitudes and how to involve the public in decision-making about AI
New, independent evidence review helps policymakers understand public attitudes about AI and how to involve the public in AI decision-making
The Ada Lovelace Institute has published a new rapid review of evidence on public attitudes about AI and how to involve the public in AI policy.
Foundation models in the public sector
AI foundation models are integrated into commonly used applications and are used informally in the public sector
Dame Julie Maxton appointed as Chair of the Ada Lovelace Institute
Her three-year term as Chair will begin in October; she succeeds Professor Dame Wendy Hall.
Working it out
Lessons from the New York City algorithmic bias audit law.
Evaluating data-driven COVID-19 technologies through a human-centred approach
What can we learn from missing evidence on digital contact tracing and vaccine passports?
Education and AI
The role of AI and data-driven technologies in primary and secondary education in the UK
Tackling health and social inequalities in data-driven systems
The event follows a three-year programme of research, conducted in partnership with The Health Foundation.
Access denied?
Socioeconomic inequalities in digital health services
AI regulation and the imperative to learn from history
What can we learn from policy successes and failures, to ensure frontier AI regulations are effective in practice?