Emerging processes for frontier AI safety
The UK Government has published a series of voluntary safety practices for companies developing frontier AI models.
New independent evidence review helps policymakers understand public attitudes to AI and how to involve the public in AI decision-making
The Ada Lovelace Institute has published a new rapid evidence review on public attitudes to AI and how to involve the public in AI policy.
Foundation models in the public sector
AI foundation models are integrated into commonly used applications and are being used informally in the public sector
AI regulation and the imperative to learn from history
What can we learn from policy successes and failures, to ensure frontier AI regulations are effective in practice?
Seizing the ‘AI moment’: making a success of the AI Safety Summit
Reaching consensus at the AI Safety Summit will not be easy – so what can the Government do to improve its chances of success?
Regulating AI in the UK
Recommendations to strengthen the Government's proposed framework
Keeping an eye on AI
Approaches to government monitoring of the AI landscape
AI assurance?
Assessing and mitigating risks across the AI lifecycle
Regulating AI in the UK
Strengthening the UK's proposals for the benefit of people and society
UK must strengthen its AI regulation proposals to improve legal protections, empower regulators and address urgent risks of cutting-edge models
The Ada Lovelace Institute today published a new report analysing the UK’s proposals for AI regulation.
Regulating AI in the UK: three tests for the Government’s plans
Will the proposed regulatory framework for artificial intelligence enable benefits and protect people from harm?
How do people feel about AI?
A nationally representative survey of the British public