Quality assurance
Exploring the potential for a professionalised AI assurance industry
How can (A)I help?
An exploration of AI assistants
Under the radar?
Examining the evaluation of foundation models
Code & conduct
How to create third-party auditing regimes for AI systems
Evaluation of foundation models
What kinds of evaluation methods exist for foundation models, and what are their potential limitations?
Safe before sale
Learnings from the FDA’s model of life sciences oversight for foundation models
Going public
Exploring public participation in commercial AI labs
Emerging processes for frontier AI safety
The UK Government has published a series of voluntary safety practices for companies developing frontier AI models.
Foundation models in the public sector
AI foundation models are integrated into commonly used applications and are used informally in the public sector.
Working it out
Lessons from the New York City algorithmic bias audit law
Keeping an eye on AI
Approaches to government monitoring of the AI landscape
AI assurance?
Assessing and mitigating risks across the AI lifecycle