Quality assurance

Exploring the potential for a professionalised AI assurance industry

Project status: Current

Project background

We expect everyday products such as cars, food and medicines to be safe. These safety-critical products have benefited from mandatory independent assessments that assure their safety and minimise potential risks to people and society.

AI systems are no different: ensuring they are safe, effective and reliable will require regular assurance assessments of their technical components and of the governance practices of the companies developing and deploying them.

AI assurance is ‘the process of measuring, evaluating and communicating the trustworthiness of AI systems’. Related terms used in similar ways include external validation, third-party governance and AI accountability.

In all cases, assurance assessments form part of compliance and governance processes that aim to manage risk and support safety and ethics goals in AI. Assessments can draw on a range of methods at different stages of an AI system’s lifecycle, including risk assessments, conformity assessments or red-teaming before deployment, and algorithmic bias audits after deployment.

Recent regulations, such as the EU’s AI Act and Digital Services Act and New York City’s Local Law 144, have proposed implementing AI assurance assessments, including algorithm audits. Scaling these kinds of assessments will require an ecosystem of certified third-party assessors, and creating that ecosystem is likely to require the professionalisation of AI assurance as an industry.

What is professionalisation?

‘Professionalising’ an industry refers to the process of giving a group or occupation professional qualities, usually through training or certification. Other components can include codes of conduct, membership bodies, standardised practices, and regular assessments of competence and quality. Some industries designate legally protected titles, such as chartered surveyor or chartered accountant, to show that a professional is trained or qualified to a particular standard.

Project overview

In partnership with the Center for Democracy & Technology, the Ada Lovelace Institute will explore the conditions for, and potential impacts of, professionalising the AI assurance industry. This research will help inform policymakers and industry in the UK and internationally.

We will conduct interviews with assurance practitioners and experts from AI and other industries, such as financial services.

The key questions we will seek to answer include:

  • What is AI assurance, and what is it setting out to achieve?
  • What role can a professionalised industry of third-party assurance play in the AI governance ecosystem?
  • What is needed to ensure assurance works well? What are the potential unintended consequences of third-party assurance?
  • How has professionalisation of third-party assurance come about in other sectors?

This project builds on existing Ada research.

If you’re interested in hearing more about this project, or about Ada’s other research on AI accountability and assurance, please get in touch with the lead researcher, Lara Groves.

