Pioneering framework for assessing the impact of medical AI set to be trialled by NHS in world-first pilot
The Ada Lovelace Institute has designed an algorithmic impact assessment (AIA) for the NHS AI Lab, the first known example within healthcare.
8 February 2022
- New process aims to ensure AI researchers and companies identify and address potential risks, such as algorithmic bias, before getting access to NHS patient data.
- By trialling our proposed process, the NHS is set to be the first health system in the world to adopt this approach to the ethical use of AI.
The Ada Lovelace Institute has published a new proposal for the use of algorithmic impact assessments (AIAs) to maximise the benefits and mitigate the harms of AI technologies in healthcare. By trialling this detailed process, the NHS in England will be the first health system in the world to use this new approach to the ethical use of AI.
The NHS is set to trial the assessment through the NHS AI Lab. The framework will be used in a pilot to support researchers and developers in assessing the possible risks of an algorithmic system before they are granted access to NHS patient data.
It will be trialled across a number of initiatives and used as part of the data access process for the National COVID-19 Chest Imaging Database (NCCID) and the proposed National Medical Imaging Platform (NMIP).
The NCCID is a central database of medical images from hospital patients across the country, which supports researchers in better understanding COVID-19 and in developing technology to improve patient care. The proposed NMIP will expand on the NCCID and enable the training and testing of a wider range of AI systems using medical imaging for screening and diagnostics.
Data-driven technologies (including AI) are increasingly being used in healthcare to help with detection, diagnosis and prognosis. However, there are legitimate concerns that AI could exacerbate health inequalities and entrench social biases (for example, training data biases have resulted in AI systems for diagnosing skin cancer that are less accurate for people of colour).
AIAs are an emerging approach for holding the people and institutions that design and deploy AI systems accountable. They are one way to help pre-empt and identify the potential impact of algorithms on people, society and the environment.
Ada has mapped out a detailed, step-by-step process for using an AIA in a real-world example involving both the public and private sectors. The research provides a series of practical steps to help the NHS AI Lab develop its AIA process for data access. It is designed to help developers and researchers think through the potential impacts of the technologies developed using NHS data.
Octavia Reeve, Interim Lead, Ada Lovelace Institute, said:
‘Algorithmic impact assessments have the potential to create greater accountability for the design and deployment of AI systems in healthcare, which can in turn build public trust in the use of these systems, mitigate risks of harm to people and groups, and maximise their potential for benefit.
‘We hope that this research will generate further considerations for the use of AIAs in other public and private-sector contexts.’
Brhmie Balaram, Head of AI Research and Ethics at the NHS AI Lab, said:
‘Building trust in the use of AI technologies for screening and diagnosis is fundamental if the NHS is to realise the benefits of AI. Through this pilot, we hope to demonstrate the value of supporting developers to meaningfully engage with patients and healthcare professionals much earlier in the process of bringing an AI system to market.
‘The algorithmic impact assessment will prompt developers to explore and address the legal, social and ethical implications of their proposed AI systems as a condition of accessing NHS data. We anticipate that this will lead to improvements in AI systems and assure patients that their data is being used responsibly and for the public good.’
ENDS
Contact
George King, Communications Manager
The Ada Lovelace Institute
gking@adalovelaceinstitute.org
+44 20 7323 6274
Notes to Editors
Lara Groves was the lead author of the report, with substantive contributions from Jenny Brennan, Inioluwa Deborah Raji, Aidan Peppin and Andrew Strait.
The Ada Lovelace Institute
The Ada Lovelace Institute (Ada) is an independent research institute with a mission to ensure data and AI work for people and society. Ada works to create a shared vision of a world where AI and data are mobilised for good, to ensure that technology improves people’s lives. Ada takes a sociotechnical, evidence-based approach and uses deliberative methods to convene and centre diverse voices. It does this to identify the ways that data and AI reorder power in society, and to highlight tensions between emerging technologies and societal benefit. Find out more: adalovelaceinstitute.org | @adalovelaceinst
The NHS AI Lab
The NHS AI Lab’s mission is to accelerate the safe, ethical and effective use of AI in health and social care. It is currently preparing to develop a National Medical Imaging Platform (NMIP) which will make it possible to collect imaging data on a national scale, and provide this data for the development and testing of safe AI screening technology. The NHS AI Lab is part of the NHS Transformation Directorate.
The Nuffield Foundation
The Ada Lovelace Institute is funded by the Nuffield Foundation, an independent charitable trust with a mission to advance social well-being. The Foundation funds research that informs social policy, primarily in education, welfare and justice. It also provides opportunities for young people to develop skills and confidence in STEM and research. In addition to the Ada Lovelace Institute, the Foundation is also the founder and co-funder of the Nuffield Council on Bioethics and the Nuffield Family Justice Observatory.