Realising the potential of algorithmic accountability mechanisms
Seven design challenges for the successful implementation of algorithmic impact assessments
16 February 2022
Reading time: 12 minutes
This is a delicate moment for the mainstreaming of AI accountability practices, as AI technologies continue to develop at pace. How will appropriate, robust and tested mechanisms emerge to meet the need for checks and balances?
On the one hand, the use of AI systems is becoming ever-more prevalent, with applications in sectors including education, healthcare and financial services. As these technologies are relatively new, their potential to introduce or amplify specific benefits or harms for individuals and society is still emerging – and is the subject of much debate.
On the other hand, introducing any accountability mechanism as a standard for good practice or a legal requirement means going beyond theoretical proposals.
The AI ecosystem is in the early stages of exploring mechanisms to evaluate or pre-empt the detrimental effects of AI systems on society and individuals. We are still far from having a clear understanding of how to ensure that their deployment is consistent with societal expectations of fairness and justice and how the institutions deploying them can be held accountable.
These areas of enquiry are now well-established in the data and AI ethics space and have contributed to growing calls for the adoption of one mechanism in particular, the algorithmic impact assessment (AIA), as a basis for algorithmic accountability.
Building on our work with the NHS AI Lab to trial AIAs in the context of UK healthcare, we contend that they are a useful tool for entrenching accountability and an important step in moving from principles to practice. This blog post explores what needs to be considered about AIAs before they are implemented on a wider scale.
AIAs are an emerging mechanism for assessing the societal impacts of an algorithmic system before its implementation and throughout its use. They have been proposed as a means of encouraging the holistic assessment of common ethical issues, bolstering public-sector transparency commitments, and complementing other algorithmic accountability processes such as audits.
The discourse around AIAs is emerging against the background of increased interest in the regulation and governance of AI systems. The world’s first attempt at ‘horizontal’ regulation of AI, the proposed EU Artificial Intelligence Act, will be a landmark piece of legislation that could impose mandatory risk assessment requirements. Earlier this month, Democratic US Senators introduced the Algorithmic Accountability Act of 2022, an update of an earlier bill, which, if passed, would bring new transparency requirements and require companies to carry out algorithmic impact assessments.
In the UK, the recently published National AI Strategy calls for a robust ‘AI assurance’ ecosystem comprising mechanisms like AIAs to help manage the use of algorithms. In the third sector, organisations including the Institute for the Future of Work have made the case for AIAs to be used to study the possible impacts of algorithmic systems on workers.
Further policy recommendations in favour of AIAs and other forms of impact assessment for technology have been made internationally by a variety of research and advocacy organisations, such as the collaboration between the European Centre for Not-for-Profit Law (ECNL) and Data & Society, as well as European Digital Rights (EDRi), Access Now and AlgorithmWatch.
Impact assessment mechanisms as a general framework have an established history of practical use across various sectors. There are relatively established procedures for assessing the impact of a proposed policy or programme of work on equalities (equalities impact assessments), human rights (human rights impact assessments or HRIAs), data protection (data protection impact assessments or DPIAs) and the environment (environmental impact assessments or EIAs). But the exploration of practical use cases for AIAs is still very much in its infancy.
There are many examples of these other kinds of impact assessments being applied to technology, but, at the time of writing, there is only one known and recorded AIA protocol that has been implemented: Canada’s Algorithmic Impact Assessment. Built by the Canadian federal government under the ‘Directive on Automated Decision-Making’ and introduced in 2020, the Canadian AIA enables federal government departments to manage their use of AI in the delivery of public services. So far, there are five published examples of a completed Canadian AIA for reference.
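To make the questionnaire-and-scoring approach concrete, here is a minimal sketch in Python of how an assessment of this kind can translate answers into an impact level that then determines which safeguards apply. The questions, weights and thresholds below are illustrative inventions, not those of the Canadian framework.

```python
# Minimal sketch of a questionnaire-based impact scoring approach, loosely
# modelled on the idea behind the Canadian AIA. The questions, weights and
# thresholds are illustrative, not the official ones.

# Each 'yes' answer contributes points to a raw impact score; mitigation
# measures subtract from it. The net score maps to an impact level.
ILLUSTRATIVE_QUESTIONS = {
    "decision_affects_rights": 4,    # e.g. benefits, licensing, liberty
    "fully_automated_decision": 3,   # no human in the loop
    "uses_personal_data": 2,
    "vulnerable_population": 3,
}

ILLUSTRATIVE_MITIGATIONS = {
    "human_review_of_outputs": 2,
    "documented_appeal_process": 2,
    "bias_testing_before_release": 1,
}

# Hypothetical thresholds mapping a net score to an impact level (I-IV).
LEVEL_THRESHOLDS = [(0, "Level I"), (4, "Level II"), (7, "Level III"), (10, "Level IV")]


def impact_level(answers: dict, mitigations: dict) -> str:
    """Return an illustrative impact level for a proposed system."""
    raw = sum(w for q, w in ILLUSTRATIVE_QUESTIONS.items() if answers.get(q))
    offset = sum(w for m, w in ILLUSTRATIVE_MITIGATIONS.items() if mitigations.get(m))
    net = max(raw - offset, 0)
    level = LEVEL_THRESHOLDS[0][1]
    for threshold, name in LEVEL_THRESHOLDS:
        if net >= threshold:
            level = name
    return level


if __name__ == "__main__":
    answers = {"decision_affects_rights": True, "fully_automated_decision": True}
    mitigations = {"human_review_of_outputs": True}
    print(impact_level(answers, mitigations))  # prints "Level II" for this example
```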
At the Ada Lovelace Institute, we have carried out a detailed survey of proposals for AIAs, alongside research into best practices for their implementation. While we consider AIAs to be a potentially valuable tool for supporting ethical practice, our research in partnership with NHS AI Lab suggests that AIAs would benefit from extensive trialling and further investigation, before becoming established practice or being adopted into law. The pace of lawmaking may offer a window of opportunity to trial and investigate AIAs in the interim, if lawmakers are prepared to iterate on the proposals and take the very latest research into account.
Here, we set out seven challenging ‘design issues’ for AIAs.
Scope of ‘impacts’
AIAs are broadly understood as a method for conducting ex ante assessment of impacts, meaning that a system’s impacts are studied before its use. This raises questions about the definition of impact and how it is interpreted.
One study by Data & Society researchers considers ‘impacts’ as proxies or stand-ins for harms that exist in the world. This definition is useful because it provides researchers and policymakers with a means of identifying and measuring these effects. However, it does not address the limitations of AIAs to identify or demarcate which domains (economic rights, environment, civil liberties, etc.) an algorithmic system is likely to affect and, in practical terms, which kinds of impacts the assessment should focus on.
The difficulty in determining on what basis an AIA should establish the impact of an algorithmic system is compounded by the challenge of identifying who the system is likely to affect. This leads to the question of who the institutions implementing the system and the AIA should be accountable to.
It is well known that the effects of algorithmic systems are not the same for different people and demographics: for example, medical algorithms have been found to be racially biased, exacerbating discrimination and worsening the quality of care for Black people. Without proper accountability for these differential impacts, established institutions may be empowered to set their own standards for AIAs, turning them into just another tick-box exercise.
It is essential that these questions of what and who is being impacted are analysed critically and addressed prior to the design and implementation of individual AIAs. Thought should be given to engaging different communities who can bring their own lived experience to the AIA process (see ‘Participation’ below).
Mandatory or voluntary application
Recent scholarship from Data & Society has set out ten ‘constitutive components’ that, according to the researchers, all AIAs must have to be effective. One of them is a ‘source of legitimacy’: a mechanism to establish the validity of the AIA. This would entail an institutional body, like a government agency, and a legal or regulatory mandate. Mandating impact assessments is a real possibility, given the evidence from existing legal regimes. The EU’s General Data Protection Regulation (GDPR) already requires the use of data protection impact assessments in cases where data processing is considered ‘high risk’.
Our research with the NHS AI Lab shows that considerable domain-specific expertise and sector-specific customisation will be necessary to effectively implement each instance of an AIA. In practice, this means that AIAs cannot and should not take the form of a single, universal framework, as this would struggle to capture the idiosyncrasies of different sectoral applications.
This does complicate the idea of requiring AIAs by law, without prior clear specification as to the type of AIA to conduct in each case. Further research into the practical implementation of AIAs is needed to support proposals for legal mandates.
This could take the form of ethnographic research into how AIAs interact with, and layer on top of, other governance frameworks and initiatives. This could help chart the progress made in accountability practices in ‘real time’, allowing the merits of a legally mandated AIA to be assessed.
Public or private sector use case
Most of the current proposals for the implementation of AIAs focus on algorithmic systems used in the public sector. This is due in part to the obligations of public sector authorities to disclose risks and impacts relating to the use of algorithmic systems, and to demonstrate their adherence to principles such as fairness and non-maleficence.
Private companies building AI products do not have these same obligations and may be reluctant to share commercial details relating to their algorithms. However, as legal scholar Andrew Selbst has argued, the private sector is likely to be pivotal to ensuring the adoption of AIAs globally. Regulators will have to work with the private sector and argue for the benefits of AIAs in a private sector context.
Our AIA research project in collaboration with the NHS AI Lab considered examples at the intersection of the public and private sectors. The NHS AI Lab has committed to trialling our AIA proposal, under which private firms intending to access data held by a public body must complete an AIA.
AIAs could be suitable for adoption in private sector contexts as a compliance mechanism for ethical and responsible data use, alongside existing measures like third-party audits. However, for this to be possible, regulators would have to be able to incentivise the use of AIAs and more work is needed to clarify how that could be done and what those incentives should be.
Participation
There is broad consensus around the idea that identifying impacts should involve the active participation of a variety of stakeholders. The High-Level Expert Group on AI’s ‘Assessment List for Trustworthy AI’ recommends bringing together experts such as legal and compliance officers, while Moss et al. suggest advocacy groups and ‘some form of “the public”’. Public participation in impact assessment owes its origins to the practice of environmental impact assessment. Established under the US National Environmental Policy Act, environmental impact assessment practice considers meaningful community involvement integral to delivering environmental justice.
Despite this provenance, it is still unclear how those conducting AIAs can achieve meaningful engagement and public participation for an assessment process that is itself under-developed. We suggest that some form of participation is better than none and organisations should be looking for ways to ensure that individual people and communities are actively involved in the assessment process.
Seeking diverse perspectives and providing a forum for careful and comfortable deliberation is necessarily time and resource intensive. There are also open questions around participation. Who should take part? How should the deliberative findings feed back into design and development? Who should fund and resource participation in AIAs?
Cadence
Thinking of AIAs as a way of assessing the impacts of an algorithmic system before its implementation leaves open questions about the impact of algorithmic systems already in use. Is there an optimal moment, over the course of the lifecycle of an algorithmic system, to assess impact? If there is, what is it? The answers to these questions are still unclear.
For example, human rights impact assessments have been adopted to retroactively assess the impact of a technology on human rights after it has been implemented, to learn from its failures.
Due to the iterative nature of algorithmic systems, it is reasonable to argue that AIAs should be completed at multiple intervals after established trigger points, such as after a period of significant model redevelopment.
Reiterating the impact assessment process would require accountability mechanisms to be in place over the long term, and it is unclear how that would work in practice. One option would be to reaffirm the role of other governance mechanisms already in use to complement the AIA.
One relevant model we encountered in our research is post-market surveillance. Used in the context of medical devices to monitor the performance of a device or system once adopted into a clinical setting, post-market surveillance enables responsive incident reporting to prevent more widespread harm.
Revisiting the AIA process once an algorithmic system is in place would help organisations respond to any impacts that arise ex post and help entrench accountability across the whole development lifecycle. However, establishing a fixed procedure for this type of accountability mechanism is challenging, given the variety and specificity of different use cases.
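As a rough illustration of how such trigger points could be made operational, the sketch below encodes a handful of hypothetical conditions (model redevelopment, incident reports, performance drift, elapsed time) under which an organisation would revisit its AIA. None of these triggers or thresholds is an established standard; they are assumptions chosen for the example.

```python
# Minimal sketch of how re-assessment 'trigger points' might be encoded,
# in the spirit of post-market surveillance. The triggers and thresholds
# below are hypothetical illustrations, not an established standard.

from datetime import date, timedelta


def aia_reassessment_triggers(
    last_assessed: date,
    model_version_at_assessment: str,
    current_model_version: str,
    incident_reports_since_assessment: int,
    performance_drift: float,                    # e.g. drop in a monitored accuracy metric
    max_interval: timedelta = timedelta(days=365),
    drift_threshold: float = 0.05,
) -> list:
    """Return the list of trigger conditions that have fired (empty if none)."""
    triggers = []
    if current_model_version != model_version_at_assessment:
        triggers.append("significant model redevelopment")
    if incident_reports_since_assessment > 0:
        triggers.append("incident reported in deployment")
    if performance_drift > drift_threshold:
        triggers.append("performance drift beyond threshold")
    if date.today() - last_assessed > max_interval:
        triggers.append("scheduled periodic review")
    return triggers


if __name__ == "__main__":
    fired = aia_reassessment_triggers(
        last_assessed=date(2021, 3, 1),
        model_version_at_assessment="1.2",
        current_model_version="2.0",
        incident_reports_since_assessment=0,
        performance_drift=0.01,
    )
    if fired:
        print("Re-run the AIA:", ", ".join(fired))
```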
Publishable details
To promote transparency and build accountability, it is often recommended that the results of AIAs are published in a central location. For instance, the Canadian Directive on Automated Decision-Making already mandates that completed AIAs are published. Impact assessments in other domains, like human rights impact assessments and data protection impact assessments, have drawn criticism for not mandating that results are published for external scrutiny.
However, there is uncertainty about which details of an AIA process can be made public in a way that allows wide and varied audiences to meaningfully engage with them. In the example of the Canadian AIA, it is not clear whether people are able to enter into a dialogue about the published details or contest the findings.
The mere act of publishing details about AIAs should not be considered a stand-in for accountability. True accountability means providing the ability to understand, challenge and deliberate on the findings of an assessment.
Anyone looking to use AIAs should consider how to publish the findings in a way that truly enables critical dialogue with wider audiences, improves the quality of future AIAs and encourages reflexive practice.
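As one illustration, a published AIA record could be made machine-readable and include an explicit route for questioning or contesting the findings. The field names in the sketch below are hypothetical, not a recognised schema, and the values are invented for the example.

```python
# Minimal sketch of what a machine-readable published AIA record could
# contain so that others can engage with and contest the findings.
# The field names are hypothetical, not a recognised standard.

import json

published_aia_record = {
    "system_name": "Example triage support tool",
    "organisation": "Example public body",
    "assessment_date": "2022-01-15",
    "summary_of_findings": "Identified risks to equitable access; mitigations agreed.",
    "impacted_groups_consulted": ["patients", "clinicians", "patient advocacy groups"],
    "mitigations": ["human review of recommendations", "quarterly bias audit"],
    "planned_reassessment": "2023-01-15",
    # A published record is only a starting point for accountability:
    # readers need a route to question or contest the findings.
    "contact_for_questions_or_challenge": "aia-feedback@example.org",
}

print(json.dumps(published_aia_record, indent=2))
```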
Assessment of AIAs
Given the infancy of AIAs, there is little consensus over how to establish the effectiveness of individual AIAs, or how to compare which kinds of AIA approach may be more effective in different contexts.
In our research with the NHS AI Lab, we explored the possibility of instituting an independent assessor to partly address this problem by offering a level of external scrutiny. This was conceived as a short-term solution and may not be applicable to all contexts.
As new models of AIAs are trialled and added to a public portfolio of reference cases, we envisage that a set of standards will emerge to ensure the adequacy of the assessment process.
By stepping back to consider some of the practical challenges discussed in this blog post, policymakers and researchers will be better equipped to make important decisions on implementation. How will AIAs be used? What should they include and – most importantly – who should they benefit?
To be able to make robust claims about the effectiveness of AIAs, we need policymakers, researchers and developers to work together to test, trial and iterate AIAs across a variety of contexts. This collaborative scoping effort ahead of law-making will help ensure that, instead of offering empty platitudes, AIAs help build and entrench meaningful accountability for individuals and society.
If you would like to find out more about our project with the NHS AI Lab or Ada’s work on algorithmic accountability, please email Jenny Brennan, Senior Researcher.