‘Globally significant’ AI Act must recognise those affected by AI, says the Ada Lovelace Institute
Ada is recommending amendments to ensure that those affected by AI are recognised and empowered in the AI Act.
31 March 2022
Reading time: 3 minutes
- Centring those affected by AI, Ada recommends enshrining legal rights to complaint and collective action, and giving civil society a voice in standards-setting.
- Ada recommends expanding and reshaping the role of risk in the Act: risk should be assessed on the basis of ‘reasonably foreseeable’ purpose and extended beyond individual rights and safety to include systemic and environmental risks.
The Ada Lovelace Institute, an independent research institute based in the UK and Brussels, has today published a series of proposed amendments to the EU AI Act aimed at recognising and empowering those affected by AI, expanding and reshaping the meaning of ‘risk’ and accurately reflecting the nature of AI systems and their lifecycle.
As the world’s first comprehensive attempt to regulate AI, the Act has the potential to set a global standard and to inspire legislative initiatives elsewhere.
Ada recommends empowering people by building ‘affected persons’ into the Act and enshrining their legal rights to complaint and collective action. Civil society’s voice should also be strengthened by building representation for civil society organisations into the EU standards-setting process, which has to date dealt with technical rather than societal issues.
Risk forms the foundation of the AI Act, and Ada is proposing changes both to how risk is determined in the Act and to its categories of risk. The amendments recommend establishing a process for adding new types of AI to the ‘high-risk’ list and assessing risk based on the ‘reasonably foreseeable purpose’ of AI systems, rather than their ‘intended purpose’.
Biometric categorisation and emotion recognition should be added to the ‘unacceptable risk’ list in Article 5. If evidence were put forward demonstrating the benefits of these technologies, Ada says they must pass a ‘reinforced proportionality test’ and, even then, be used only in exceptional circumstances.
The Act primarily understands risk in terms of the risks AI systems pose to individual rights and safety. However, Ada argues that it should also cover broader systemic and environmental risks, which cannot be understood simply as risks to individuals.
Fundamental to the effectiveness of the Act is the extent to which it captures the nature of complex and adaptable AI systems as they are used in practice. Several of Ada’s recommendations are aimed at ensuring that the Act appropriately reflects and relates to the reality of how AI systems are developed, deployed and adapted.
For example, under the current proposals, high-risk systems face only ex ante requirements, which apply to AI systems before deployment. This reflects a product-safety approach to AI and fails to capture how AI systems are actually used in the world. To address this, Ada recommends that all high-risk systems be subject to regular ex post ‘impact evaluations’.
Alexandru Circiumaru, European Public Policy Lead, said:
‘Regulating AI is a difficult legal challenge, so the EU should be congratulated for being the first to come out with a comprehensive framework.
‘However, the current proposals can and should be improved, and there is an opportunity for EU policymakers to significantly strengthen the scope and effectiveness of this landmark legislation.
‘We want to see changes to the Act that recognise those affected by AI systems, expand the EU’s concept of “risk” and deal with the reality of AI, which cannot be regulated as a product or service in the traditional sense.’
Imogen Parker, Associate Director, said:
‘The EU AI Act, once adopted, will be the first comprehensive AI regulatory framework in the world. This makes it a globally significant piece of legislation with historic impact far beyond its legal jurisdiction.
‘The stakes for everyone are high with AI, which is why it is so vital the EU gets this right and makes sure the Act truly works for people and society.’
The policy briefing builds on an expert legal opinion commissioned by the Ada Lovelace Institute and authored by Professor Lilian Edwards, a leading academic in the field of internet law, which addresses substantial questions about AI regulation in Europe and looks towards a global standard.
Related content
- People, risk and the unique requirements of AI – 18 recommendations to strengthen the EU AI Act
- Expert opinion: Regulating AI in Europe – Four problems and four solutions
- Three proposals to strengthen the EU Artificial Intelligence Act – Recommendations to improve the regulation of AI, in Europe and worldwide