Ada Lovelace Institute responds to General-Purpose AI Code of Practice draft
Gaia Marcus, Director of the Ada Lovelace Institute, has responded to the third draft of the General-Purpose AI Code of Practice.
Responding to the publication of the third draft of the General-Purpose AI Code of Practice, Gaia Marcus, Director of the Ada Lovelace Institute, said:
“The Code of Practice is a crucial mechanism for clarifying and codifying the obligations of providers of general-purpose AI (GPAI) models that may pose systemic risk under the EU AI Act, and for ensuring good social and economic outcomes from the most powerful models. It’s important that it properly reflects the interests of those using and subject to AI products by appropriately managing AI risk.
“GPAI models function as the engines powering countless downstream applications. Just as we regulate car engine safety standards rather than individually certifying each vehicle model built with that engine, the Code provides assurance for the models that form the core of thousands of AI applications.
“It is important to note that the safeguards in this iteration of the Code have been considerably diluted from the first draft. The range of risks in scope has been narrowed and the whistleblower protections have seemingly been stripped back. However, we welcome improvements on the previous draft that make it harder for providers to self-exclude from independent external assessment – a critical mechanism for ensuring meaningful scrutiny. We also welcome the mechanism for pre-deployment information sharing, which, if made more robust, will increase the chance of finding and managing risks before deployment.
“The current direction of travel on the Code largely draws on companies’ existing safety and security practices, and on voluntary obligations under international agreements with which they should already comply. There is considerable value in consolidating these practices into a shared standard and ensuring consistency across the industry, but it means the Code will for now function more as a useful compliance tool for companies than as a means of robustly raising the bar on safety and security. We therefore welcome the recommendation in Appendix 2 that the Code be updated regularly, including when necessary to deal with emergency situations. Such mechanisms may allow an iteratively stronger Code to be developed over time.
“Overall, the draft represents significant progress in the challenge of governing AI. The Code will streamline compliance and help ensure a more level playing field, providing clarity for European consumers and businesses procuring AI, and supporting adoption by offering some reassurance that what they’re purchasing has been safely developed, effectively tested and has security mitigations in place.
“The mechanism is highly targeted. Analysis by Epoch AI indicates that only around 25 models are currently covered by the provisions on systemic risk, and the only affected European model provider has described the Act as workable. The drafters say the Code will be relevant to only 5-15 companies at any given time. The compliance costs of testing, monitoring and transparency represent only a small fraction of the compute cost of training any model captured by the existing compute threshold.
“While policymakers will need to be much more ambitious in future iterations of the Code to address the emerging impacts of these technologies, EU institutions should take the opportunity to capture and operationalise the existing state of the art delivered by this expert-led process.”