Ada Lovelace Institute statement on the UK’s approach to AI regulation
Michael Birtwistle, Associate Director (Law & policy), comments on the UK Government's response to the AI Regulation white paper consultation.
7 February 2024
Reading time: 3 minutes
The UK Government has published its response to the AI Regulation white paper consultation. Commenting on the policy proposals, Michael Birtwistle, Associate Director (Law & policy) at the Ada Lovelace Institute, said:
‘The Government should be given credit for evolving and strengthening its initially light-touch approach to AI regulation in response to the emergence of general-purpose AI systems. Ministers are right to acknowledge that AI is already causing harm in many everyday contexts and poses a broad range of risks to society. The Government’s work to build in-house expertise on AI through the establishment of the central AI risk function and the AI Safety Institute, as well as its development of standards on algorithmic transparency and AI management, are promising first steps. However, much more needs to be done to ensure that AI works in the best interests of the diverse publics who use these technologies.
‘We are concerned that the Government’s approach to AI regulation is ‘all eyes, no hands’: it has equipped itself with significant horizon-scanning capabilities to anticipate and monitor AI risks, but it has not given itself the powers and resources to prevent those risks or even react to them effectively after the fact. While an uplift in regulatory funding is welcome, £10 million falls well short of the hundreds of millions of pounds per annum that we allocate to safety in other critical industries.
‘Unless binding legislation is brought forward, the Government’s approach to regulating AI will remain reliant on the goodwill of powerful AI companies like Microsoft, Google and Meta. Voluntary commitments to good practice are not enough: the evidence shows that only hard rules enshrined in law will incentivise developers and deployers of AI to comply and empower regulators to act.
‘The original framework of the Government’s AI White Paper had some major gaps around how its AI principles would be applied and enforced in areas with no dedicated regulator, such as the workplace and central government contexts like tax and benefits administration, where the Government is actively developing AI-driven services. The consultation response proposes no action to address these considerable gaps, which means that no regulatory actor will be accountable for the impacts of AI in these areas.
‘The Government’s ambitions for safe, trusted AI are also at odds with the deregulatory approach it is pursuing on automated decision-making through the Data Protection and Digital Information Bill. At a time when AI systems are increasingly being used across society to make important decisions, Government should be increasing the available safeguards, not making it easier to use this technology without appropriate human oversight.
‘Ministers say they are now open to the option of bringing forward legislation on AI, but only once further conditions have been met. This risks being too little, too late: these systems are already being integrated into our daily lives, our public services and our economy, posing a range of risks to people and society. It would likely take a year or longer between legislation first being proposed and then being enacted. In a similar period we have seen general-purpose AI systems like ChatGPT go from a niche research area to the most quickly adopted digital product in history.
‘We shouldn’t be waiting for companies to stop cooperating, or for a Post Office-style scandal, before equipping the Government and regulators to react. Ministers should look to capitalise on the momentum of the last year and bring forward binding legislation to prevent and react effectively to the risks of AI as soon as possible.’
Related content
Regulating AI in the UK
Strengthening the UK's proposals for the benefit of people and society
This briefing examines the UK’s current plans for AI regulation and sets out recommendations for the Government and the Foundation Model Taskforce.