The Ada Lovelace Institute in 2024
Reflections from the last year and a look ahead to 2025 from Ada’s Director Gaia Marcus
17 December 2024
Reading time: 10 minutes
If 2023 was the year when AI took centre stage and hit the news and the policy agenda, then 2024 is the year when – to borrow a phrase – the rubber hit the road.
Indeed, 2024 has been rife with narratives about AI driving progress. But to ensure that these technologies actually work for people and society, we must bring calm, caution and evidence to hype cycles.
AI, society and public services
2024 has brought into sharp relief the societal and democratic impacts of data-driven technologies – from the fallout of the Horizon scandal to both the worries about and the actual use of AI in elections around the world. We have also seen a strong desire for AI to solve longstanding issues affecting the delivery of public services.
Despite the growing enthusiasm about the promise of AI technologies, we still lack adequate information about their reliability, efficacy, safety and impacts on people. There is a growing need to ask the right questions about these technologies: first, do they work? Second, do they work well enough for everyone? And finally, do they work well in context – not just under test conditions, but in the real world, on the street, in the hospital or in the classroom?
At Ada, we have been examining these questions through our work on the intersection of data, AI and public services. We had the rare opportunity to get under the bonnet with Barking & Dagenham, and published an observational study of the council’s early use of the OneView data system and predictive analytics tools. Our research uncovered several prerequisites for data analytics to be used and trusted by frontline workers: the required outputs from the system must be clearly specified and understood by all users, tools must be seen by the public as legitimate, and the accuracy of the system must be high enough to be trusted.
We’ve built on that research with two reports on the procurement of AI in local government – the culmination of a year-long research project in which we analysed the complex guidance landscape available to local government procurers and engaged with a variety of stakeholders involved in procurement on the ground. Throughout this research it has become clear that getting procurement right is crucial if we want AI in public services to work well for people and society. With this in mind, we have called for a National Taskforce for the Procurement of AI in Local Government to address the multiple challenges in this area in a joined-up way.
We have also focused on the impacts of AI and data-driven systems that may entrench inequalities in society. We engaged with health data experts, doctors, and transgender and non-binary patients to understand the way gender is coded in the data-driven systems used in primary care in England. Our research highlighted that the way these systems are set up can sometimes have unintended consequences for the people the data represents. Because of this, any move to predictive or AI-driven healthcare must be carefully thought through.
We also wrote to the Home Secretary making the case for biometrics regulation, and were subsequently quoted by the Policing Minister in a recent Westminster Hall debate. We’ve just taken part in a Home Office roundtable on this vital issue and look forward to seeing what proposals may be brought forward.
The safety conversation
The safety conversation has continued since the UK AI Safety Summit in 2023. 2024 saw two more global meetings of international policymakers in Seoul and San Francisco. The network of global AI safety institutes has grown from two at Bletchley to more than 10 now, with Kenya, Singapore, Korea, Canada and Australia – along with the EU AI Office – joining the fold.
The focus on safety has largely remained a narrow one – on model evaluations, and on a limited set of risks such as bioweapons or the prospect that humans will lose control of these systems. However, at Ada, we do not think this is enough. We believe safety should mean keeping people and society safe from the full range of risks and harms that AI systems cause, from deepfakes and disinformation to discrimination in hiring or public service provision.
At the Seoul summit and the San Francisco convening, Ada argued for a renewed focus on context-specific evaluations of AI systems in collaboration with sectoral regulators and new statutory powers to replace the existing voluntary approach. We also conducted and published research looking at the evaluation of foundation models, which found that current evaluations are not enough to prevent unsafe products from entering the market.
The governance landscape
2024 saw the passing of the EU AI Act, the first comprehensive regulation of AI anywhere in the world. Many of Ada’s key recommendations were included in the Act, such as the establishment of a new AI Office to ensure coordinated regulatory oversight, the inclusion of general-purpose AI (GPAI) models so that accountability is distributed more logically along the value chain, and the requirement for public bodies to undertake fundamental rights impact assessments.
Ada’s work did not stop with the passage of the Act, as preparation for implementation quickly got underway. We supported the development of the EU Code of Practice on General Purpose AI models – which will detail the obligations for GPAI providers via a co-regulatory approach – by joining four working groups covering transparency and copyright, risk assessment and mitigation, and corporate governance.
Outside of the EU, 2024 also saw both successful and unsuccessful attempts at US state level to pass legislation on AI technologies. After a change in government, the UK introduced a new data bill (on which Ada has been briefing Parliament), began to implement its Online Safety Act and set out its intention to pass a new bill to regulate frontier models, on which we expect a consultation early next year. Given the bill’s likely narrow focus, we have been working with other civil society organisations to identify what else might be needed from this bill to ensure it addresses the vast range of technologies that currently impact people’s lives.
Indeed, debates on governance and regulation tend toward the ideological – but when it comes down to it, we need evidence on what works and what doesn’t. We have been building this evidence at Ada this year – from examining the effectiveness of a first-of-its-kind third-party AI auditing regime, to publishing a landscape review of the current state of participatory and inclusive data stewardship, to exploring what lessons can be learned for AI regulation from three other regulated sectors.
Listening to the public
Civil society plays a vital role in pushing policymakers to think beyond the ‘art of the possible’. An important part of this is elevating the voices of diverse publics. Listening to people is essential if we want to think about how new technologies are woven into the fabric of society. We still need to understand more about how AI and other data-driven technologies impact different people’s lives, livelihoods, relationships, safety and wellbeing. And we need to have a better sense of what real people want from data and AI, and how they want it to fit into their lives.
In 2024 we commissioned an update of our 2023 survey ‘How do people feel about AI?’, to be published in March 2025. This vital evidence will help us understand people’s views of technologies from autonomous weapons to cancer-predicting AI tools. It will also enable us to track attitudes over time, and to see where legitimacy and trust might be changing.
Public good, or public benefit, is a buzzword in almost every policy conversation right now. In 2024 we began new research asking people what they think the relationship between AI and public good is, so that policymakers can take real people’s views into account early in decision-making and in the design of services and processes.
Looking ahead to 2025
At Ada, we have spent the last few months speaking to the team, board and wider stakeholders about where best to put our energies and focus for the next few years. Looking ahead to 2025, Ada’s independence feels more vital than ever. It allows us to stand as an ‘honest broker’ or bridge – adding value to ongoing discussions, undertaking research free from capture by vested interests in the private or public sectors, and continuing to work on mechanisms for rebalancing power.
The lack of a positive democratic vision for data and AI technologies in society is ever more pressing. These technologies are not value-neutral: the way they are designed and implemented matters, and has profound ripple effects on people and institutions. Despite this, there is a ‘vision gap’ where AI technologies are being developed and deployed without democratic input or a clear sense of what kind of society we are seeking to build.
The focus on public interest AI at the upcoming AI Action Summit in France could be a much-needed intervention in this space – but significant resource and political will are necessary if AI development is truly to work for people and society. We are seeing more and more policymakers become curious about public participation and bring public voices and opinions into their work. We want to build on this curiosity with knowledge and skills, so that policymakers can not just consult the public but involve them meaningfully, reducing the risk of a backlash or loss of trust.
On our side, Ada will continue our work with different publics to understand the issues that crop up when technology intersects with various aspects of their lives, and to ask what interventions in regulation, policy or practice are needed to make sure technologies align with the public’s idea of what ‘good’ looks like.
Predictions about the fast-moving world of AI are likely something of a fool’s errand, but a lot of people are betting on an unprecedented five years of progress ahead. If AI is to be as consequential to our societies and democracies as is expected, we should expect its governance to match that of similarly consequential systems. This is why Ada argues for robust regulation of AI at the national level, with a need for international alignment.
So what can we expect next year? In terms of international fora, we’ll be keeping an eye on the outcomes of the AI Action Summit – and how it might contribute to international alignment on AI governance. We will also be looking at how governance can be strengthened between the OECD and the UN, and where the AI Safety Institute network can cover gaps in this ecosystem.
Regionally and nationally, we can probably expect more regulatory fragmentation from the USA, and more clarity on whether the UK will stay within a narrow focus on life-and-limb risks. We will also see a reckoning around whether the EU AI Office, after a promising start, will have enough resource and capacity to keep up with systemic risk and market developments – and whether the EU can build on the AI Act to lead on the uptake of reliable AI.
In all of this, the thing to watch for will be incentives. Are those actors best able to mitigate the considerable risks of AI given sufficiently strong incentives to do so – particularly in light of the economic drivers at play? Hot-topic mechanisms like assurance, audit and standards will be invaluable parts of good governance, but contain no incentives in and of themselves.
At Ada, we will continue to drive work that asks whether and how AI tools are working for us all, evaluating and documenting the real-world impacts of AI and data-driven technologies on communities and society. As new technologies and use cases gain prominence, we will look to interrogate their use and impact – for example, we hope to publish research on AI assistants and to deepen our work on liability and AI agents in the new year.
With an expected acceleration of calls to roll out AI across the public sector, we will double down on our work to bring calm, curiosity and evidence to the table. Our research will look to identify the conditions for success, and how best to balance the needs of users, professionals, services and society in decisions about the use of AI and other data-driven technologies in public services. We’re keen to engage with different publics and workers to critically examine the values embedded in AI implementation, and to understand which uses of AI are effective, are seen as publicly legitimate and achieve positive outcomes.
And finally, we will build on our work on public compute and AI industrial policy. We will examine if and how Ada can best surface the ways in which current concentrations of power are affecting people and society, while supporting the development of credible policy and market levers for rebalancing power, distributing benefits and protecting those hit hardest.
So if 2024 is the year the rubber hit the road, 2025 will be a year in which we must pay attention to who is in the driver’s seat. A year in which Ada and our colleagues, collaborators and wider ecosystem will continue to drive evidence, deliver analyses and convene diverse voices to ensure that the use of data and AI works for people and society.
We look forward to it.