Press release

7 in 10 say laws and regulations would increase their comfort with AI amid rising public concerns, national survey finds

72% of the UK public say that laws and regulations would increase their comfort with AI, according to a new nationally representative survey.

25 March 2025


72% of the UK public say that laws and regulations would increase their comfort with AI, up from 62% in 2022/23, according to a new nationally representative survey of UK attitudes to AI published by the Ada Lovelace Institute and the Alan Turing Institute as part of the UKRI-funded Public Voices in AI programme.

The survey of 3,513 UK residents provides valuable insights into the public’s awareness and perceptions of different uses of AI, their experiences of harm, and their expectations for governance, regulation and the role of AI in decision-making. It follows a previous survey, carried out in 2022 before the release of ChatGPT and other LLM-based chatbots and published in 2023.

Public awareness, benefits and concerns

The survey found that public awareness of different uses of AI varies widely. While 93% have heard of driverless cars and 90% of facial recognition in policing, only 18% have heard of the use of AI for welfare benefits assessments. It also found that large language models (LLMs) have gone mainstream: 61% say they have heard of LLMs and 40% say they have used them, demonstrating rapid adoption since their public release in 2022.

The public do see benefits in specific uses of the technology, and perceptions of overall benefit have remained stable since the 2022/23 survey, with the most commonly cited benefits being improvements in speed and efficiency. However, levels of concern have risen across all six uses of AI asked about in both surveys, with common concerns including overreliance on technology, mistakes being made and a lack of transparency in decision-making.

The public have particular concerns about how their data is used and whether they are represented in decision-making. 83% of the UK public are concerned about public sector bodies sharing their data with private companies to train AI systems. And when asked to what extent they feel their views and values are represented in current decisions about AI and how it affects their lives, half of the public (50%) said they do not feel represented.

Exposure to harm and support for regulation

The survey also shows that exposure to harm from AI is widespread. Two-thirds of the public (67%) reported encountering some form of AI-related harm at least a few times, with false information (61%), financial fraud (58%) and deepfakes (58%) the most common.

This is accompanied by strong public demand for laws, regulation and action on AI policy. 72% say that laws and regulations would increase their comfort with AI, up from 62% in 2022/23.

88% of people believe it is important that the government or regulators have the power to stop the use of an AI product deemed to pose a risk of serious harm to the public, and over 75% said that government or independent regulators, rather than private companies alone, should oversee AI safety.

The survey also found support for a right to appeal against AI-based decisions, and for greater transparency. 65% said that procedures for appealing decisions would make them more comfortable with AI, and 61% said the same of receiving more information about how AI has been used to make a decision.

Demographic differences

Recognising that much of the existing evidence on public attitudes to AI does not adequately represent marginalised groups, the survey deliberately oversampled three underrepresented demographics: people from low-income backgrounds; digitally excluded people; and people from minoritised ethnic groups, such as Black, Black British, Asian and Asian British people. 

The survey found that attitudes vary across demographics, with underrepresented groups reporting more concern and perceiving AI as less beneficial. For example, 57% of Black people and 52% of Asian people expressed concern about facial recognition in policing, compared with 39% of the general population. And across all of the AI use cases asked about in the survey, people on lower incomes perceived them as less beneficial than people on higher incomes did.

Octavia Field Reid, Associate Director at the Ada Lovelace Institute, said:

‘This new evidence shows that – for AI to be developed and deployed responsibly – it needs to take account of public expectations, concerns and experiences. The Government’s current inaction in legislating to address the potential risks and harms of AI technologies is in direct contrast to public concerns and a growing desire for regulation. This gap between policy and public expectations creates a risk of backlash, particularly from minoritised groups and those most affected by AI harms, which would hinder the adoption of AI and the realisation of its benefits. There will be no greater barrier to delivering on the potential of AI than a lack of public trust.’

Professor Helen Margetts, Programme Director for Public Policy at the Alan Turing Institute, said:

‘To realise the many opportunities and benefits of AI, it will be important to build consideration of public views and experiences into decision-making about AI. These findings suggest the importance of government’s promise in the AI Action Plan to fund regulators to scale up their AI capabilities and expertise, which should foster public trust. The findings also highlight the need to tackle the differential expectations and experiences of those on lower incomes, so that they gain the same benefits as high income groups from the latest generation of AI.’
